Recent advances in generative AI raise the question of whether general-purpose image editing models can serve as unified solutions for image restoration. In this work, we conduct a systematic evaluation of Nano Banana 2 for image restoration across diverse scenes and degradation types. Our results show that prompt design plays a critical role: concise prompts with explicit fidelity constraints achieve the best trade-off between reconstruction accuracy and perceptual quality. Compared with state-of-the-art restoration models, Nano Banana 2 achieves superior performance on full-reference metrics while remaining competitive in perceptual quality, which is further supported by user studies. We also observe strong generalization in challenging scenarios, such as small faces, dense crowds, and severe degradations. However, the model remains sensitive to prompt formulation and may require iterative refinement for optimal results. Overall, our findings suggest that general-purpose generative models hold strong potential as unified image restoration solvers, while highlighting the importance of controllability and robustness. All test results are available at https://github.com/yxyuanxiao/NanoBanana2TestOnIR.
This book explores the integration of circular economy principles into architectural conservation, restoration, and rehabilitation (CRR). Addressing the environmental, cultural, and regulatory challenges of the built environment, it frames CRR as a natural application of circular strategies that prioritize resource conservation, material reuse, and lifecycle extension. The volume critically examines European and Portuguese policy frameworks, adaptive reuse methodologies, and material circularity in historic structures, while offering practical guidelines for architects and decision-makers. Core themes include design for disassembly, urban mining, regenerative design, and the role of digital tools in documentation and lifecycle management. By aligning passive design strategies with circular thinking, the book highlights synergies between environmental performance and cultural preservation. It also discusses the implementation of building passports, stakeholder engagement, and the significance of embodied carbon in heritage contexts. Through a multidisciplinary lens, the work proposes a systems approach that connects material flows, policy mechanisms, and design strategies. The author emphasizes the role of architects as agents of circular transitions and encourages the integration of circular economy frameworks in architectural education. This book serves as a foundational reference for professionals, students, and policymakers engaged in sustainable transformation of the built heritage.
Integrating renewable energy sources into the grid not only reduces global carbon emissions but also facilitates distribution system (DS) blackstart restoration. This process leverages renewable energy, inverters, situational awareness, and distribution automation to initiate blackstart at the DS level, achieving a fast response and bottom-up restoration. In this Review, we survey the latest technological advances for DS blackstart restoration using renewable energy. We first present mathematical models for distributed energy resources (DERs), network topology, and load dynamics. We then discuss how situational awareness can improve restoration performance through real-time monitoring and forecasting. Next, we present the DS blackstart restoration problem, including its objectives, constraints, and existing decision-making methodologies. Lastly, we outline remaining challenges and highlight opportunities and future research directions.
Niki Nezakati, Arnab Ghosh, Amit Roy-Chowdhury
et al.
Denoising diffusion models have achieved state-of-the-art performance in image restoration by modeling the process as sequential denoising steps. However, most approaches assume independent and identically distributed (i.i.d.) Gaussian noise, while real-world sensors often exhibit spatially correlated noise due to readout mechanisms, limiting their practical effectiveness. We introduce Correlation Aware Restoration with Diffusion (CARD), a training-free extension of DDRM that explicitly handles correlated Gaussian noise. CARD first whitens the noisy observation, which converts the noise into an i.i.d. form. Then, the diffusion restoration steps are replaced with noise-whitened updates, which inherit DDRM's closed-form sampling efficiency while gaining the ability to handle correlated noise. To emphasize the importance of addressing correlated noise, we contribute CIN-D, a novel correlated-noise dataset captured across diverse illumination conditions to evaluate restoration methods on real rolling-shutter sensor noise. This dataset fills a critical gap in the literature for experimental evaluation with real-world correlated noise. Experiments on standard benchmarks with synthetic correlated noise and on CIN-D demonstrate that CARD consistently outperforms existing methods across denoising, deblurring, and super-resolution tasks.
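The whitening step at the heart of CARD can be illustrated in a few lines. The exponential-decay covariance, toy 1-D signal, and dimensions below are illustrative assumptions rather than the paper's actual sensor model; the sketch only shows how multiplying by the inverse Cholesky factor of the noise covariance turns correlated noise into (approximately) i.i.d. noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Hypothetical spatially correlated noise: covariance with exponential
# decay along one dimension, a stand-in for rolling-shutter readout
# correlation (the real covariance would come from sensor calibration).
idx = np.arange(n)
Sigma = 0.1 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

L = np.linalg.cholesky(Sigma)
x = np.sin(np.linspace(0.0, 4.0 * np.pi, n))      # toy clean signal
noise = L @ rng.standard_normal(n)                # correlated noise sample
y = x + noise                                     # noisy observation

# Whitening: solve L @ y_white = y, so the noise component of y_white
# becomes L^{-1} @ (L @ z) = z, i.e. i.i.d. standard normal.
y_white = np.linalg.solve(L, y)
w = np.linalg.solve(L, noise)                     # whitened noise component
```

After this transform, a restorer that assumes i.i.d. noise (DDRM in the paper) can operate in the whitened domain.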
Blind Face Restoration (BFR) encounters inherent challenges in exploring its large solution space, leading to common artifacts like missing details and identity ambiguity in the restored images. To tackle these challenges, we propose a Likelihood-Regularized Policy Optimization (LRPO) framework, the first to apply online reinforcement learning (RL) to the BFR task. LRPO leverages rewards from sampled candidates to refine the policy network, increasing the likelihood of high-quality outputs while improving restoration performance on low-quality inputs. However, directly applying RL to BFR creates incompatibility issues, producing restoration results that deviate significantly from the ground truth. To balance perceptual quality and fidelity, we propose three key strategies: 1) a composite reward function tailored for face restoration assessment, 2) ground-truth guided likelihood regularization, and 3) noise-level advantage assignment. Extensive experiments demonstrate that our proposed LRPO significantly improves the face restoration quality over baseline methods and achieves state-of-the-art performance.
Diffusion models have been widely utilized for image restoration. However, previous blind image restoration methods still need to assume the type of degradation model while leaving its parameters to be optimized, limiting their real-world applications. Therefore, we aim to tame a generative diffusion prior for universal blind image restoration, dubbed BIR-D, which uses an optimizable convolutional kernel to simulate the degradation model and dynamically updates the kernel's parameters during the diffusion steps, enabling blind image restoration even in various complex situations. In addition, based on mathematical reasoning, we provide an empirical formula for the choice of the adaptive guidance scale, eliminating the need for a grid search over this parameter. Experimentally, BIR-D demonstrates superior practicality and versatility compared with off-the-shelf unsupervised methods across various tasks on both real-world and synthetic datasets, qualitatively and quantitatively. BIR-D is able to perform multi-guidance blind image restoration. Moreover, BIR-D can restore images that have undergone multiple, complicated degradations, demonstrating its practical applicability.
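The core idea of an optimizable degradation kernel updated by gradient descent during restoration can be sketched in isolation. Everything below (1-D circular convolution, the 3-tap kernel, the learning rate) is an illustrative assumption, not BIR-D's actual formulation:

```python
import numpy as np

def circ_conv(x, taps):
    """Circular 1-D convolution of signal x with a short kernel."""
    n = len(x)
    k = np.zeros(n)
    k[: len(taps)] = taps
    return np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)))

def refine_kernel(y, x_est, k0, lr=0.01, steps=300):
    """Gradient descent on 0.5 * ||circ_conv(x_est, k) - y||^2 over the
    kernel taps k -- a toy analogue of dynamically updating the
    degradation kernel alongside the restoration iterations."""
    k = k0.copy()
    Xf = np.fft.fft(x_est)
    for _ in range(steps):
        resid = circ_conv(x_est, k) - y
        # grad[j] = sum_i resid[i] * x_est[(i - j) mod n] (circular correlation)
        grad_full = np.real(np.fft.ifft(np.fft.fft(resid) * np.conj(Xf)))
        k -= lr * grad_full[: len(k)]
    return k

rng = np.random.default_rng(0)
x_est = rng.standard_normal(64)                   # stand-in current estimate
k_true = np.array([0.2, 0.6, 0.2])                # unknown degradation kernel
y = circ_conv(x_est, k_true)                      # observed degraded signal

k_hat = refine_kernel(y, x_est, k0=np.full(3, 1.0 / 3.0))
```

In the noiseless toy setting the recovered taps match the true kernel; the paper interleaves such kernel updates with the diffusion sampling steps.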
The reuse of built heritage is one of the strategies for responding to current sustainability problems in the construction sector. Religious buildings that have been deconsecrated and abandoned fall within this context. This work focuses on the study of solutions for adapting religious buildings to new uses, weighing the relationship between valorization and reuse through an analysis of the periods of reuse, the social and economic circumstances that motivated them, and the new programmes proposed, as well as the intervention strategies employed, taking into account the profile of the current ownership, whether public or private.
Conservation and restoration of prints, Architectural drawing and design
Images or videos captured by the Under-Display Camera (UDC) suffer from severe degradation, such as saturation degeneration and color shift. While restoration for UDC has been a critical task, existing works on UDC restoration focus only on images. UDC video restoration (UDC-VR) has not been explored in the community. In this work, we first propose a GAN-based generation pipeline to simulate the realistic UDC degradation process. With the pipeline, we build the first large-scale UDC video restoration dataset, called PexelsUDC, which includes two subsets named PexelsUDC-T and PexelsUDC-P corresponding to different displays for UDC. Using the proposed dataset, we conduct extensive benchmark studies on existing video restoration methods and observe their limitations on the UDC-VR task. To this end, we propose a novel transformer-based baseline method that adaptively enhances degraded videos. The key components of the method are a spatial branch with local-aware transformers, a temporal branch embedded with temporal transformers, and a spatial-temporal fusion module. These components drive the model to fully exploit spatial and temporal information for UDC-VR. Extensive experiments show that our method achieves state-of-the-art performance on PexelsUDC. The benchmark and the baseline method, which will be made publicly available, are expected to promote progress on UDC-VR in the community.
The goal of image restoration (IR), a fundamental problem in computer vision, is to recover a high-quality (HQ) image from its degraded low-quality (LQ) observation. Because the problem is ill-posed, multiple HQ solutions may correspond to a single LQ input, creating an ambiguous solution space. This motivates the investigation and incorporation of prior knowledge to effectively constrain the solution space and enhance the quality of the restored images. Despite the pervasive use of hand-crafted and learned priors in IR, limited attention has been paid to incorporating knowledge from large-scale foundation models. In this paper, we for the first time leverage the prior knowledge of the state-of-the-art Segment Anything Model (SAM) to boost the performance of existing IR networks in a parameter-efficient tuning manner. In particular, SAM is chosen for its robustness to image degradations, which allows HQ semantic masks to be extracted even from degraded inputs. To leverage these semantic priors and enhance restoration quality, we propose a lightweight SAM prior tuning (SPT) unit. This plug-and-play component allows us to effectively integrate semantic priors into existing IR networks, resulting in significant improvements in restoration quality. As the only trainable module in our method, the SPT unit has the potential to improve both efficiency and scalability. We demonstrate the effectiveness of the proposed method in enhancing a variety of methods across multiple tasks, such as image super-resolution and color image denoising.
This paper presents a novel method for restoring digital videos via a Deep Plug-and-Play (PnP) approach. Under a Bayesian formalism, the method consists of using a deep convolutional denoising network in place of the proximal operator of the prior in an alternating optimization scheme. We distinguish ourselves from prior PnP work by directly applying the method to restore a digital video from a degraded video observation. This way, a network trained once for denoising can be repurposed for other video restoration tasks. Our experiments in video deblurring, super-resolution, and interpolation of random missing pixels all show a clear benefit of using a network specifically designed for video denoising, as it yields better restoration performance and better temporal stability than a single-image network with similar denoising performance under the same PnP formulation. Moreover, our method compares favorably to applying a different state-of-the-art PnP scheme separately to each frame of the sequence. This opens new perspectives in the field of video restoration.
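The alternating scheme described above can be sketched on a 1-D deblurring toy problem. The half-quadratic-splitting variant, the circular Gaussian stand-in denoiser, and all parameters below are illustrative assumptions; in the paper the proximal step is a deep video denoiser:

```python
import numpy as np

def gaussian_denoiser(v):
    # Stand-in for the learned denoiser: circular Gaussian smoothing.
    j = np.arange(-3, 4)
    k = np.exp(-0.5 * j ** 2)
    k /= k.sum()
    kf = np.zeros(len(v))
    kf[j % len(v)] += k
    return np.real(np.fft.ifft(np.fft.fft(kf) * np.fft.fft(v)))

def pnp_deblur(y, h, rho=0.5, iters=20):
    """Plug-and-Play restoration: alternate a closed-form data-fidelity
    step (solved in the Fourier domain) with a denoising step that
    stands in for the proximal operator of the prior."""
    H = np.fft.fft(h)
    Y = np.fft.fft(y)
    x = y.copy()
    for _ in range(iters):
        z = gaussian_denoiser(x)                  # prior (denoiser) step
        X = (np.conj(H) * Y + rho * np.fft.fft(z)) / (np.abs(H) ** 2 + rho)
        x = np.real(np.fft.ifft(X))               # data-fidelity step
    return x

n = 128
t = np.arange(n) * 2.0 * np.pi / n
x_true = np.sin(t) + 0.5 * np.sin(3.0 * t)

h = np.zeros(n)                                   # centred circular box blur
h[:9] = 1.0 / 9.0
h = np.roll(h, -4)

y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))
x_hat = pnp_deblur(y, h)
err_before = float(np.mean((y - x_true) ** 2))
err_after = float(np.mean((x_hat - x_true) ** 2))
```

Swapping in a video denoiser (and stacking frames) turns this same loop into the video-restoration scheme the abstract describes.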
Rubbing restoration is significant for preserving world cultural history. In this paper, we propose the RubbingGAN model for restoring incomplete rubbing characters. Specifically, we collect characters from the Zhang Menglong Bei and build the first rubbing restoration dataset. We design the first generative adversarial network for rubbing restoration. Based on the collected dataset, we apply RubbingGAN to learn the Zhang Menglong Bei font style and restore the characters. The experimental results show that RubbingGAN can repair both slightly and severely incomplete rubbing characters quickly and effectively.
Denis Lacroix, Edgar Andres Ruiz Guzman, Pooja Siwach
We discuss here some aspects related to the symmetries of a quantum many-body problem when trying to treat it on a quantum computer. Several features related to symmetry conservation, symmetry breaking, and possible symmetry restoration are reviewed. After briefly discussing some of the standard symmetries relevant for many-particle systems, we discuss the advantage of encoding some symmetries directly in quantum ansätze, especially to reduce the quantum register size. It is, however, well-known that the use of symmetry-breaking states can also be a unique way to incorporate specific internal correlations when a spontaneous symmetry breaking occurs. These aspects are discussed in the quantum computing context. Ultimately, an accurate description of quantum systems can be achieved only when the initially broken symmetries are properly restored. We review several methods explored previously to perform symmetry restoration on a quantum computer, for instance, those based on symmetry filtering by quantum phase estimation and by an iterative set of independent Hadamard tests. We propose novel methods that pave new directions for symmetry restoration, such as those based on purifying the state using the linear combination of unitaries (LCU) approach.
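As one concrete instance of the symmetry filtering mentioned above, particle-number projection can be written as an integral over gauge angles and discretized into a finite sum of unitaries (this is the standard textbook form, not a formula specific to this review); each overlap in the sum is what a Hadamard test estimates:

```latex
\hat{P}_N \;=\; \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\varphi \,
e^{\,i\varphi\,(\hat{N} - N)}
\;\approx\; \frac{1}{M} \sum_{m=0}^{M-1}
e^{\,i\varphi_m\,(\hat{N} - N)},
\qquad \varphi_m = \frac{2\pi m}{M},
```

where $\hat{N}$ is the particle-number operator and $N$ the targeted eigenvalue; applied to a symmetry-breaking state, the projector filters out all components with the wrong particle number.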
Prashant Gupta, Yiran Guo, Narasimha Boddeti
et al.
We explore efficient optimization of toolpaths based on multiple criteria for large instances of 3D printing problems. We first show that the minimum turn cost 3D printing problem is NP-hard, even when the region is a simple polygon. We develop SFCDecomp, a space filling curve based decomposition framework to solve large instances of 3D printing problems efficiently by solving these optimization subproblems independently. For the Buddha model, our framework builds toolpaths over a total of 799,716 nodes across 169 layers, and for the Bunny model it builds toolpaths over 812,733 nodes across 360 layers. Building on SFCDecomp, we develop a multicriteria optimization approach for toolpath planning. We demonstrate the utility of our framework by maximizing or minimizing tool path edge overlap between adjacent layers, while jointly minimizing turn costs. Strength testing of a tensile test specimen printed with tool paths that maximize or minimize adjacent layer edge overlaps reveal significant differences in tensile strength between the two classes of prints.
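The turn cost being minimized above can be made concrete with a small sketch. The polyline representation and the two example paths below are illustrative, not the paper's instance encoding:

```python
import math

def turn_cost(path):
    """Total turning (radians) along a polyline toolpath.

    Each interior vertex contributes the absolute change in heading:
    straight continuation costs 0, a right-angle turn costs pi/2,
    and a U-turn costs pi."""
    cost = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)         # incoming heading
        a2 = math.atan2(y2 - y1, x2 - x1)         # outgoing heading
        d = abs(a2 - a1)
        cost += min(d, 2.0 * math.pi - d)         # wrap to [0, pi]
    return cost

# Two toolpaths with the same number of segments:
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]       # no turns
zigzag = [(0, 0), (1, 0), (1, 1), (0, 1)]         # two 90-degree turns

print(turn_cost(straight))                        # 0.0
print(turn_cost(zigzag))                          # pi (two pi/2 turns)
```

A decomposition framework in the spirit of SFCDecomp would optimize such a cost independently within each subregion before stitching the subpaths together.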
Almost all existing methods for image restoration are based on optimizing the mean squared error (MSE), even though it is known that the best estimate in terms of MSE may yield a highly atypical image, because there are many plausible restorations for a given noisy image. In this paper, we show how to combine explicit priors on patches of natural images in order to sample from the posterior probability of a full image given a degraded image. We prove that our algorithm generates correct samples from the distribution $p(x|y) \propto \exp(-E(x|y))$, where $E(x|y)$ is the cost function minimized in previous patch-based approaches that compute a single restoration. Unlike previous approaches that computed a single restoration using MAP or MMSE, our method makes explicit the uncertainty in the restored images and guarantees that all patches in the restored images will be typical given the patch prior. Unlike previous approaches that used implicit priors on fixed-size images, our approach can be used with images of any size. Our experimental results show that posterior sampling using patch priors yields images of high perceptual quality and high PSNR on a range of challenging image restoration problems.
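Sampling from a density of the form $p(x|y) \propto \exp(-E(x|y))$ can be illustrated with unadjusted Langevin dynamics on a toy scalar energy. Note this is a generic stand-in technique, not the paper's patch-based sampler, and the quadratic energy below is an illustrative assumption:

```python
import math
import random

def langevin_sample(grad_E, x0, step=0.005, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics targeting p(x) ~ exp(-E(x)):
    x_{t+1} = x_t - step * E'(x_t) + sqrt(2 * step) * N(0, 1)."""
    rng = rng or random.Random(0)
    x = x0
    for _ in range(n_steps):
        x = x - step * grad_E(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
    return x

# Toy scalar energy: Gaussian likelihood around the observation y plus a
# Gaussian prior, standing in for the patch-based energy E(x|y).
y, sigma2, tau2 = 2.0, 1.0, 1.0
grad_E = lambda x: (x - y) / sigma2 + x / tau2    # derivative of the energy

rng = random.Random(0)
samples = [langevin_sample(grad_E, 0.0, rng=rng) for _ in range(100)]
sample_mean = sum(samples) / len(samples)
# The exact posterior here is N(1, 0.5), so sample_mean should be near 1.
```

Drawing many such samples, rather than a single MAP or MMSE point estimate, is what exposes the uncertainty the abstract emphasizes.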
The future intervention on the Cathedral of Notre-Dame in Paris has generated a debate that, despite appearing modern, has remained open and unresolved for decades in the field of architectural heritage restoration. The reconstruction of St. Mark’s “Campanile” after its collapse, the recovery of the great European monuments damaged during the wars of the 20th century, such as the Cathedral of Reims, the Abbey of Montecassino or the “Frauenkirche”, and the identical restitution of symbolic buildings such as the “Gran Teatre del Liceu” in Barcelona or “La Fenice” in Venice, are just some of the cases that have followed the principles of ‘historical restoration’. The recent event at Notre-Dame opens a new window onto contemporary thinking about the most appropriate way to intervene on Cultural Heritage, testing the validity of the widespread philosophy of “dov’era e com’era”.
Disaster recovery is widely regarded as the least understood phase of the disaster cycle. In particular, the literature around lifeline infrastructure restoration modeling frequently mentions the lack of empirical quantitative data available. Despite limitations, there is a growing body of research on modeling lifeline infrastructure restoration, often developed using empirical quantitative data. This study reviews this body of literature and identifies the data collection and usage patterns present across modeling approaches to inform future efforts using empirical quantitative data. We classify the modeling approaches into simulation, optimization, and statistical modeling. The number of publications in this domain has increased over time with the most rapid growth of statistical modeling. Electricity infrastructure restoration is most frequently modeled, followed by the restoration of multiple infrastructures, water infrastructure, and transportation infrastructure. Interdependency between multiple infrastructures is increasingly considered in recent literature. Researchers gather the data from a variety of sources, including collaborations with utility companies, national databases, and post-event damage and restoration reports. This study provides discussion and recommendations around data usage practices within the lifeline restoration modeling field. Following the recommendations would facilitate the development of a community of practice around restoration modeling and provide greater opportunities for future data sharing.
Face restoration is an inherently ill-posed problem, where additional prior constraints are typically considered crucial for mitigating this ill-posedness. However, real-world image priors are often hard to capture with precise mathematical models, which inevitably limits the performance and generalization ability of existing prior-regularized restoration methods. In this paper, we study the problem of face restoration under a more practical ``dual blind'' setting, i.e., without prior assumptions or hand-crafted regularization terms on the degradation profile or image contents. To this end, a novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration, with two key elements: 1) an implicit formulation to circumvent the ill-defined restoration mapping and 2) a subspace prior decomposition and fusion mechanism to dynamically handle inputs at varying degradation levels with consistently high-quality restoration results. Experimental results demonstrate significant perception-distortion improvement of ISPL over existing state-of-the-art methods on a variety of restoration subtasks, including a 3.69 dB PSNR and 45.8% FID gain over ESRGAN, the 2018 NTIRE SR challenge winner. Overall, we prove that it is possible to capture and utilize prior knowledge without explicitly formulating it, which will help inspire new research paradigms for low-level vision tasks.
Varying only the in-plane or out-of-plane dimensions of nanostructures produces a wide range of colourful elements in metasurfaces and thin films. However, achieving shades of grey and control of colour saturation remains challenging. Here, we introduce a hybrid approach to colour generation based on the tuning of nanostructure geometry in all three dimensions. Through two-photon polymerization lithography, we systematically investigated colour generation from the simple single nanopillar geometry made of low-refractive-index material; realizing grayscale and full colour palettes with control of hue, saturation, brightness through tuning of height, diameter, and periodicity of nanopillars. Arbitrary colourful and grayscale images were painted by mapping desired prints to precisely controllable parameters during 3D printing. We extend our understanding of the scattering properties of the low-refractive-index nanopillar to demonstrate grayscale inversion and colour desaturation, with steganography at the level of single nanopillars.