Towards Comprehensive Benchmarking Infrastructure for LLMs in Software Engineering
Daniel Rodriguez-Cardenas, Xiaochang Li, Marcos Macedo
et al.
Large language models for code are advancing fast, yet our ability to evaluate them lags behind. Current benchmarks focus on narrow tasks and single metrics, which hide critical gaps in robustness, interpretability, fairness, efficiency, and real-world usability. They also suffer from inconsistent data engineering practices, limited software engineering context, and widespread contamination issues. To understand these problems and chart a path forward, we combined an in-depth survey of existing benchmarks with insights gathered from a dedicated community workshop. We identified three core barriers to reliable evaluation: the absence of software-engineering-rich datasets, overreliance on ML-centric metrics, and the lack of standardized, reproducible data pipelines. Building on these findings, we introduce BEHELM, a holistic benchmarking infrastructure that unifies software-scenario specification with multi-metric evaluation. BEHELM provides a structured way to assess models across tasks, languages, input and output granularities, and key quality dimensions. Our goal is to reduce the overhead currently required to construct benchmarks while enabling a fair, realistic, and future-proof assessment of LLMs in software engineering.
Research on a Comprehensive Performance Analysis Method for Building-Integrated Photovoltaics Considering Global Climate Change
Ran Wang, Caibo Tang, Yuge Ma
et al.
Building-integrated photovoltaics (BIPVs) represent a pivotal technology for enhancing the utilization of renewable energy in buildings. However, challenges persist, including the lack of integrated design models, limited analytical dimensions, and insufficient consideration of climate change impacts. This study proposes a comprehensive performance assessment framework for BIPV that incorporates global climate change factors. An integrated simulation model is developed using EnergyPlus 8.9.0, Optics 6, and WINDOW 7.7 to evaluate BIPV configurations such as photovoltaic facades, shading systems, and roofs. A multi-criteria evaluation system is established, encompassing global warming potential (GWP), power generation, energy flexibility, and economic cost. Future hourly weather data for the 2050s and 2080s are generated using CCWorldWeatherGen under representative climate scenarios. Monte Carlo simulations are conducted to assess performance across variable combinations, supplemented by sensitivity and uncertainty analyses to identify key influencing factors. Results indicate that (1) critical design parameters (including building orientation, wall thermal absorptance, window-to-wall ratios, PV shading angle, glazing optical properties, equipment and lighting power density, and occupancy) significantly affect overall performance. Equipment and lighting densities most influence carbon emissions and flexibility, whereas envelope thermal properties dominate cost impacts. PV shading outperforms other configurations in power generation. (2) Under intensified climate change, GWP and life cycle costs increase, while energy flexibility declines, imposing growing pressure on system performance. However, under certain mid-century climate conditions, BIPV power generation potential improves due to altered solar radiation.
The study recommends integrating climate-adaptive design strategies with energy systems such as PEDF (photovoltaic, energy storage, direct current, and flexibility), refining policy mechanisms, and advancing BIPV deployment with climate-resilient approaches to support building decarbonization and enhance adaptive capacity.
Knowledge-Based Aerospace Engineering -- A Systematic Literature Review
Tim Wittenborg, Ildar Baimuratov, Ludvig Knöös Franzén
et al.
The aerospace industry operates at the frontier of technological innovation while maintaining high standards regarding safety and reliability. In this environment, with an enormous potential for re-use and adaptation of existing solutions and methods, Knowledge-Based Engineering (KBE) has been applied for decades. The objective of this study is to identify and examine state-of-the-art knowledge management practices in the field of aerospace engineering. Our contributions include: 1) A SWARM-SLR of over 1,000 articles with qualitative analysis of 164 selected articles, supported by two aerospace engineering domain expert surveys. 2) A knowledge graph of over 700 knowledge-based aerospace engineering processes, software, and data, formalized in the interoperable Web Ontology Language (OWL) and mapped to Wikidata entries where possible. The knowledge graph is represented on the Open Research Knowledge Graph (ORKG) and an aerospace Wikibase for reuse and continued structuring of aerospace engineering knowledge exchange. 3) Our resulting intermediate and final artifacts of the knowledge synthesis, available as a Zenodo dataset. This review sets a precedent for structured, semantic-based approaches to managing aerospace engineering knowledge. By advancing these principles, research and industry can achieve more efficient design processes, enhanced collaboration, and a stronger commitment to sustainable aviation.
Ten Simple Rules for Catalyzing Collaborations and Building Bridges between Research Software Engineers and Software Engineering Researchers
Nasir U. Eisty, Jeffrey C. Carver, Johanna Cohoon
et al.
In the evolving landscape of scientific and scholarly research, effective collaboration between Research Software Engineers (RSEs) and Software Engineering Researchers (SERs) is pivotal for advancing innovation and ensuring the integrity of computational methodologies. This paper presents ten strategic guidelines aimed at fostering productive partnerships between these two distinct yet complementary communities. The guidelines emphasize the importance of recognizing and respecting the cultural and operational differences between RSEs and SERs, proactively initiating and nurturing collaborations, and engaging within each other's professional environments. They advocate for identifying shared challenges, maintaining openness to emerging problems, ensuring mutual benefits, and serving as advocates for one another. Additionally, the guidelines highlight the necessity of vigilance in monitoring collaboration dynamics, securing institutional support, and defining clear, shared objectives. By adhering to these principles, RSEs and SERs can build synergistic relationships that enhance the quality and impact of research outcomes.
Work in Progress: AI-Powered Engineering: Bridging Theory and Practice
Oz Levy, Ilya Dikman, Natan Levy
et al.
This paper explores how generative AI can help automate and improve key steps in systems engineering. It examines AI's ability to analyze system requirements based on INCOSE's "good requirement" criteria, identifying well-formed and poorly written requirements. The AI does not just classify requirements but also explains why some do not meet the standards. By comparing AI assessments with those of experienced engineers, the study evaluates the accuracy and reliability of AI in identifying quality issues. Additionally, it explores AI's ability to classify functional and non-functional requirements and generate test specifications based on these classifications. Through both quantitative and qualitative analysis, the research aims to assess AI's potential to streamline engineering processes and improve learning outcomes. It also highlights the challenges and limitations of AI, ensuring its safe and ethical use in professional and academic settings.
Extending Behavioral Software Engineering: Decision-Making and Collaboration in Human-AI Teams for Responsible Software Engineering
Lekshmi Murali Rani
The study of behavioral and social dimensions of software engineering (SE) tasks characterizes behavioral software engineering (BSE); however, the increasing significance of human-AI collaboration (HAIC) brings new directions to BSE by presenting new challenges and opportunities. This PhD research focuses on decision-making (DM) for SE tasks and collaboration within human-AI teams, aiming to promote responsible software engineering through a cognitive partnership between humans and AI. The goal of the research is to identify the challenges and nuances of HAIC from a cognitive perspective and to design and optimize collaboration and partnership in human-AI teams in ways that enhance collective intelligence and promote better, responsible DM in SE through human-centered approaches. The research addresses HAIC and its impact on individual-, team-, and organizational-level aspects of BSE.
A Systematic Review of Common Beginner Programming Mistakes in Data Engineering
Max Neuwinger, Dirk Riehle
The design of effective programming languages, libraries, frameworks, tools, and platforms for data engineering strongly depends on their ease and correctness of use. Anyone who ignores that it is humans who use these tools risks building tools that are useless, or worse, harmful. To ensure our data engineering tools are based on solid foundations, we performed a systematic review of common programming mistakes in data engineering. We focus on programming beginners (students) by analyzing both the limited literature specific to data engineering mistakes and general programming mistakes in languages commonly used in data engineering (Python, SQL, Java). Through analysis of 21 publications spanning from 2003 to 2024, we synthesized these complementary sources into a comprehensive classification that captures both general programming challenges and domain-specific data engineering mistakes. This classification provides an empirical foundation for future tool development and educational strategies. We believe our systematic categorization will help researchers, practitioners, and educators better understand and address the challenges faced by novice data engineers.
Integrating Merkle Trees with Transformer Networks for Secure Financial Computation
Xinyue Wang, Weifan Lin, Weiting Zhang
et al.
In this paper, the Merkle-Transformer model is introduced as an innovative approach designed for financial data processing, which combines the data integrity verification mechanism of Merkle trees with the data processing capabilities of the Transformer model. A series of experiments on key tasks, such as financial behavior detection and stock price prediction, were conducted to validate the effectiveness of the model. The results demonstrate that the Merkle-Transformer significantly outperforms existing deep learning models (such as RoBERTa and BERT) across performance metrics, including precision, recall, accuracy, and F1 score. In particular, in the task of stock price prediction, the performance is notable, with nearly all evaluation metrics scoring above 0.9. Moreover, the performance of the model across various hardware platforms, as well as the security performance of the proposed method, were investigated. The Merkle-Transformer exhibits exceptional performance and robust data security even in resource-constrained environments across diverse hardware configurations. This research offers a new perspective, underscoring the importance of considering data security in financial data processing and confirming the superiority of integrating data verification mechanisms in deep learning models for handling financial data. The core contribution of this work is the first proposition and empirical demonstration of a financial data analysis model that fuses data integrity verification with efficient data processing, providing a novel solution for the fintech domain. It is believed that the widespread adoption and application of the Merkle-Transformer model will greatly advance innovation in the financial industry and lay a solid foundation for future research on secure financial data processing.
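The Merkle side of the pairing is the standard binary hash tree: leaf records are hashed, then hashes are combined pairwise up to a single root that changes if any record changes. A minimal sketch of that construction (illustrative only, not the authors' implementation):

```python
import hashlib


def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root by pairwise hashing, duplicating the
    last node whenever a level has an odd number of entries."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # pad odd-sized level
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because any change to a financial record propagates to the root, a verifier holding only the 32-byte root can detect tampering before the data is fed to the Transformer.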
Technology, Engineering (General). Civil engineering (General)
Requirements Engineering for Research Software: A Vision
Adrian Bajraktari, Michelle Binder, Andreas Vogelsang
Modern science is relying on software more than ever. The behavior and outcomes of this software shape the scientific and public discourse on important topics like climate change, economic growth, or the spread of infections. Most researchers creating software for scientific purposes are not trained in Software Engineering. As a consequence, research software is often developed ad hoc without following stringent processes. With this paper, we want to characterize research software as a new application domain that needs attention from the Requirements Engineering community. We conducted an exploratory study based on 8 interviews with 12 researchers who develop software. We describe how researchers elicit, document, and analyze requirements for research software and what processes they follow. From this, we derive specific challenges and describe a vision of Requirements Engineering for research software.
Impact of solvents on doctor blade coatings and bathocuproine cathode interlayer for large-area organic solar cell modules
Soonil Hong, Byoungwook Park, Chandran Balamurugan
et al.
Efforts to commercialize organic solar cells (OSCs) by developing roll-to-roll compatible modules have encountered challenges in optimizing printing processes to attain laboratory-level performance in fully printable OSC architectures. In this study, we present efficient OSC modules fabricated solely through printing methods. We systematically evaluated the impact of processing solvents on the morphology of crucial layers, such as the hole transport, photoactive, and electron transport layers, applied using the doctor blade coating method, with a particular focus on processability. Notably, deposition of the charge transport layer using printing techniques remains a challenging task, mainly due to the hydrophobic character of the organic photoactive layer. To overcome this issue, we investigated the solvent effect of a well-studied cathode interlayer, bathocuproine (BCP). We were able to form a uniform thin BCP film (∼10 nm) on a non-fullerene-based organic photoactive layer using the doctor blade coating method. Our results showed that the use of volatile alcohols in the BCP processing required a delicate balance between wettability and vaporization, which contrasted with the results for spin-coated films. These findings provide important insights into improving the efficiency of printing techniques for depositing charge transport layers. The fully printed OSC modules, featuring uniform and continuous BCP layer formation, achieved an impressive power conversion efficiency of 10.8% with a total area of 10.0 cm<sup>2</sup> and a geometrical fill factor of 86.5%.
Science (General), Social sciences (General)
Bio-Template Synthesis of V<sub>2</sub>O<sub>3</sub>@Carbonized Dictyophora Composites for Advanced Aqueous Zinc-Ion Batteries
Wei Zhou, Guilin Zeng, Haotian Jin
et al.
Among new-generation energy-storage devices, aqueous zinc-ion batteries (AZIBs) are becoming prime candidates because of their inexpensive nature, inherent safety, environmental benignity, and abundant resources. Nevertheless, due to the restricted selection of cathodes, AZIBs often perform unsatisfactorily under long-life cycling and high-rate conditions. Consequently, we propose a facile evaporation-induced self-assembly technique for preparing V<sub>2</sub>O<sub>3</sub>@carbonized dictyophora (V<sub>2</sub>O<sub>3</sub>@CD) composites, utilizing economical and easily available biomass dictyophora as the carbon source and NH<sub>4</sub>VO<sub>3</sub> as the metal source. When assembled in AZIBs, the V<sub>2</sub>O<sub>3</sub>@CD exhibits a high initial discharge capacity of 281.9 mAh g<sup>−1</sup> at 50 mA g<sup>−1</sup>. The discharge capacity remains as high as 151.9 mAh g<sup>−1</sup> after 1000 cycles at 1 A g<sup>−1</sup>, showing excellent long-cycle durability. The extraordinarily high electrochemical performance of V<sub>2</sub>O<sub>3</sub>@CD can be mainly attributed to the formation of the porous carbonized dictyophora frame. The porous carbon skeleton ensures efficient electron transport and prevents V<sub>2</sub>O<sub>3</sub> from losing electrical contact due to volume changes caused by Zn<sup>2+</sup> intercalation/deintercalation. This strategy of metal-oxide-filled carbonized biomass materials may provide insights into developing high-performance AZIBs and other potential energy storage devices with a wide application range.
Berlin Pankow: a 15-min city for everyone? A case study combining accessibility, traffic noise, air pollution, and socio-structural data
Jan-Peter Glock, Julia Gerlach
Cars are dominating urban traffic in cities around the world, even though daily trips in many cities are often realized with active modes of transportation or public transport. Urban transport planning processes need to adapt to this reality and the necessity of climate change mitigation. Against this background, the research project “Mobility Reporting”, a joint undertaking of the district Pankow in Berlin and researchers from TU Berlin and TU Dresden, established a new, goal-driven, and participative planning process. The process identified local mobility as one of the central planning goals. The 15-min city (FMC) was thus adduced as a benchmark to analyze the district’s current mobility system and development potential. We conducted extensive accessibility analyses to examine the status quo concerning the FMC. We calculated travel times to essential destinations in daily life by foot, public transport, and car. This analysis was accompanied by a mixed online and paper–pencil survey conducted to evaluate the perceived accessibility of people in Pankow. The survey results shed light on the question of which walking time thresholds constitute a “very good” or “good” accessibility. Further analyses included environmental and social variables, allowing us to check whether areas with different accessibility levels also differ regarding the socio-economic characteristics of their inhabitants. For example, do socially advantaged neighborhoods have better local accessibility? Is there a trade-off between exposure to environmental pollution and good accessibility? With this contribution, we shed light on what an FMC is and ought to be. Results from the survey support the normative and political vision of the FMC. Pankow generally offers the merits of a walkable city, showing the expected travel time differences between the dense inner city and the outskirts. Socially disadvantaged neighborhoods are not consistently less accessible.
However, there seems to be a trade-off between good accessibility (especially PT accessibility) and correlated externalities of transport, namely air pollution and noise.
Transportation engineering, Transportation and communications
Structural Features Promoting Photocatalytic Degradation of Contaminants of Emerging Concern: Insights into Degradation Mechanism Employing QSA/PR Modeling
Antonija Tomic, Marin Kovacic, Hrvoje Kusic
et al.
Although heterogeneous photocatalysis has shown promising results in the degradation of contaminants of emerging concern (CECs), mechanistic insights into how the structural diversity of chemicals affects oxidative (by HO•) or reductive (by O<sub>2</sub>•<sup>−</sup>) degradation pathways are still scarce. In this study, the degradation extents and rates of selected organics in the absence and presence of common scavengers for reactive oxygen species (ROS) generated during photocatalytic treatment were determined. The obtained values were then correlated as <i>K</i> coefficients (<i>M</i><sub>HO•</sub>/<i>M</i><sub>O2•−</sub>), denoting the ratio of organics degraded by the two occurring mechanisms: oxidation via HO• and reduction via O<sub>2</sub>•<sup>−</sup>. Compounds with <i>K</i> >> 1 favor oxidative degradation via HO•, and vice versa (i.e., if <i>K</i> << 1, compounds undergo reductive reactions driven by O<sub>2</sub>•<sup>−</sup>). Such empirical values were correlated with structural features of the CECs, represented by molecular descriptors, employing quantitative structure activity/property relationship (QSA/PR) modeling. The functional stability and predictive power of the resulting QSA/PR model were confirmed by internal and external cross-validation.
The most influential descriptors were found to be the size of the molecule and the presence/absence of particular molecular fragments such as C−O and C−Cl bonds; the latter favors the HO•-driven reaction, while the former favors the reductive pathway. The developed QSA/PR models can be considered robust predictive tools for evaluating the distribution between degradation mechanisms occurring in photocatalytic treatment.
Time-domain image processing using photonic reservoir computing
Sunada Satoshi, Yamaguchi Tomoya
Photonic computing has attracted much attention due to its great potential to accelerate artificial neural network operations. However, processing a large amount of data, such as image data, generally requires large-scale photonic circuits and remains challenging due to the low scalability of photonic integration. Here, we propose a scalable image processing approach that uses a temporal degree of freedom of photons. In the proposed approach, the spatial information of a target object is compressively transformed into a time-domain signal using a gigahertz-rate random pattern projection technique. The time-domain signal is optically acquired at a single input channel and processed with a microcavity-based photonic reservoir computer. We experimentally demonstrate that this photonic approach is capable of image recognition at gigahertz rates.
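Setting the gigahertz-rate optics aside, the compressive transformation itself amounts to a sequence of inner products between the flattened image and random masks, one time-domain sample per mask. A minimal numerical sketch (the function name, binary-mask choice, and seeding scheme are illustrative assumptions, not taken from the paper):

```python
import random


def compress_to_time_signal(image: list[float], n_patterns: int, seed: int = 0) -> list[float]:
    """Project a flattened image onto random binary masks: each inner
    product yields one sample of the resulting time-domain signal."""
    rng = random.Random(seed)  # fixed seed: encoder and decoder share the same patterns
    signal = []
    for _ in range(n_patterns):
        mask = [rng.randint(0, 1) for _ in image]
        signal.append(sum(m * px for m, px in zip(mask, image)))
    return signal
```

The reservoir computer then classifies directly from this short signal; since n_patterns can be far smaller than the pixel count, the scheme trades spatial channels for time.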
Framework for continuous transition to Agile Systems Engineering in the Automotive Industry
Jan Heine, Herbert Palm
The increasing pressure within VUCA-driven (volatility, uncertainty, complexity, and ambiguity) environments causes traditional, plan-driven Systems Engineering approaches to no longer suffice. Agility is thus changing from a "nice-to-have" to a "must-have" capability for successful system-developing organisations. The current state of the art, however, does not provide clear answers on how to address this need in terms of processes, methods, tools, and competencies (PMTC) and how to successfully manage the transition within established industries. In this paper, we propose an agile Systems Engineering (SE) Framework for the automotive industry to meet the new agility demand. In addition to the methodological background, we present results from a pilot project in the chassis development department of a German automotive manufacturer and demonstrate the effectiveness of the newly proposed framework. By adopting the described agile SE Framework, companies can foster innovation and collaboration on a learning, continuously improving, and self-reinforcing basis.
Analysis of Software Engineering Practices in General Software and Machine Learning Startups
Bishal Lakha, Kalyan Bhetwal, Nasir U. Eisty
Context: On top of the inherent challenges startup software companies face applying proper software engineering practices, the non-deterministic nature of machine learning techniques makes it even more difficult for machine learning (ML) startups. Objective: Therefore, the objective of our study is to understand the whole picture of software engineering practices followed by ML startups and identify additional needs. Method: To achieve our goal, we conducted a systematic literature review study on 37 papers published in the last 21 years. We selected papers on both general software startups and ML startups. We collected data to understand software engineering (SE) practices in five phases of the software development life-cycle: requirement engineering, design, development, quality assurance, and deployment. Results: We find some interesting differences in software engineering practices in ML startups and general software startups. The data management and model learning phases are the most prominent among them. Conclusion: While ML startups face many similar challenges to general software startups, the additional difficulties of using stochastic ML models require different strategies in using software engineering practices to produce high-quality products.
Application of Convolutional Neural Network for Fault Diagnosis of Bearing Scratch of an Induction Motor
Shrinathan Esaki Muthu Pandara Kone, Kenichi Yatsugi, Yukio Mizuno
et al.
The demand for condition monitoring of induction motors is increasing in various fields, such as industry, transportation, and daily life. Bearing faults are the most common faults, and many fault diagnosis methods have been proposed, in most cases using artificial pitting as the fault factor. However, the validity of such fault diagnosis methods for other kinds of faults has rarely been evaluated. Considering onsite scenarios and other possible faults, this paper introduces scratches on the outer raceways of bearings. A study was performed on the detection of several kinds of bearing scratches using a proposed method based on an auto-tuning convolutional neural network. The developed approach was also compared with other diagnostic methods for validation. The results showed that the proposed technique can diagnose several kinds of scratches with acceptable accuracy.
Technology, Engineering (General). Civil engineering (General)
Analysis of the status and framework design of intelligent coal mine auxiliary transportation system
CHANG Kai, LIU Zhigeng, YUAN Xiaoming
et al.
This paper introduces the development and application status of intelligent auxiliary transportation technology in open-pit and underground coal mines at home and abroad. Intelligent auxiliary transportation systems in open-pit coal mines have realized unmanned driving, automatic loading, automatic unloading, active obstacle avoidance, and intelligent dispatching of mining trucks in fixed sections, and have achieved good results in engineering practice. At present, auxiliary transportation intelligence in underground coal mines is still at the stage of single-machine intelligence; an intelligent auxiliary transportation system integrating vehicle scheduling, operation status monitoring, traffic command, material control, and other functions has not yet been formed. The main problems of intelligent auxiliary transportation systems in underground coal mines are analyzed: the underground positioning system has low precision and poor real-time performance; the dispatching system's functions lack effective integration; the driving assistance modules are incomplete; and unmanned driving technology lags behind, with test conditions lacking. Based on the relevant requirements for intelligent auxiliary transportation in the Coal Mine Intelligent Construction Guide (2021 edition), this paper puts forward the overall goal of constructing intelligent auxiliary transportation and, accordingly, designs an intelligent coal mine auxiliary transportation system framework. ① Coding and centralized loading of materials realizes whole-process information management and control of materials across storage, coding, loading, transportation, unloading, and recycling.
② Automatic loading/unloading and automatic connection realizes the automatic transfer of materials among rail locomotives, monorail cranes, trackless vehicles, and other auxiliary transportation modes. ③ Accurate positioning and intelligent navigation achieves accurate real-time positioning, route planning, and real-time navigation for personnel and transportation equipment. ④ Intelligent vehicle dispatching realizes comprehensive information display, data transmission, status monitoring, dispatching command, and health management for auxiliary transportation. ⑤ The driving assistance system comprises several intelligent subsystems, such as anti-fatigue driving warning, 360° panoramic surround-view monitoring, collision prevention, traffic sign identification, auxiliary braking for downhill driving, and adaptive lighting, improving the safety of locomotive operation. ⑥ The auxiliary operation robot realizes automated operation in underground auxiliary operation scenarios, reducing personnel numbers and improving the overall automation level of auxiliary operations. ⑦ Unmanned driving realizes normal unmanned operation of locomotives in underground coal mines. This research can provide a reference for the construction and development of intelligent auxiliary transportation systems.
Mining engineering. Metallurgy
An Approach for System Analysis with MBSE and Graph Data Engineering
Florian Schummer, Maximilian Hyba
Model-Based Systems Engineering aims at creating a model of a system under development, covering the complete system at a level of detail that allows its behavior to be defined and understood and enables any interface and work package to be defined based on the model. Once such a model is established, further benefits can be reaped, such as the analysis of complex technical correlations within the system. Various insights can be gained by representing the model as a formal graph and querying it. To enable such queries, a graph schema needs to be designed that allows the model to be transferred into a graph database. In this paper, we discuss the design of a graph schema and an MBSE modelling approach that enable in-depth system analysis and anomaly resolution in complex embedded systems. The schema and modelling approach are designed to answer questions such as: What happens if there is an electrical short in a component? Which other components are now offline, and which data can no longer be gathered? Or, if a condition cannot be met, which alternative routes can be established to reach a certain state of the system? We build on the use case of qualification and operations of a small spacecraft. Structural and behavioral elements of the MBSE model are transferred to a graph database where analyses are conducted on the system. The schema is implemented by an adapter from MagicDraw to Neo4j. A selection of complex analyses is shown on the example of the MOVE-II space mission.
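The short-circuit impact query described above is, structurally, a reachability traversal over "powers"/"supplies" edges: everything downstream of the failed component goes offline. In Neo4j this would be a path query; the underlying logic can be sketched in plain Python (the component names and the `powers` relation below are hypothetical examples, not taken from the MOVE-II model):

```python
from collections import deque

# Hypothetical "powers" edges: each component maps to the components it supplies.
POWERS = {
    "bus_5V": ["obc", "radio"],
    "obc": ["magnetometer", "sun_sensor"],
    "radio": [],
    "magnetometer": [],
    "sun_sensor": [],
}


def offline_after_failure(failed: str, powers: dict[str, list[str]]) -> set[str]:
    """Breadth-first traversal: collect every component reachable
    downstream of the failed one, i.e. everything that loses power."""
    offline, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for child in powers.get(node, []):
            if child not in offline:
                offline.add(child)
                queue.append(child)
    return offline
```

A graph database answers the same question declaratively over the full MBSE model, including behavioral conditions, instead of a hand-maintained adjacency dict.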
Face hallucination based on cluster consistent dictionary learning
Minqi Li, Xiangjian He, Kin‐Man Lam
et al.
Face hallucination is a super-resolution technique specially designed to reconstruct high-resolution faces from low-resolution faces. Most state-of-the-art algorithms leverage position-patch prior knowledge of human faces to better super-resolve face images. However, most of them assume the training face dataset is sufficiently large, well cropped, or aligned. This paper proposes a novel example-based face hallucination method based on cluster-consistent dictionary learning, under the assumption that human faces have similar facial structures. In this method, the paired face image patches are first labelled as face areas, including eyes, nose, mouth, and other parts, as well as non-face areas, without requiring the training face images to be cropped and aligned. Then, the training patches are clustered according to their labels and textures. A cluster-consistent dictionary is learned to represent the low-resolution and high-resolution patches. Finally, the high-resolution patches of the input low-resolution face image can be efficiently generated using adjusted anchored neighbourhood regression. By utilizing the labelled facial-part prior knowledge, the proposed method reconstructs more detail. Experimental results demonstrate that the authors' algorithm outperforms many state-of-the-art face hallucination techniques on different datasets.
Photography, Computer software