Vitrectomy is a common clinical treatment for fundus disease. Because the vitreous cannot regenerate, artificial substitutes are usually required to replace the natural vitreous and perform its functions after the operation. Silicone oil and gas, the most commonly used vitreous substitutes, have obvious drawbacks: they may require postoperative posture maintenance and can lead to visual impairment, cataract formation, and secondary surgery. In this study, an in situ cross-linked bionic hydrogel (OAHA-CDHA/Col) based on hyaluronic acid (HA) and collagen (Col) is constructed, with a gelling time suitable for clinical operation, excellent self-healing and fatigue resistance, and suitable mechanical and optical properties. The compatibility and degradability of the OAHA-CDHA/Col hydrogel are verified, as is its feasibility as a vitreous substitute in a rabbit vitrectomy model. Notably, the hydrogel demonstrates improved intraocular tolerance compared with silicone oil, with no cataracts, endophthalmitis, fundus lesions, or other complications observed. These findings position the OAHA-CDHA/Col hydrogel as a promising candidate for an ideal vitreous substitute.
Materials of engineering and construction. Mechanics of materials
This study examines the crystallization kinetics of Ni50−xMn39Sn11Fex (x = 0, 0.5, 2, 4 at.%) amorphous thin films prepared by DC magnetron sputtering. SEM and XRD confirm their amorphous structure. Non-isothermal DSC results show that the crystallization peak temperature increases from 542.7 K to 568.0 K as Fe content rises, while the apparent activation energy increases from 96.69 to 152.93 kJ mol^−1, indicating enhanced resistance to crystallization. Isothermal analysis yields Avrami exponents of 1.15–1.41 (average ≈1.2), corresponding to diffusion-controlled one-dimensional growth. Local activation-energy evaluation further reveals composition-dependent differences in nucleation and growth during various stages. These quantitative kinetic parameters clarify the role of Fe in altering crystallization behavior and support the optimization of annealing conditions for Ni-Mn-Sn-based functional thin films.
Materials of engineering and construction. Mechanics of materials, Chemical technology
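Apparent activation energies like those quoted in the abstract above are typically extracted from non-isothermal DSC peak temperatures with a Kissinger-type analysis, where ln(β/Tp²) is plotted against 1/Tp and the slope equals −Ea/R. A minimal sketch of that fit, using synthetic data (not values from the paper):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger_activation_energy(heating_rates, peak_temps):
    """Apparent activation energy Ea (J/mol) from the Kissinger plot:
    ln(beta / Tp^2) = -Ea / (R * Tp) + const,
    so a linear fit of ln(beta/Tp^2) against 1/Tp has slope -Ea/R."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(heating_rates, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R

# Synthetic check: heating rates implied by Ea = 130 kJ/mol and intercept C
Ea_true, C = 130e3, 10.0
tps = [540.0, 550.0, 560.0, 570.0]
betas = [tp ** 2 * math.exp(C - Ea_true / (R * tp)) for tp in tps]
print(round(kissinger_activation_energy(betas, tps)))  # → 130000
```

The fit recovers the activation energy exactly here because the synthetic peaks obey the Kissinger relation by construction; real DSC data would scatter around the line.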
Empirical research in reverse engineering and software protection is crucial for evaluating the efficacy of methods designed to protect software against unauthorized access and tampering. However, conducting such studies with professional reverse engineers presents significant challenges, including access to professionals and affordability. This paper explores the use of students as participants in empirical reverse engineering experiments, examining their suitability and the necessary training; the design of appropriate challenges; strategies for ensuring the rigor and validity of the research and its results; ways to maintain students' privacy, motivation, and voluntary participation; and data collection methods. We present a systematic literature review of existing reverse engineering experiments and user studies, a discussion of related work from the broader domain of software engineering that applies to reverse engineering experiments, an extensive discussion of our own experience running experiments in the context of a master-level software hacking and protection course, and recommendations based on this experience. Our findings aim to guide future empirical studies in reverse engineering, balancing practical constraints with the need for meaningful, reproducible results.
Esteban Parra, Sonia Haiduc, Preetha Chatterjee, et al.
Peer review is the main mechanism by which the software engineering community assesses the quality of scientific results. However, the rapid growth of paper submissions in software engineering venues has outpaced the availability of qualified reviewers, creating a growing imbalance that risks constraining and negatively impacting the long-term growth of the Software Engineering (SE) research community. Our vision of the Future of the SE research landscape involves a more scalable, inclusive, and resilient peer review process that incorporates additional mechanisms for: 1) attracting and training newcomers to serve as high-quality reviewers, 2) incentivizing more community members to serve as peer reviewers, and 3) cautiously integrating AI tools to support a high-quality review process.
The discussion around AI-Engineering, that is, Software Engineering (SE) for AI-enabled Systems, cannot ignore a crucial class of software systems that are increasingly becoming AI-enhanced: Those used to enable or support the SE process, such as Computer-Aided SE (CASE) tools and Integrated Development Environments (IDEs). In this paper, we study the energy efficiency of these systems. As AI becomes seamlessly available in these tools and, in many cases, is active by default, we are entering a new era with significant implications for energy consumption patterns throughout the Software Development Lifecycle (SDLC). We focus on advanced Machine Learning (ML) capabilities provided by Large Language Models (LLMs). Our proposed approach combines Retrieval-Augmented Generation (RAG) with Prompt Engineering Techniques (PETs) to enhance both the quality and energy efficiency of LLM-based code generation. We present a comprehensive framework that measures real-time energy consumption and inference time across diverse model architectures ranging from 125M to 7B parameters, including GPT-2, CodeLlama, Qwen 2.5, and DeepSeek Coder. These LLMs, chosen for practical reasons, are sufficient to validate the core ideas and provide a proof of concept for more in-depth future analysis.
Heterogeneity has historically existed in the organization of software engineering (SE) research, namely the coexistence of the funded research model and the hands-on model, and it has made SE a thriving interdisciplinary field over the last 50 years. However, the funded research model has recently become dominant in SE research, indicating that this heterogeneity is seriously and systematically threatened. In this essay, we first explain why heterogeneity is needed in the organization of SE research, then present the current trend in SE research, as well as its consequences and potential futures. The choice is in our hands, and we urge our community to seriously consider maintaining heterogeneity in the organization of software engineering research.
Software engineering researchers from countries with smaller economies, particularly non-English-speaking ones, represent valuable minorities within the software engineering community. As researchers from Poland, we represent such a country. We analyzed the ICSE FOSE (Future of Software Engineering) community survey through reflexive thematic analysis to present our viewpoint on key software community issues. We believe that the main problem is the growing research-industry gap, which particularly impacts smaller communities and small local companies. Based on this analysis and our experiences, we present a set of recommendations for improvements that would enhance software engineering research and industrial collaborations in smaller economies.
Amidst the rapid global expansion of smart grids, ensuring the safety and reliability of power transmission systems has become paramount. Insulators are critical components of high-voltage transmission lines, providing both electrical insulation and mechanical support. However, their exposure to electrical, mechanical, and environmental stressors renders them a vulnerable point within the system. Defective insulators are a major cause of failures in power transmission systems. Consequently, the early and accurate detection of these defects is pivotal for maintaining the integrity and reliability of the power grid. To address this challenge, this study proposes InsDD-YOLO, a novel object detection architecture enhanced from the YOLOv13 framework. The model incorporates a suite of strategic enhancements, including an improved DSConv (IDSConv) module for robust feature extraction, a streamlined Neck architecture augmented with a feature stream from a shallower layer (B2) to improve small-target detection, and a direct Head connection mechanism to maximize the preservation of fine-grained details. Experimental results demonstrate that InsDD-YOLO achieves superior performance, reaching an mAP0.5 of 90.1% and an mAP0.5:0.95 of 46.4%, outperforming the baseline YOLOv13 model by a significant 5.0% in mAP0.5. With an inference time of just 5.4 ms, the proposed model not only establishes a new benchmark for accuracy but also demonstrates an effective trade-off between performance and speed, underscoring its significant potential for deployment in real-time, automated power grid monitoring systems.
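The mAP0.5 and mAP0.5:0.95 metrics above count a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box exceeds 0.5, or a sweep of thresholds from 0.5 to 0.95. A minimal IoU sketch, independent of the InsDD-YOLO implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (1/7, below the 0.5 cutoff)
```

Under an mAP0.5 protocol, the pair above would therefore not count as a correct detection.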
Agile software development relies on self-organized teams, underlining the importance of individual responsibility. How developers take responsibility and build ownership is influenced by external factors such as architecture and development methods. This paper examines the existing literature on ownership in software engineering and in psychology, and argues that a more comprehensive view of ownership in software engineering has great potential to improve software teams' work. Initial positions on the issue are offered for discussion and to lay the foundations for further research.
Large Language Models (LLMs) are increasingly integrated into software applications, giving rise to a broad class of prompt-enabled systems, in which prompts serve as the primary 'programming' interface for guiding system behavior. Building on this trend, a new software paradigm, promptware, has emerged, which treats natural language prompts as first-class software artifacts for interacting with LLMs. Unlike traditional software, which relies on formal programming languages and deterministic runtime environments, promptware is based on ambiguous, unstructured, and context-dependent natural language and operates on LLMs as runtime environments, which are probabilistic and non-deterministic. These fundamental differences introduce unique challenges in prompt development. In practice, prompt development remains largely ad hoc and relies heavily on time-consuming trial-and-error, a challenge we term the promptware crisis. To address this, we propose promptware engineering, a new methodology that adapts established Software Engineering (SE) principles to prompt development. Drawing on decades of success in traditional SE, we envision a systematic framework encompassing prompt requirements engineering, design, implementation, testing, debugging, evolution, deployment, and monitoring. Our framework re-contextualizes emerging prompt-related challenges within the SE lifecycle, providing principled guidance beyond ad-hoc practices. Without the SE discipline, prompt development is likely to remain mired in trial-and-error. This paper outlines a comprehensive roadmap for promptware engineering, identifying key research directions and offering actionable insights to advance the development of prompt-enabled systems.
The rapid emergence of generative AI models such as Large Language Models (LLMs) has demonstrated their utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. However, existing literature provides limited guidance on how prompt engineering can be leveraged specifically for RE activities. The objective of this study is to explore the applicability of existing prompt engineering guidelines for the effective usage of LLMs within RE. To achieve this goal, we began by conducting a systematic review of primary literature to compile a non-exhaustive list of prompt engineering guidelines. Then, we conducted interviews with RE experts to present the extracted guidelines and gain insights into the advantages and limitations of their application within RE. Our literature review indicates a shortage of prompt engineering guidelines for domain-specific activities, specifically for RE. Our proposed mapping contributes to addressing this shortage. We conclude our study by identifying an important future line of research within this field.
Large language model-specific inference engines (hereafter, LLM inference engines) have become a fundamental component of modern AI infrastructure, enabling the deployment of LLM-powered applications (LLM apps) across cloud and local devices. Despite their critical role, LLM inference engines are prone to bugs due to the immense resource demands of LLMs and the complexities of cross-platform compatibility. However, a systematic understanding of these bugs remains lacking. To bridge this gap, we present the first empirical study on bugs in LLM inference engines. We mine the official repositories of 5 widely adopted LLM inference engines, constructing a comprehensive dataset of 929 real-world bugs. Through a rigorous open coding process, we analyze these bugs to uncover their symptoms, root causes, commonality, fix effort, fix strategies, and temporal evolution. Our findings reveal six bug symptom types and a taxonomy of 28 root causes, shedding light on the key challenges in bug detection and localization within LLM inference engines. Based on these insights, we propose a series of actionable implications for researchers, inference engine vendors, and LLM app developers, along with general guidelines for developing LLM inference engines.
Dispersion in optical coherence tomography (OCT) poses a challenge that is exacerbated by increased spectral bandwidth, which leads to image blur and feature loss. In this paper, we present a straightforward and cost-effective approach for dispersion compensation in OCT. To achieve this, we employed a pixel-to-pixel (Pix2Pix) generative adversarial network (GAN) architecture customized for image-to-image translation. Two data groups with varying amounts of training image data and epochs were used. The Pix2Pix GAN was trained to generate clear OCT images from the corresponding dispersion-affected OCT images in paired datasets. According to the experimental results, the Pix2Pix GAN technique demonstrated a substantial improvement over the basic GAN. Specifically, it improves the peak signal-to-noise ratio (PSNR) by 159%, the structural similarity index (SSIM) by 370%, and the Fréchet inception distance (FID) by 274%. These outcomes indicate that the proposed model can generate images with resilience and effectiveness, particularly when dealing with dispersion-affected OCT data.
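The PSNR figure quoted above is a standard reconstruction-fidelity metric. As a minimal sketch of how it is computed (a generic implementation, not the authors' evaluation code):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 10 grey levels on 8-bit pixels
print(round(psnr([0, 0, 0, 0], [10, 10, 10, 10]), 2))  # → 28.13 (dB)
```

Higher PSNR means lower mean squared error relative to the dynamic range; note that for FID, by contrast, lower values are better, so the reported FID improvement corresponds to a reduction.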
While current research predominantly focuses on image-based colorization, the domain of video-based colorization remains relatively unexplored. Many existing video colorization techniques operate frame-by-frame, often overlooking the critical aspect of temporal coherence between successive frames. This approach can result in inconsistencies across frames, leading to undesirable effects like flickering or abrupt color transitions between frames. To address these challenges, we combine the generative capabilities of a fine-tuned latent diffusion model with an autoregressive conditioning mechanism to ensure temporal consistency in automatic speaker video colorization. We demonstrate strong improvements on established quality metrics compared to existing methods, namely, PSNR, SSIM, FID, FVD, NIQE and BRISQUE. Specifically, we achieve an 18% improvement in performance when FVD is employed as the evaluation metric. Furthermore, we performed a subjective study, where users preferred LatentColorization to the existing state-of-the-art DeOldify 80% of the time. Our dataset combines conventional datasets and videos from television/movies. A short demonstration of our results can be seen in example videos available at https://youtu.be/vDbzsZdFuxM.
The cohesion of an object-oriented class refers to the relatedness of its methods and attributes. Constructors, destructors, and access methods are special types of methods featuring unique characteristics that can artificially affect class cohesion quantification. Methods within a class can also directly or transitively invoke each other, representing another cohesion aspect not considered by most existing cohesion measures. The impact of considering special methods (SPs) and transitive relations (TRs) in cohesion measurement on the ability of the measures to predict inheritance reusability has yet to be investigated. In this paper, we empirically explored this effect. We applied a statistical technique to test the significance of the cohesion value changes across seven scenarios of ignoring or considering SPs and TRs. In addition, we applied a machine learning-based technique to build inheritance reusability prediction models using each of the considered measures and scenarios, evaluated the classification performance of the prediction models, and statistically compared the inheritance reusability prediction results. The results show that for most of the considered measures, ignoring or considering SPs and TRs changed the cohesion values and the corresponding predictions significantly. Based on the study findings, when building inheritance reusability prediction models, software engineers are advised to 1) combine cohesion with other quality factors; 2) exclude the TRs from cohesion quantification; and 3) decide whether to consider or ignore SPs in cohesion quantification based on the selected measure(s) to be used in the prediction model, as this decision differs from one measure to another.
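To illustrate why transitive relations can change cohesion values, here is a toy TCC-style score in which two methods are related if they use a shared attribute, either directly or (optionally) through transitively invoked methods. The class layout and names are illustrative assumptions, not the paper's measures or data:

```python
from itertools import combinations

def effective_attrs(method, attr_use, calls):
    """Attributes used by `method`, plus those of methods it transitively invokes."""
    seen, stack, attrs = set(), [method], set()
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        attrs |= attr_use.get(m, set())
        stack.extend(calls.get(m, ()))
    return attrs

def cohesion(attr_use, calls=None, transitive=False):
    """Fraction of method pairs sharing at least one attribute (TCC-style)."""
    calls = calls or {}
    use = {m: effective_attrs(m, attr_use, calls) if transitive else attr_use[m]
           for m in attr_use}
    pairs = list(combinations(use, 2))
    related = sum(1 for a, b in pairs if use[a] & use[b])
    return related / len(pairs)

attr_use = {"m1": {"a"}, "m2": {"a", "b"}, "m3": {"c"}}
calls = {"m3": ["m2"]}  # m3 invokes m2
print(cohesion(attr_use))                          # → 0.333... (only m1-m2 related)
print(cohesion(attr_use, calls, transitive=True))  # → 1.0 (the TR relates m3 to both)
```

The same class thus scores 1/3 or 1.0 depending on whether TRs are counted, which is the kind of measurement shift the study quantifies.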
Potential evaluation to assist demand response decisions has garnered significant attention with the development of new power systems. However, existing data-driven methods struggle to properly exploit multivariate features, and the process of response potential evaluation is unclear. Therefore, the authors propose an evaluation method that fuses expert features with multi-image inputs and analyse the model's evaluation process based on gradients. First, typical load profiles are extracted by the proposed procedure. Next, features derived from expert knowledge are calculated from the perspectives of adjustability, regularity, and sensitivity of electricity usage. Additionally, the typical load profile's recurrence plot, Markov transition field, and Gramian angular field are created and combined into a colour image as input. Then, the evaluation results are obtained by a two-stream neural network fusing multivariate features. In the experiments, the proposed method is validated and discussed by comparison with many existing methods using London household users' data under the time-of-use price, providing new insights for demand response potential evaluation.
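The Gramian angular field mentioned above encodes a 1-D load profile as an image: the series is rescaled to [−1, 1], each value is mapped to an angle φ = arccos(x), and pixel (i, j) is set to cos(φi + φj). A minimal sketch under those standard definitions (not the authors' pipeline):

```python
import math

def gramian_angular_field(series):
    """Gramian angular summation field of a 1-D series, as a nested list."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]  # rescale to [-1, 1]
    phis = [math.acos(x) for x in scaled]
    return [[math.cos(p_i + p_j) for p_j in phis] for p_i in phis]

gaf = gramian_angular_field([0.0, 1.0, 2.0, 3.0])  # rescales to [-1, -1/3, 1/3, 1]
# Diagonal entries equal cos(2*arccos(x)) = 2*x^2 - 1 for the rescaled values
print(round(gaf[0][0], 3), round(gaf[0][3], 3))  # → 1.0 -1.0
```

Stacking such per-profile images as colour channels alongside the recurrence plot and Markov transition field gives the multi-image input the two-stream network consumes.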
Rudrajit Choudhuri, Ambareesh Ramakrishnan, Amreeta Chatterjee, et al.
Generative AI (genAI) tools (e.g., ChatGPT, Copilot) have become ubiquitous in software engineering (SE). As SE educators, it behooves us to understand the consequences of genAI usage among SE students and to create a holistic view of where these tools can be successfully used. Through 16 reflective interviews with SE students, we explored their academic experiences of using genAI tools to complement SE learning and implementations. We uncover the contexts where these tools are helpful and where they pose challenges, along with examining why these challenges arise and how they impact students. We validated our findings through member checking and triangulation with instructors. Our findings provide practical considerations of where and why genAI should (not) be used in the context of supporting SE students.
In this study, mechanical models of a multilayer combined extrusion cylinder and a steel-wire-winding extrusion cylinder were established and compared using finite element simulation and existing experimental cases, providing theoretical support for the selection of an ultrahigh-pressure extrusion cylinder. A comparative analysis of ultrahigh-pressure extrusion structures was carried out. A mathematical optimization model was established based on the mechanical model, and the ultimate bearing capacities of the schemes were compared. Additionally, the winding mode and the number of core layers of the extrusion cylinder were compared and analyzed, providing a theoretical basis for the parameter design of the steel-wire-winding ultrahigh-pressure extrusion cylinder. This work holds theoretical significance and practical value for the promotion and application of ultrahigh-pressure hydrostatic extrusion technology.
Materials of engineering and construction. Mechanics of materials, Production of electric energy or power. Powerplants. Central stations
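The cylinder models compared above build on classical thick-walled cylinder theory. As a hedged illustration (Lamé's textbook solution, not the authors' finite element model), the hoop stress at radius r in a single-wall cylinder of inner radius ri and outer radius ro under internal pressure p only is σθ(r) = p·ri²·(1 + ro²/r²)/(ro² − ri²):

```python
def lame_hoop_stress(p, ri, ro, r):
    """Hoop stress (same units as p) at radius r in a thick-walled cylinder
    with inner radius ri and outer radius ro, loaded by internal pressure p only."""
    return p * ri ** 2 / (ro ** 2 - ri ** 2) * (1.0 + ro ** 2 / r ** 2)

# Illustrative numbers: 600 MPa internal pressure, 50 mm bore, 150 mm outer radius
p, ri, ro = 600.0, 50.0, 150.0
print(lame_hoop_stress(p, ri, ro, ri))  # → 750.0 (MPa, peak at the bore)
print(lame_hoop_stress(p, ri, ro, ro))  # → 150.0 (MPa, at the outer surface)
```

The sharp stress peak at the bore, exceeding the applied pressure itself, is what multilayer combination and steel-wire winding aim to counteract by introducing compressive prestress in the inner layers.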