Requirements Volatility in Software Architecture Design: An Exploratory Case Study
Sanja Aaramaa, Sandun Dasanayake, Markku Oivo
et al.
Requirements volatility is a major issue in software (SW) development, causing problems such as project delays and cost overruns. Even though there is a considerable amount of research related to requirements volatility, the majority of it is inclined toward project management aspects. The relationship between SW architecture design and requirements volatility has not been researched widely, even though changing requirements may, for example, lead to higher defect density during testing. An exploratory case study was conducted to study how requirements volatility affects SW architecture design. Fifteen semi-structured, thematic interviews were conducted in the case company, which provides a selection of software products for business customers and consumers. The research revealed factors, such as requirements uncertainty and a dynamic business environment, that cause requirements volatility in the case company. The study identified the challenges that requirements volatility posed to SW architecture design, including scheduling and architectural technical debt. In addition, this study discusses means of mitigating the factors that cause requirements volatility and of addressing the challenges it poses. SW architects are strongly influenced by requirements volatility; thus, understanding the factors that cause it, as well as the means to mitigate the resulting challenges, has high industrial relevance.
Effect of Pixel Offset Adjustments for XY Plane Dimensional Compensation in Digital Light Processing 3D Printing on the Surface Trueness and Fit of Zirconia Crowns
KeunBaDa Son, Ji-Min Lee, Kyoung-Jun Jang
et al.
This study aimed to evaluate the effect of pixel offset adjustments in digital light processing (DLP) three-dimensional (3D) printing on the marginal and internal fit and surface trueness of zirconia crowns. Zirconia crowns were designed using dental computer-aided design software (Dentbird; Imagoworks) and fabricated with a vat photopolymerization DLP 3D printer (TD6+; 3D Controls) under three pixel offset conditions (−1, 0, and 1). Pixel offset refers to the controlled modification of the outermost pixels in the XY plane during printing to compensate for potential dimensional inaccuracies. The marginal and internal fit was assessed using a triple-scan protocol and quantified using root mean square (RMS) values. Surface trueness was evaluated by measuring the RMS, positive, and negative errors between the designed and fabricated crowns. Statistical analyses included one-way ANOVA and Pearson correlation analysis (α = 0.05). Pixel offset had a significant effect on fit accuracy and surface trueness (<i>p</i> < 0.05). Higher pixel offsets increased marginal discrepancies (<i>p</i> = 0.004), with the marginal gap exceeding 120 µm at a pixel offset of 1 (114.5 ± 14.6 µm), while a pixel offset of −1 (85.5 ± 18.6 µm) remained within acceptable limits (<i>p</i> = 0.003). Surface trueness worsened with increasing pixel offset, showing greater positive errors (<i>p</i> < 0.001). Optimizing pixel offset in DLP 3D printing is crucial to ensuring clinically acceptable zirconia crowns. Improper settings may increase marginal discrepancies and surface errors, compromising restoration accuracy.
Biotechnology, Medicine (General)
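The trueness metrics reported above reduce to simple statistics over per-point deviations between the designed and scanned surfaces. A minimal sketch (values are illustrative only, not the study's data or software):

```python
import math

def rms_error(deviations):
    """Root mean square of signed point deviations (e.g., in micrometres)
    between a designed crown surface and a scanned, fabricated one."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

def signed_error_split(deviations):
    """Mean positive (overbuild) and mean negative (underbuild) errors,
    as used when reporting surface trueness."""
    pos = [d for d in deviations if d > 0]
    neg = [d for d in deviations if d < 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(pos), mean(neg)

# Hypothetical per-point deviations (µm) from a design/scan comparison
devs = [12.0, -8.0, 15.0, -5.0, 10.0]
print(round(rms_error(devs), 2))  # → 10.56
```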
FE-DARFormer: Image Desnowing Model Based on Frequency Enhancement and Degradation-Aware Routing Transformer
QIN Yi, ZHAN Pengxiang, XIAN Feng, LIU Chenlong, WANG Minghui
The goal of image desnowing is to restore clear scene information from images degraded by complex snowy scenes. Unlike the regularity and semi-transparency of rain, snow exhibits various forms and scales of degradation, with severely degraded regions often obstructing important scene details. Recent methods have employed self-attention mechanisms to address different degradation phenomena. However, global self-attention computation across all image regions is computationally expensive, leading these methods to restrict attention to smaller windows. Yet, due to the occlusion effects in severely degraded areas, the recovery of these regions relies heavily on capturing information from surrounding areas, which results in a receptive field bottleneck, limiting the ability to aggregate sufficient information. As a result, these methods struggle to effectively restore large-scale degraded regions. To improve desnowing performance, this paper proposes a novel approach, introducing a new network architecture called FE-DARFormer, which combines a Degradation-Aware Routing Transformer and a Dual-Frequency Enhancement Transformer. FE-DARFormer dynamically routes and applies global self-attention to severely degraded regions, enabling a global receptive field for effective restoration of large degraded areas while reducing computational cost. Additionally, it uses discrete wavelet decomposition to handle multi-scale snow degradation, enhancing the recovery of diverse snowflake shapes and textures.
Computer software, Technology (General)
Curating Model Problems for Software Designing
Mary Shaw, Marian Petre
Many disciplines use standard examples for education and to share and compare research results. The examples are rich enough to study from multiple points of view; they are often called model problems. Software design lacks such a community resource. We propose an activity for Designing 2025 in which participants improve some existing model problem descriptions and initiate new ones -- with a focus on use in software design education, plus potential utility in research.
Analyzing the Evolution and Maintenance of Quantum Software Repositories
Krishna Upadhyay, Vinaik Chhetri, A. B. Siddique
et al.
Quantum computing is rapidly advancing, but quantum software development faces significant challenges, including a steep learning curve, high hardware error rates, and a lack of mature engineering practices. This study conducts a large-scale mining analysis of over 21,000 GitHub repositories, containing 1.2 million commits from more than 10,000 developers, to examine the evolution and maintenance of quantum software. We analyze repository growth, programming language and framework adoption, and contributor trends, revealing a 200% increase in repositories and a 150% rise in contributors since 2017. Additionally, we investigate software development and maintenance practices, showing that perfective commits dominate (51.76%), while the low occurrence of corrective commits (18.54%) indicates potential gaps in bug resolution. Furthermore, 34% of reported issues are quantum-specific, highlighting the need for specialized debugging tools beyond conventional software engineering approaches. This study provides empirical insights into the software engineering challenges of quantum computing, offering recommendations to improve development workflows, tooling, and documentation. We are also open-sourcing our dataset to support further analysis by the community and to guide future research and tool development for quantum computing. The dataset is available at: https://github.com/kriss-u/QRepoAnalysis-Paper
Tracing the Lifecycle of Architecture Technical Debt in Software Systems: A Dependency Approach
Edi Sutoyo, Paris Avgeriou, Andrea Capiluppi
Architectural technical debt (ATD) represents trade-offs in software architecture that accelerate initial development but create long-term maintenance challenges. ATD, in particular when self-admitted, impacts the foundational structure of software, making it difficult to detect and resolve. This study investigates the lifecycle of ATD, focusing on how it affects i) the connectivity between classes and ii) the frequency of file modifications. We aim to understand how ATD evolves from introduction to repayment and its implications on software architectures. Our empirical approach was applied to a dataset of SATD items extracted from various software artifacts. We isolated ATD instances, filtered for architectural indicators, and calculated dependencies at different lifecycle stages using FAN-IN and FAN-OUT metrics. Statistical analyses, including the Mann-Whitney U test and Cliff's Delta, were used to assess the significance and effect size of connectivity and dependency changes over time. We observed that ATD repayment increased class connectivity, with FAN-IN increasing by 57.5% on average and FAN-OUT by 26.7%, suggesting a shift toward centralization and increased architectural complexity after repayment. Moreover, ATD files were modified less frequently than Non-ATD files, with changes accumulated in high-dependency portions of the code. Our study shows that resolving ATD improves software quality in the short-term, but can make the architecture more complex by centralizing dependencies. Also, even if dependency metrics (like FAN-IN and FAN-OUT) can help understand the impact of ATD, they should be combined with other measures to capture other effects of ATD on software maintainability.
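The FAN-IN and FAN-OUT metrics used above are simple counts over a class-dependency graph. A minimal sketch, assuming dependencies are given as (source, target) edges (class names and values are illustrative):

```python
from collections import defaultdict

def fan_metrics(dependencies):
    """Compute FAN-IN (incoming) and FAN-OUT (outgoing) dependency counts
    per class from a list of (source, target) edges."""
    fan_in = defaultdict(int)
    fan_out = defaultdict(int)
    for src, dst in dependencies:
        fan_out[src] += 1
        fan_in[dst] += 1
    return dict(fan_in), dict(fan_out)

def pct_change(before, after):
    """Percentage change in a metric between two lifecycle stages."""
    return (after - before) / before * 100.0

# Hypothetical dependency edges before ATD repayment
before = [("A", "B"), ("A", "C"), ("D", "B")]
fi, fo = fan_metrics(before)
```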
Propagation-Based Vulnerability Impact Assessment for Software Supply Chains
Bonan Ruan, Zhiwei Lin, Jiahao Liu
et al.
Identifying the impact scope and scale is critical for software supply chain vulnerability assessment. However, existing studies face substantial limitations. First, prior studies either work at coarse package-level granularity, producing many false positives, or fail to accomplish whole-ecosystem vulnerability propagation analysis. Second, although vulnerability assessment indicators like CVSS characterize individual vulnerabilities, no metric exists to specifically quantify the dynamic impact of vulnerability propagation across software supply chains. To address these limitations and enable accurate and comprehensive vulnerability impact assessment, we propose a novel approach: (i) a hierarchical worklist-based algorithm for whole-ecosystem and call-graph-level vulnerability propagation analysis and (ii) the Vulnerability Propagation Scoring System (VPSS), a dynamic metric to quantify the scope and evolution of vulnerability impacts in software supply chains. We implement a prototype of our approach in the Java Maven ecosystem and evaluate it on 100 real-world vulnerabilities. Experimental results demonstrate that our approach enables effective ecosystem-wide vulnerability propagation analysis, and provides a practical, quantitative measure of vulnerability impact through VPSS.
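The paper's call-graph-level analysis and VPSS metric are more involved, but the core worklist idea can be sketched at package granularity, assuming a map from each package to its dependents (all names hypothetical):

```python
from collections import deque

def propagate(vulnerable, reverse_deps):
    """Worklist-based propagation: starting from packages containing a
    vulnerability, mark every package that transitively depends on one.
    reverse_deps maps a package to the packages that depend on it."""
    impacted = set(vulnerable)
    worklist = deque(vulnerable)
    while worklist:
        pkg = worklist.popleft()
        for dependent in reverse_deps.get(pkg, []):
            if dependent not in impacted:
                impacted.add(dependent)
                worklist.append(dependent)
    return impacted

# Hypothetical ecosystem: lib-core is vulnerable; lib-a and app depend on it
rdeps = {"lib-core": ["lib-a"], "lib-a": ["app"], "lib-b": ["app"]}
print(sorted(propagate({"lib-core"}, rdeps)))  # prints ['app', 'lib-a', 'lib-core']
```

The hierarchical, call-graph-level version in the paper would additionally check whether the vulnerable function is actually reachable before marking a dependent as impacted.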
A Functional Software Reference Architecture for LLM-Integrated Systems
Alessio Bucaioni, Martin Weyssow, Junda He
et al.
The integration of large language models into software systems is transforming capabilities such as natural language understanding, decision-making, and autonomous task execution. However, the absence of a commonly accepted software reference architecture hinders systematic reasoning about their design and quality attributes. This gap makes it challenging to address critical concerns like privacy, security, modularity, and interoperability, which are increasingly important as these systems grow in complexity and societal impact. In this paper, we describe our \textit{emerging} results for a preliminary functional reference architecture as a conceptual framework to address these challenges and guide the design, evaluation, and evolution of large language model-integrated systems. We identify key architectural concerns for these systems, informed by current research and practice. We then evaluate how the architecture addresses these concerns and validate its applicability using three open-source large language model-integrated systems in computer vision, text processing, and coding.
Deep Learning–Based Automated Imaging Classification of ADPKD
Youngwoo Kim, Seonah Bu, Cheng Tao
et al.
Introduction: The Mayo imaging classification model (MICM) requires a prestep qualitative assessment to determine whether a patient is in class 1 (typical) or class 2 (atypical), where patients assigned to class 2 are excluded from the MICM application. Methods: We developed a deep learning–based method to automatically classify class 1 and 2 from magnetic resonance (MR) images and provide classification confidence, utilizing abdominal T2-weighted MR images from 486 subjects, where transfer learning was applied. In addition, an explainable artificial intelligence (XAI) method was used to enhance the explainability of the automated classification results. For performance evaluation, confusion matrices were generated, and receiver operating characteristic curves were drawn to measure the area under the curve. Results: The proposed method showed excellent performance for the classification of class 1 (97.7%) and class 2 (100%), with a combined test accuracy of 98.01%. The precision and recall for predicting class 1 were 1.00 and 0.98, respectively, with an F1-score of 0.99, whereas those for predicting class 2 were 0.87 and 1.00, respectively, with an F1-score of 0.93. The weighted averages of precision and recall were both 0.98. Classification confidence scores were provided, and the XAI method clearly highlighted the regions contributing to the classification. Conclusion: The proposed automated method can classify class 1 and 2 cases as accurately as a human expert. This method may be a useful tool to facilitate clinical trials investigating different types of kidney morphology and for the clinical management of patients with autosomal dominant polycystic kidney disease (ADPKD).
Diseases of the genitourinary system. Urology
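The precision, recall, and F1 values reported above follow directly from confusion-matrix counts. A minimal sketch with illustrative counts (not the study's data):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts:
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 98 correct class-1 predictions, 0 false positives,
# 2 class-1 cases missed
p, r, f1 = prf(98, 0, 2)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 1.0 0.98 0.99
```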
Bayesian optimization acquisition functions for accelerated search of cluster expansion convex hull of multi-component alloys
Dongsheng Wen, Victoria Tucker, Michael S. Titus
Atomistic simulations are crucial for predicting material properties and understanding phase stability, essential for materials selection and development. However, the high computational cost of density functional theory calculations challenges the design of materials with complex structures and composition. This study introduces new data acquisition strategies using Bayesian-Gaussian optimization that efficiently integrate the geometry of the convex hull to optimize the yield of batch experiments. We developed uncertainty-based acquisition functions to prioritize the computation tasks of configurations of multi-component alloys, enhancing our ability to identify the ground-state line. Our methods were validated across diverse materials systems including Co-Ni alloys, Zr-O compounds, Ni-Al-Cr ternary alloys, and a planar defect system in intermetallic (Ni₁₋ₓ,Coₓ)₃Al. Compared to traditional genetic algorithms, our strategies reduce training parameters and user interaction, cutting the number of experiments needed to accurately determine the ground-state line by over 30%. These approaches can be expanded to multi-component systems and integrated with cost functions to further optimize experimental designs.
Materials of engineering and construction. Mechanics of materials, Computer software
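For a binary system, the ground-state line referenced above is the lower convex hull of formation energy versus composition. A minimal sketch using Andrew's monotone-chain algorithm (compositions and energies are hypothetical):

```python
def cross(o, a, b):
    """2D cross product of vectors o→a and o→b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def ground_state_line(points):
    """Lower convex hull of (composition x, formation energy E) points:
    the ground-state line of a binary system."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # Pop points that lie on or above the chord to the new point
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical formation energies (eV/atom) vs. composition x;
# (0.5, -0.05) sits above the hull and is not a ground state
data = [(0.0, 0.0), (0.25, -0.10), (0.5, -0.05), (0.75, -0.12), (1.0, 0.0)]
print(ground_state_line(data))
```

Configurations whose energies land above this line are metastable; acquisition functions like those in the paper prioritize calculations most likely to change the hull.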
Two Stage Rumor Blocking Method Based on EHEM in Social Networks
LIU Wei, WU Fei, GUO Zhen, CHEN Ling
The rise of online social networks has brought about a series of challenges and risks, including the spread of false and malicious rumors, which can mislead the public and disrupt social stability. Therefore, blocking the spread of rumors has become a hot topic in the field of social networks. While significant efforts have been made in rumor blocking, there still exist limitations in accurately describing information propagation in social networks. To address this issue, this paper proposes a novel model, the extended heat energy model (EHEM), to characterize information propagation. EHEM fully takes into consideration several key aspects of information propagation, including the dynamic adjustment mechanism of node activation probabilities, the cascading mechanism of information propagation, and the dynamic transition mechanism of node states. By incorporating these factors, the EHEM provides a more precise representation of the explosive and complex nature of information propagation. Furthermore, taking into account the possibility of belief transition from rumors to truth for nodes that initially believe in rumors in the real world, this paper introduces a correction threshold to determine whether a node undergoes belief transformation. Additionally, the importance of nodes determines their influence spreading. Therefore, a multidimensional quality measure of nodes is proposed to assess their importance. Finally, a two-stage rumor containment (TSRC) algorithm is proposed, which first prunes the network using the multidimensional quality measure of nodes and then selects the optimal set of positive seeds through simulations. Experimental results on four real-world datasets demonstrate that the proposed algorithm outperforms six other comparative algorithms, including Random, Betweenness, MD, PR, PWD, and ContrId, on multiple metrics.
Computer software, Technology (General)
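The EHEM itself is not reproducible from the abstract alone, but the cascading propagation it extends can be illustrated with a minimal independent-cascade simulation. The graph, seed set, and fixed activation probability below are all hypothetical; a fixed probability is precisely what EHEM replaces with a dynamically adjusted one:

```python
import random

def cascade(graph, seeds, prob, rng):
    """Minimal independent-cascade spread: each newly activated node gets
    one chance to activate each inactive neighbour with probability prob."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < prob:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

# Toy network: node 1 seeds the rumor; with prob=1.0 it reaches everyone
g = {1: [2, 3], 2: [4], 3: [4], 4: []}
spread = cascade(g, {1}, 1.0, random.Random(0))
```

A rumor-blocking method would then choose "positive" seeds whose truth cascade reaches susceptible nodes before the rumor cascade does.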
Assessment of Apple's Object Capture photogrammetry API for rapidly creating research-quality cultural heritage 3D models
Stance Hurst, Lauren Franklin, Eileen Johnson
Photogrammetry is a significant tool museums utilize to produce high-quality 3D models for research and exhibit content. As advancements in computer hardware and software continue, it is crucial to assess the effectiveness of photogrammetry software in producing research-quality 3D models. This study evaluates the efficacy of Apple's Object Capture photogrammetry API to create high-quality 3D models. The results indicate that Object Capture is a viable option to create research-quality models efficiently for a variety of natural and cultural heritage objects. Object Capture is notable for its minimal need for masking backgrounds within images and its ability to create models with fewer than 100 images and process 3D models in under 10 minutes.
Extending and Applying Automated HERMES Software Publication Workflows
Sophie Kernchen, Michael Meinel, Stephan Druskat
et al.
Research software is an important output of research and must be published according to the FAIR Principles for Research Software. This can be achieved by publishing software with metadata under a persistent identifier. HERMES is a tool that leverages continuous integration to automate the publication of software with rich metadata. In this work, we describe the HERMES workflow itself, and how to extend it to meet the needs of specific research software metadata or infrastructure. We introduce the HERMES plugin architecture and provide the example of creating a new HERMES plugin that harvests metadata from a metadata source in source code repositories. We show how to use HERMES as an end user, both via the command line interface, and as a step in a continuous integration pipeline. Finally, we report three informal case studies whose results provide a preliminary evaluation of the feasibility and applicability of HERMES workflows, and the extensibility of the hermes software package.
Finite element model of reinforced concrete interior beam-column joints subjected to cyclic loading
Maulana Hafiz, Syaifa Lala, Nabila Waode Ulya
et al.
This paper presents a finite element model of beam-column joints subjected to cyclic loads. This study aims to numerically obtain the capacity of beam-column joints without shear reinforcement in the joints. The variable used in the specimens is the beam's longitudinal reinforcement ratio. The analytical study was carried out using ATENA 2D, a computer program based on the non-linear finite element method. In this analytical study, the beam-column joints are loaded cyclically to obtain the envelope curve of the hysteretic response. The results of this numerical analysis are then compared with the test results. The comparison shows that the model used in ATENA 2D can approximate the test results well. In addition, the crack pattern obtained from the analysis is close to that of the test results.
Mundo Bit Byte - A digital mobile game to disseminate female personalities that made history in Computing
Aleteia Araujo, Ana Júlia Luziano Briceño, Ana Sofia S. Silvestre
et al.
There are great female personalities in the history of computing who have played an important role in the historical achievements of this area. However, their contributions are often poorly publicized and/or credit for those contributions is denied to the true authors. Thus, this paper proposes a game called Mundo Bit Byte, created by a team of female undergraduates and high school girls. The story is based on five prominent female personalities in the field of Computing. Each phase of the game is inspired by the life of one of these women, showing, in a playful and fun way, their achievements and other relevant aspects of their lives. A demo version of the game containing two phases was evaluated by 511 people. In the first test, 234 responses were obtained, and in the second test, 277. Most respondents (97.4% in the first test and 98.2% in the second) reported that they would like to get to know other important women in computing after playing Mundo Bit Byte. The results indicated that games like this can be powerful tools to reduce stereotypes in the Computing area.
Computer software, Computer engineering. Computer hardware
Iris Image Watermarking Technique for Security and Manipulation Reveal
Rasha Thabit, Saad M. Shukr
Providing security while storing or sharing iris images has been considered an interesting research topic, and accordingly different iris image watermarking techniques have been presented. Most of the available techniques have been designed to ensure the attachment of secret data to the related iris images or to hide a logo that can be used for copyright purposes. These security techniques successfully meet their aims; however, they cannot reveal manipulations in the iris region. This paper presents an iris image watermarking technique that can provide security and reveal manipulations in the iris region. At the sender side, the proposed technique divides the image into two regions (i.e., the iris region and the non-iris region), generates the manipulation-reveal data from the iris region, and then embeds it in the non-iris region. At the receiver side, the secret data is extracted from the non-iris region and compared with data calculated from the iris region to reveal manipulations if they exist. Different experiments have been conducted to evaluate the performance of the proposed technique, which proved its efficiency not only in providing security but also in revealing any manipulations in the iris region.
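The abstract does not specify the embedding scheme, but the tamper-reveal idea can be sketched as hashing the iris-region pixels and hiding the digest in the least significant bits of non-iris pixels. This is a simplified stand-in, not the paper's actual technique, and the pixel values are hypothetical:

```python
import hashlib

def region_digest(iris_pixels):
    """Tamper-reveal data: the 256 bits of a SHA-256 digest of the
    iris-region pixel values."""
    h = hashlib.sha256(bytes(iris_pixels)).digest()
    return [(byte >> i) & 1 for byte in h for i in range(8)]

def embed(non_iris_pixels, bits):
    """Embed bits into the least significant bits of non-iris pixels."""
    out = list(non_iris_pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def verify(iris_pixels, watermarked_non_iris, n_bits=256):
    """Recompute the digest from the iris region and compare it with the
    extracted LSBs; a mismatch reveals manipulation of the iris region."""
    extracted = [p & 1 for p in watermarked_non_iris[:n_bits]]
    return extracted == region_digest(iris_pixels)

iris = [10, 20, 30, 40]          # hypothetical iris-region pixels
background = list(range(256))    # hypothetical non-iris pixels
marked = embed(background, region_digest(iris))
print(verify(iris, marked))              # intact image → True
print(verify([10, 20, 30, 41], marked))  # tampered iris region → False
```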
Digitalization and study of graphic disciplines in English: current state and prospects
Fisunova Elena
The article discusses the possibilities for effective use of various software in the study of graphic disciplines in English, with the aim of comparing the quality of classical teaching with learning based on various graphic packages in geometric-graphic disciplines in English, and its convergence with science. The ways of using graphic editors for training specialists in various industries are described. A teaching methodology is considered that allows a creative approach to the implementation of two-dimensional and three-dimensional objects. The article describes the use of various software products in the modernized laboratories of “Computer prototyping and reverse engineering of high complexity.” In this regard, the study of the digitalization of graphic education in English and its social consequences appears to be a highly relevant area of research.
Object Detection in Remote Sensing Images Based on Improved SSD Algorithm
ZHANG Yan, DU Huijuan, SUN Yemei, LI Xianguo
In the field of object detection in remote sensing images, most of the existing object detection algorithms perform poorly for small objects. This paper proposes an algorithm that fuses multi-scale features for object detection in remote sensing images. The features are first extracted by using the basic network of the SSD algorithm to form a feature map pyramid. Then the feature map fusion module is designed to fuse the position information of the shallow feature map and the semantic information of the deep feature map, retaining rich context information. Finally, a module to remove redundant information is designed, and the features in the feature map are further extracted through the convolution operation. The feature information is also screened to reduce the aliasing effect brought by the fusion of the feature maps. The experimental results on NWPU VHR-10, a dataset of remote sensing images, show that the proposed algorithm achieves an average detection accuracy of 93.9%, demonstrating that it outperforms Faster R-CNN, SSD and other algorithms in the detection of small objects in remote sensing images.
Computer engineering. Computer hardware, Computer software
Size matters? Or not: A/B testing with limited sample in automotive embedded software
Yuchu Liu, David Issa Mattos, Jan Bosch
et al.
A/B testing is gaining attention in the automotive sector as a promising tool to measure causal effects of software changes. Unlike web-facing businesses, where A/B testing is well established, the automotive domain often suffers from a limited number of eligible users to participate in online experiments. To address this shortcoming, we present a method for designing balanced control and treatment groups so that sound conclusions can be drawn from experiments with considerably smaller sample sizes. While the Balance Match Weighted method has been used in other domains such as medicine, this is the first paper to apply and evaluate it in the context of software development. Furthermore, we describe the Balance Match Weighted method in detail, and we conduct a case study together with an automotive manufacturer to apply the group design method in a fleet of vehicles. Finally, we present our case study in the automotive software engineering domain, as well as a discussion of the benefits and limitations of the A/B group design method.
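The Balance Match Weighted method itself is more elaborate, but the underlying idea of pairing treatment and control units on matched covariates can be sketched with a greedy nearest-neighbour match on a single covariate. This is an illustration of covariate matching in general, not the paper's algorithm, and all identifiers and values are hypothetical:

```python
def greedy_match(treated, candidates):
    """Pair each treated unit with the closest unused control candidate on
    a single covariate (e.g., average weekly mileage), greedily."""
    pool = dict(candidates)  # unit id -> covariate value
    pairs = []
    for tid, tval in treated:
        cid = min(pool, key=lambda c: abs(pool[c] - tval))
        pairs.append((tid, cid))
        del pool[cid]  # each control vehicle is matched at most once
    return pairs

treated = [("v1", 10.0), ("v2", 25.0)]
controls = [("c1", 24.0), ("c2", 11.0), ("c3", 50.0)]
print(greedy_match(treated, controls))  # → [('v1', 'c2'), ('v2', 'c1')]
```

Balanced groups like these reduce covariate-driven variance, which is what makes causal conclusions feasible at small sample sizes.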
Recommending API Function Calls and Code Snippets to Support Software Development
Phuong T. Nguyen, Juri Di Rocco, Claudio Di Sipio
et al.
Software development activity has reached a high degree of complexity, guided by the heterogeneity of the components, data sources, and tasks. The proliferation of open-source software (OSS) repositories has stressed the need to reuse available software artifacts efficiently. To this aim, it is necessary to explore approaches to mine data from software repositories and leverage it to produce helpful recommendations. We designed and implemented FOCUS as a novel approach to provide developers with API calls and source code while they are programming. The system works on the basis of a context-aware collaborative filtering technique to extract API usages from OSS projects. In this work, we show the suitability of FOCUS for Android programming by evaluating it on a dataset of 2,600 mobile apps. The empirical evaluation results show that our approach outperforms two state-of-the-art API recommenders, UP-Miner and PAM, in terms of prediction accuracy. We also point out that there is no significant relationship between the categories for apps defined in Google Play and their API usages. Finally, we show that participants of a user study positively perceive the API and source code recommended by FOCUS as relevant to the current development context.
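FOCUS's context-aware collaborative filtering is not detailed in the abstract; a minimal user-based variant over binary project-to-API usage vectors illustrates the general idea (project and API names are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(active, projects, top_k=2):
    """Recommend API calls used by the project most similar to the active
    one but absent from the active context (collaborative filtering over
    binary usage vectors)."""
    apis = sorted({a for p in projects for a in p} | set(active))
    vec = lambda p: [1 if a in p else 0 for a in apis]
    best = max(projects, key=lambda p: cosine(vec(active), vec(p)))
    return sorted(best - active)[:top_k]

# Hypothetical corpus of mined projects and an active development context
active = {"openFile", "readLine"}
corpus = [{"openFile", "readLine", "close"}, {"connect", "send"}]
print(recommend(active, corpus))  # → ['close']
```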