C. Garland, J. Nibler, D. Shoemaker
Results for "Computer software"
Showing 19 of ~8,152,458 results · from DOAJ, arXiv, CrossRef, Semantic Scholar
Junsan Zhang, Yudie Yan, Junxiao Han et al.
Code summarization is an important task in software engineering that helps developers understand and maintain code by generating natural language summaries. Existing approaches predominantly rely on single models and face a dilemma: directly deploying large language models (LLMs) incurs high training costs, while lightweight models specialized for summarization are constrained by the quality of their training data and by their ability to capture the complex structural semantics of code. This highlights the urgent need for synergistic collaboration between large and small models in cloud computing environments. To address these issues, this paper proposes a cloud-assisted code summarization framework. First, we achieve code enhancement by invoking cloud-deployed LLM services: preset prompt templates guide the model to evaluate code quality and automatically repair defects based on its feedback, thereby constructing the high-quality datasets Java-QE and Python-QE. Second, for efficient edge deployment, we introduce HiSum, a lightweight AST hierarchy-aware code summarization model. HiSum transforms the code's AST into a Directed Syntax Graph (DSG) to preserve structural semantics, encodes it via a directed graph convolutional network, and decodes it to improve summary quality. Experimental results show that our framework significantly enhances code summarization performance. On the constructed Java-QE and Python-QE datasets, HiSum achieves notable improvements over state-of-the-art baselines in BLEU, METEOR, and ROUGE-L (increases of 1.06%, 1.98%, and 3.12% on Java-QE, and 1.46%, 3.24%, and 2.20% on Python-QE, respectively). This research provides a solution that uses cloud LLM-assisted data enhancement to empower a lightweight hierarchy-aware model.
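As an illustration of turning a code AST into a directed graph of the kind HiSum consumes, the sketch below uses Python's stdlib `ast` module. The paper's DSG construction and its node/edge typing may differ; this only shows the general parent-to-child edge extraction.

```python
import ast

def ast_to_digraph(source: str):
    """Parse source code and emit directed parent->child edges,
    a simplified stand-in for a Directed Syntax Graph."""
    tree = ast.parse(source)
    nodes, index = [], {}
    # First pass: assign an integer id to every AST node (BFS order).
    for node in ast.walk(tree):
        index[id(node)] = len(nodes)
        nodes.append(type(node).__name__)
    # Second pass: record one directed edge per parent-child pair.
    edges = []
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            edges.append((index[id(node)], index[id(child)]))
    return nodes, edges

nodes, edges = ast_to_digraph("def add(a, b):\n    return a + b")
```

The node list and edge list are exactly the inputs a graph convolutional encoder would consume after node labels are embedded.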
Kellen O'Brien, Maya Amouzegar, Won Chan Lee et al.
Quantum spin models are ubiquitous in solid-state physics, but classical simulation of them remains extremely challenging. Experimental testbed systems with a variety of spin-spin interactions and measurement channels are therefore needed. One promising potential route to such testbeds is provided by microwave-photon-mediated interactions between superconducting qubits, where native strong light-matter coupling enables significant interactions even for virtual-photon-mediated processes. In this approach, the spin-model connectivity is set by the photonic mode structure, rather than the spatial structure of the qubit. Lattices of coplanar-waveguide (CPW) resonators have been demonstrated to allow extremely flexible connectivities and can therefore host a huge variety of photon-mediated spin models. However, large-scale CPW lattices with nontrivial band structures have never before been successfully combined with superconducting qubits. Here we present the first such device featuring a quasi-1D CPW lattice with multiple transmon qubits. We demonstrate that superconducting-qubit readout and diagnostic techniques can be generalized to this highly multimode environment and observe the effective qubit-qubit interaction mediated by the bands of the resonator lattice. This device completes the toolkit needed to realize CPW lattices with qubits in one or two Euclidean dimensions, or negatively curved hyperbolic space, and paves the way to driven-dissipative spin models with a large variety of connectivities.
Victor Travassos Sarinho
Background: There are several studies focused on identifying and defining gamification strategies in software development processes. These strategies are also applied by agile methods, which can create a context of recognition and reward for the completion of activities in a software project. Purpose: This paper presents a reinterpretation of the Extreme Programming (XP) practices and process stages in order to provide a “playable mode” for the XP development. Methods: XP practices and process stages are linked to terms and activities applied in digital games, enabling a reinterpretation from a playable and gamified perspective. Results: Gamified XP practices and process stages are explained and exemplified, demonstrating the feasibility of the proposed gamified reinterpretation for the XP software development. Conclusion: A software development methodology based on agile gameplays obtained by the XP reinterpretation was proposed, becoming a possible solution to improve the flow state in XP developers.
Peter A. Spring, Luka Milanovic, Yoshiki Sunada et al.
Fast and accurate qubit measurement remains a critical challenge on the path to fault-tolerant quantum computing. In superconducting quantum circuits, fast qubit measurement has been achieved using a dispersively coupled resonator with a large externally limited linewidth. This necessitates the use of a Purcell filter that protects the qubit from relaxation through the readout channel. Here, we show that a readout resonator and filter resonator, coupled to each other both capacitively and inductively via a multiconductor transmission line, can produce a compact notch-filter circuit that effectively eliminates the Purcell decay channel through destructive interference. By utilizing linewidths as large as 42 MHz, we perform simultaneous readout of four qubits using a 56-ns integration window and benchmark an average assignment fidelity of 99.77%, with the highest qubit assignment fidelity exceeding 99.9%. Including the simulated readout ring-down time, the total readout duration was between 115 and 215 ns for the four qubits, which we anticipate can be reduced to around 100 ns with active ring-down pulse shaping. These results demonstrate a significant advancement in speed and fidelity for multiplexed superconducting-qubit readout.
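The average assignment fidelity quoted above is conventionally defined from the two misassignment probabilities. A minimal sketch of that standard definition follows; the paper's actual benchmarking protocol may differ, and the probabilities below are illustrative:

```python
def assignment_fidelity(p_e_given_g: float, p_g_given_e: float) -> float:
    """Average assignment fidelity: 1 - (P(e|g) + P(g|e)) / 2,
    where P(e|g) is the probability of reading 'excited' when the
    qubit was prepared in 'ground', and vice versa."""
    return 1.0 - 0.5 * (p_e_given_g + p_g_given_e)

# Illustrative misassignment probabilities (not from the paper):
fidelity = assignment_fidelity(0.002, 0.0026)
```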
Tarik Houichime, Younes El Amrani
As modern software systems expand in scale and complexity, the challenges associated with their modeling and formulation grow increasingly intricate. Traditional approaches often fall short in effectively addressing these complexities, particularly in tasks such as design pattern detection for maintenance and assessment, as well as code refactoring for optimization and long-term sustainability. This growing inadequacy underscores the need for a paradigm shift in how such challenges are approached and resolved. This paper presents Analytical Software Engineering (ASE), a novel design paradigm aimed at balancing abstraction, tool accessibility, compatibility, and scalability. ASE enables effective modeling and resolution of complex software engineering problems. The paradigm is evaluated through two frameworks: Behavioral-Structural Sequences (BSS) and Optimized Design Refactoring (ODR), both developed in accordance with ASE principles. BSS offers a compact, language-agnostic representation of codebases to facilitate precise design pattern detection. ODR unifies artifact and solution representations to optimize code refactoring via heuristic algorithms while eliminating iterative computational overhead. By providing a structured approach to software design challenges, ASE lays the groundwork for future research in encoding and analyzing complex software metrics.
Mei ZENG, Yihan WANG, Zhiwei LEI, Xueyin LIU, Bailin LI
Conventional random occlusion algorithms used in generating synthetic occluded grape images often lead to data distortion, potentially rendering grape occlusion prediction ineffective. Therefore, this study proposes an occlusion data synthesis method suitable for grape occlusion prediction and further introduces a self-supervised grape instance de-occlusion prediction algorithm. During data synthesis, the proposed algorithm employs a proximity-based occlusion strategy to replace random occlusion methods for synthesizing different occluded instances from complete grape instances. Prior to the synthesis process, various preprocessing mechanisms are employed to control the sizes of mutually occluding grape instances, ensuring that the synthesized occluded grapes align with real-world conditions without distortion. Subsequently, the proposed approach splits occlusion prediction into mask reconstruction and semantic inpainting components. The study selects the corresponding synthetic data to train a generic Unet-based mask reconstruction network and a semantic inpainting network. To address the inability to predict complete instances owing to the limitations of instance segmentation cropping sizes, our algorithm fully considers both the occluded and occluder instances during data synthesis. The study introduces corresponding reconstruction and inpainting functions. In the occlusion prediction phase, an instance segmentation network, Pointrend, trained on an open-source architecture, the proposed mask reconstruction network, and a semantic inpainting network are sequentially applied to predict occluded grapes. When applied to the collected occlusion estimation dataset, the proposed algorithm achieves an Intersection-over-Union (IoU) value of 81.16% between the predicted occluded grape masks and ground truth annotations, outperforming other comparative methods. 
Experimental results demonstrate that the proposed synthesis algorithm and reconstruction framework are effective for grape occlusion prediction.
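The IoU metric reported above compares a predicted mask with its ground-truth annotation. A minimal sketch over binary 2D masks, using plain Python lists rather than any particular vision framework:

```python
def mask_iou(pred, truth):
    """Intersection-over-Union between two binary masks given as
    same-shaped 2D lists of 0/1 values."""
    inter = sum(p and t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    union = sum(p or t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    # Two empty masks are conventionally treated as a perfect match.
    return inter / union if union else 1.0
```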
Christof Tinnes, Alisa Welter, Sven Apel
Modeling structure and behavior of software systems plays a crucial role in the industrial practice of software engineering. As with other software engineering artifacts, software models are subject to evolution. Supporting modelers in evolving software models with recommendations for model completions is still an open problem, though. In this paper, we explore the potential of large language models for this task. In particular, we propose an approach, RAMC, leveraging large language models, model histories, and retrieval-augmented generation for model completion. Through experiments on three datasets, including an industrial application, one public open-source community dataset, and one controlled collection of simulated model repositories, we evaluate the potential of large language models for model completion with RAMC. We found that large language models are indeed a promising technology for supporting software model evolution (62.30% semantically correct completions on real-world industrial data and up to 86.19% type-correct completions). The general inference capabilities of large language models are particularly useful when dealing with concepts for which there are few, noisy, or no examples at all.
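The retrieval step of a retrieval-augmented setup like the one described can be sketched as ranking past model edits by token overlap with the current partial model, then placing the top matches in the LLM prompt. The Jaccard scoring and data shapes below are illustrative assumptions, not RAMC's actual retriever:

```python
def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(query_tokens, history, k=2):
    """Rank past model edits (name, token-set pairs) by overlap with
    the current partial model; the top-k would go into the prompt."""
    scored = sorted(history,
                    key=lambda h: jaccard(query_tokens, h[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```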
R. DeMillo, R. Lipton, A. Perlis
YU Jian, ZHAO Mankun, GAO Jie, WANG Congyuan, LI Yarong, ZHANG Wenbin
Cross-item social recommendation integrates social relationships into the recommendation system. In social recommendation, the user is the bridge connecting the user-item interaction graph and the user-user social graph, so user representation learning is essential to improving performance. However, existing methods mainly use static attributes of users or items and explicit friend information in social networks for representation learning; the temporal information of user-item interactions and users' implicit friend information are not fully utilized. Effectively using temporal and social information has therefore become an important research topic in social recommendation. This paper focuses on the temporal information of user-item interactions and exploits the advantages of the social network by modeling the user's implicit friends and the item's social attributes. It proposes a novel graph neural network for social recommendation based on high-order and temporal features, referred to as HTGSR. First, the framework uses a gated recurrent unit to model item-based user representations reflecting the user's recent preferences, and defines a high-order modeling unit to extract the user's high-order connectivity features and obtain implicit friend information. Second, HTGSR uses an attention mechanism to obtain a social-based user representation. Third, the paper proposes different ways to construct the item's social network and uses an attention mechanism to obtain item representations. Finally, the user and item representations are fed into an MLP to predict the user's rating for the item. Experiments on two public real-world datasets compare HTGSR with different recommendation algorithms, and the results show that HTGSR achieves good performance on both datasets.
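The attention-based aggregation used for the social-based user representation can be sketched as a generic dot-product attention pool over friend embeddings. This is an illustrative formulation, not HTGSR's exact scoring function, and the vector sizes are arbitrary:

```python
import math

def attention_pool(user_vec, friend_vecs):
    """Aggregate friend embeddings with dot-product attention:
    weights = softmax(user . friend_i); output = sum_i w_i * friend_i."""
    scores = [sum(u * f for u, f in zip(user_vec, fv)) for fv in friend_vecs]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(user_vec)
    return [sum(w * fv[d] for w, fv in zip(weights, friend_vecs))
            for d in range(dim)]
```

Friends whose embeddings align with the user's vector receive larger weights, so the pooled representation leans toward the most relevant (possibly implicit) friends.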
Yaoying Wang
The increasing integration of gas-fired units (GFU) and power-to-gas (P2G) technology has led to the interconnection of natural gas and electricity networks. However, the advanced information and communication equipment in these integrated electricity-gas energy systems give rise to cybersecurity concerns. This research proposes an entropy-based load redistribution (LR) attack detection approach for such integrated networks. The objective is to simultaneously overload multiple electrical lines and gas pipelines using a bi-level LR attack model, while ensuring system security through a defense strategy. Instead of relying on deep learning algorithms, this study leverages entropy-based techniques for attack detection. In order to thoroughly investigate the attack space, an attack detector based on entropy is trained utilizing a combination of normal data and randomly generated LR attacks. The efficacy of the suggested methodology in mitigating the hazards linked to inaccurate data injection is substantiated via simulations conducted on a modified version of the IEEE 118-bus power system, which incorporates a 14-node gas system. The findings indicate that the detector based on entropy exhibits efficacy in detecting both stochastic and purposeful attacks, thereby augmenting the security of interconnected gas and electricity networks.
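As a rough illustration of an entropy-based detector of the kind described, the sketch below flags a load profile whose Shannon entropy deviates from a learned baseline. The paper's detector and its training on normal plus randomly generated LR-attack data are more involved; the threshold and load values here are made up:

```python
import math

def shannon_entropy(loads):
    """Entropy of the normalized load distribution. A coordinated LR
    attack that concentrates load on a few lines shifts this value."""
    total = sum(loads)
    probs = [x / total for x in loads if x > 0]
    return -sum(p * math.log2(p) for p in probs)

def flag_attack(loads, baseline_entropy, tol=0.3):
    """Flag a measurement whose entropy deviates from the baseline
    by more than an (illustrative) tolerance."""
    return abs(shannon_entropy(loads) - baseline_entropy) > tol
```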
Arnaldo Pereira, Alina Trifan, Rui Pedro Lopes et al.
Over the years, a growing number of semantic data repositories have been made available on the web. However, this has created new challenges in exploiting these resources efficiently. Querying services require knowledge beyond the typical user's expertise, which is a critical issue in adopting semantic information solutions. Several proposals to overcome this difficulty have suggested using question answering (QA) systems to provide user-friendly interfaces and allow natural language use. Because question answering over knowledge bases (KBQA) is a very active research topic, a comprehensive view of the field is essential. The purpose of this study was to conduct a systematic review of methods and systems for KBQA to identify their main advantages and limitations. The inclusion criteria rationale was English full-text articles published since 2015 on methods and systems for KBQA. Sixty-six articles were reviewed to describe their underlying reference architectures.
Birgit Vogel-Heuser, Eva-Maria Neumann, Juliane Fischer
Automated Production Systems (aPS) are highly complex mechatronic systems that usually have to operate reliably for many decades. Standardization and reuse of control software modules is a core prerequisite to achieve the required system quality in increasingly shorter development cycles. However, industrial case studies in the field of aPS show that many aPS companies still struggle with strategically reusing software. This paper proposes a metric-based approach to objectively measure the maturity of industrial IEC 61131-based control software in aPS (MICOSE4aPS) to identify potential weaknesses and quality issues hampering systematic reuse. Module developers in the machine and plant manufacturing industry can directly benefit, as the metric calculation is integrated into the software engineering workflow. An in-depth industrial evaluation in a top-ranked machine manufacturing company in food packaging and an expert evaluation with different companies confirmed the benefit of efficiently managing the quality of control software.
Lucas Gren, Martin Shepperd
Background: Volvo Cars is pioneering an agile transformation on a large scale in the automotive industry. Social psychological aspects of automotive software development are an under-researched area in general. Few studies on team maturity or group dynamics can be found specifically in the automotive software engineering domain. Objective: This study is intended as an initial step to fill that gap by investigating the connection between issues and problem reports and team maturity. Method: We conducted a quantitative study with 84 participants from 14 teams and qualitatively validated the result with the Release Train Engineer having an overview of all the participating teams. Results: We find that the more mature a team is, the faster they seem to resolve issues as provided through external feedback, at least in the two initial team maturity stages. Conclusion: This study suggests that working on team dynamics might increase productivity in modern automotive software development departments, but this needs further investigation.
Birgit Vogel-Heuser, Juliane Fischer, Stefan Feldmann et al.
Adaptive and flexible production systems require modular and reusable software, especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure software maturity for automated Production Systems (aPS), is introduced. The approach identifies weaknesses and strengths of various companies' solutions for software modularity in the design of aPS. First, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Second, we analyze PLC code, architectural levels, workflows, and the ability to configure code automatically from engineering information in four selected companies. In this paper, questionnaire results from 16 world-leading German companies in machine and plant manufacturing, together with four case studies validating the detailed analyses, are presented to prove the applicability of the approach and to give a survey of the state of the art in industry.
Daniel Russo
Recruiting participants for software engineering research has been a primary concern of the human factors community. This is particularly true for quantitative investigations that require a minimum sample size not to be statistically underpowered. Traditional data collection techniques, such as mailing lists, are highly doubtful due to self-selection biases. The introduction of crowdsourcing platforms allows researchers to select informants with the exact requirements foreseen by the study design, gather data in a concise time frame, compensate their work with fair hourly pay, and most importantly, have a high degree of control over the entire data collection process. This experience report discusses our experience conducting sample studies using Prolific, an academic crowdsourcing platform. Topics discussed are the type of studies, selection processes, and power computation.
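A typical a priori power computation of the kind mentioned above can be sketched with the normal approximation for a two-sided, two-sample comparison of means. This is a generic textbook formula, not the report's own procedure, and slightly underestimates the exact t-test requirement:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_d: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Sample size per group for detecting effect size Cohen's d:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_d) ** 2
    return ceil(n)
```

For a medium effect (d = 0.5) at the usual alpha = 0.05 and 80% power, this gives 63 participants per group, which shows why underpowered convenience samples are a real risk.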
Ritu Kapur, Poojith U Rao, Agrim Dewan et al.
Software development comprises the use of multiple Third-Party Libraries (TPLs). However, irrelevant libraries present in a software application's distributable often lead to excessive consumption of resources such as CPU cycles, memory, and mobile devices' battery usage. Therefore, identifying and removing unused TPLs present in an application is desirable. We present a rapid, storage-efficient, obfuscation-resilient method to detect irrelevant TPLs in Java and Python applications. The novel aspects of our approach are: i) computing a vector representation of a .class file using a model that we call Lib2Vec, trained using the Paragraph Vector algorithm; ii) converting a .class file to a normalized form via semantics-preserving transformations before using it to train the Lib2Vec models; and iii) an eXtra Library Detector (XtraLibD), developed and tested with 27 different language-specific Lib2Vec models. These models were trained using different parameters on >30,000 .class and >478,000 .py files taken from >100 different Java libraries and 43,711 Python libraries available at MavenCentral.com and Pypi.com, respectively. XtraLibD achieves an accuracy of 99.48% with an F1 score of 0.968 and outperforms the existing tools LibScout, LiteRadar, and LibD with accuracy improvements of 74.5%, 30.33%, and 14.1%, respectively. Compared with LibD, XtraLibD achieves a response-time improvement of 61.37% and a storage reduction of 87.93% (99.85% over JIngredient). Our program artifacts are available at https://www.doi.org/10.5281/zenodo.5179747.
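Once .class files are embedded as vectors, deciding whether a file belongs to a known library can be done by nearest-neighbor cosine similarity against reference library vectors. The sketch below assumes precomputed embeddings and an illustrative threshold; it is not XtraLibD's actual decision rule:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_known_library(class_vec, library_vecs, threshold=0.9):
    """Flag a .class embedding as belonging to a known TPL when its
    best similarity against the reference vectors clears a threshold
    (the threshold here is illustrative)."""
    return max(cosine(class_vec, v) for v in library_vecs) >= threshold
```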
S. Tripp, Barbara A. Bichelmeyer
Dora Cama-Pinto, Juan Antonio Holgado-Terriza, Miguel Damas-Hermoso et al.
Precision agriculture and smart farming are gaining significant momentum through their relationship with the Internet of Things (IoT), especially in the search for new mechanisms and procedures that enable sustainable and efficient agriculture to meet future demand from an increasing population. Both concepts require the deployment of sensor networks that monitor agricultural variables for the integration of spatial and temporal agricultural data. This paper presents a system developed to measure the attenuation of radio waves in the 2.4 GHz free band (ISM: Industrial, Scientific, and Medical) propagating inside a tomato greenhouse, based on the received signal strength indicator (RSSI), together with a procedure for using the system to measure RSSI at different distances and heights. The system is based on Zolertia Re-Mote nodes running the Contiki operating system and a Raspberry Pi to record the data obtained. The receiver node records the RSSI at different locations and heights in the greenhouse relative to the transmitter node. In addition, we measured the radio-wave attenuation in a tomato greenhouse and publish the corresponding dataset to share it with the research community.
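RSSI-versus-distance measurements like these are commonly fit to a log-distance path loss model. The sketch below uses illustrative parameters; the reference loss `pl_d0` and path loss exponent `n` are assumptions for demonstration, not values fitted in the study:

```python
import math

def path_loss_db(d: float, d0: float = 1.0,
                 pl_d0: float = 40.0, n: float = 2.7) -> float:
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0),
    with d and the reference distance d0 in meters."""
    return pl_d0 + 10 * n * math.log10(d / d0)

def rssi_dbm(tx_power_dbm: float, d: float, **kw) -> float:
    """Predicted RSSI in dBm at distance d for a given transmit power."""
    return tx_power_dbm - path_loss_db(d, **kw)
```

Fitting `n` (and `pl_d0`) to the published dataset would characterize how the tomato canopy attenuates 2.4 GHz propagation relative to free space (n = 2).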
Page 44 of 407623