C. Roads
Results for "Computer software"
Showing 20 of ~8,151,778 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
N. Jennings
Shinobu Saito
In software maintenance work, software architects and programmers need to identify modules that require modification or deletion. Whilst user requests and bug reports are utilised for this purpose, evaluating the execution status of modules within the software is also crucial. This paper therefore applies spatial statistics to assess internal software execution data. First, we define a software space dataset, viewing the software's internal structure as a space based on module call relationships. Then, using spatial statistics, we visualise spatial clusters and perform statistical tests using spatial measures. Finally, we consider the usefulness of spatial statistics in the software engineering domain and future challenges. (This paper has been published in the 14th International Conference on Model-Based Software and Systems Engineering (MODELSWARD 2016).)
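The abstract does not name the spatial measures it uses. As an illustrative assumption only, the sketch below computes Moran's I, a standard global spatial-autocorrelation statistic, treating the module call graph as the spatial weight matrix and per-module execution counts as the attribute; the function name, weight matrix, and data are hypothetical, not taken from the paper.

```python
def morans_i(x, w):
    """Global Moran's I: spatial autocorrelation of attribute x under weights w."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    w_sum = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Toy call graph as a symmetric weight matrix: modules 0 and 1 call each
# other, as do modules 2 and 3 (hypothetical structure).
w = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
x = [10, 12, 2, 3]  # hypothetical per-module execution counts
print(round(morans_i(x, w), 3))  # → 0.933: neighbouring modules execute alike
```

A value near +1 indicates that modules that call each other have similar execution counts (a "spatial cluster" of hot or cold modules); values near -1 indicate neighbours with dissimilar counts.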
Zirui Chen, Xing Hu, Xin Xia et al.
Maintenance is a critical stage in the software lifecycle, ensuring that post-release systems remain reliable, efficient, and adaptable. However, manual software maintenance is labor-intensive, time-consuming, and error-prone, which highlights the urgent need for automation. Learning from maintenance activities conducted on other software systems offers an effective way to improve efficiency. In particular, recent research has demonstrated that migration-based approaches transfer knowledge, artifacts, or solutions from one system to another and show strong potential in tasks such as API evolution adaptation, software testing, and migrating patches for fault correction. This makes migration-based maintenance a valuable research direction for advancing automated maintenance. This paper takes a step further by presenting the first systematic research agenda on migration-based approaches to software maintenance. We characterize the migration-based maintenance lifecycle through four key stages: (1) identifying a maintenance task that can be addressed through migration, (2) selecting suitable migration sources for the target project, (3) matching relevant data across systems and adapting the migrated data to the target context, and (4) validating the correctness of the migration. We also analyze the challenges that may arise at each stage. Our goal is to encourage the community to explore migration-based approaches more thoroughly and to tackle the key challenges that must be solved to advance automated software maintenance.
Alisa Welter, Christof Tinnes, Sven Apel
In model-driven engineering and beyond, software models are key development artifacts. In practice, they often grow to substantial size and complexity, undergoing thousands of modifications over time due to evolution, refactoring, and maintenance. The rise of AI has sparked interest in how software modeling activities can be automated. Recently, LLM-based approaches for software model completion have been proposed; however, the state of the art supports only single-location model completion by predicting changes at a specific location. Going beyond, we aim to bridge the gap toward handling coordinated changes that span multiple locations across large, complex models. Specifically, we propose NextFocus, a novel global embedding-based next-focus predictor and the first capable of multi-location model completion. The predictor consists of a neural network with an attention mechanism that is trained on historical software model evolution data. Starting from an existing change, it predicts further model elements to change, potentially spanning multiple parts of the model. We evaluate our approach on multi-location model changes that were actually performed by developers in real-world projects. NextFocus achieves promising results for multi-location model completion, even when changes are heavily spread across the model. It achieves an average Precision@k score of 0.98 for $k \leq 10$, significantly outperforming the three baseline approaches.
Fernando Martinez-Martinez, David Roldán-Álvarez, Estefanía Martín-Barroso
The discussion in social networks is of general interest, but extracting, curating, and visualizing this information is difficult for those without programming knowledge. In the framework of the CSTrack project, which studies activities in Citizen Science, we present an easily accessible dashboard that provides a platform for people of different levels of expertise, including professionals. They can retrieve valuable information about trends and topics on Twitter through a standardized analysis pipeline that provides a complete understanding of the state of the conversation in social networks. With this platform, we present an alternative to the lack of standardization in social network analysis, and we also aim to mitigate the shortage of replication in social network research.
Dheya Mustafa, Safaa M. Khabour, Mousa Al-kfairy et al.
Companies that deliver food (food delivery services, or FDS) try to use customer feedback to identify aspects of the customer experience that could be improved. Consumer feedback on purchasing and receiving goods via online platforms is a crucial tool for learning about a company's performance. Many English-language studies have been conducted on sentiment analysis (SA). Arabic is becoming one of the most extensively written languages on the World Wide Web, but because of its morphological and grammatical complexity, as well as the lack of openly accessible resources for Arabic SA, such as dictionaries and datasets, there has not been much research on the language. Using a manually annotated FDS dataset, the current study conducts extensive sentiment analysis on reviews related to FDS that include Modern Standard Arabic and dialectal Arabic. It utilizes word embedding models, deep learning techniques, and natural language processing to extract subjective opinions, determine polarity, and recognize customers' feelings in the FDS domain. The deep learning approaches to classification that we evaluated include a convolutional neural network (CNN), a bidirectional long short-term memory recurrent neural network (BiLSTM), and an LSTM-CNN hybrid model. In addition, the article investigates different effective approaches to word embedding and stemming. Using a corpus of Modern Standard Arabic and dialectal Arabic gathered from Talabat.com, we trained and evaluated our proposed models. Our best accuracy on the FDS dataset was approximately 84% for multiclass classification and 92.5% for binary classification. To verify that the proposed approach is suitable for analyzing human perceptions in diversified domains, we designed and carried out extensive experiments on other existing Arabic datasets.
The highest obtained multi-classification accuracy is 88.9% on the Hotels Arabic-Reviews Dataset (HARD) dataset, and the highest obtained binary classification accuracy is 97.2% on the same dataset.
Miłosz Wieczór, Jacek Czub, Modesto Orozco
Despite the increasing automation of workflows for the preparation of systems for molecular dynamics simulations, the custom editing of molecular topologies to accommodate non-standard modifications remains a daunting task even for experienced users. To alleviate this issue, we created Gromologist, a utility library that provides the simulation community with a toolbox of primitive operations, as well as useful repetitive procedures identified during years of research. The library has been developed in response to users’ feedback, and will continue to grow to include more use cases, thorough automatic testing and support for a broader spectrum of rare features. The program is available at gitlab.com/KomBioMol/gromologist and via Python’s pip.
Jianjun Zhao
Abstraction is a fundamental principle in classical software engineering, which enables modularity, reusability, and scalability. However, quantum programs adhere to fundamentally different semantics, such as unitarity, entanglement, the no-cloning theorem, and the destructive nature of measurement, which introduce challenges to the safe use of classical abstraction mechanisms. This paper identifies a fundamental conflict in quantum software engineering: abstraction practices that are syntactically valid may violate the physical constraints of quantum computation. We present three classes of failure cases where naive abstraction breaks quantum semantics and propose a set of design principles for physically sound abstraction mechanisms. We further propose research directions, including quantum-specific type systems, effect annotations, and contract-based module design. Our goal is to initiate a systematic rethinking of abstraction in quantum software engineering, based on quantum semantics and considering engineering scalability.
Marvin Wyrich, Lloyd Montgomery
A well-rounded software engineer is often defined by technical prowess and the ability to deliver on complex projects. However, the narrative around the ideal Software Engineering (SE) candidate is evolving, suggesting that there is more to the story. This article explores the non-technical aspects emphasized in SE job postings, revealing the sociotechnical and organizational expectations of employers. Our Thematic Analysis of 100 job postings shows that employers seek candidates who align with their sense of purpose, fit within company culture, pursue personal and career growth, and excel in interpersonal interactions. This study contributes to ongoing discussions in the SE community about the evolving role and workplace context of software engineers beyond technical skills. By highlighting these expectations, we provide relevant insights for researchers, educators, practitioners, and recruiters. Additionally, our analysis offers a valuable snapshot of SE job postings in 2023, providing a scientific record of prevailing trends and expectations.
Boshuai Ye, Arif Ali Khan, Teemu Pihkakoski et al.
Quantum software engineering (QSE) is emerging as a critical discipline to make quantum computing accessible to a broader developer community; however, most quantum development environments still require developers to engage with low-level details across the software stack, including problem encoding, circuit construction, algorithm configuration, hardware selection, and result interpretation, making them difficult for classical software engineers to use. To bridge this gap, we present C2|Q>, a hardware-agnostic quantum software development framework that translates specific types of classical specifications into quantum-executable programs while preserving methodological rigor. The framework applies modular SE principles by organizing the workflow into three core modules: an encoder that classifies problems, produces Quantum-Compatible Formats, and constructs quantum circuits; a deployment module that generates circuits and recommends hardware based on fidelity, runtime, and cost; and a decoder that interprets quantum outputs into classical solutions. In evaluation, the encoder module achieved a 93.8% completion rate, and the hardware recommendation module consistently selected appropriate quantum devices for workloads scaling up to 56 qubits. End-to-end experiments on 434 Python programs and 100 JSON problem instances show that the full C2|Q> workflow executes reliably on simulators and can be deployed successfully on representative real quantum hardware, with empirical runs limited to small- and medium-sized instances consistent with current NISQ capabilities. These results indicate that C2|Q> lowers the entry barrier to quantum software development by providing a reproducible, extensible toolchain that connects classical specifications to quantum execution. The open-source implementation of C2|Q> is available at https://github.com/C2-Q/C2Q and as a Python package at https://pypi.org/project/c2q-framework/.
Ian Milligan
Wget is a very useful program that runs on your computer through the command line and makes it easy to access online material.
Sawsen Rebhi, Zarrin Basharat, Calvin R. Wei et al.
Background & Objectives: American foulbrood (AFB), caused by the highly virulent, spore-forming bacterium Paenibacillus larvae, poses a significant threat to honey bee brood. The widespread use of antibiotics not only fails to effectively combat the disease but also raises concerns regarding honey safety. The current computational study attempts to identify a novel therapeutic drug target against P. larvae, the causative agent of American foulbrood disease in honey bees. Methods: We investigated effective novel drug targets through a comprehensive in silico pan-proteome and hierarchical subtractive sequence analysis. In total, the genomes of 14 P. larvae strains were used to identify core genes. Subsequently, the core proteome was systematically narrowed down to a single protein predicted as the potential drug target. AlphaFold was then employed to predict the 3D structure of the potential drug target. Structural docking was carried out between a library of phytochemicals derived from traditional Chinese flora (n > 36,000) and the potential receptor using AutoDock Tools 1.5.6. Finally, a molecular dynamics (MD) simulation study was conducted using GROMACS to assess the stability of the best-docked ligand. Results: Proteome mining led to the identification of ketoacyl-ACP synthase III as a highly promising therapeutic target, making it a prime candidate for inhibitor screening. The subsequent virtual screening and MD simulation analyses further affirmed the selection of ZINC95910054 as a potent inhibitor, with the lowest binding energy. This finding presents significant promise in the battle against P. larvae. Conclusions: Computer-aided drug design provides a novel approach for managing American foulbrood in honey bee populations, potentially mitigating its detrimental effects on both bee colonies and the honey industry.
Imad Khan, Atif M. Alamri, Abdullah M. Almarashi et al.
Abstract: In this study, we propose an innovative Adaptive Exponential Weighted Moving Average (AEWMA) control chart utilizing a variable sample size (VSS) under a Bayesian methodology. The proposed methodology uses an integer linear function to dynamically adjust sample sizes according to the AEWMA statistic. Another appealing feature of our adaptive framework is the integration of the smoothing constant of an EWMA chart, which enhances monitoring responsiveness. Through extensive simulations, we demonstrate the superiority of the recommended control chart over existing Bayesian EWMA and Bayesian AEWMA control charts that use a fixed sample size (FSS). The proposed Bayesian VSS AEWMA control chart detects shifts more sensitively, lowers the false alarm rate, and is overall more effective than the existing methods. These findings reinforce the basic notion that statistical tools for process control need to be dynamic, just as the manufacturing processes they monitor are dynamic, and they underline the importance of adaptive SPC methods in dynamic manufacturing environments. A real data application is performed to evaluate the validity and performance of the recommended chart.
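The abstract's Bayesian VSS machinery is not specified in detail. As a hedged illustration of the underlying chart statistic only, the sketch below implements a classical AEWMA in the common form where a Huber-style score function gives small errors the usual EWMA weight and passes large shifts through almost fully; the function names, parameter values, and data are illustrative assumptions, not the paper's method.

```python
def huber_score(e, lam=0.2, k=3.0):
    """Adaptive score: EWMA-like weight lam for small errors |e| <= k,
    near-full weight for larger errors, so big shifts are absorbed quickly."""
    if abs(e) <= k:
        return lam * e
    return e - (1 - lam) * k * (1 if e > 0 else -1)

def aewma(xs, target=0.0, lam=0.2, k=3.0):
    """AEWMA statistics Z_i = Z_{i-1} + phi(X_i - Z_{i-1}), with Z_0 = target."""
    z, out = target, []
    for x in xs:
        z = z + huber_score(x - z, lam, k)
        out.append(z)
    return out

# In-control observations keep the statistic near the target...
print(aewma([0.1, -0.1, 0.05]))
# ...while a large shift is passed through almost fully in one step.
print(aewma([0.0, 10.0]))
```

A VSS scheme would then map each Z_i to the next sample size, taking larger samples when Z_i drifts toward a control limit; that mapping is what the paper's integer linear function provides.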
Zahid Hasan, Areej Fatima, Tariq Shahzad et al.
Nanotechnologists and medical researchers are working hard to develop new and innovative ways to use nanorobots as nanomedicine to improve healthcare outcomes and revolutionize the field of therapeutics. Nanotechnology has the potential to transform healthcare by providing new ways of treating chronic diseases. A "Gold Nano Thermo Robot" (GNTR) model is proposed in this research article: a nanomedicine that delivers controlled thermal therapy to targeted malignant tissues without damaging healthy tissues. The proposed nanotherapeutic system, empowered with a nanosensor network, an in-body communication network, and the Internet of Nanomedical Things, is used to normalize and control hyperthermal waves in real time to eliminate breast cancer cells using the "SEE and TREAT" technique. To generate hyperthermia, the proposed GNTR is irradiated by laser pulses, triggering a Coulomb explosion that produces a huge amount of dispersed hyperthermia waves. To convert this dispersed and irregular hyperthermia into a regulated and disciplined form, a Finite Difference Method has been used to develop a "Heat Control System." A comparative analysis is provided of the intricate relationship between the required radius of Gold Nano Thermo Robots and the volume depth of the tumor for penetration, with a keen focus on evaluating how different GNTR sizes fit, or do not fit, the task of effectively treating tumors at various depths. Furthermore, the effectiveness of treatment has multifaceted outcomes governed by the interplay between two critical factors: the temperature limit and the therapy duration. We examined a comprehensive matrix of thermal therapy durations (ranging from 25 to 60 minutes) alongside various temperature limits (ranging from 33 °C to 60 °C).
The best-fit, best-response therapy session was verified at a temperature limit of 42 °C for 30 minutes, achieving near-complete tumor ablation with minimal harm to healthy tissues. The complex physical effects on the Gold Nano Thermo Robot surfaces due to the Coulomb explosion procedure are also provided as a simulation analysis, with an explanation given in nine panels.
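The "Heat Control System" above is built on a Finite Difference Method. The paper's actual model (3D geometry, laser source terms) is not given here, so the sketch below shows only the generic 1D explicit FDM update for the heat equation u_t = α u_xx with fixed-temperature boundaries; the grid, coefficients, and temperatures are illustrative assumptions.

```python
def fdm_step(u, alpha, dx, dt):
    """One explicit FDM step for u_t = alpha * u_xx; stable when
    r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new  # boundary nodes held at their fixed temperature

# Illustrative 1D tissue strip at 37 C with a 60 C hyperthermia spike:
u = [37.0] * 21
u[10] = 60.0
for _ in range(200):
    u = fdm_step(u, alpha=1.0, dx=1.0, dt=0.4)
print(max(u))  # the spike has dispersed back toward body temperature
```

With r ≤ 0.5 each update is a convex combination of neighbouring temperatures, so the simulated peak can only decay, which is the discrete analogue of the regulated, disciplined heat profile the abstract describes.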
Jieke Shi, Zhou Yang, David Lo
Large Language Models (LLMs) have recently shown remarkable capabilities in various software engineering tasks, spurring the rapid growth of the Large Language Models for Software Engineering (LLM4SE) area. However, limited attention has been paid to developing efficient LLM4SE techniques that demand minimal computational cost, time, and memory resources, as well as green LLM4SE solutions that reduce energy consumption, water usage, and carbon emissions. This paper aims to redirect the focus of the research community towards the efficiency and greenness of LLM4SE, while also sharing potential research directions to achieve this goal. It commences with a brief overview of the significance of LLM4SE and highlights the need for efficient and green LLM4SE solutions. Subsequently, the paper presents a vision for a future where efficient and green LLM4SE revolutionizes the LLM-based software engineering tool landscape, benefiting various stakeholders, including industry, individual practitioners, and society. The paper then delineates a roadmap for future research, outlining specific research paths and potential solutions for the research community to pursue. While not intended to be a definitive guide, the paper aims to inspire further progress, with the ultimate goal of establishing efficient and green LLM4SE as a central element in the future of software engineering.
Diana Robinson, Christian Cabrera, Andrew D. Gordon et al.
What if end users could own the software development lifecycle from conception to deployment using only requirements expressed in language, images, video or audio? We explore this idea, building on the capabilities that generative Artificial Intelligence brings to software generation and maintenance techniques. How could designing software in this way better serve end users? What are the implications of this process for the future of end-user software engineering and the software development lifecycle? We discuss the research needed to bridge the gap between where we are today and these imagined systems of the future.
Marvin Wyrich, Justus Bogner
LinkedIn is the largest professional network in the world. As such, it can serve to build bridges between practitioners, whose daily work is software engineering (SE), and researchers, who work to advance the field of software engineering. We know that such a metaphorical bridge exists: SE research findings are sometimes shared on LinkedIn and commented on by software practitioners. Yet, we do not know what state the bridge is in. Therefore, we quantitatively and qualitatively investigate how SE practitioners and researchers approach each other via public LinkedIn discussions and what both sides can contribute to effective science communication. We found that a considerable proportion of LinkedIn posts on SE research (39%) are written by people who are not the paper authors. Further, 71% of all comments in our dataset come from people in industry, yet only every second post receives any comments at all. Based on our findings, we formulate concrete advice for researchers and practitioners to make sharing new research findings on LinkedIn more fruitful.
Dharun Anandayuvaraj, Matthew Campbell, Arav Tewari et al.
Software failures inform engineering work, standards, and regulations. For example, the Log4j vulnerability brought government and industry attention to evaluating and securing software supply chains. Accessing private engineering records is difficult, so failure analyses tend to use information reported by the news media. However, prior works in this direction have relied on manual analysis, which has limited the scale of their analyses. The community lacks automated support to enable such analyses to consider a wide range of news sources and incidents. In this paper, we propose the Failure Analysis Investigation with LLMs (FAIL) system to fill this gap. FAIL collects, analyzes, and summarizes software failures as reported in the news. FAIL groups articles that describe the same incidents, then analyzes incidents using existing taxonomies for postmortems, faults, and system characteristics. To tune and evaluate FAIL, we followed the methods of prior works by manually analyzing 31 software failures. FAIL achieved an F1 score of 90% for collecting news about software failures, a V-measure of 0.98 for merging articles reporting on the same incident, and extracted 90% of the facts about failures. We then applied FAIL to a total of 137,427 news articles from 11 providers published between 2010 and 2022. FAIL identified and analyzed 2,457 distinct failures reported across 4,184 articles. Our findings include: (1) the current generation of large language models is capable of identifying news articles that describe failures and analyzing them according to structured taxonomies; (2) similar failures recur frequently within and across organizations; and (3) the severity of the consequences of software failures has increased over the past decade. The full FAIL database is available so that researchers, engineers, and policymakers can learn from a diversity of software failures.
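FAIL's article-merging step is scored with a V-measure of 0.98. The metric itself is standard (Rosenberg and Hirschberg's harmonic mean of homogeneity and completeness), and the sketch below is a generic implementation of that formula, not FAIL's code; the toy labels are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def v_measure(truth, pred):
    """V-measure: harmonic mean of homogeneity and completeness."""
    n = len(truth)
    joint = Counter(zip(truth, pred))
    t_count, p_count = Counter(truth), Counter(pred)
    # Conditional entropies H(truth|pred) and H(pred|truth).
    h_tp = -sum((c / n) * math.log(c / p_count[p]) for (t, p), c in joint.items())
    h_pt = -sum((c / n) * math.log(c / t_count[t]) for (t, p), c in joint.items())
    hom = 1.0 if entropy(truth) == 0 else 1 - h_tp / entropy(truth)
    com = 1.0 if entropy(pred) == 0 else 1 - h_pt / entropy(pred)
    return 2 * hom * com / (hom + com) if hom + com else 0.0

# Two articles per incident, cluster ids renamed: still a perfect score.
print(v_measure([0, 0, 1, 1], ["a", "a", "b", "b"]))  # → 1.0
```

Unlike raw accuracy, the V-measure is invariant to cluster label names, which is what makes it suitable for judging whether articles were grouped into the same incidents as the ground truth.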
Xinhua LI, Fengyuan ZOU, Sarah HAIDAR
Islamic floral patterns warrant further research and analysis, as they are an important aspect of the cultural heritage of Islamic patterns. These floral patterns are aesthetically inspired by flowers, leaves, vines, and stems and feature characteristics such as symmetry, interlacing, and pattern repetition. This study analysed a five-pointed rose pattern (peony flower) and its elements, such as the curved lines that make up the leaves and flowers. A new floral pattern featuring a botanical motif and curved lines was designed and distributed using kite and dart tiling. The floral pattern was designed using the pentagram reflection of the Penrose tiling method so that, in line with modern design requirements, it resembles a Shamsah. The resulting floral ornament and newly designed patterns were then reviewed in order to facilitate the accurate and rapid generation of new patterns through computer design software, thereby addressing the time and effort involved in designing Islamic floral patterns. This study also provides suggestions for future studies on Islamic floral patterns.
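A kite-and-dart (P2) Penrose tiling like the one used to distribute the motif above can be generated programmatically. The sketch below uses a standard deflation of Robinson half-kite and half-dart triangles in the complex plane; it is a generic algorithm sketch, not the authors' software, and the motif placement itself is not reproduced.

```python
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def subdivide(triangles):
    """One deflation step on Robinson triangles (color, apex, base1, base2):
    0 = half-kite, which splits into two half-kites and a half-dart;
    1 = half-dart, which splits into a half-kite and a half-dart."""
    result = []
    for color, a, b, c in triangles:
        if color == 0:
            q = a + (b - a) / PHI
            r = b + (c - b) / PHI
            result += [(1, r, q, b), (0, q, a, r), (0, c, a, r)]
        else:
            p = c + (a - c) / PHI
            result += [(1, b, p, a), (0, p, c, b)]
    return result

# "Sun" start: ten half-kites with their 36-degree apexes at the origin.
tris = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b  # mirror alternate halves so they pair into whole kites
    tris.append((0, 0j, b, c))

for _ in range(2):
    tris = subdivide(tris)
print(len(tris))  # 10 -> 30 -> 80 triangles after two deflation steps
```

Each deflation step refines the tiling by the golden ratio, so a few iterations already yield a dense aperiodic grid of kites and darts over which a floral element can be placed per triangle.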
Page 15 of 407,589