Results for "Computer software"

Showing 20 of ~8,152,493 results · from DOAJ, arXiv, Semantic Scholar, CrossRef

DOAJ Open Access 2026
VisioDECT: A robust dataset for aerial and scenario-based multi-drone detection, identification, and neutralization

Simeon Okechukwu Ajakwe, Vivian Ukamaka Ihekoronye, Golam Mohtasin et al.

The rapid proliferation of unmanned aerial vehicles (UAVs) for logistics, surveillance, and civilian applications continues to pose significant challenges to airspace security, particularly through unauthorized or malicious deployments. Existing UAV datasets are limited in scope, often focusing on single-drone scenarios, synthetic imagery, or restricted environmental conditions, thereby constraining the development of robust counter-UAV systems. To bridge these gaps, we present a vision-based drone detection dataset named VisioDECT, a comprehensive and scenario-rich dataset for multi-drone detection, identification, and neutralization. The dataset comprises 20,924 annotated images and labels from six UAV models (Anafi-Extended, DJI FPV, DJI Phantom, EFT-E410S, Mavic Air 2, and Mavic 2 Enterprise), captured across three distinct scenarios (sunny, cloudy, and evening) at varying altitudes (30–100 m) and distances. Importantly, all UAVs included in this dataset are rotary-wing (multirotor) platforms, which dominate low-altitude airspace and are the most commonly encountered in real-world surveillance and counter-UAV scenarios. Data were collected over 20 months from more than 12 locations in South Korea, ensuring diversity in illumination, weather, and background complexity. Each sample is provided in three standard formats (.txt, .xml, .csv), with detailed metadata and quality-verified annotations for detection and classification tasks. Illustrative benchmark evaluations using state-of-the-art detection models (e.g., DRONET, YOLO variants) are included solely to validate the quality and practical usability of the dataset for real-time drone defense research. VisioDECT provides a standardized, reproducible, and scalable resource that enables benchmarking, model training, and evaluation for airspace surveillance, UAV traffic management, and national security applications.

Computer applications to medicine. Medical informatics, Science (General)
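
Since the paper benchmarks YOLO variants and ships .txt labels, the per-image annotations are plausibly in the standard normalized YOLO format. The sketch below shows how such a label file could be parsed; the class order and field layout are assumptions, not taken from the dataset's documentation.

```python
from pathlib import Path

# Hypothetical class ordering; the dataset's metadata defines the real indices.
CLASSES = ["Anafi-Extended", "DJI FPV", "DJI Phantom",
           "EFT-E410S", "Mavic Air 2", "Mavic 2 Enterprise"]

def load_yolo_labels(label_path, img_w, img_h):
    """Parse one YOLO-format .txt label file into pixel-space boxes."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        # YOLO stores center/size normalized to [0, 1]; convert to pixels.
        boxes.append({
            "model": CLASSES[int(cls)],
            "x1": (xc - w / 2) * img_w, "y1": (yc - h / 2) * img_h,
            "x2": (xc + w / 2) * img_w, "y2": (yc + h / 2) * img_h,
        })
    return boxes

# boxes = load_yolo_labels("frame_0001.txt", img_w=1920, img_h=1080)  # hypothetical file
```
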
DOAJ Open Access 2026
BIF-RCNN: Fusing Background Information for Rotated Object Detection

Jianbin Zhao, Xing Xu, Shaoying Wang et al.

Rotated object detection aims to achieve precise localization by strictly aligning bounding boxes with object orientations, thereby minimizing background interference. Existing methods predominantly focus on extracting intra-object features within rotated bounding boxes. However, these approaches often overlook the discriminative contextual information from the surrounding background, leading to classification ambiguity when internal features are indistinguishable. To address this limitation, we propose Background Information Fusion R-CNN (BIF-RCNN), a novel rotated object detection framework that strategically re-integrates the background context from the object’s horizontal enclosing region to validate its category, turning previously discarded “noise” into auxiliary discriminative cues. Specifically, we introduce a dual-level rotation-horizontal feature fusion module (DFM), which leverages horizontal bounding boxes enclosing the rotated objects to extract contextual background features. These features are then adaptively fused with the internal object features to enhance the overall representation capability of the model. In addition, we design a Prediction Difference and Entropy-Constrained Loss (PDE Loss), which guides the model to focus on hard-to-classify samples that are prone to confusion due to similar feature representations. This loss function improves the model’s robustness and discriminative power. Extensive experiments conducted on the DOTA benchmark dataset demonstrate the effectiveness of the proposed method. Notably, our approach achieves up to a 4.02% AP improvement in single-category detection performance compared to a strong baseline, highlighting its superiority in rotated object detection tasks.

Industrial engineering. Management engineering, Electronic computers. Computer science
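
As a rough illustration of the dual-level fusion idea (internal rotated-box features adaptively mixed with background context pooled from the horizontal enclosing box), here is a minimal PyTorch sketch. The module name, gating design, and shapes are illustrative assumptions, not the authors' DFM implementation.

```python
import torch
import torch.nn as nn

class DualFeatureFusion(nn.Module):
    """Hedged sketch of DFM-style fusion: concatenate features pooled from the
    rotated box (object interior) and its horizontal enclosing box (context),
    then let a learned gate decide how much background to mix back in."""
    def __init__(self, channels=256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rot_feat, hbb_feat):
        # rot_feat, hbb_feat: (N, C, 7, 7) ROI-pooled features.
        g = self.gate(torch.cat([rot_feat, hbb_feat], dim=1))
        return rot_feat + g * hbb_feat  # adaptively add background context

rot = torch.randn(8, 256, 7, 7)   # features pooled inside rotated boxes
hbb = torch.randn(8, 256, 7, 7)   # features pooled from horizontal boxes
fused = DualFeatureFusion()(rot, hbb)
```
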
arXiv Open Access 2026
Towards an OSF-based Registered Report Template for Software Engineering Controlled Experiments

Ana B. M. Bett, Thais S. Nepomuceno, Edson OliveiraJr et al.

Context: The empirical software engineering (ESE) community has contributed to improving experimentation over the years. However, there is still a lack of rigor in describing controlled experiments, hindering reproducibility and transparency. Registered Reports (RR) have been discussed in the ESE community to address these issues. An RR registers a study's hypotheses, methods, and/or analyses before execution, involving peer review and potential acceptance before data collection. This helps mitigate problematic practices such as p-hacking, publication bias, and inappropriate post hoc analysis. Objective: This paper presents initial results toward establishing an RR template for Software Engineering controlled experiments using the Open Science Framework (OSF). Method: We analyzed templates of selected OSF RR types in light of documentation guidelines for controlled experiments. Results: The observed lack of rigor motivated our investigation of OSF-based RR types. Our analysis showed that, although one of the RR types aligned with many of the documentation suggestions contained in the guidelines, none of them covered the guidelines comprehensively. The study also highlights limitations in OSF RR template customization. Conclusion: Despite progress in ESE, planning and documenting experiments still lack rigor, compromising reproducibility. Adopting OSF-based RRs is proposed. However, no currently available RR type fully satisfies the guidelines. Establishing RR-specific guidelines for SE is deemed essential.

en cs.SE
DOAJ Open Access 2025
Characterizing Agile Software Development: Insights from a Data-Driven Approach Using Large-Scale Public Repositories

Carlos Moreno Martínez, Jesús Gallego Carracedo, Jaime Sánchez Gallego

This study investigates the prevalence and impact of Agile practices by leveraging metadata from thousands of public GitHub repositories through a novel data-driven methodology. To facilitate this analysis, we developed the AgileScore index, a metric designed to identify and evaluate patterns, characteristics, performance and community engagement in Agile-oriented projects. This approach enables comprehensive, large-scale comparisons between Agile methodologies and traditional development practices within digital environments. Our findings reveal a significant annual growth of 16% in the adoption of Agile practices and validate the AgileScore index as a systematic tool for assessing Agile methodologies across diverse development contexts. Furthermore, this study introduces innovative analytical tools for researchers in software project management, software engineering and related fields, providing a foundation for future work in areas such as cost estimation and hybrid project management. These insights contribute to a deeper understanding of Agile’s role in fostering collaboration and adaptability in dynamic digital ecosystems.

Computer software
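
The AgileScore index itself is not specified in the abstract; as a hedged sketch of what a composite index over repository metadata could look like, consider a weighted sum of Agile-related signals. All field names and weights below are invented for illustration.

```python
def agile_score(repo):
    """Hypothetical composite index in the spirit of AgileScore; the paper's
    actual signals and weights are not reproduced here."""
    signals = {
        "has_issue_board": 1.0 if repo.get("has_projects") else 0.0,
        "release_cadence": min(repo.get("releases_per_year", 0) / 12, 1.0),
        "pr_review_rate": repo.get("reviewed_pr_ratio", 0.0),
        "sprint_labels": 1.0 if repo.get("uses_sprint_labels") else 0.0,
    }
    weights = {"has_issue_board": 0.2, "release_cadence": 0.3,
               "pr_review_rate": 0.3, "sprint_labels": 0.2}
    return sum(weights[k] * v for k, v in signals.items())

print(agile_score({"has_projects": True, "releases_per_year": 6,
                   "reviewed_pr_ratio": 0.8, "uses_sprint_labels": False}))
```
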
DOAJ Open Access 2025
Simulating Machining of Continuously Variable Transmission Pulleys in Complex Blade Tool Trajectory in ANSYS Software Package

A.A. Generalova, A.A. Nikulin, D.S. Bychkov

Background. At the stage of developing the technological process of turning a part, it is important to identify the processes that occur as a result of the impact of the tool on the workpiece. The stresses, pressures, forces and temperature deformations resulting from the action of the cutter largely determine the properties of the finished part. The most important step at the stage of technological development of a part is the modeling of the cutting process. Computer modeling makes it possible to fully simulate the turning process and take into account the parameters of workpiece rotation, cutting modes, gravity and inertia of the workpiece during rotation, forced vibrations and self-oscillations, as well as the chip formation process. The purpose of the study is to develop a computer model of the complex spatial movement of a cutting tool that makes it possible to study the stress-strain and thermal state of the cutting process and the conditions of chip formation, to predict the quality parameters of the surface layer, and to take into account the characteristics of the workpiece and the cutting tool, with the subsequent possibility of parameterizing the process. Materials and methods. The theoretical and experimental studies carried out in this work are based on the basic principles of cutting theory, materials science, and strength of materials. The virtual simulation was carried out in the Ansys Workbench software package. Results. A computer model of the complex movement of a cutting tool and the destruction of a rotating workpiece with chip formation has been developed. Conclusions. The results obtained from the computer model of the turning and chip formation process agree with physical studies of the machining process.

Engineering (General). Civil engineering (General)
DOAJ Open Access 2025
Automatic Scheduling Search Optimization Method Based on TVM

HAN Lin, WANG Yifan, LI Jianan, GAO Wei

With the rapid development of artificial intelligence and the continuous emergence of new operators and hardware, the development and maintenance of operator libraries face enormous challenges. Relying solely on manual optimization can no longer meet the needs of improving AI model performance. Ansor is an operator auto-scheduling technique based on TVM, which can search for the best scheduling schemes for different backend deep learning models or operators and generate high-performance code without requiring users to manually define templates. However, the huge search space results in low search efficiency. Therefore, two optimization schemes are proposed: one selects the optimal performance sketch using a reinforcement learning algorithm, and the other predicts mutation rules using machine learning models. Both schemes aim to reduce the search time for the optimal scheduling scheme and quickly generate high-performance operators. To evaluate the effectiveness of the optimizations, three models (such as ResNet-50) and three operators (such as conv2d) are tested and evaluated. The results show that the optimized Ansor can generate target programs with the same or even better performance than before in only 70%–75% of the search time. Moreover, at the optimal iteration number, the inference speed of the target program can be improved by up to 5%.

Computer software, Technology (General)
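
The second scheme (predicting mutation rules with a machine learning model) can be sketched generically: rather than sampling mutation rules uniformly during evolutionary search, a learned scorer picks the most promising rule for each candidate schedule. Everything below (rule names, the scorer, the latency stub) is an illustrative stand-in, not Ansor's actual code.

```python
import random

MUTATION_RULES = ["tile_size", "parallel", "unroll", "compute_location"]

def predicted_gain(schedule, rule):
    # Stand-in for a trained model scoring schedule features; random here
    # only to keep the sketch self-contained and runnable.
    return random.random()

def mutate(schedule, rule):
    return schedule + [rule]   # illustrative: schedules as rule histories

def measured_latency(schedule):
    return random.random()     # stand-in for on-device measurement

def evolve(population, rounds=10):
    for _ in range(rounds):
        children = []
        for sched in population:
            # Apply only the rule the model ranks highest, shrinking the
            # effective search space versus uniform rule sampling.
            best_rule = max(MUTATION_RULES, key=lambda r: predicted_gain(sched, r))
            children.append(mutate(sched, best_rule))
        # Keep the fastest half of parents + children.
        population = sorted(population + children, key=measured_latency)[: len(population)]
    return population

print(evolve([[] for _ in range(4)])[0])
```
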
arXiv Open Access 2025
How Does Users' App Knowledge Influence the Preferred Level of Detail and Format of Software Explanations?

Martin Obaidi, Jannik Fischbach, Marc Herrmann et al.

Context and Motivation: Due to their increasing complexity, everyday software systems are becoming increasingly opaque for users. A frequently adopted method to address this difficulty is explainability, which aims to make systems more understandable and usable. Question/problem: However, explanations can also lead to unnecessary cognitive load. Therefore, adapting explanations to the actual needs of a user is a frequently faced challenge. Principal ideas/results: This study investigates factors influencing users' preferred level of detail and form of an explanation (e.g., short text or video tutorial) in software. We conducted an online survey with 58 participants to explore relationships between demographics, software usage, app-specific knowledge, and the preferred explanation form and level of detail. The results indicate that users prefer moderately detailed explanations in short text formats. Correlation analyses revealed no relationship between app-specific knowledge and the preferred level of detail of an explanation, but an influence of demographic aspects (like gender) on app-specific knowledge and its impact on application confidence were observed, pointing to a possible mediated relationship between knowledge and preferences for explanations. Contribution: Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors, supporting the development of adaptive explanation systems tailored to user expertise. These findings support requirements analysis processes by highlighting important factors that should be considered in user-centered methods such as personas.

arXiv Open Access 2025
Benchmarking AI Models in Software Engineering: A Review, Search Tool, and Unified Approach for Elevating Benchmark Quality

Roham Koohestani, Philippe de Bekker, Begüm Koç et al.

Benchmarks are essential for unified evaluation and reproducibility. The rapid rise of Artificial Intelligence for Software Engineering (AI4SE) has produced numerous benchmarks for tasks such as code generation and bug repair. However, this proliferation has led to major challenges: (1) fragmented knowledge across tasks, (2) difficulty in selecting contextually relevant benchmarks, (3) lack of standardization in benchmark creation, and (4) flaws that limit utility. Addressing these requires a dual approach: systematically mapping existing benchmarks for informed selection and defining unified guidelines for robust, adaptable benchmark development. We conduct a review of 247 studies, identifying 273 AI4SE benchmarks since 2014. We categorize them, analyze limitations, and expose gaps in current practices. Building on these insights, we introduce BenchScout, an extensible semantic search tool for locating suitable benchmarks. BenchScout employs automated clustering with contextual embeddings of benchmark-related studies, followed by dimensionality reduction. In a user study with 22 participants, BenchScout achieved usability, effectiveness, and intuitiveness scores of 4.5, 4.0, and 4.1 out of 5. To improve benchmarking standards, we propose BenchFrame, a unified framework for enhancing benchmark quality. Applying BenchFrame to HumanEval yielded HumanEvalNext, featuring corrected errors, improved language conversion, higher test coverage, and greater difficulty. Evaluating 10 state-of-the-art code models on HumanEval, HumanEvalPlus, and HumanEvalNext revealed average pass-at-1 drops of 31.22% and 19.94%, respectively, underscoring the need for continuous benchmark refinement. We further examine BenchFrame's scalability through an agentic pipeline and confirm its generalizability on the MBPP dataset. All review data, user study materials, and enhanced benchmarks are publicly released.

en cs.SE, cs.AI
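
BenchScout's pipeline shape (embed benchmark-related studies, reduce dimensionality, cluster) can be sketched with scikit-learn; the real tool uses contextual embeddings, so the TF-IDF features and toy abstracts below are simplifying stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in abstracts of benchmark-related studies.
abstracts = [
    "benchmark for code generation from natural language",
    "dataset of real bug fixes for automated program repair",
    "test suite generation benchmark for Java projects",
    "code generation tasks with unit tests",
]
X = TfidfVectorizer().fit_transform(abstracts)         # embed (TF-IDF stand-in)
X2 = PCA(n_components=2).fit_transform(X.toarray())    # reduce dimensionality
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)
print(labels)  # benchmarks grouped by task similarity
```
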
arXiv Open Access 2025
No Silver Bullets: Why Understanding Software Cycle Time is Messy, Not Magic

John C. Flournoy, Carol S. Lee, Maggie Wu et al.

Understanding factors that influence software development velocity is crucial for engineering teams and organizations, yet empirical evidence at scale remains limited. A more robust understanding of the dynamics of cycle time may help practitioners avoid pitfalls in relying on velocity measures while evaluating software work. We analyze cycle time, a widely-used metric measuring time from ticket creation to completion, using a dataset of over 55,000 observations across 216 organizations. Through Bayesian hierarchical modeling that appropriately separates individual and organizational variation, we examine how coding time, task scoping, and collaboration patterns affect cycle time while characterizing its substantial variability across contexts. We find precise but modest associations between cycle time and factors including coding days per week, number of merged pull requests, and degree of collaboration. However, these effects are set against considerable unexplained variation both between and within individuals. Our findings suggest that while common workplace factors do influence cycle time in expected directions, any single observation provides limited signal about typical performance. This work demonstrates methods for analyzing complex operational metrics at scale while highlighting potential pitfalls in using such measurements to drive decision-making. We conclude that improving software delivery velocity likely requires systems-level thinking rather than individual-focused interventions.
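
The paper's Bayesian hierarchical modeling can be approximated, for illustration, with a frequentist mixed-effects model that gives each organization its own random intercept. The synthetic data and column names below are stand-ins, not the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic tickets: cycle time driven by workplace factors plus an
# organization-level random effect and individual noise.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "org_id": rng.integers(0, 20, n),
    "coding_days_per_week": rng.uniform(1, 5, n),
    "merged_prs": rng.poisson(3, n),
})
org_effect = rng.normal(0, 1.0, 20)[df["org_id"]]
df["cycle_time"] = (5 - 0.4 * df["coding_days_per_week"]
                    - 0.2 * df["merged_prs"] + org_effect
                    + rng.normal(0, 2.0, n))

# Random intercept per organization separates org-level from residual variation.
model = smf.mixedlm("cycle_time ~ coding_days_per_week + merged_prs",
                    data=df, groups=df["org_id"])
print(model.fit().summary())
```
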

S2 Open Access 2020
Improving Deep Video Compression by Resolution-adaptive Flow Coding

Zhihao Hu, Zhenghao Chen, Dong Xu et al.

In learning-based video compression approaches, compressing pixel-level optical flow maps by developing new motion vector (MV) encoders is an essential issue. In this work, we propose a new framework called Resolution-adaptive Flow Coding (RaFC) to effectively compress the flow maps globally and locally, in which we use multi-resolution representations instead of single-resolution representations for both the input flow maps and the output motion features of the MV encoder. To handle complex or simple motion patterns globally, our frame-level scheme RaFC-frame automatically decides the optimal flow map resolution for each video frame. To cope with different types of motion patterns locally, our block-level scheme RaFC-block can also select the optimal resolution for each local block of motion features. In addition, the rate-distortion criterion is applied to both RaFC-frame and RaFC-block to select the optimal motion coding mode for effective flow coding. Comprehensive experiments on four benchmark datasets (HEVC, VTL, UVG and MCL-JCV) clearly demonstrate the effectiveness of our overall RaFC framework combining RaFC-frame and RaFC-block for video compression.

140 citations en Computer Science
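
The rate-distortion criterion both RaFC-frame and RaFC-block rely on reduces to picking, among candidate resolutions, the one minimizing D + λR. A minimal sketch, with illustrative candidate values:

```python
def rd_select(candidates, lam=1e-4):
    """candidates: list of (resolution, distortion, bits) tuples;
    return the candidate minimizing the rate-distortion cost D + lam * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# e.g. full, half, and quarter resolution of one block's motion features
best = rd_select([((64, 64), 0.020, 1800),
                  ((32, 32), 0.045, 700),
                  ((16, 16), 0.110, 260)])
print(best[0])  # -> (32, 32): the mid resolution wins this trade-off
```
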
DOAJ Open Access 2024
HLFSRNN-MIL: A Hybrid Multi-Instance Learning Model for 3D CT Image Classification

Huilong Chen, Xiaoxia Zhang

At present, many diseases are diagnosed by computed tomography (CT) imaging, which affects the health and lives of millions of people. In fighting disease, early detection from 3D CT images via deep learning is very important for patients. The paper offers a hybrid multi-instance learning model (HLFSRNN-MIL), which hybridizes high-low frequency feature fusion (HLFFF) with a sequential recurrent neural network (SRNN) for CT image classification tasks. Firstly, the hybrid model uses ResNet-50 as the deep feature extractor. The main feature of HLFSRNN-MIL lies in its ability to make full use of the advantages of the HLFFF and SRNN methods while compensating for their individual weaknesses; i.e., the HLFFF can extract more targeted feature information and avoid excessive gradient fluctuation during training, and the SRNN processes the time-related sequences before classification. The experimental study of the HLFSRNN-MIL model uses two public CT datasets: the Cancer Imaging Archive (TCIA) dataset on lung cancer and the China Consortium of Chest CT Image Investigation (CC-CCII) dataset on pneumonia. The experimental results show that the model exhibits better performance and accuracy. On the TCIA dataset, HLFSRNN-MIL with Residual Network (ResNet) as the feature extractor achieves an accuracy (ACC) of 0.992 and an area under the curve (AUC) of 0.997. On the CC-CCII dataset, HLFSRNN-MIL achieves an ACC of 0.994 and an AUC of 0.997. Finally, compared with existing methods, HLFSRNN-MIL has clear advantages in all aspects. These experimental results demonstrate that HLFSRNN-MIL can effectively solve disease classification problems in the field of 3D CT images.

Technology, Engineering (General). Civil engineering (General)
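
One common way to realize a high-low frequency split of feature maps is pooling for the low band and the residual for the high band; the sketch below assumes that construction, which may differ from the paper's actual HLFFF module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighLowSplit(nn.Module):
    """Hedged sketch of a high/low-frequency feature split: average pooling
    keeps the smooth low band, the residual is the high band."""
    def forward(self, x):                              # x: (N, C, H, W) features
        low = F.avg_pool2d(x, 2)                       # smooth, low-frequency band
        low_up = F.interpolate(low, size=x.shape[-2:], mode="nearest")
        high = x - low_up                              # residual, high-frequency band
        return low_up, high

low, high = HighLowSplit()(torch.randn(1, 64, 32, 32))
```
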
DOAJ Open Access 2024
Bug Report Analytics for Software Reliability Assessment using Hybrid Swarm-Evolutionary Algorithm

Sangeeta, Sitender, Rachna Jain et al.

Background: With the growing advances in the digital world, software development demands are increasing at an exponential rate. To ensure the reliability of software, high-performance tools for bug report analysis are needed. Aim: This paper proposes a new ‘Iterative Software Reliability’ model based on one of the most recent Software Development Life Cycle (SDLC) approaches. Method: The proposed iterative failure rate model assumes that new functionality enhancement occurs in each iteration of software development and, accordingly, design modification is made at each stage of software development. In terms of defects, testing effort, and added functionality, these changing needs in each iteration are reflected in the proposed model using iterative factors. The proposed model has been tested on twelve Eclipse and six JDT software failure datasets. Model parameters have been estimated using a hybrid swarm-evolutionary algorithm. Results: The proposed model shows about 32% and 55% improved efficiency on the Eclipse and JDT datasets respectively, compared to other models such as the Jelinski-Moranda model, the Schick-Wolverton model, and the Goel-Okumoto imperfect debugging model. Conclusion: In each analysis performed, the proposed model reaches acceptable performance and could be applied to other software failure datasets for further validation.

Computer software
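
For a feel of evolutionary parameter estimation in reliability modeling, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) with SciPy's differential evolution. This is a simpler stand-in for the paper's iterative model and hybrid swarm-evolutionary algorithm, and the failure counts are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)        # test weeks (illustrative)
cum_failures = np.array([12, 21, 28, 33, 37, 40, 42, 43.0]) # cumulative defects found

def sse(params):
    # Sum of squared errors of the Goel-Okumoto mean value function.
    a, b = params
    return np.sum((cum_failures - a * (1 - np.exp(-b * t))) ** 2)

result = differential_evolution(sse, bounds=[(1, 200), (1e-3, 2.0)], seed=0)
a_hat, b_hat = result.x
print(f"a = {a_hat:.1f} expected total defects, b = {b_hat:.3f} detection rate")
```
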
arXiv Open Access 2024
"How do people decide?": A Model for Software Library Selection

Minaoar Hossain Tanzil, Gias Uddin, Ann Barcomb

Modern-day software development is often facilitated by the reuse of third-party software libraries. Despite the significant effort to understand the factors contributing to library selection, it is relatively unknown how libraries are selected and what tools are still needed to support the selection process. Using Straussian grounded theory, we conducted and analyzed interviews of 24 professionals across the world and derived a model of the library selection process, which is governed by six selection patterns (i.e., rules). The model draws from marketing theory and lays the groundwork for the development of a library selection tool which captures the technical and non-technical aspects developers consider.

en cs.SE, cs.HC
arXiv Open Access 2024
A Symbolic Computing Perspective on Software Systems

Arthur C. Norman, Stephen M. Watt

Symbolic mathematical computing systems have served as a canary in the coal mine of software systems for more than sixty years. They have introduced or been early adopters of programming language ideas such as dynamic memory management, arbitrary precision arithmetic and dependent types. These systems have the feature of being highly complex while at the same time operating in a domain where results are well-defined and clearly verifiable. These software systems span multiple layers of abstraction with concerns ranging from instruction scheduling and cache pressure up to algorithmic complexity of constructions in algebraic geometry. All of the major symbolic mathematical computing systems include low-level code for arithmetic, memory management and other primitives, a compiler or interpreter for a bespoke programming language, a library of high-level mathematical algorithms, and some form of user interface. Each of these parts invokes multiple deep issues. We present some lessons learned from this environment and free-flowing opinions on topics including:
* Portability of software across architectures and decades;
* Infrastructure to embrace and infrastructure to avoid;
* Choosing base abstractions upon which to build;
* How to get the most out of a small code base;
* How developments in compilers, both to optimise and to validate code, have always been and remain of critical importance, with plenty of remaining challenges;
* The way in which individuals such as Alan Mycroft, who can span from hand-crafting Z80 machine code up to the most abstruse high-level code analysis techniques, are needed; and
* Why it is important to teach full-stack thinking to the next generation.

en cs.SC, cs.MS
arXiv Open Access 2024
PVAC: Package Version Activity Categorizer, Leveraging Semantic Versioning in a Heterogeneous System

Shane K. Panter, Luke Hindman, Nasir U. Eisty

Context: Modern open-source software ecosystems, such as those managed by GNU/Linux distributions, are composed of numerous packages developed independently by diverse communities. These ecosystems employ package management tools to facilitate software installation and dependency resolution. However, these tools lack robust mechanisms for systematically evaluating the development activity and versioning dynamics within their heterogeneous software environments. Objective: This research aims to introduce a systematic method and a prototype tool for assessing version activity within heterogeneous package manager ecosystems, enabling quantitative analysis of software package updates. Method: We developed a Package Version Activity Categorizer (PVAC) that consists of three components: the Version Categorizer (VC), which categorizes diverse semantic version numbers; a Version Number Delta (VND) component, which calculates a numeric score representing the aggregated semantic version changes across packages at the ecosystem level; and an Activity Categorizer (AC), which categorizes the activity of individual packages within that ecosystem. PVAC utilizes tailored regular expressions to parse semantic versioning details (epoch, major, minor, and patch versions) from diverse package version strings, enabling consistent categorization and quantitative scoring of version changes. Results: PVAC was empirically evaluated using a dataset of 22,535 packages drawn from recent releases of the Debian and Ubuntu GNU/Linux distributions. Our findings demonstrate PVAC's effectiveness for accurately categorizing versioning schemes and quantitatively measuring version activity across releases. We provide empirical evidence confirming that semantic versioning, including adapted variations, is predominantly employed across these ecosystems.
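
The kind of parsing PVAC's Version Categorizer performs can be sketched with a single regular expression over epoch/major/minor/patch fields; PVAC's actual expressions are tailored per ecosystem, so the pattern below is only indicative.

```python
import re

SEMVER_RE = re.compile(
    r"^(?:(?P<epoch>\d+):)?"            # optional Debian-style epoch
    r"(?P<major>\d+)\.(?P<minor>\d+)"   # major.minor
    r"(?:\.(?P<patch>\d+))?"            # optional patch
)

def parse_version(version_string):
    m = SEMVER_RE.match(version_string)
    if not m:
        return None                      # non-semantic scheme (e.g. dates, hashes)
    return {k: int(v) if v else 0 for k, v in m.groupdict().items()}

print(parse_version("1:2.38.1"))  # {'epoch': 1, 'major': 2, 'minor': 38, 'patch': 1}
```
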

DOAJ Open Access 2023
A Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments

Yeison Nolberto Cardona-Álvarez, Andrés Marino Álvarez-Meza, David Augusto Cárdenas-Peña et al.

An Open Brain–Computer Interface (OpenBCI) provides unparalleled freedom and flexibility through open-source hardware and firmware at a low cost. It exploits robust hardware platforms and powerful software development kits to create customized drivers with advanced capabilities. Still, several restrictions may significantly reduce the performance of OpenBCI. These limitations include the lack of effective communication between computers and peripheral devices and of the flexibility needed for fast configuration under specific protocols for neurophysiological data. This paper describes a flexible and scalable OpenBCI framework for electroencephalographic (EEG) data experiments using the Cyton acquisition board with updated drivers to maximize the hardware benefits of ADS1299 platforms. The framework handles distributed computing tasks and supports multiple sampling rates, communication protocols, free electrode placement, and single-marker synchronization. As a result, the OpenBCI system delivers real-time feedback and controlled execution of EEG-based clinical protocols implementing the steps of neural recording, decoding, stimulation, and real-time analysis. In addition, the system incorporates automatic background configuration and user-friendly widgets for stimulus delivery. A motor imagery task tests the closed-loop BCI, designed to enable real-time streaming within the required latency and jitter ranges. The presented framework therefore offers a promising solution for tailored neurophysiological data processing.

Chemical technology
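
The authors ship their own updated Cyton drivers, which are not reproduced here. Purely to show the general shape of acquiring EEG from a Cyton board in Python, here is a sketch using BrainFlow, a separate open-source SDK (the serial port is a placeholder).

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# Connect to a Cyton board over its USB dongle; the port is hypothetical.
params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"
board = BoardShim(BoardIds.CYTON_BOARD, params)

board.prepare_session()
board.start_stream()
time.sleep(5)                                  # record for five seconds
data = board.get_board_data()                  # channels x samples array
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD)
print(data[eeg_rows].shape)                    # 8 EEG channels at 250 Hz
```
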
DOAJ Open Access 2023
Batched Eigenvalue Decomposition Algorithms for Hermitian Matrices on GPU

HUANG Rongfeng, LIU Shifang, ZHAO Yonghua

Batched matrix computation problems exist widely in scientific computing and engineering applications. With rapid performance improvements, the GPU has become an important tool for solving such problems. Eigenvalue decomposition belongs to the class of two-sided decompositions and must be solved by iterative algorithms, and the number of iterations can vary across matrices. Therefore, designing eigenvalue decomposition algorithms for batched matrices on the GPU is more challenging than designing batched algorithms for one-sided decompositions such as LU decomposition. This paper proposes batched algorithms based on the Jacobi algorithm for eigenvalue decomposition of Hermitian matrices. For matrices that cannot reside wholly in shared memory, a blocking technique is used to improve arithmetic intensity, thus improving the use of GPU resources. The algorithms presented in this paper run entirely on the GPU, avoiding communication between the CPU and GPU. Kernel fusion is adopted to decrease the overhead of kernel launches and global memory accesses. Experimental results on a V100 GPU show that our algorithms outperform existing work. Performance evaluation with the Roofline model indicates that our implementations are close to the upper bound, approaching 4.11 TFLOPS.

Computer software, Technology (General)
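
The Jacobi algorithm at the core of the batched method repeatedly applies plane rotations that each zero one off-diagonal entry. A single-matrix, real-symmetric NumPy sketch of one cyclic sweep (the paper handles the Hermitian case, batched on GPU):

```python
import numpy as np

def jacobi_sweep(A):
    """One cyclic sweep of the classical Jacobi eigenvalue algorithm for a
    real symmetric matrix; each rotation zeroes the (p, q) entry."""
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-12:
                continue
            # Angle satisfying tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p]).
            theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J
    return A

A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
for _ in range(5):                 # sweeps converge quadratically
    A = jacobi_sweep(A)
print(np.diag(A))                  # approximate eigenvalues on the diagonal
```
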
arXiv Open Access 2023
Contradicting Motivations in Civic Tech Software Development: Analysis of a Grassroots Project

Antti Knutas, Dominik Siemon, Natasha Tylosky et al.

Grassroots civic tech, or software for social change, is an emerging practice where people create and then use software to create positive change in their community. In this interpretive case study, we apply Engeström's expanded activity theory as a theoretical lens to analyze motivations, how they relate to, for example, group goals or tool-supported development processes, and what contradictions emerge. Participants agreed on big-picture motivations, such as learning new skills or improving the community. The main contradictions occurred inside activity systems over details of implementation, or between system motives, rather than over big-picture motivations. The two most significant contradictions involved planning and converging on design and technical approaches. These findings demonstrate the value of examining civic tech development processes as evolving activity systems.

arXiv Open Access 2023
Applications of Causality and Causal Inference in Software Engineering

Patrick Chadbourne, Nasir Eisty

Causal inference is the study of causal relationships between events and the statistical practice of inferring these relationships through interventions and other techniques. Causal reasoning is any line of work toward determining causal relationships, including causal inference. This paper explores the relationship between causal reasoning and various fields of software engineering. It aims to uncover which software engineering fields currently benefit from the study of causal inference and causal reasoning, as well as which aspects of various problems are best addressed using this methodology. With this information, the paper also aims to identify future subjects and fields that would benefit from this form of reasoning and to provide that information to future researchers. The paper follows a systematic literature review, including: the formulation of a search query; inclusion and exclusion criteria for the search results; clarifying questions answered by the found literature; and synthesis of the results of the review. Close examination of the 45 papers relevant to the research questions revealed that the majority of causal reasoning related to software engineering concerns testing, through root cause localization. Furthermore, most causal reasoning is done informally through an exploratory process of forming a causality graph, as opposed to strict statistical analysis or the introduction of interventions. Finally, causal reasoning is also used as a justification for many tools intended to make software more human-readable by providing additional causal information to logging processes or modeling languages.

en cs.SE
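
The informal causality-graph style the review found dominant in root cause localization can be sketched with networkx: components become nodes, suspected cause-effect links become edges, and candidate root causes are cause-free ancestors of the failure. The edges below are invented.

```python
import networkx as nx

# Suspected cause -> effect links, e.g. mined from logs or incident traces.
g = nx.DiGraph()
g.add_edges_from([
    ("config-change", "db-timeout"),
    ("db-timeout", "api-errors"),
    ("api-errors", "checkout-failures"),
])

failure = "checkout-failures"
# Candidate root causes: ancestors of the failure with no causes of their own.
roots = [n for n in nx.ancestors(g, failure) if g.in_degree(n) == 0]
print(roots)  # ['config-change']
```
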

Page 46 of 407,625