Results for "Computer software"

Showing 20 of ~2,828,377 results · from CrossRef, arXiv

arXiv Open Access 2025
Quantum Algorithm Software for Condensed Matter Physics

T. Farajollahpour

This report offers a comprehensive analysis of the evolving landscape of quantum algorithm software specifically tailored for condensed matter physics. It examines fundamental quantum algorithms such as Variational Quantum Eigensolver (VQE), Quantum Phase Estimation (QPE), Quantum Annealing (QA), Quantum Approximate Optimization Algorithm (QAOA), and Quantum Machine Learning (QML) as applied to key condensed matter problems including strongly correlated systems, topological phases, and quantum magnetism. This review details leading software development kits (SDKs) like Qiskit, Cirq, PennyLane, and Q#, and profiles key academic, commercial, and governmental initiatives driving innovation in this domain. Furthermore, it assesses current challenges, including hardware limitations, algorithmic scalability, and error mitigation, and explores future trajectories, anticipating new algorithmic breakthroughs, software enhancements, and the impact of next-generation quantum hardware. The central theme emphasizes the critical role of a co-design approach, where algorithms, software, and hardware evolve in tandem, and highlights the necessity of standardized benchmarks to accelerate progress towards leveraging quantum computation for transformative discoveries in condensed matter physics.

en cond-mat.str-el, cond-mat.dis-nn
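The VQE workflow surveyed in the abstract above can be illustrated with a toy single-qubit example. This is a sketch under stated assumptions, not drawn from the report: the Ry ansatz, the Z Hamiltonian, and the learning rate and step count are all illustrative choices.

```python
import math

def expectation_z(theta):
    """<psi(theta)|Z|psi(theta)> for |psi> = Ry(theta)|0>.

    Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so the Z
    expectation is cos(theta/2)**2 - sin(theta/2)**2 = cos(theta).
    """
    return math.cos(theta)

def vqe_minimize(energy, theta=0.1, lr=0.4, steps=200):
    """Toy VQE loop: gradient descent on a single circuit parameter.

    The parameter-shift rule below gives the exact gradient for this
    ansatz; on real hardware each energy() call would be a measured
    expectation value rather than an analytic function.
    """
    for _ in range(steps):
        grad = 0.5 * (energy(theta + math.pi / 2) - energy(theta - math.pi / 2))
        theta -= lr * grad
    return theta, energy(theta)
```

Starting from a small nonzero angle, the loop converges to the ground-state energy -1 of the Z "Hamiltonian"; SDKs such as Qiskit and PennyLane automate exactly this measure-and-update cycle for many-qubit ansatze.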
arXiv Open Access 2025
Towards Emotionally Intelligent Software Engineers: Understanding Students' Self-Perceptions After a Cooperative Learning Experience

Allysson Allex Araújo, Marcos Kalinowski, Matheus Paixao et al.

[Background] Emotional Intelligence (EI) can impact Software Engineering (SE) outcomes through improved team communication, conflict resolution, and stress management. SE workers face increasing pressure to develop both technical and interpersonal skills, as modern software development emphasizes collaborative work and complex team interactions. Despite EI's documented importance in professional practice, SE education continues to prioritize technical knowledge over emotional and social competencies. [Objective] This paper analyzes SE students' self-perceptions of their EI after a two-month cooperative learning project, using Mayer and Salovey's four-ability model to examine how students handle emotions in collaborative development. [Method] We conducted a case study with 29 SE students organized into four squads within a project-based learning course, collecting data through questionnaires and focus groups that included brainwriting and sharing circles, then analyzing the data using descriptive statistics and open coding. [Results] Students demonstrated stronger abilities in managing their own emotions compared to interpreting others' emotional states. Despite limited formal EI training, they developed informal strategies for emotional management, including structured planning and peer support networks, which they connected to improved productivity and conflict resolution. [Conclusion] This study shows how SE students perceive EI in a collaborative learning context and provides evidence-based insights into the important role of emotional competencies in SE education.

en cs.SE
arXiv Open Access 2025
A Defect Classification Framework for AI-Based Software Systems (AI-ODC)

Mohammed O. Alannsary

Artificial Intelligence has gained a lot of attention recently; it has been utilized in several fields ranging from daily life activities, such as responding to emails and scheduling appointments, to manufacturing and automating work activities. Artificial Intelligence systems are mainly implemented as software solutions, and it is essential to discover and remove software defects to assure their quality using defect analysis, one of the major activities that contribute to software quality. Despite the proliferation of AI-based systems, current defect analysis models fail to capture their unique attributes. This paper proposes a framework inspired by the Orthogonal Defect Classification (ODC) paradigm that enables defect analysis of Artificial Intelligence systems while recognizing their special attributes and characteristics. This study demonstrated the feasibility of modifying ODC for AI systems to classify their defects. The ODC was adjusted to accommodate the Data, Learning, and Thinking aspects of AI systems, which are newly introduced classification dimensions. This adjustment involved the introduction of an additional attribute to the ODC attributes, the incorporation of a new severity level, and the substitution of impact areas with characteristics pertinent to AI systems. The framework was showcased by applying it to a publicly available Machine Learning bug dataset, with results analyzed through one-way and two-way analysis. The case study indicated that defects occurring during the Learning phase were the most prevalent and were significantly linked to high-severity classifications. In contrast, defects identified in the Thinking phase had a disproportionate effect on trustworthiness and accuracy. These findings illustrate AI-ODC's capability to identify high-risk defect categories and inform focused quality assurance measures.

en cs.SE, cs.AI
arXiv Open Access 2024
Sustaining Maintenance Labor for Healthy Open Source Software Projects through Human Infrastructure: A Maintainer Perspective

Johan Linåker, Georg J. P. Link, Kevin Lumbard

Background: Open Source Software (OSS) fuels our global digital infrastructure but is commonly maintained by small groups of people whose time and labor represent a depletable resource. For OSS projects to stay sustainable, i.e., viable and maintained over time without interruption or weakening, maintenance labor requires an underlying infrastructure to be supported and secured. Aims: Using the construct of human infrastructure, our study aims to investigate how maintenance labor can be supported and secured to enable the creation and maintenance of sustainable OSS projects, viewed from the maintainers' perspective. Method: In our exploration, we interviewed ten maintainers from nine well-adopted OSS projects. We coded the data in two steps using investigator triangulation. Results: We constructed a framework of infrastructure design that provides insight for OSS projects in the design of their human infrastructure. The framework specifically highlights the importance of human factors, e.g., securing a work-life balance and proactively managing social pressure, toxicity, and diversity. We also note both differences and overlaps in how the infrastructure needs to support and secure maintenance labor from maintainers and the wider OSS community, respectively. Funding is specifically highlighted as an important enabler for both types of resources. Conclusions: The study contributes to the qualitative understanding of the importance, sensitivity, and risk of depletion of the maintenance labor required to build and maintain healthy OSS projects. Human infrastructure is pivotal in ensuring that maintenance labor is sustainable, and by extension the OSS projects on which we all depend.

en cs.SE
arXiv Open Access 2024
AssetHarvester: A Static Analysis Tool for Detecting Secret-Asset Pairs in Software Artifacts

Setu Kumar Basak, K. Virgil English, Ken Ogura et al.

GitGuardian monitored secrets exposure in public GitHub repositories and reported that developers leaked over 12 million secrets (database and other credentials) in 2023, indicating a 113% surge from 2021. Despite the availability of secret detection tools, developers ignore the tools' reported warnings because of false positives (25%-99%). However, each secret protects assets of different values accessible through asset identifiers (a DNS name and a public or private IP address). The asset information for a secret can aid developers in filtering false positives and prioritizing secret removal from the source code. However, existing secret detection tools do not provide the asset information, making it difficult for developers to filter secrets only by looking at the secret value or to find the assets manually for each reported secret. The goal of our study is to aid software practitioners in prioritizing secrets removal by providing the asset information protected by the secrets through our novel static analysis tool. We present AssetHarvester, a static analysis tool to detect secret-asset pairs in a repository. Since the location of the asset can be distant from where the secret is defined, we investigated secret-asset co-location patterns and found four patterns. To identify the secret-asset pairs of the four patterns, we utilized three approaches (pattern matching, data flow analysis, and fast-approximation heuristics). We curated a benchmark of 1,791 secret-asset pairs of four database types extracted from 188 public GitHub repositories to evaluate the performance of AssetHarvester. AssetHarvester demonstrates precision (97%), recall (90%), and F1-score (94%) in detecting secret-asset pairs. Our findings indicate that the data flow analysis employed in AssetHarvester detects secret-asset pairs with 0% false positives and aids in improving the recall of secret detection tools.

en cs.CR, cs.SE
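As a rough illustration of the pattern-matching approach named in the abstract above, the sketch below extracts (secret, asset) pairs from database connection URLs, where a password and the host it protects are co-located in one string. The regex and field names are hypothetical illustrations, not AssetHarvester's actual rules.

```python
import re

# Hypothetical co-location pattern: a database URL carries a secret
# (the password) next to the asset it protects (the host).
DB_URL = re.compile(
    r"(?P<scheme>postgres|mysql|mongodb)://"
    r"(?P<user>[^:@/\s]+):(?P<secret>[^@/\s]+)@"
    r"(?P<host>[^:/\s]+)(?::(?P<port>\d+))?"
)

def find_secret_asset_pairs(source: str):
    """Return (secret, asset) pairs found in a source string."""
    return [(m.group("secret"), m.group("host"))
            for m in DB_URL.finditer(source)]
```

Pattern matching like this only covers the case where secret and asset sit in the same literal; the paper's other co-location patterns require the data flow analysis and heuristics it describes.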
arXiv Open Access 2023
Sustainability is Stratified: Toward a Better Theory of Sustainable Software Engineering

Sean McGuire, Erin Shultz, Bimpe Ayoola et al.

Background: Sustainable software engineering (SSE) means creating software in a way that meets present needs without undermining our collective capacity to meet our future needs. It is typically conceptualized as several intersecting dimensions or "pillars" -- environmental, social, economic, technical and individual. However, these pillars are theoretically underdeveloped and require refinement. Objectives: The objective of this paper is to generate a better theory of SSE. Method: First, a scoping review was conducted to understand the state of research on SSE and identify existing models thereof. Next, a meta-synthesis of qualitative research on SSE was conducted to critique and improve the existing models identified. Results: 961 potentially relevant articles were extracted from five article databases. These articles were de-duplicated and then screened independently by two screeners, leaving 243 articles to examine. Of these, 109 were non-empirical, the most common empirical method was systematic review, and no randomized controlled experiments were found. Most papers focus on ecological sustainability (158) and the sustainability of software products (148) rather than processes. A meta-synthesis of 36 qualitative studies produced several key propositions, most notably, that sustainability is stratified (has different meanings at different levels of abstraction) and multisystemic (emerges from interactions among multiple social, technical, and sociotechnical systems). Conclusion: The academic literature on SSE is surprisingly non-empirical. More empirical evaluations of specific sustainability interventions are needed. The sustainability of software development products and processes should be conceptualized as multisystemic and stratified, and assessed accordingly.

arXiv Open Access 2022
Two case studies on implementing best practices for Software Process Improvement

Bartosz Walter, Branko Marovic, Ivan Garnizov et al.

Software Process Improvement requires significant effort related not only to the identification of relevant issues and providing an adequate response to them but also to the implementation and adoption of the changes. Best practices provide recommendations to software teams on how to address the identified objectives in practice, based on aggregated experience and knowledge. In the paper, we present the GEANT experience and observations from the process of adopting the best practices and present the setting we have been using.

arXiv Open Access 2022
Fork Entropy: Assessing the Diversity of Open Source Software Projects' Forks

Liang Wang, Zhiwen Zheng, Xiangchen Wu et al.

On open source software (OSS) platforms such as GitHub, forking and accepting pull-requests is an important approach for OSS projects to receive contributions, especially from external contributors who cannot directly commit into the source repositories. Having a large number of forks is often considered an indicator of a project being popular. While extensive studies have been conducted to understand the reasons for forking, communications between forks, and the features and impacts of forks, there are few quantitative measures that can provide a simple yet informative way to gain insights about an OSS project's forks besides their count. Inspired by studies on biodiversity and OSS team diversity, in this paper, we propose an approach to measure the diversity of an OSS project's forks (i.e., its fork population). We devise a novel fork entropy metric based on Rao's quadratic entropy to measure such diversity according to the forks' modifications to project files. With properties including symmetry, continuity, and monotonicity, the proposed fork entropy metric is effective in quantifying the diversity of a project's fork population. To further examine the usefulness of the proposed metric, we conduct empirical studies with data retrieved from fifty projects on GitHub. We observe significant correlations between a project's fork entropy and different outcome variables including the project's external productivity measured by the number of external contributors' commits, acceptance rate of external contributors' pull-requests, and the number of reported bugs. We also observe significant interactions between fork entropy and other factors such as the number of forks. The results suggest that fork entropy effectively enriches our understanding of OSS projects' forks beyond the simple number of forks, and can potentially support further research and applications.

en cs.SE
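The metric in the abstract above builds on Rao's quadratic entropy, Q = sum_i sum_j d_ij * p_i * p_j over pairs of forks. A minimal sketch follows, assuming uniform fork weights and Jaccard distance between each fork's set of modified files; the paper's exact distance and weighting choices may differ.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """Jaccard distance between two sets of modified file paths."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def fork_entropy(fork_files):
    """Rao's quadratic entropy over a fork population.

    fork_files: one set of modified files per fork. Each fork gets
    uniform abundance p = 1/n (an assumption); d_ii = 0, so only
    unordered pairs contribute, each counted twice.
    """
    n = len(fork_files)
    if n < 2:
        return 0.0
    p = 1.0 / n
    return sum(2 * p * p * jaccard_distance(a, b)
               for a, b in combinations(fork_files, 2))
```

Forks that all touch the same files yield entropy 0, while forks modifying disjoint parts of the project push the value up, which matches the diversity interpretation in the abstract.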
arXiv Open Access 2021
Business Model Canvas Should Pay More Attention to the Software Startup Team

Kai-Kristian Kemell, Atte Elonen, Mari Suoranta et al.

Business Model Canvas (BMC) is a tool widely used to describe startup business models. Despite the various business aspects described, BMC places little emphasis on team-related factors. The importance of team-related factors in software development has been widely acknowledged in the literature. While not as extensively studied, the importance of teams in software startups is also recognized both in the literature and among practitioners. In this paper, we propose potential changes to BMC to have the tool better reflect the importance of the team, especially in a software startup environment. Based on a literature review, we identify various components related to the team, which we then further support with empirical data. We do so by means of a qualitative case study of five startups.

arXiv Open Access 2021
Number Parsing at a Gigabyte per Second

Daniel Lemire

With disks and networks providing gigabytes per second, parsing decimal numbers from strings becomes a bottleneck. We consider the problem of parsing decimal numbers to the nearest binary floating-point value. The general problem requires variable-precision arithmetic. However, we need at most 17 digits to represent 64-bit standard floating-point numbers (IEEE 754). Thus we can represent the decimal significand with a single 64-bit word. By combining the significand and precomputed tables, we can compute the nearest floating-point number using as few as one or two 64-bit multiplications. Our implementation can be several times faster than conventional functions present in standard C libraries on modern 64-bit systems (Intel, AMD, ARM and POWER9). Our work is available as open source software used by major systems such as Apache Arrow and Yandex ClickHouse. The Go standard library has adopted a version of our approach.

en cs.DS, cs.MS
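The fast path underlying this line of work can be sketched in a few lines: when the decimal significand fits in 53 bits and the exponent is small, both operands are exactly representable as doubles, so a single IEEE 754 multiplication or division yields the correctly rounded result. The paper's table-based technique for the general case is not reproduced here.

```python
def parse_decimal_fast_path(significand: int, exponent: int):
    """Exactly rounded value of significand * 10**exponent, fast path only.

    Returns None outside the fast path, where the general algorithm
    (precomputed tables and 64-bit multiplications) would take over.
    """
    if significand >= 1 << 53 or abs(exponent) > 22:
        return None
    # 10**e for e <= 22 equals 2**e * 5**e with 5**e < 2**53,
    # so it converts to a double exactly.
    power = float(10 ** abs(exponent))
    w = float(significand)  # exact: significand < 2**53
    return w * power if exponent >= 0 else w / power
```

For example, the digits of "3.1415" give significand 31415 and exponent -4, and one exact division produces the same double a correct parser must return; inputs like 17-digit significands with large exponents fall through to the slow path.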
arXiv Open Access 2020
How (Un)Happiness Impacts on Software Engineers in Agile Teams?

Luís Felipe Amorim, Marcelo Marinho, Suzana Sampaio

Information technology (IT) organizations are increasing the use of agile practices, which are based on a people-centred culture alongside the software development process. Thus, it is vital to understand the social and human factors of the individuals working in agile environments, such as happiness and unhappiness, and how these factors impact this kind of environment. Therefore, five case studies were conducted within agile projects, in a company that values innovation, aiming to identify how (un)happiness impacts software engineers in agile environments. According to the answers gathered from 67 participants through a survey and interviews, and using a cross-analysis, the happiness factors identified by agile teams were effective communication, motivated members, collaboration among members, proactive members, and present leaders.

arXiv Open Access 2019
A Longitudinal Study of Static Analysis Warning Evolution and the Effects of PMD on Software Quality in Apache Open Source Projects

Alexander Trautsch, Steffen Herbold, Jens Grabowski

Automated static analysis tools (ASATs) have become a major part of the software development workflow. Acting on the generated warnings, i.e., changing the code indicated in the warning, should be part of, at the latest, the code review phase. Despite this being a best practice in software development, there is still a lack of empirical research regarding the usage of ASATs in the wild. In this work, we want to study ASAT warning trends in software via the example of PMD as an ASAT and its usage in open source projects. We analyzed the commit history of 54 projects (with 112,266 commits in total), taking into account 193 PMD rules and 61 PMD releases. We investigate trends of ASAT warnings over up to 17 years for the selected study subjects regarding changes of warning types, short and long term impact of ASAT use, and changes in warning severities. We found that large global changes in ASAT warnings are mostly due to coding style changes regarding braces and naming conventions. We also found that, surprisingly, the influence of the presence of PMD in the build process of the project on warning removal trends for the number of warnings per lines of code is small and not statistically significant. Regardless, if we consider defect density as a proxy for external quality, we see a positive effect if PMD is present in the build configuration of our study subjects.

arXiv Open Access 2018
Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review

Suejb Memeti, Sabri Pllana, Alecio Binotto et al.

While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the highest possible extent demands advanced knowledge of various hardware architectures and parallel programming models. Furthermore, optimized software execution on parallel computing systems demands consideration of many parameters at compile-time and run-time. Determining the optimal set of parameters in a given execution context is a complex task, and therefore to address this issue researchers have proposed different approaches that use heuristic search or machine learning. In this paper, we undertake a systematic literature review to aggregate, analyze and classify the existing software optimization methods for parallel computing systems. We review approaches that use machine learning or meta-heuristics for software optimization at compile-time and run-time. Additionally, we discuss challenges and future research directions. The results of this study may help to better understand the state-of-the-art techniques that use machine learning and meta-heuristics to deal with the complexity of software optimization for parallel computing systems. Furthermore, it may aid in understanding the limitations of existing approaches and identification of areas for improvement.

en cs.DC, cs.PF
arXiv Open Access 2017
Round-Trip Sketches: Supporting the Lifecycle of Software Development Sketches from Analog to Digital and Back

Sebastian Baltes, Fabrice Hollerich, Stephan Diehl

Sketching is an important activity for understanding, designing, and communicating different aspects of software systems such as their requirements or architecture. Often, sketches start on paper or whiteboards, are revised, and may evolve into a digital version. Users may then print a revised sketch, change it on paper, and digitize it again. Existing tools focus on a paperless workflow, i.e., archiving analog documents, or rely on special hardware - they do not focus on integrating digital versions into the analog-focused workflow that many users follow. In this paper, we present the conceptual design and a prototype of LivelySketches, a tool that supports the "round-trip" lifecycle of sketches from analog to digital and back. The proposed workflow includes capturing both analog and digital sketches as well as relevant context information. In addition, users can link sketches to other related sketches or documents. They may access the linked artifacts and captured information using digital as well as augmented analog versions of the sketches. We further present results from a formative user study with four students and outline possible directions for future work.

en cs.SE
arXiv Open Access 2016
What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering)

Amritanshu Agrawal, Wei Fu, Tim Menzies

Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet Allocation (LDA). When run on different datasets, LDA suffers from "order effects", i.e., different topics are generated if the order of training data is shuffled. Such order effects introduce a systematic error for any study. This error can lead to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method in which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA's parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stack Overflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark), across different platforms (Linux, Macintosh), and for different kinds of LDA (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE's tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performance for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its "off-the-shelf" settings should now be deprecated. Also, in the future, SE papers that use LDA should be required to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability.

en cs.SE, cs.AI
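The optimizer behind LDADE is Differential Evolution. The minimal, self-contained DE/rand/1/bin loop below uses a generic objective standing in for LDADE's actual topic-stability score, and the population size, F, and CR values are illustrative defaults, not the paper's settings.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=0):
    """Minimize objective(x) over box-constrained x with DE/rand/1/bin.

    In LDADE, x would hold LDA hyperparameters (e.g. k, alpha, beta)
    and objective would score topic stability across shuffled runs.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            s = objective(trial)
            if s < scores[i]:  # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

Because each generation needs one objective evaluation per population member, and each LDADE evaluation means re-running LDA several times, keeping the population small is what makes the tuning practical.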
arXiv Open Access 2016
On the Benefit of Automated Static Analysis for Small and Medium-Sized Software Enterprises

Mario Gleirscher, Dmitriy Golubitskiy, Maximilian Irlbeck et al.

Today's small and medium-sized enterprises (SMEs) in the software industry are faced with major challenges. While having to work efficiently using limited resources, they have to perform quality assurance on their code to avoid the risk of further effort for bug fixes or compensations. Automated static analysis can reduce this risk because it promises little effort for running an analysis. We report on our experience in analysing five projects from and with SMEs using three different static analysis techniques: code clone detection, bug pattern detection and architecture conformance analysis. We found that the effort needed to introduce those techniques was small (mostly below one person-hour), that we can detect diverse defects in production code, and that the participating companies rated the usefulness of the presented techniques and of our analysis results highly enough to include the techniques in their quality assurance.

arXiv Open Access 2014
Improved 3-Dimensional Security in Cloud Computing

Sagar Tirodkar, Yazad Baldawala, Sagar Ulane et al.

Cloud computing is a trending technology in the field of Information Technology as it allows sharing of resources over a network. Cloud computing gained traction so rapidly because of its performance, availability and low cost, among other features. Despite these features, companies are still refraining from binding their business with cloud computing due to the fear of data leakage. The focus of this paper is on the problem of data leakage. It proposes a framework which works in two phases. The first phase consists of data encryption and classification, which is performed before storing the data. In this phase, the client may want to encrypt his data prior to uploading. After encryption, data is classified using three parameters, namely Confidentiality [C], Integrity [I] and Availability [A]. With the help of the proposed algorithm, a criticality rating (Cr) of the data is calculated. According to the Cr, security will be provided on the basis of the 3 Dimensions proposed in this paper. The second phase consists of data retrieval by the client. As per the concept of 3D, users who want to access their data need to be authenticated, to avoid data from being compromised. Before every access to data, the user's identity is verified for authorization. After the user is authorized for data access, if the data is encrypted, the user can decrypt it.

en cs.CR, cs.DC
arXiv Open Access 2013
Advanced Techniques for Scientific Programming and Collaborative Development of Open Source Software Packages at the International Centre for Theoretical Physics (ICTP)

Ivan Girotto, Axel Kohlmeyer, David Grellscheid et al.

A large number of computational scientific research projects make use of open source software packages. However, the development process of such tools frequently differs from conventional software development; partly because of the nature of research, where the problems being addressed are not always fully understood; partly because the majority of the development is often carried out by scientists with limited experience and exposure to best practices of software engineering. Often the software development suffers from the pressure to publish scientific results and from the fact that credit for software development is limited in comparison. Fundamental components of software engineering like modular and reusable design, validation, documentation, and software integration as well as effective maintenance and user support tend to be disregarded due to lack of resources and qualified specialists. Thus innovative developments are often hindered by the steep learning curves required to master development for legacy software packages full of ad hoc solutions. The growing complexity of research, however, requires suitable and maintainable computational tools, resulting in a widening gap between the potential users (often growing in number) and contributors to the development of such a package. In this paper we share our experiences aiming to improve the situation by training particularly young scientists, through disseminating our own experiences at contributing to open source software packages and practicing key components of software engineering adapted for scientists and scientific software development. Specifically we summarize the outcome of the Workshop in Advanced Techniques for Scientific Programming and Collaborative Development of Open Source Software Packages run at the Abdus Salam International Centre for Theoretical Physics in March 2013, and discuss our conclusions for future efforts.

en cs.SE, cs.MS
arXiv Open Access 2013
On the Current Measurement Practices in Agile Software Development

Taghi Javdani, Hazura Zulzalil, Abdul Azim Abd Ghani et al.

Agile software development (ASD) methods were introduced as a reaction to traditional software development methods. The principles of these methods differ from those of traditional methods, so agile methods involve different processes and activities compared to traditional methods. Thus, ASD methods require different measurement practices compared to traditional methods. Agile teams often carry out their projects in the simplest and most effective way; measurement practices in agile methods are therefore even more important than in traditional methods, because a lack of appropriate and effective measurement practices increases project risk. The aims of this paper are to investigate current measurement practices in ASD methods, to collect them together in one study, and to review the agile version of the Common Software Measurement International Consortium (COSMIC) publication.

en cs.SE

Page 53 of 141,419