Results for "Computer software"

Showing 20 of ~8,150,742 results · from DOAJ, arXiv, Semantic Scholar, CrossRef

S2 Open Access 2009
The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines

L. Barroso, U. Hölzle

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board. Table of Contents: Introduction / Workloads and Software Infrastructure / Hardware Building Blocks / Datacenter Basics / Energy and Power Efficiency / Modeling Costs / Dealing with Failures and Repairs / Closing Remarks

1979 citations en Computer Science
DOAJ Open Access 2026
Unified fracture criterion for brittle 2D materials

Shenda Jiang, Israel Greenfeld, Lin Yang et al.

Abstract Two-dimensional materials (2DMs), possessing atomic-scale thickness, are prone to brittle fracture under loading conditions, which can lead to catastrophic failure. As their structural dimensions approach the nanoscale, conventional linear elastic fracture mechanics (LEFM) based on continuum assumptions is deficient in capturing the underlying failure mechanisms and accurately predicting potential crack instability. This limitation emphasizes the critical need for a new theoretical approach suited to the fracture behavior of 2DM systems. We propose a unified fracture mechanics (UFM) criterion that systematically incorporates two key physical mechanisms governing brittle fracture in 2DMs at the nanoscale, namely nonlinear elasticity and atomic-scale discreteness. By introducing two corrective parameters, for nonlinearity and quantization, the UFM model successfully resolves the limitations of LEFM in predicting failure. This is particularly important in the short crack regime, as small defects are frequent in 2DMs. The theoretical predictions show excellent agreement with molecular dynamics simulations of five different types of 2DMs and accurately capture the fracture strength of both cracked and defect-free structures. In addition, we present an empirical method that allows the fracture behavior of 2DMs to be estimated directly from their intrinsic structural and elastic properties. The unified theoretical framework is applicable not only to the materials simulated in this study but may also be applied to a broader class of atomically thin brittle systems.

Materials of engineering and construction. Mechanics of materials, Computer software
arXiv Open Access 2026
Large Language Models for Software Testing Education: an Experience Report

Peng Yang, Yunfeng Zhu, Chao Chang et al.

The rapid integration of Large Language Models (LLMs) into software engineering practice is reshaping how software testing activities are performed. LLMs are increasingly used to support software testing. Consequently, software testing education must evolve to prepare students for this new paradigm. However, while students have already begun to use LLMs in an ad hoc manner for testing tasks, there is limited empirical understanding of how such usage influences their testing behaviors, judgment, and learning outcomes. It is necessary to conduct a systematic investigation into how students learn to evaluate, control, and refine LLM-assisted testing results. This paper presents a mixed-methods, two-phase exploratory study on human-LLM collaboration in software testing education. In Phase I, we analyze classroom learning artifacts and interaction records from 15 students, together with a large-scale survey conducted in a national software testing competition (337 valid responses), to identify recurring prompt-related difficulties across testing tasks. The results reveal systematic interaction breakdowns, including missing contextual information, insufficient constraints, rigid one-shot prompting, and limited strategy-driven iteration, with automated test script generation emerging as a particularly heterogeneous and effort-intensive interaction context. Building on these findings, Phase II conducts an illustrative classroom practice that operationalizes the observed breakdowns into a lightweight, stage-aware prompt scaffold for test script generation. The scaffold guides students to explicitly articulate execution-relevant information such as environmental assumptions, interaction grounding, synchronization, and validation intent, and we report descriptive shifts in students' testing-related articulation when interacting with LLMs.

en cs.SE
DOAJ Open Access 2025
HawkEye: AI-Driven Software for Objective Analysis and Characterization of Nodular Cast Iron Microstructures

Javier Nieves, Antonio Serena-Barriuso, Guillermo Elejoste-Rementeria

Metallographic evaluation of nodular cast iron is crucial for quality control in the foundry industry. Traditionally, this process relies on experts who visually interpret microscopic images. This study introduces HawkEye, a comprehensive software solution that automates metallographic analysis using advanced computer vision and deep learning models. Specifically, HawkEye software dynamically adapts its processing workflow based on the input image and its typological classification. The software supports both etched and non-etched specimens and automates the segmentation and classification of graphite nodules, gathering their morphological descriptors; it identifies microstructural phases and provides a global quality assessment. All these functions are embedded into a user-friendly interface designed for both laboratory and industrial use. Nevertheless, the key contribution of this work is the replacement of subjective evaluation with a reproducible, AI-driven approach, which significantly enhances the objectivity, traceability, and scalability of metallurgical analysis. In fact, the proposed approach achieves 99% accuracy in nodule classification compared to manual expert assessment, reduces manual image processing steps, and introduces a novel method for ferrite/pearlite measurement in combination with carbide detection using YOLO and SAM models.

Technology, Engineering (General). Civil engineering (General)
DOAJ Open Access 2025
Efficient Large Graph Partitioning Scheme Using Incremental Processing in GPU

Hyeonbyeong Lee, Jeonghyun Baek, Sangho Song et al.

As the processing of large-scale graphs on a single device is infeasible without partitioning, graph partitioning algorithms are essential for various algorithms and distributed computing tasks utilizing graph data. However, graph partitioning is an NP-complete problem characterized by high computational complexity. To address this complexity, previous studies have proposed processing graphs in parallel using GPUs. Nonetheless, due to the limited memory space of GPUs compared to CPUs, they are susceptible to out-of-memory (OOM) issues. This research proposes a GPU-accelerated graph partitioning technique that employs dynamic memory management and incremental processing. The proposed method incrementally processes large graphs and reduces the overall size of the graph through streaming clustering on the CPU. The reduced graph is sufficiently small to be processed on the GPU. The method combines an initial partitioning based on the label propagation algorithm with the high-degree replicated first algorithm to leverage the high parallel processing capabilities of the GPU and manage the computational load of graph partitioning. Experiments on various large-scale real-world graph datasets demonstrate the efficiency, scalability, and superior partitioning quality of the proposed method. Specifically, the method achieves execution speeds up to 9 times faster than CPU-based streaming techniques on large graphs and improves the replication factor by over 20% compared to existing methods. Furthermore, it demonstrates stable processing of large-scale graphs that previous GPU-based methods such as GPU-P could not handle owing to memory limitations.
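The label propagation step used for initial partitioning can be sketched in miniature. The following is a minimal, CPU-only toy of the label propagation algorithm (LPA) for grouping graph nodes; it is not the paper's GPU-accelerated method, and the deterministic iteration order and smallest-label tie-breaking are assumptions made here for reproducibility:

```python
from collections import defaultdict, Counter

def label_propagation(edges, max_iters=20):
    """Toy label propagation: each node repeatedly adopts the most
    frequent label among its neighbors until no label changes."""
    adj = defaultdict(list)
    for u, v in edges:                      # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)
    labels = {node: node for node in adj}   # every node starts in its own cluster
    for _ in range(max_iters):
        changed = False
        for node in sorted(adj):            # deterministic visiting order
            counts = Counter(labels[nb] for nb in adj[node])
            top = max(counts.values())
            best = min(l for l, c in counts.items() if c == top)  # smallest label on ties
            if best != labels[node]:
                labels[node] = best
                changed = True
        if not changed:                     # converged
            break
    return labels

# Two disconnected triangles settle into two separate clusters.
parts = label_propagation([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])
```

Real partitioners add balance constraints and, as in the paper, replication of high-degree vertices; this sketch shows only the clustering core.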

Electrical engineering. Electronics. Nuclear engineering
arXiv Open Access 2025
Contrasting to spark creativity in software development teams

Marian Petre, Mary Shaw

Three decades of empirical research in high-performing software development teams provides evidence that creativity can be promoted by an effective, disciplined development culture. This paper describes 'contrasting' as a key driver for creativity; describes creativity moves, tactics used by high-performing teams to produce useful contrasts; and characterizes key development behaviours observed to support a 'culture' of creativity. The empirical research was carried out in a broad range of software development organizations and application domains.

en cs.SE
S2 Open Access 2018
Genetic Improvement of Software: A Comprehensive Survey

J. Petke, S. Haraldsson, M. Harman et al.

Genetic improvement (GI) uses automated search to find improved versions of existing software. We present a comprehensive survey of this nascent field of research with a focus on the core papers in the area published between 1995 and 2015. We identified core publications including empirical studies, 96% of which use evolutionary algorithms (genetic programming in particular). Although we can trace the foundations of GI back to the origins of computer science itself, our analysis reveals a significant upsurge in activity since 2012. GI has resulted in dramatic performance improvements for a diverse set of properties such as execution time, energy and memory consumption, as well as results for fixing and extending existing system functionality. Moreover, we present examples of research work that lies on the boundary between GI and other areas, such as program transformation, approximate computing, and software repair, with the intention of encouraging further exchange of ideas between researchers in these fields.
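The core GI loop, evolutionary search over program variants scored against a test suite, can be illustrated with a deliberately tiny sketch. Here the "software" is reduced to a single numeric constant inside a predicate, fitness is the fraction of passing test cases, and all names and numbers are hypothetical:

```python
import random

def fitness(threshold, tests):
    # Fraction of test cases the evolved predicate classifies correctly.
    return sum((x >= threshold) == label for x, label in tests) / len(tests)

def genetic_improve(tests, generations=30, pop_size=20, seed=0):
    """Evolve a numeric constant inside a program toward higher test fitness."""
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, tests), reverse=True)
        parents = population[: pop_size // 2]                  # keep the fitter half
        children = [t + rng.gauss(0, 0.05) for t in parents]   # mutate survivors
        population = parents + children
    return max(population, key=lambda t: fitness(t, tests))

# Hypothetical suite: inputs below the gap should be rejected, above accepted.
suite = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]
best = genetic_improve(suite)
```

Genetic programming as used in most surveyed GI work evolves syntax trees or patch sequences rather than a single constant; the selection-mutation-evaluation cycle, however, is the same.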

230 citations en Computer Science
DOAJ Open Access 2024
Frustrating Quantum Batteries

A.G. Catalano, S.M. Giampaolo, O. Morsch et al.

We propose to use a quantum spin chain as a device to store and release energy coherently and we investigate the interplay between its internal correlations and outside decoherence. We employ the quantum Ising chain in a transverse field and our charging protocol consists of a sudden global quantum quench in the external field to take the system out of equilibrium. Interactions with the environment and decoherence phenomena can dissipate part of the work that the chain can supply after being charged, measured by the ergotropy. We find that overall, the system shows remarkably better performance, in terms of resilience, charging time, and energy storage, when topological frustration is introduced by setting antiferromagnetic interactions with an odd number of sites and periodic boundary conditions. Moreover, we show that in a simple discharging protocol to an external spin, only the frustrated chain can transfer work and not just heat.

Physics, Computer software
DOAJ Open Access 2024
EEG Power Analysis of Children with Autism Spectrum Disorders (ASD) Based on EIBI Curriculum Levels

Rahmahtrisilvia Rahmahtrisilvia, Rudi Setiawan, Asep Ahmad Sopandi et al.

Early Intervention Behavioral Therapy as a method has been shown to aid children diagnosed with Autism in adjusting behavior through Applied Behavior Analysis. While there are three levels of ABA, EIBI does not provide a concrete metric of what separates the individual levels. The current study focuses on differentiating the electrical patterns found in EEG in children and explores how EIBI can serve across the ABA spectrum. The electrodes F3, F4, C3, C4, P3, P4, O1, and O2 were used to capture the EEG signals, which were then used to estimate the power spectral density with the Welch method. Statistical examination revealed differences in power across the frequency bands among the groups. Higher Alpha levels suggest better emotional management. The chronic group showed more prominent Delta power, reflecting weakened control. Comparatively, the beginning level's Theta power was higher across all groups, indicating shifts during attention-demanding tasks. Because activity was concentrated in the lower frequency ranges, no noteworthy changes appeared in the Beta and Gamma bands. These findings highlight the role of EIBI in neuromodulation in the Alpha and Delta bands, and its application in enhancing emotional and neurological stability. EEG is an effective measure as it quantifies EIBI outcomes. Further studies should examine long-term effects and refine curriculum concepts to increase the efficacy of the interventions.
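As a rough illustration of the signal-processing step described above, the sketch below estimates a power spectrum with Welch's method (averaged periodograms over 50%-overlapping Hann-windowed segments) and sums bins into EEG bands. It uses a naive O(n²) DFT to stay dependency-free, and the band edges (e.g., Alpha 8-13 Hz) are common conventions, not values taken from the paper:

```python
import math

def hann(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def periodogram(segment, fs):
    """Magnitude-squared DFT of one Hann-windowed segment (naive O(n^2) DFT)."""
    n = len(segment)
    x = [s * w for s, w in zip(segment, hann(n))]
    psd = []
    for k in range(n // 2 + 1):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(-x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        psd.append((re * re + im * im) / (fs * n))
    return psd

def welch_psd(signal, fs, nperseg=64):
    """Welch's method: average periodograms over 50%-overlapping segments."""
    step = nperseg // 2
    segments = [signal[i:i + nperseg]
                for i in range(0, len(signal) - nperseg + 1, step)]
    psds = [periodogram(s, fs) for s in segments]
    return [sum(col) / len(psds) for col in zip(*psds)]

def band_power(psd, fs, nperseg, lo, hi):
    """Sum PSD bins whose center frequency falls in [lo, hi) Hz."""
    df = fs / nperseg
    return sum(p for k, p in enumerate(psd) if lo <= k * df < hi)

# A pure 10 Hz sine concentrates its power in the Alpha band (8-13 Hz).
fs = 128
sine = [math.sin(2 * math.pi * 10 * i / fs) for i in range(256)]
psd = welch_psd(sine, fs)
alpha = band_power(psd, fs, 64, 8, 13)
delta = band_power(psd, fs, 64, 0.5, 4)
```

In practice one would use an FFT-based routine and normalize the window power; the averaging-over-segments structure is what distinguishes Welch's estimate from a single periodogram.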

Computer software
DOAJ Open Access 2024
The value of generalized linear mixed models for data analysis in the plant sciences

Laurence V. Madden, Peter S. Ojiambo

Modern data analysis typically involves the fitting of a statistical model to data, which includes estimating the model parameters and their precision (standard errors) and testing hypotheses based on the parameter estimates. Linear mixed models (LMMs) fitted through likelihood methods have been the foundation for data analysis for well over a quarter of a century. These models allow the researcher to simultaneously consider fixed (e.g., treatment) and random (e.g., block and location) effects on the response variables and account for the correlation of observations, when it is assumed that the response variable has a normal distribution. Analysis of variance (ANOVA), which was developed about a century ago, can be considered a special case of the use of an LMM. A wide diversity of experimental and treatment designs, as well as correlations of the response variable, can be handled using these types of models. Many response variables are not normally distributed, of course, such as discrete variables that may or may not be expressed as a percentage (e.g., counts of insects or diseased plants) and continuous variables with asymmetrical distributions (e.g., survival time). As expansions of LMMs, generalized linear mixed models (GLMMs) can be used to analyze the data arising from several non-normal statistical distributions, including the discrete binomial, Poisson, and negative binomial, as well as the continuous gamma and beta. A GLMM allows the data analyst to better match the model to the data rather than to force the data to match a specific model. The increase in computer memory and processing speed, together with the development of user-friendly software and the progress in statistical theory and methodology, has made it practical for non-statisticians to use GLMMs since the late 2000s. 
The switch from LMMs to GLMMs is deceptive, however, as there are several major issues that must be considered when using a GLMM, most of which are already resolved for routine analyses with LMMs. These include the consideration of conditional versus marginal distributions and means, overdispersion (for discrete data), the model-fitting method [e.g., maximum likelihood (integral approximation), restricted pseudo-likelihood, and quasi-likelihood], and the choice of link function to relate the mean to the fixed and random effects. The issues are explained conceptually with different model formulations and subsequently with an example involving the percentage of diseased plants in a field study with wheat, as well as with simulated data, starting with an LMM and transitioning to a GLMM. A brief synopsis of published GLMM-based analyses in the plant agricultural literature is presented to give readers a sense of the range of applications of this approach to data analysis.

arXiv Open Access 2024
Foundation Model Engineering: Engineering Foundation Models Just as Engineering Software

Dezhi Ran, Mengzhou Wu, Wei Yang et al.

By treating data and models as source code, Foundation Models (FMs) become a new type of software. Mirroring the concept of the software crisis, the increasing complexity of FMs makes an FM crisis a tangible concern in the coming decade, calling for new theories and methodologies from the field of software engineering. In this paper, we outline our vision of introducing Foundation Model (FM) engineering, a strategic response to the anticipated FM crisis grounded in principled engineering methodologies. FM engineering aims to mitigate potential issues in FM development and application through declarative, automated, and unified programming interfaces for both data and model management, reducing the complexities involved in working with FMs by providing a more structured and intuitive process for developers. Through the establishment of FM engineering, we aim to provide a robust, automated, and extensible framework that addresses the imminent challenges and opens new research opportunities for the software engineering field.

en cs.SE, cs.AI
S2 Open Access 2018
Overview and Comparison of Gate Level Quantum Software Platforms

Ryan Larose

Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms - Forest (pyQuil), Qiskit, ProjectQ, and the Quantum Developer Kit (Q#) - that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.

173 citations en Physics, Computer Science
DOAJ Open Access 2023
Convolutional Network Entity Missing Detection Method Combined with Gated Mechanism

YE Han, LI Xin, SUN Haichun

The adequacy of entity information directly affects applications that depend on textual entity information, while conventional entity recognition models can only identify entities that are present. The task of entity missing detection, defined as a sequence labeling task, aims at finding the location where an entity is missing. To construct the training dataset, three corresponding methods are proposed. We introduce an entity missing detection method combining a convolutional neural network with a gated mechanism and a pre-trained language model. Experiments show that the F1 scores of this model are 80.45% for the PER entity, 83.02% for the ORG entity, and 86.75% for the LOC entity. The model outperforms other LSTM-based named entity recognition models. We also find a correlation between the accuracy of the model and the word frequency of the annotated characters.

Computer software, Technology (General)
DOAJ Open Access 2023
Analyzing Non-Markovian Systems by Using a Stochastic Process Calculus and a Probabilistic Model Checker

Gabriel Ciobanu

Non-Markovian systems represent almost all stochastic processes, except for a small class having the Markov property; it is a real challenge to analyze these systems. In this article, we present a general method of analyzing non-Markovian systems. The novel viewpoint is given by the use of a compact stochastic process calculus developed in the formal framework of computer science for describing concurrent systems. Since phase-type distributions can approximate non-Markovian systems with arbitrary precision, we approximate a non-Markovian system by describing it easily in our stochastic process calculus, which employs phase-type distributions. The obtained processes (in our calculus) are then translated into the probabilistic model checker PRISM; using this free software tool, we can analyze several quantitative properties of the Markovian approximation of the initial non-Markovian system.

DOAJ Open Access 2023
Comparing Measured Agile Software Development Metrics Using an Agile Model-Based Software Engineering Approach versus Scrum Only

Moe Huss, Daniel R. Herber, John M. Borky

This study compares the reliability of estimation, productivity, and defect rate metrics for sprints driven by a specific instance of the agile approach (i.e., scrum) and an agile model-based software engineering (MBSE) approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP) when developing a software system. The quasi-experimental study conducted ten sprints using each approach. The approaches were then evaluated based on their effectiveness in helping the product development team estimate the backlog items that they could build during a time-boxed sprint and deliver more product backlog items (PBI) with fewer defects. The commitment reliability (CR) was calculated to compare the reliability of estimation, with a measured average scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average sprint velocity (SV) for the scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average defect density (DD) for the scrum-driven sprints was 0.91, while that of the sMBSAP-driven sprints was 0.63. The average defect leakage (DL) for the scrum-driven sprints was 0.20, while that of the sMBSAP-driven sprints was 0.15.
The t-test analysis concluded that the sMBSAP-driven sprints were associated with a statistically significant larger mean CR and SV, and a smaller mean DD and DL, than the scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an agile MBSE approach compared to an agile approach alone, thereby strengthening the case for considering agile MBSE methods within the software development community. Future work might include comparing agile and agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics.
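The four metrics compared above reduce to simple per-sprint ratios once the counts are available. The sketch below uses common textbook definitions (the abstract does not give the paper's exact formulas) and entirely hypothetical sprint numbers:

```python
def commitment_reliability(committed_points, delivered_points):
    """CR: share of committed story points actually delivered in a sprint."""
    return delivered_points / committed_points

def defect_density(defects_found, delivered_points):
    """DD: defects per unit of delivered work."""
    return defects_found / delivered_points

def defect_leakage(escaped_defects, total_defects):
    """DL: share of defects that escaped the sprint's own testing."""
    return escaped_defects / total_defects

# Hypothetical per-sprint records:
# (committed points, delivered points, total defects, escaped defects)
sprints = [(30, 27, 5, 1), (32, 30, 4, 1), (28, 28, 3, 0)]

avg_cr = sum(commitment_reliability(c, d) for c, d, _, _ in sprints) / len(sprints)
avg_sv = sum(d for _, d, _, _ in sprints) / len(sprints)   # sprint velocity
```

Under these definitions a higher CR and SV and a lower DD and DL indicate better sprint outcomes, matching the direction of the reported sMBSAP results.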

Computer software
arXiv Open Access 2023
Software startup within a university -- producing industry-ready graduates

Saara Tenhunen, Tomi Männistö, Petri Ihantola et al.

Previous research has demonstrated that preparing students for life in software engineering is not a trivial task. Authentic learning experiences are challenging to provide, and there are gaps between what students have done at the university and what they are expected to master when getting into the industry after graduation. To address this challenge, we present a novel way of teaching industry-relevant skills in a university-led internal software startup called Software Development Academy (SDA). In addition to describing the SDA concept in detail, we have investigated what educational aspects characterise SDA and how it compares to capstone projects. The questions are answered based on 15 semi-structured interviews with alumni of SDA. Working with production-quality software and having a wide range of responsibilities were perceived as the most integral aspects of SDA and provided students with a comprehensive skill set for the future.

en cs.SE
arXiv Open Access 2023
Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models

Ting Zhang, Ivana Clairine Irsan, Ferdian Thung et al.

Software development involves collaborative interactions where stakeholders express opinions across various platforms. Recognizing the sentiments conveyed in these interactions is crucial for the effective development and ongoing maintenance of software systems. For software products, analyzing the sentiment of user feedback, e.g., reviews, comments, and forum posts can provide valuable insights into user satisfaction and areas for improvement. This can guide the development of future updates and features. However, accurately identifying sentiments in software engineering datasets remains challenging. This study investigates bigger large language models (bLLMs) in addressing the labeled data shortage that hampers fine-tuned smaller large language models (sLLMs) in software engineering tasks. We conduct a comprehensive empirical study using five established datasets to assess three open-source bLLMs in zero-shot and few-shot scenarios. Additionally, we compare them with fine-tuned sLLMs, using sLLMs to learn contextual embeddings of text from software platforms. Our experimental findings demonstrate that bLLMs exhibit state-of-the-art performance on datasets marked by limited training data and imbalanced distributions. bLLMs can also achieve excellent performance under a zero-shot setting. However, when ample training data is available or the dataset exhibits a more balanced distribution, fine-tuned sLLMs can still achieve superior results.

en cs.SE

Page 9 of 407,538