Results for "Computer software"

Showing 20 of ~8,152,128 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2021
Taguette: open-source qualitative data analysis

Rémi Rampin, Vicky Rampin

Taguette is a free and open-source computer-assisted qualitative data analysis software (CAQDAS) (Knowledge Bank, 2018) package. CAQDAS helps researchers using qualitative methods to organize, annotate, collaborate on, analyze, and visualize their work. Qualitative methods are used in a wide range of fields, such as anthropology, education, nursing, psychology, sociology, and marketing. Qualitative data has a similarly wide range: interviews, focus groups, ethnographies, and more.

316 citations en Computer Science
S2 Open Access 2020
Chasing Carbon: The Elusive Environmental Footprint of Computing

Udit Gupta, Young Geun Kim, Sylvia Lee et al.

Given recent algorithm, software, and hardware innovation, computing has enabled a plethora of new applications. As computing becomes increasingly ubiquitous, however, so does its environmental impact. This paper brings the issue to the attention of computer-systems researchers. Our analysis, built on industry-reported characterization, quantifies the environmental effects of computing in terms of carbon emissions. Broadly, carbon emissions have two sources: operational energy consumption, and hardware manufacturing and infrastructure. Although carbon emissions from the former are decreasing thanks to algorithmic, software, and hardware innovations that boost performance and power efficiency, the overall carbon footprint of computer systems continues to grow. This work quantifies the carbon output of computer systems to show that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure. We therefore outline future directions for minimizing the environmental impact of computing systems.

341 citations en Computer Science
arXiv Open Access 2026
Towards Predicting Multi-Vulnerability Attack Chains in Software Supply Chains from Software Bill of Materials Graphs

Laura Baird, Armin Moin

Software supply chain security compromises often stem from cascaded interactions of vulnerabilities, for example, between multiple vulnerable components. Yet, Software Bill of Materials (SBOM)-based pipelines for security analysis typically treat scanner findings as independent per-CVE (Common Vulnerabilities and Exposures) records. We propose a new research direction based on learning multi-vulnerability attack chains through a novel SBOM-driven graph-learning approach. This treats SBOM structure and scanner outputs as a dependency-constrained evidence graph rather than a flat list of vulnerabilities. We represent vulnerability-enriched CycloneDX SBOMs as heterogeneous graphs whose nodes capture software components and known vulnerabilities (i.e., CVEs), connected by typed relations, such as dependency and vulnerability links. We train a Heterogeneous Graph Attention Network (HGAT) to predict whether a component is associated with at least one known vulnerability as a feasibility check for learning over this structure. Additionally, we frame the discovery of cascading vulnerabilities as CVE-pair link prediction using a lightweight Multi-Layer Perceptron (MLP) neural network trained on documented multi-vulnerability chains. Validated on 200 real-world SBOMs from the Wild SBOMs public dataset, the HGAT component classifier achieves 91.03% accuracy and 74.02% F1-score, while the cascade predictor model (MLP) achieves a Receiver Operating Characteristic - Area Under Curve (ROC-AUC) of 0.93 on a seed set of 35 documented attack chains.
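The graph construction described in this abstract can be illustrated with a toy example. The sketch below is a simplification, not the authors' CycloneDX pipeline, and all node names are hypothetical: components and CVEs become typed nodes with dependency and vulnerability edges, and we enumerate the dependency-constrained CVE pairs that a link predictor such as the paper's MLP would then score.

```python
# Minimal sketch (assumed, simplified data model — not the paper's actual
# CycloneDX schema): a vulnerability-enriched SBOM as a heterogeneous graph,
# plus enumeration of dependency-constrained CVE pairs as cascade candidates.
from itertools import product

# Typed nodes: software components and known vulnerabilities (CVEs).
components = {"app", "libA", "libB"}
cves = {"CVE-1", "CVE-2", "CVE-3"}

# Typed edges: component -> component dependencies, component -> CVE links.
depends_on = {("app", "libA"), ("libA", "libB")}
has_vuln = {("libA", "CVE-1"), ("libB", "CVE-2"), ("libB", "CVE-3")}

def candidate_cve_pairs():
    """CVE pairs whose components are joined by a dependency edge —
    the dependency-constrained candidates a link predictor would score."""
    vulns_of = {}
    for comp, cve in has_vuln:
        vulns_of.setdefault(comp, []).append(cve)
    pairs = []
    for upstream, downstream in depends_on:
        for a, b in product(vulns_of.get(upstream, []),
                            vulns_of.get(downstream, [])):
            pairs.append((a, b))
    return sorted(pairs)

print(candidate_cve_pairs())  # only libA -> libB carries CVEs on both ends
```

In the full approach, each such pair would be featurized from the graph (node embeddings from the HGAT, relation types) and scored by the MLP; here the enumeration alone shows how the dependency structure prunes the candidate space.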

en cs.SE, cs.CR
DOAJ Open Access 2025
Hide and seek in transaction networks: a multi-agent framework for simulating and detecting money laundering activities

Qianyu Wang, Wei-Tek Tsai, Tianyu Shi et al.

Detecting money laundering within financial networks presents a complex challenge due to the elusive behavior patterns of laundering agents, often resulting in data gaps. In this research, we propose a ‘Multiverse Simulation’ framework using a multi-agent system to generate synthetic datasets for anti-money laundering (AML) training and detection. This framework creates diverse virtual worlds, each with unique parameters to represent varying levels of illicit activity, thus mimicking the dynamics of money laundering and legitimate transactions. Our framework comprises two main types of agents: (1) the Detector, trained to identify laundering signs, and (2) Transaction agents, divided into those involved in laundering and those in legal transactions. These agents interact in a synthetic environment governed by rules that simulate real-world financial behaviors, enabling the generation of complex, realistic data. In the hide-and-seek multiverse simulation, the Detector learns to distinguish between licit and illicit transactions, a process refined by the evolving strategies of transaction agents to avoid detection. This adversarial setup fosters the co-evolution of laundering techniques and detection methods, enhancing system robustness. We demonstrate the efficacy of this approach by pre-training on synthetic cross-bank data, then evaluating with real-world data from the Elliptic dataset. Our results show that transfer learning significantly improves AML system performance, effectively bridging the gap between synthetic and authentic transaction patterns. The ‘Multiverse Simulation’ offers a scalable, dynamic approach to better understand and mitigate the gap between simulation and reality, contributing to more resilient and intelligent AML solutions.
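The adversarial co-evolution this abstract describes can be caricatured in a few lines. The sketch below is a toy illustration, not the authors' framework; all amounts, rules, and parameters are invented. Launderer agents structure a large sum into transfers just under the Detector's flagging threshold, and the Detector tightens its threshold in response each round.

```python
# Toy hide-and-seek loop (illustrative only): launderers evade a threshold
# detector by splitting amounts; the detector adapts its threshold downward.
import random

random.seed(1)

def licit_txn():
    # Ordinary transfers in an arbitrary, invented range.
    return random.uniform(10, 500)

def laundering_txns(total, threshold):
    """Structure `total` into equal chunks strictly under the threshold."""
    chunk = threshold * 0.9
    n = int(total // chunk) + 1       # guarantees total / n < threshold
    return [total / n] * n

threshold = 1000.0
for _ in range(5):                    # co-evolution rounds
    txns = [licit_txn() for _ in range(50)]
    txns += laundering_txns(5000.0, threshold)
    flagged = [t for t in txns if t >= threshold]
    # Detector adapts: tighten toward the largest transfer it let through.
    threshold = 0.95 * max(t for t in txns if t < threshold)

print(round(threshold, 2))
```

Over the rounds the threshold ratchets downward as each side adapts, which is the hide-and-seek dynamic the framework scales up with learned, rather than rule-based, agents.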

Electronic computers. Computer science, Information technology
DOAJ Open Access 2025
Examining Nasdaq Market Data and Presenting an Optimized Model by Extreme Gradient Boosting Regression and Artificial Bee Colony

Ali Ahmadpour

Stock price prediction is a critical task in the financial sector due to its profound implications for traders and investors. This paper presents a comparative analysis of machine learning models applied to stock price prediction using historical data from the Nasdaq stock index spanning the years 2015 to 2023. The study introduces an Extreme Gradient Boosting Regression (XGBR) model optimized with three distinct metaheuristic algorithms: Battle Royal Optimization (BRO), Moth Flame Optimization (MFO), and Artificial Bee Colony (ABC). These optimization techniques aim to enhance the model's predictive performance by improving parameter tuning and model generalization. Among the optimized models, the ABC-XGBR demonstrated superior performance due to its strong balance between exploration and exploitation and its effective search capability in high-dimensional feature spaces. The experimental results show significant improvements over the baseline XGBR model, with R² values of 0.9721 for BRO-XGBR, 0.9885 for MFO-XGBR, and 0.9936 for ABC-XGBR. These outcomes underscore the effectiveness of combining machine learning with nature-inspired optimization algorithms to produce more accurate stock price forecasts. This research contributes valuable insights into the practical application of hybrid models for financial forecasting, emphasizing their utility in enhancing predictive accuracy. It also offers decision-makers—such as investors, analysts, and financial institutions—a robust framework for incorporating data-driven strategies into risk assessment and portfolio management. Future work may explore additional datasets, real-time prediction capabilities, and further refinement of optimization algorithms to extend the applicability of these methods to broader financial contexts.
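The core Artificial Bee Colony loop behind ABC-XGBR can be sketched independently of XGBoost. The toy example below uses an invented 2-D objective standing in for the model's validation error, with illustrative population size, abandonment limit, and iteration count; it shows the neighbor search of the employed/onlooker phases and the scout-phase restart that together balance exploitation and exploration.

```python
# Simplified ABC sketch on a toy objective (minimum at x = (3, -1)).
# In ABC-XGBR the objective would instead be XGBR validation error over
# its hyperparameters; everything here is illustrative.
import random

random.seed(0)

def objective(x):
    # Toy stand-in for a model's validation error at hyperparameters x.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

LOW, HIGH, N_FOOD, LIMIT, ITERS = -5.0, 5.0, 10, 5, 200
best_x, best_f = None, float("inf")

def remember(x):
    """Evaluate x and track the best point seen so far."""
    global best_x, best_f
    f = objective(x)
    if f < best_f:
        best_x, best_f = list(x), f
    return f

def rand_source():
    return [random.uniform(LOW, HIGH) for _ in range(2)]

def neighbor(x, other):
    # Core ABC move: v_j = x_j + phi * (x_j - other_j) on one random
    # dimension j, clipped to the search bounds.
    j = random.randrange(2)
    v = list(x)
    v[j] += random.uniform(-1.0, 1.0) * (x[j] - other[j])
    v[j] = min(HIGH, max(LOW, v[j]))
    return v

foods = [rand_source() for _ in range(N_FOOD)]
trials = [0] * N_FOOD
for x in foods:
    remember(x)

for _ in range(ITERS):
    for i in range(N_FOOD):
        # Employed/onlooker phases (collapsed): greedy neighbor search.
        k = random.choice([j for j in range(N_FOOD) if j != i])
        v = neighbor(foods[i], foods[k])
        if remember(v) < objective(foods[i]):
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1
        # Scout phase: abandon a source that has stopped improving.
        if trials[i] > LIMIT:
            foods[i], trials[i] = rand_source(), 0
            remember(foods[i])

print(best_x, best_f)
```

The scout restarts are what give ABC its exploration strength in higher-dimensional search spaces, the property the abstract credits for ABC-XGBR's edge over the BRO and MFO variants.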

Computer software, Mining engineering. Metallurgy
DOAJ Open Access 2025
High Performance GPU Strategies for Real-Time In-Cabin Monitoring

Snehal D. Patil, Prashant P. Bartakke

With the rapid advancement of computer-vision-based in-vehicle monitoring for bus environments, there is a pressing need for a setup built on inspection cameras and an onboard embedded computing device. This paper offers a novel method for in-cabin occupant recognition that integrates a hardware subsystem, comprising the NVIDIA AGX Orin Development Kit, camera sensors, and a camera-interface SerDes card, with software components including PyTorch, Open Neural Network Exchange (ONNX), TensorRT (TRT), CuPy, CV-CUDA, and the NVIDIA Nsight Profiler tool. The work selects an optimized model from a pool of state-of-the-art deep learning models and deploys it, achieving high accuracy and low latency while permitting real-time video streaming and inference. The inference performance of an embedded device using a Graphics Processing Unit (GPU) and Compute Unified Device Architecture (CUDA) is enhanced on TRT with CuPy and CV-CUDA to raise GPU throughput. Each GPU optimization strategy is thoroughly analyzed in the context of design-space exploration or on-the-fly tuning. GPU performance is monitored with the NVIDIA Nsight Profiler tool to track down GPU cold spots, GPU starvation, and CUDA memory-copy behavior. The Roofline model analysis in Nsight Compute is used to pinpoint the exact causes of GPU underutilization. The experiments achieve a test accuracy of over 80% and boost GPU utilization by a factor of 2.24.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2025
From Intention to Adoption: Managerial Misalignment in Cybersecurity Training Investments for Software Development Organizations

Hannes Salin, Vasileios Gkougkaras

To ensure adequate skill development, as well as competitive advantage as a software engineering organization, investment in cybersecurity training is one of several important decisions management must make. This study builds on three case organizations in Sweden and Greece, where managers’ and software developers’ perceptions of trialability and observability effects are analyzed, grounded in the theory of innovation diffusion. Using interviews and a developer-centric survey, both quantitative and qualitative data are collected and used in combination to support the development of a pre-investment framework for management. The analysis includes thematic analysis, cosine-similarity comparison, and, to some extent, sentiment-polarity scoring. A pre-investment framework consisting of a process of seven concrete steps is proposed, based on the empirical findings of the study.

Computer software
arXiv Open Access 2025
The Role of Empathy in Software Engineering -- A Socio-Technical Grounded Theory

Hashini Gunatilake, John Grundy, Rashina Hoda et al.

Empathy, defined as the ability to understand and share others' perspectives and emotions, is essential in software engineering (SE), where developers often collaborate with diverse stakeholders. It is also considered a vital competency in many professional fields such as medicine, healthcare, nursing, animal science, education, marketing, and project management. Despite its importance, empathy remains under-researched in SE. To explore this further, we conducted a socio-technical grounded theory (STGT) study through in-depth semi-structured interviews with 22 software developers and stakeholders. Our study explored the role of empathy in SE and how SE activities and processes can be improved by considering empathy. By applying the systematic steps of STGT data analysis and theory development, we developed a theory that explains the role of empathy in SE. Our theory details the contexts in which empathy arises, the conditions that shape it, and the causes and consequences of its presence and absence. We also identified contingencies for enhancing empathy or overcoming barriers to its expression. Our findings provide practical implications for SE practitioners and researchers, offering a deeper understanding of how to effectively integrate empathy into SE processes.

en cs.SE
arXiv Open Access 2025
Not real or too soft? On the challenges of publishing interdisciplinary software engineering research

Sonja M. Hyrynsalmi, Grischa Liebel, Ronnie de Souza Santos et al.

The discipline of software engineering (SE) combines social and technological dimensions, making it an interdisciplinary research field. However, interdisciplinary research submitted to software engineering venues may not receive the same level of recognition as more traditional or technical topics such as software testing. For this paper, we conducted an online survey of 73 SE researchers and used a mixed-methods data analysis approach to investigate their challenges and recommendations when publishing interdisciplinary research in SE. We found that the challenges of publishing interdisciplinary research in SE can be divided into topic-related and reviewing-related challenges. Furthermore, while our initial focus was on publishing interdisciplinary research, the impact of current reviewing practices on marginalized groups emerged from our data: we found that marginalized groups are more likely to receive negative feedback. In addition, we found that experienced researchers are less likely to change their research direction due to feedback they receive. To address the identified challenges, our participants emphasize the importance of highlighting the impact and value of interdisciplinary work for SE, collaborating with experienced researchers, and establishing clearer submission guidelines and new interdisciplinary SE publication venues. Our findings contribute to the understanding of the current state of the SE research community and how we could better support interdisciplinary research in our field.

en cs.SE
DOAJ Open Access 2024
Practices and pain points in personal records

Matt Balogh, William Billingsley, David Paul et al.

Introduction. This paper reports the findings of a survey on personal electronic records management practices, focussing on records that people deal with in their everyday lives at home. The aim of this research was to determine which personal electronic records practices were most effective in averting oversights and generating satisfaction in participants’ records management practices. This paper presents one stage of a broader design science research program. Method. The research for this paper was conducted by means of an online questionnaire using Qualtrics software, and participants were recruited through social media. Analysis. Analysis was conducted using tabular analysis in SPSS and Principal Component Analysis in R. Results. The research found that there is a statistical relationship between the practices that respondents adopted with their personal electronic records management and their level of satisfaction with that process. For example, respondents who saved records on a computer or in the cloud reported higher levels of satisfaction with how they managed their personal records and experienced fewer adverse incidents, such as losing documents or failing to pay bills on time. Conclusion. The paper concludes by identifying some specific personal records management practices that are likely to improve satisfaction with that task, such as saving and sorting records that need to be retained outside of email in a structured filing system.

Bibliography. Library science. Information resources
DOAJ Open Access 2024
Session-based Recommendation Algorithm Based on Memory Augmented Network

WEI Xing, SUN Hao, CAO Jian, ZHU Xiaobin

Session-based recommendation systems serve as essential tools for assisting users in identifying matching interests and requirements from large volumes of data. These systems aim to predict the next user actions based on anonymous sessions. However, existing methods inadequately represent the overall interests of a user and frequently neglect the positional relationships among items. To address this limitation, an enhanced memory network-based session recommendation model, SR-MAN, is proposed to analyze global user interest representations and item sequence problems. Initially, the method introduces position encoding during the generation of item embedding vectors to emphasize the impact of different positions on the sequence. Subsequently, a neural Turing machine is employed to store recent session information, and an attention network is designed to learn long-term user preferences by integrating the most recent user interaction as the current interest indicator. Finally, the method integrates long-term and current preferences to predict and recommend items of interest. Bayesian Personalized Ranking (BPR) is employed to estimate the model parameters. Experiments on three datasets demonstrate the effectiveness of the proposed method.
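The position-encoding step mentioned in this abstract can be illustrated with the standard sinusoidal scheme from the Transformer literature; the abstract does not specify SR-MAN's exact encoding, so this is an assumed stand-in added to item embeddings before further processing.

```python
# Sinusoidal position encoding (an illustrative stand-in; SR-MAN's
# actual scheme is not given in the abstract).
import math

def position_encoding(seq_len, d_model):
    """pe[pos][2i] = sin(pos / 10000^(2i/d)), pe[pos][2i+1] = cos(same)."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

def add_position(item_embeddings):
    """Add the encoding elementwise to a session's item embeddings."""
    pe = position_encoding(len(item_embeddings), len(item_embeddings[0]))
    return [[e + p for e, p in zip(row, pe_row)]
            for row, pe_row in zip(item_embeddings, pe)]

emb = [[0.0, 0.0, 0.0, 0.0]] * 3   # 3 session items, d_model = 4
out = add_position(emb)
print(out[0])                       # position 0 -> [0.0, 1.0, 0.0, 1.0]
```

Because each position gets a distinct, deterministic vector, the downstream memory and attention components can distinguish where in the session an item occurred, which is the point the abstract makes about positional relationships among items.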

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Information model of the building and its application in selected phases of the life cycle

Konovalov Denis, Svajlenka Jozef, Katunsky Dusan

Despite its various aspects, be they social or economic, construction is considered one of the decisive industries in many countries. In this industry it is not possible to automate and streamline processes as in, for example, the automotive industry, since every construction work is unique. However, by implementing modern 21st-century technologies, it is possible to streamline processes in the pre-investment and investment phases and, above all, in the use phase of the construction work, which is expected to remain functional for several decades. A suitable tool is the Building Information Model (BIM). This field has been progressing significantly in recent years. By implementing these digital technologies in construction projects, it is possible to significantly speed up the course of construction, avoid collision situations, save financial resources, and, not least, when a high-quality model is created, to produce a digital twin of the construction work, which can significantly optimize and streamline processes in the use phase. It is in this phase of the life cycle that there is the greatest scope for using the completed building model, which must of course be revised during the first two phases to make it as accurate as possible and usable for management and maintenance. During this phase it is also appropriate to implement software tools such as Computer-Aided Facility Management (CAFM).

Environmental sciences

Page 23 of 407,607