Results for "Computer software"

Showing 20 of ~8,152,228 results · from CrossRef, arXiv, Semantic Scholar, DOAJ

DOAJ Open Access 2026
Two-Stream Modeling for Document-Level Event Argument Extraction Using Contextual Clue and AMR Structures

Yiqing Song, Xinna Shang, Guiren Dai et al.

Document-level (Doc-level) event argument extraction (EAE) must handle longer text inputs and more complex semantic relationships than sentence-level extraction, making it a challenging information extraction task. Extracting event arguments from an entire document primarily faces two critical issues: (i) how to handle the long-distance dependency between trigger and role arguments and (ii) how to extract key event contextual information. We propose a two-stream modeling framework using contextual clues and abstract meaning representation (AMR) parsing (TSCA). TSCA employs two-stream encoding to semantically model the document from two perspectives: event-critical context and event-semantic structure. This approach leverages both contextual clues and semantic structure information to better mitigate the two issues. We incorporate AMR to assist in the semantic understanding of complex event structures and to effectively capture long-distance dependencies. Additionally, we introduce a trigger-based span indicator to adaptively merge the two-stream information, enhancing the capture of semantic relevance between triggers and candidate arguments. We validated the effectiveness of our method on the public datasets RAMS and WikiEvents, where TSCA achieved the best scores in various subtasks, surpassing state-of-the-art models by 3.02 F1 and 1.01 F1, respectively.

Computer software
DOAJ Open Access 2026
Influence of Placement Techniques on Marginal Integrity, Wear Behavior, and Clinical Efficiency of a Bulk-Fill Resin Composite

Kerem Can Işık, Handan Yıldırım-Işık, Uğur Tuna Sazlıkoğlu et al.

The placement technique of resin composites may significantly influence marginal integrity, wear resistance, and operative efficiency. This in vitro study evaluated the influence of different placement techniques for a bulk-fill resin composite on marginal integrity, wear behavior, and application time. Standardized Class I cavities were prepared in extracted human molars and restored using the same bulk-fill composite (Filtek One Bulk Fill, 3M, USA) applied with four techniques: incremental placement, incremental placement with a modeling liquid (GC Modeling Liquid, GC Corp., Tokyo, Japan), bulk placement, and the stamp technique. Application time was recorded in seconds. All specimens underwent combined mechanical and thermal aging (SD Mechatronik, Germany). Marginal integrity was assessed three-dimensionally using micro-computed tomography, while surface wear was quantified through computer-based digital analysis with OraCheck software (Dentsply Sirona, Germany). Bulk placement exhibited significantly higher microleakage scores than the other techniques while demonstrating the shortest application time. Incremental placement, incremental placement with modeling liquid, and the stamp technique showed comparable microleakage results (<i>p</i> > 0.05). Although the use of modeling liquid did not increase microleakage, it resulted in significantly greater wear. Placement technique significantly influences marginal integrity, wear behavior, and application time of bulk-fill composite restorations.

Biotechnology, Medicine (General)
arXiv Open Access 2025
Empathy Guidelines for Improving Practitioner Well-being & Software Engineering Practices

Hashini Gunatilake, John Grundy, Rashina Hoda et al.

Empathy is a powerful yet often overlooked element in software engineering (SE), supporting better teamwork, smoother communication, and effective decision-making. This paper introduces 17 actionable empathy guidelines designed to support practitioners, teams, and organisations. We also explore how these guidelines can be implemented in practice by examining real-world applications, challenges, and strategies to overcome them shared by software practitioners. To support adoption, we present a visual prioritisation framework that categorises the guidelines based on perceived importance, ease of implementation, and willingness to adopt. The findings offer practical and flexible suggestions for integrating empathy into everyday SE work, helping teams move from principles to sustainable action.

en cs.SE
arXiv Open Access 2025
Using LLMs in Generating Design Rationale for Software Architecture Decisions

Xiyu Zhou, Ruiyin Li, Peng Liang et al.

Design Rationale (DR) for software architecture decisions refers to the reasoning underlying architectural choices, which provides valuable insights into the different phases of the architecting process throughout software development. However, in practice, DR is often inadequately documented due to a lack of motivation and effort from developers. With the recent advancements in Large Language Models (LLMs), their capabilities in text comprehension, reasoning, and generation may enable the generation and recovery of DR for architecture decisions. In this study, we evaluated the performance of LLMs in generating DR for architecture decisions. First, we collected 50 Stack Overflow (SO) posts, 25 GitHub issues, and 25 GitHub discussions related to architecture decisions to construct a dataset of 100 architecture-related problems. Then, we selected five LLMs to generate DR for the architecture decisions with three prompting strategies, including zero-shot, chain of thought (CoT), and LLM-based agents. With the DR provided by human experts as ground truth, the Precision of LLM-generated DR with the three prompting strategies ranges from 0.267 to 0.278, Recall from 0.627 to 0.715, and F1-score from 0.351 to 0.389. Additionally, 64.45% to 69.42% of the arguments of DR not mentioned by human experts are also helpful, 4.12% to 4.87% of the arguments have uncertain correctness, and 1.59% to 3.24% of the arguments are potentially misleading. To further understand the trustworthiness and applicability of LLM-generated DR in practice, we conducted semi-structured interviews with six practitioners. Based on the experimental and interview results, we discussed the pros and cons of the three prompting strategies, the strengths and limitations of LLM-generated DR, and the implications for the practical use of LLM-generated DR.

en cs.SE, cs.AI
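The Precision, Recall, and F1 figures reported in the abstract above can be illustrated with a minimal sketch that scores one generated set of design-rationale arguments against an expert set. The exact-string-match comparison is a simplifying assumption for illustration, not the paper's actual matching procedure:

```python
def prf1(generated, expert):
    """Precision/recall/F1 of generated arguments against expert ground truth.

    Treats arguments as exact-match sets; the paper's matching of
    LLM-generated to expert-written rationale is likely more lenient
    (this set-overlap comparison is an illustrative assumption).
    """
    generated, expert = set(generated), set(expert)
    tp = len(generated & expert)  # arguments confirmed by experts
    precision = tp / len(generated) if generated else 0.0
    recall = tp / len(expert) if expert else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# 2 of 3 generated arguments match, covering 2 of 4 expert arguments:
p, r, f = prf1({"a", "b", "c"}, {"b", "c", "d", "e"})
```

With this toy input, precision is 2/3 and recall 1/2, mirroring the paper's pattern of low precision but higher recall when extra (possibly still helpful) arguments are generated.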
arXiv Open Access 2025
On the Need to Rethink Trust in AI Assistants for Software Development: A Critical Review

Sebastian Baltes, Timo Speith, Brenda Chiteri et al.

Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term trust informally; providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which, in isolation, does not capture the full conceptual complexity of trust. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and the related disciplines human-computer interaction and information systems, and (3) to discuss limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term trust. We conducted a literature review across disciplines and a critical review of recent SE articles with a focus on trust conceptualizations. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation and between appropriate and inappropriate trust. On a meta-scientific level, other disciplines even discuss whether and when trust can be applied to AI assistants at all. Our study reveals a significant maturity gap of trust research in SE compared to other disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts.

en cs.SE
DOAJ Open Access 2025
From Topological Optimization to Spline Layouts: An Approach for Industrial Real-Wise Parts

Carolina Vittoria Beccari, Alessandro Ceruti, Filip Chudy

Additive manufacturing technologies have allowed the production of complex geometries that are typically obtained by applying topology optimization techniques. The outcome of the optimization process is a tessellated geometry, which has reduced aesthetic quality and unwanted spikes and cusps. Filters can be applied to improve the surface quality, but volume shrinking and geometry modification can be noticed. Design practice suggests manually re-designing the object in Computer-Aided Design (CAD) software, imitating the shape suggested by topology optimization. However, this operation is tedious and time-consuming. This paper proposes a methodology to automate the conversion from topology optimization output to a CAD-compatible design for industrial components. Topology optimization usually produces a dense triangle mesh with a high topological genus for such objects. We present a method to automatically generate a watertight collection of (tensor-product) spline patches and test the approach on real-wise industrial components. The methodology is based on quadrilateral patches built on the external surface of the components. The tests carried out yielded promising results, constituting a first step towards the automatic generation of shapes that can readily be imported and edited in a CAD system.

DOAJ Open Access 2025
Dual-Branch CNN–Mamba Method for Image Defocus Deblurring

Wenqi Zhao, Chunlei Wu, Jing Lu et al.

Defocus deblurring is a challenging task in the fields of computer vision and image processing. The irregularity of defocus blur kernels, coupled with the limitations of computational resources, poses significant difficulties for defocused image restoration. Additionally, the varying degrees of blur across different regions of the image impose higher demands on feature capture. Insufficient fine-grained feature extraction can result in artifacts and the loss of details, while inadequate coarse-grained feature extraction can cause image distortion and unnatural transitions. To address these challenges, we propose a defocus image deblurring method based on a hybrid CNN–Mamba architecture. This approach employs a data-driven, network-based self-learning strategy for blur processing, eliminating the need for traditional blur kernel estimation. Furthermore, by designing parallel feature extraction modules, the method leverages the local feature extraction capabilities of CNNs to capture image details, effectively restoring texture and edge information. The Mamba module models long-range dependencies, ensuring global consistency in the image. On the real defocus blur dual-pixel image dataset DPDD, the proposed CMDDNet achieves a PSNR of 28.37 in the Indoor dataset, surpassing Uformer-B (28.23) while significantly reducing the parameter count to only 9.74 M, which is 80.9% less than Uformer-B (50.88 M). Although the PSNR on the Outdoor and Combined datasets is slightly lower, CMDDNet maintains competitive performance with a significantly reduced model size, demonstrating its efficiency and effectiveness in defocus deblurring. These results indicate that CMDDNet offers a promising trade-off between performance and computational efficiency, making it well suited for lightweight applications.

Technology, Engineering (General). Civil engineering (General)
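The PSNR figures quoted in the abstract above follow the standard peak signal-to-noise ratio definition, which can be computed directly; the flattened-pixel-list interface below is purely illustrative:

```python
import math

def psnr(reference, restored, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flattened pixel sequences: 10 * log10(peak^2 / MSE)."""
    n = len(reference)
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Two 2-pixel "images" differing by 5 gray levels in one pixel: MSE = 12.5
value = psnr([0, 255], [0, 250])
```

Higher is better: a perfect restoration has infinite PSNR, and differences of a fraction of a dB (e.g. 28.37 vs 28.23 in the abstract) are considered meaningful on standard benchmarks.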
DOAJ Open Access 2025
pytopicgram: A library for data extraction and topic modeling from Telegram channels

Juan Gómez-Romero, Javier Cantón Correa, Rubén Pérez Mercado et al.

Telegram is a popular platform for communication, generating large volumes of messages through its open channels. pytopicgram is a Python library designed to help researchers efficiently collect, organize, and analyze Telegram messages, addressing the increasing demand to understand online discourse. Key functionalities include efficient message retrieval, computation of engagement metrics, and advanced topic modeling. By automating the data extraction and analysis pipeline, pytopicgram simplifies the investigation of how content spreads, how topics evolve, and how audiences interact on Telegram. The library’s modular architecture ensures flexibility and scalability, making it suitable for diverse applications. This paper describes the design, main features, and illustrative examples that demonstrate pytopicgram’s practical effectiveness for studying public conversations.

Computer software
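The engagement-metric step mentioned in the pytopicgram abstract can be sketched in plain Python. The field names (`views`, `forwards`, `reactions`) and the metric definition below are illustrative assumptions, not pytopicgram's actual API or schema:

```python
def engagement_rate(messages):
    """Average per-message engagement for a channel.

    Each message is a dict; the field names and the metric
    (forwards + reactions per view) are illustrative assumptions,
    not pytopicgram's actual schema.
    """
    rates = []
    for m in messages:
        views = m.get("views", 0)
        if views:  # skip messages with no view count
            rates.append((m.get("forwards", 0) + m.get("reactions", 0)) / views)
    return sum(rates) / len(rates) if rates else 0.0

msgs = [
    {"views": 100, "forwards": 5, "reactions": 15},   # rate 0.20
    {"views": 200, "forwards": 10, "reactions": 10},  # rate 0.10
]
avg = engagement_rate(msgs)
```

A library like this would compute such per-channel aggregates over the retrieved message stream before handing the text on to the topic-modeling stage.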
arXiv Open Access 2024
Integrating Various Software Artifacts for Better LLM-based Bug Localization and Program Repair

Qiong Feng, Xiaotian Ma, Jiayi Sheng et al.

LLMs have garnered considerable attention for their potential to streamline Automated Program Repair (APR). LLM-based approaches can either insert the correct code or directly generate patches when provided with buggy methods. However, most LLM-based APR methods rely on a single type of software information, without fully leveraging different software artifacts, and few explore which specific types of information best assist in APR. Addressing this gap is crucial for advancing LLM-based APR techniques. We propose DEVLoRe, which uses issue content (description and message) and stack error traces to localize buggy methods, then relies on debug information in the buggy methods together with the issue content and stack error traces to localize buggy lines and generate plausible patches that pass all unit tests. The results show that while issue content is particularly effective in assisting LLMs with fault localization and program repair, different types of software artifacts complement each other. By incorporating different artifacts, DEVLoRe successfully locates 49.3% and 47.6% of single and non-single buggy methods and generates 56.0% and 14.5% plausible patches for the Defects4J v2.0 dataset, respectively, outperforming current state-of-the-art APR methods. Furthermore, we re-implemented and evaluated our framework, demonstrating its effectiveness in resolving 9 unique issues compared to other state-of-the-art frameworks using the same or more advanced models on SWE-bench Lite. We also discuss whether a leading framework for Python code can be directly applied to Java code, or vice versa. The source code and experimental results of this work are available for replication at https://github.com/XYZboom/DEVLoRe.

en cs.SE, cs.AI
arXiv Open Access 2024
The Role of Data Filtering in Open Source Software Ranking and Selection

Addi Malviya-Thakur, Audris Mockus

Faced with over 100M open source projects, most empirical investigations select a subset. Most research papers in leading venues filter projects by some measure of popularity, with explicit or implicit arguments that unpopular projects are not of interest, may not even represent "real" software projects, or are not worthy of study. However, such filtering may have enormous effects on the results of the studies if, and precisely because, the sought-out response or prediction is in any way related to the filtering criteria. We exemplify the impact of this practice on research outcomes: how filtering of projects listed on GitHub affects the assessment of their popularity. We randomly sample over 100,000 repositories and use multiple regression to model the number of stars (a proxy for popularity) based on the number of commits, the duration of the project, the number of authors, and the number of core developers. Comparing a control model fit on the entire dataset with a filtered model fit on projects having ten or more authors, we find that while certain characteristics of the repository consistently predict popularity, the filtering process significantly alters the relationships between these characteristics and the response. The number of commits exhibited a positive correlation with popularity in the control sample but a negative correlation in the filtered sample. These findings highlight the potential biases introduced by data filtering and emphasize the need for careful sample selection in empirical research on mining software repositories. We recommend that empirical work either analyze complete datasets such as World of Code, or employ stratified random sampling from a complete dataset to ensure that filtering does not bias the results.

en cs.SE
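The sign flip reported in the abstract above can be reproduced on toy data with a one-predictor least-squares slope. The repository values below are deliberately constructed to illustrate this Simpson's-paradox-style effect and are not drawn from the study:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy repositories as (commits, stars, authors); values are invented so
# that filtering by author count flips the commits-stars relationship.
repos = [(1, 1, 2), (2, 2, 3), (3, 3, 4),           # small projects
         (10, 20, 12), (20, 15, 25), (30, 10, 40)]  # many-author projects

full = ols_slope([c for c, s, a in repos], [s for c, s, a in repos])
filtered = ols_slope([c for c, s, a in repos if a >= 10],
                     [s for c, s, a in repos if a >= 10])
# full slope is positive; restricting to >= 10 authors makes it negative
```

Because the filter (author count) is itself correlated with both predictor and response, the filtered subsample tells a different story than the population, which is exactly the hazard the paper warns about.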
arXiv Open Access 2024
Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software

Zhenpeng Chen, Xinyue Li, Jie M. Zhang et al.

Intersectional fairness is a critical requirement for Machine Learning (ML) software, demanding fairness across subgroups defined by multiple protected attributes. This paper introduces FairHOME, a novel ensemble approach using higher order mutation of inputs to enhance intersectional fairness of ML software during the inference phase. Inspired by social science theories highlighting the benefits of diversity, FairHOME generates mutants representing diverse subgroups for each input instance, thus broadening the array of perspectives to foster a fairer decision-making process. Unlike conventional ensemble methods that combine predictions made by different models, FairHOME combines predictions for the original input and its mutants, all generated by the same ML model, to reach a final decision. Notably, FairHOME is even applicable to deployed ML software as it bypasses the need for training new models. We extensively evaluate FairHOME against seven state-of-the-art fairness improvement methods across 24 decision-making tasks using widely adopted metrics. FairHOME consistently outperforms existing methods across all metrics considered. On average, it enhances intersectional fairness by 47.5%, surpassing the currently best-performing method by 9.6 percentage points.

en cs.LG, cs.SE
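The aggregation idea in the abstract above, combining one model's predictions over an input and its protected-attribute mutants rather than combining multiple models, can be sketched as follows. The exhaustive mutation over attribute combinations and the plain majority vote are simplifying assumptions; FairHOME's actual mutation operators and combination rule may differ:

```python
from itertools import product

def mutant_ensemble_vote(model, instance, protected, values):
    """Predict by majority vote over the input rewritten with every
    combination of protected-attribute values (higher-order mutants).

    Illustrative sketch: FairHOME's actual operators and voting
    scheme may differ.
    """
    preds = []
    for combo in product(*(values[a] for a in protected)):
        mutant = dict(instance)          # keep non-protected fields
        mutant.update(zip(protected, combo))
        preds.append(model(mutant))
    return max(set(preds), key=preds.count)  # majority vote

def scorer(x):
    # Toy model biased toward group "a" (illustrative only).
    return 1 if x["score"] + (2 if x["group"] == "a" else 0) >= 8 else 0

groups = {"group": ["a", "b", "c"]}
d1 = mutant_ensemble_vote(scorer, {"group": "a", "score": 7}, ["group"], groups)
d2 = mutant_ensemble_vote(scorer, {"group": "c", "score": 7}, ["group"], groups)
# d1 == d2: the vote no longer depends on the protected attribute
```

Since the mutant set is identical regardless of the original protected values, the vote is invariant to them by construction, which is one way to see why no retraining of the deployed model is needed.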
arXiv Open Access 2024
The Impact of Generative AI-Powered Code Generation Tools on Software Engineer Hiring: Recruiters' Experiences, Perceptions, and Strategies

Alyssia Chen, Timothy Huo, Yunhee Nam et al.

The rapid advancements in Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, are transforming software engineering by automating code generation tasks. While these tools improve developer productivity, they also present challenges for organizations and hiring professionals in evaluating software engineering candidates' true abilities and potential. Although there is existing research on these tools in both industry and academia, there is a lack of research on how these tools specifically affect the hiring process. Therefore, this study aims to explore recruiters' experiences and perceptions regarding GenAI-powered code generation tools, as well as their challenges and strategies for evaluating candidates. Findings from our survey of 32 industry professionals indicate that although most participants are familiar with such tools, the majority of organizations have not adjusted their candidate evaluation methods to account for candidates' use/knowledge of these tools. There are mixed opinions on whether candidates should be allowed to use these tools during interviews, with many participants valuing candidates who can effectively demonstrate their skills in using these tools. Additionally, most participants believe that it is important to incorporate GenAI-powered code generation tools into computer science curricula and mention the key risks and benefits of doing so.

en cs.SE
arXiv Open Access 2023
Towards green AI-based software systems: an architecture-centric approach (GAISSA)

Silverio Martínez-Fernández, Xavier Franch, Francisco Durán

Nowadays, AI-based systems have achieved outstanding results and have outperformed humans in different domains. However, training AI models and inferring from them require high computational resources, which poses a significant challenge given the current societal demand for energy efficiency. To cope with this challenge, this research project paper describes the main vision, goals, and expected outcomes of the GAISSA project. The GAISSA project aims to provide data scientists and software engineers with tool-supported, architecture-centric methods for the modelling and development of green AI-based systems. Although the project is at an initial stage, we describe the current research results, which illustrate the potential to achieve the GAISSA objectives.

en cs.SE, cs.LG
DOAJ Open Access 2023
Greatly enhanced tunneling electroresistance in ferroelectric tunnel junctions with a double barrier design

Wei Xiao, Xiaohong Zheng, Hua Hao et al.

Abstract We propose that the double barrier effect is expected to enhance the tunneling electroresistance (TER) in ferroelectric tunnel junctions (FTJs). To demonstrate the feasibility of this mechanism, we design a model structure of a Pt/BaTiO3/LaAlO3/Pt/BaTiO3/LaAlO3/Pt double barrier ferroelectric tunnel junction (DB-FTJ), which can be considered as two identical Pt/BaTiO3/LaAlO3/Pt single barrier ferroelectric tunnel junctions (SB-FTJs) connected in series. Based on density functional calculations, we obtain a giant TER ratio of 2.210 × 10⁸% in the DB-FTJ, at least three orders of magnitude larger than that of the Pt/BaTiO3/LaAlO3/Pt SB-FTJ, together with an ultra-low resistance-area product (0.093 kΩ·μm²) in the high conductance state of the DB-FTJ. Moreover, it is possible to control the polarization direction of the two single ferroelectric barriers separately, and thus four resistance states can be achieved, making DB-FTJs promising as multi-state memory devices.

Materials of engineering and construction. Mechanics of materials, Computer software
DOAJ Open Access 2023
Dynamic deformable transformer for end‐to‐end face alignment

Liming Han, Chi Yang, Qing Li et al.

Abstract Heatmap‐based regression (HBR) methods have long dominated the face alignment field, but they require complex design and post‐processing. In this study, the authors propose an end‐to‐end and conceptually simple coordinate‐based regression (CBR) method called Dynamic Deformable Transformer (DDT) for face alignment. Unlike general pre‐defined landmark queries, DDT uses Dynamic Landmark Queries (DLQs) to query landmarks' classes and coordinates together. In addition, DDT adopts a deformable attention mechanism rather than regular attention, which yields faster convergence and lower computational complexity. Experimental results on three mainstream datasets, 300W, WFLW, and COFW, demonstrate that DDT exceeds the state‐of‐the‐art CBR methods by a large margin and is comparable to the current state‐of‐the‐art HBR methods with much less computational complexity.

Computer applications to medicine. Medical informatics, Computer software
DOAJ Open Access 2022
Research on Text Representation of Video Content Based on Multi-Modal Fusion and Multi-Layer Attention

ZHAO Hong, GUO Lan, CHEN Zhiwen, ZHENG Houze

Aiming at the challenges of single-modality text representation and the low accuracy of existing video content text-representation models, a video content text-representation model that integrates frame-level image and audio information is proposed. The network structure of the model includes a single-mode embedding layer based on a self-attention mechanism, which learns single-mode feature parameters. Two schemes, joint representation and cooperative representation, are adopted to fuse the high-dimensional feature vectors output from the single-mode embedding layer, so that the model can focus on different objects in the video and their interactions, thereby generating richer and more accurate video text representations. The model is pretrained on large-scale datasets, and representation information such as video frames and audio carried by the video is extracted and sent to the coder to realize the text representation of the video content. The experimental results on the MSR-VTT and LSMDC datasets show that the BLEU4, METEOR, ROUGE-L, and CIDEr scores of the proposed model are 0.386, 0.250, 0.609, and 0.463, respectively. Compared with the model released by IIT Delhi in the MSR-VTT challenge, the proposed model improves these indexes by 0.082, 0.037, 0.115, and 0.257, respectively. The model in this study can effectively improve the accuracy of video content text representation.

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2022
A method of single‐shot target detection with multi‐scale feature fusion and feature enhancement

Zhong Qu, Xue Shang, Shu‐Fang Xia et al.

Abstract The Single Shot MultiBox Detector (SSD) is one of the fastest detection algorithms. Although it achieves good overall detection results, it performs poorly on small targets and on occluded objects. Here, the authors propose a new target detection method called single‐shot target detection with multi‐scale feature fusion and feature enhancement. The authors introduce a multi‐scale feature fusion module, a feature enhancement module, and an efficient channel attention module, and integrate them into the detection module of the original SSD target detection algorithm to improve the network's feature extraction ability. Experimental results on the PASCAL VOC 2007 dataset show that the proposed algorithm works well: when the input size is 300 × 300, the detection speed reaches 41.7 frames per second (FPS) and the detection accuracy reaches 79.6%, which is 2.4% higher than the original SSD target detection algorithm. When the input size is 512 × 512, the detection accuracy is 81.9%, which is 3.2% higher than the original SSD, and the detection speed reaches 36.5 FPS. According to the experimental results, the algorithm performs better when there are many objects in the image and when occlusion is present.

Photography, Computer software
DOAJ Open Access 2022
HISTOMORPHOMETRIC COMPARISON OF DIAMETER OF RIGHT AND LEFT VERTEBRAL ARTERY

Jitendra D Rawal, Hrishikesh R Jadav

Introduction: It is evident that slight changes in the diameter of a vessel cause tremendous changes in its ability to conduct blood when the blood flow is streamlined. The conductance of a vessel increases in proportion to its diameter. Asymmetry of the vertebral arteries, with the left vessel larger than the right, has been described, but only a few authors have recorded the dimensions. Aim: The present study was carried out to measure and compare the inner and outer diameters of the left and right vertebral arteries. Material and Methods: 300 transverse annuli (sections) of the vertebral artery were studied from 30 embalmed cadavers. The transverse annuli were processed and stained with Haematoxylin & Eosin. Stained slides were studied under a trinocular research microscope at 40× magnification; the images obtained were transferred to a computer, and histological parameters were measured on the computer images using Image-Pro Plus software version 5.1. The inner and outer diameters of the transverse annuli were measured. Comparisons of left and right vertebral artery diameters were made using the paired t test in SPSS version 15. Results: The inner diameter of the left vertebral artery was 2.74 ± 0.46 mm and that of the right was 2.64 ± 0.45 mm; the difference was statistically significant. The outer diameter of the left vertebral artery was 3.16 ± 0.54 mm and that of the right was 3.03 ± 0.51 mm; this difference was also statistically significant. Conclusion: The left vertebral artery was found to be dominant over the right.
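The paired t test used for the diameter comparison above reduces to a short formula on the per-specimen left-minus-right differences: the mean difference divided by its standard error. A minimal sketch with invented numbers, not the study's data:

```python
import math

def paired_t(diffs):
    """Paired t statistic: mean of the paired differences divided by
    its standard error (sample variance with n - 1 denominator)."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Illustrative left-minus-right differences in mm (not the study's data):
t = paired_t([0.1, 0.2, 0.3, 0.2])
```

The resulting statistic is compared against the t distribution with n − 1 degrees of freedom to obtain the p-value that a package such as SPSS reports.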

DOAJ Open Access 2022
An Intelligent Framework for Person Identification Using Voice Recognition and Audio Data Classification

Khan Isra, Emaduddin Shah Muhammad, Ullah Ashhad et al.

The paper proposes a framework for recording meetings to avoid the hassle of writing meeting minutes. The key components of the framework are a "Model Trainer" and a "Meeting Recorder". In the model trainer, we first remove noise from the audio, then oversample the data and extract features from the audio, and finally train the classification model. The meeting recorder is a post-processor that performs sound recognition using the trained model and converts the audio into text. Experimental results show the high accuracy and effectiveness of the proposed implementation.

Computer software

Page 33 of 407,612