Results for "Electrical engineering. Electronics. Nuclear engineering"

Showing 20 of ~8,844,086 results · from CrossRef, DOAJ, arXiv

arXiv Open Access 2026
The Competence Crisis: A Design Fiction on AI-Assisted Research in Software Engineering

Mairieli Wessel, Daniel Feitosa, Sangeeth Kochanthara

Rising publication pressure and the routine use of generative AI tools are reshaping how software engineering research is produced, assessed, and taught. While these developments promise efficiency, they also raise concerns about skill degradation, responsibility, and trust in scholarly outputs. This vision paper employs Design Fiction as a methodological lens to examine how such concerns might materialise if current practices persist. Drawing on themes reported in a recent community survey, we construct a speculative artifact situated in a near future research setting. The fiction is used as an analytical device rather than a forecast, enabling reflection on how automated assistance might impede domain knowledge competence, verification, and mentoring practices. By presenting an intentionally unsettling scenario, the paper invites discussion on how the software engineering research community in the future will define proficiency, allocate responsibility, and support learning.

en cs.SE
arXiv Open Access 2026
One-Year Internship Program on Software Engineering: Students' Perceptions and Educators' Lessons Learned

Golnoush Abaei, Mojtaba Shahin, Maria Spichkova

The inclusion of internship courses in Software Engineering (SE) programs is essential for closing knowledge gaps and improving graduates' readiness for the software industry. Our study focuses on year-long internships at RMIT University (Melbourne, Australia), which offer in-depth industry engagement. We analysed how the course evolved over the last 10 years to incorporate students' needs and summarised lessons learned that can be helpful for other educators supporting internship courses. Our qualitative analysis of internship data, based on 91 reports from 2023-2024, identified three themes of challenges the students faced, as well as the courses students found particularly beneficial during their internships. On this basis, we propose recommendations for educators and companies to help interns overcome challenges and maximise their learning experience.

en cs.SE
arXiv Open Access 2026
Future of Software Engineering Research: The SIGSOFT Perspective

Massimiliano Di Penta, Kelly Blincoe, Marsha Chechik et al.

As software engineering conferences grow in size, rising costs and outdated formats are creating barriers to participation for many researchers. These barriers threaten the inclusivity and global diversity that have contributed to the success of the SE community. Based on survey data, we identify concrete actions the ACM Special Interest Group on Software Engineering (SIGSOFT) can take to address these challenges, including improving transparency around conference funding, experimenting with hybrid poster presentations, and expanding outreach to underrepresented regions. By implementing these changes, SIGSOFT can help ensure the software engineering community remains accessible and welcoming.

DOAJ Open Access 2025
Accelerated Prediction of Terahertz Performance Metrics in GaN IMPATT Sources via Artificial Neural Networks

Santu Mondal, Sneha Ray, Aritra Acharyya et al.

This work investigates the application of artificial neural network (ANN)-based regression models to predict the static and dynamic characteristics of GaN impact avalanche transit time (IMPATT) sources in the terahertz (THz) frequency regime. A comprehensive dataset, derived from self-consistent quantum drift-diffusion (SCQDD) simulations of GaN IMPATT structures designed for a wide frequency range from the microwave frequency bands up to 5 THz, is used to train the ANN models. The models effectively capture the impact of variations in structural, doping, and biasing parameters on device performance. The proposed ANN approach significantly reduces computational time for predicting breakdown characteristics, power output, and conversion efficiency properties of IMPATT sources, achieving similar accuracy to traditional SCQDD simulations while requiring only 7.8–20.1% of the computational time. Mean square errors are observed to be on the order of $10^{-4}$–$10^{-6}$, demonstrating the models' high accuracy. Experimental validation shows strong agreement in terms of breakdown voltage, power output, and efficiency, supporting the potential of machine learning to streamline the design and optimization of high-frequency semiconductor devices.
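The surrogate-modelling idea in this abstract, replacing an expensive simulator with a trained regressor, can be illustrated with a minimal single-hidden-layer network in NumPy. This is a generic sketch only; the layer size, learning rate, and the `train_mlp` name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.05, epochs=3000, seed=0):
    # Tiny one-hidden-layer ANN regressor trained by full-batch gradient
    # descent on mean squared error; a stand-in for a surrogate model that
    # replaces an expensive device simulation.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)       # hidden activations
        pred = h @ W2 + b2             # linear output layer
        err = pred - y[:, None]
        # Backpropagate the (halved) MSE gradient
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / n;  gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

Once trained on simulator input/output pairs, such a regressor answers queries in microseconds, which is the source of the speedup the abstract reports.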

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2025
An approach to distributed asynchronous multi‐sensor fusion utilising data compensation algorithm

Kuiwu Wang, Qin Zhang, Zhenlu Jin et al.

Abstract Multi‐sensor networks often encounter challenges such as inconsistent sampling times among local sensors and data loss during transmission. To address these issues, this paper employs a data loss compensation strategy to reconstruct missing data information. It designs the state estimation of local sensors utilising iterative state equations, leveraging multistep prediction techniques to estimate sensor states at unsampled points, thereby transforming the asynchronous sensor network system into a synchronous one. Furthermore, the projection theorem is applied to determine the fusion weights of local sensors, grounded on the principle of square‐averaging significance. Ultimately, data information pertaining to the same target is fused through arithmetic averaging, guided by distance correlation. Simulation outcomes demonstrate that the proposed algorithm balances estimation accuracy with communication overhead, achieved by designing an optimal number of communication iterations.
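The multistep prediction step described here, propagating each local estimate to a common fusion instant so that asynchronous sensors can be fused as if synchronous, can be sketched under a generic linear state-space assumption. The model matrices and the function name are illustrative, not taken from the paper:

```python
import numpy as np

def predict_to_fusion_time(x_hat, P, A, Q, n_steps):
    """Propagate a local estimate (x_hat, P) forward n_steps through the
    state equation x_{k+1} = A x_k + w_k, w_k ~ N(0, Q), so that estimates
    from asynchronously sampled sensors refer to the same instant."""
    for _ in range(n_steps):
        x_hat = A @ x_hat          # state prediction
        P = A @ P @ A.T + Q        # covariance prediction
    return x_hat, P
```

Once all local estimates refer to the same instant, fusion weights can then be assigned, e.g. in inverse proportion to the predicted covariances.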

Telecommunication
arXiv Open Access 2025
LLM-Assisted Semantic Alignment and Integration in Collaborative Model-Based Systems Engineering Using SysML v2

Zirui Li, Stephan Husung, Haoze Wang

Cross-organizational collaboration in Model-Based Systems Engineering (MBSE) faces many challenges in achieving semantic alignment across independently developed system models. SysML v2 introduces enhanced structural modularity and formal semantics, offering a stronger foundation for interoperable modeling. Meanwhile, GPT-based Large Language Models (LLMs) provide new capabilities for assisting model understanding and integration. This paper proposes a structured, prompt-driven approach for LLM-assisted semantic alignment of SysML v2 models. The core contribution lies in the iterative development of an alignment approach and interaction prompts, incorporating model extraction, semantic matching, and verification. The approach leverages SysML v2 constructs such as alias, import, and metadata extensions to support traceable, soft alignment integration. It is demonstrated with a GPT-based LLM through an example of a measurement system. Benefits and limitations are discussed.

en cs.SE, cs.AI
arXiv Open Access 2025
Bridging the Quantum Divide: Aligning Academic and Industry Goals in Software Engineering

Jake Zappin, Trevor Stalnaker, Oscar Chaparro et al.

This position paper examines the substantial divide between academia and industry within quantum software engineering. For example, while academic research related to debugging and testing predominantly focuses on a limited subset of primarily quantum-specific issues, industry practitioners face a broader range of practical concerns, including software integration, compatibility, and real-world implementation hurdles. This disconnect mainly arises due to academia's limited access to industry practices and the often confidential, competitive nature of quantum development in commercial settings. As a result, academic advancements often fail to translate into actionable tools and methodologies that meet industry needs. By analyzing discussions within quantum developer forums, we identify key gaps in focus and resource availability that hinder progress on both sides. We propose collaborative efforts aimed at developing practical tools, methodologies, and best practices to bridge this divide, enabling academia to address the application-driven needs of industry and fostering a more aligned, sustainable ecosystem for quantum software development.

en cs.SE
arXiv Open Access 2025
Towards Trustworthy Sentiment Analysis in Software Engineering: Dataset Characteristics and Tool Selection

Martin Obaidi, Marc Herrmann, Jil Klünder et al.

Software development relies heavily on text-based communication, making sentiment analysis a valuable tool for understanding team dynamics and supporting trustworthy AI-driven analytics in requirements engineering. However, existing sentiment analysis tools often perform inconsistently across datasets from different platforms, due to variations in communication style and content. In this study, we analyze linguistic and statistical features of 10 developer communication datasets from five platforms and evaluate the performance of 14 sentiment analysis tools. Based on these results, we propose a mapping approach and questionnaire that recommends suitable sentiment analysis tools for new datasets, using their characteristic features as input. Our results show that dataset characteristics can be leveraged to improve tool selection, as platforms differ substantially in both linguistic and statistical properties. While transformer-based models such as SetFit and RoBERTa consistently achieve strong results, tool effectiveness remains context-dependent. Our approach supports researchers and practitioners in selecting trustworthy tools for sentiment analysis in software engineering, while highlighting the need for ongoing evaluation as communication contexts evolve.

en cs.SE
arXiv Open Access 2025
SeeAction: Towards Reverse Engineering How-What-Where of HCI Actions from Screencasts for UI Automation

Dehai Zhao, Zhenchang Xing, Qinghua Lu et al.

UI automation is a useful technique for UI testing, bug reproduction, and robotic process automation. Recording user actions with an application assists rapid development of UI automation scripts, but existing recording techniques are intrusive, rely on OS or GUI framework accessibility support, or assume specific app implementations. Reverse engineering user actions from screencasts is non-intrusive, but a key reverse-engineering step is currently missing - recognizing human-understandable structured user actions ([command] [widget] [location]) from action screencasts. To fill the gap, we propose a deep learning-based computer vision model that can recognize 11 commands and 11 widgets, and generate location phrases from action screencasts, through joint learning and multi-task learning. We label a large dataset with 7260 video-action pairs, which record user interactions with Word, Zoom, Firefox, Photoshop, and Windows 10 Settings. Through extensive experiments, we confirm the effectiveness and generality of our model, and demonstrate the usefulness of a screencast-to-action-script tool built upon our model for bug reproduction.

en cs.SE
DOAJ Open Access 2024
Trinity: In-Database Near-Data Machine Learning Acceleration Platform for Advanced Data Analytics

Ji-Hoon Kim, Seunghee Han, Kwanghyun Park et al.

The ability to perform machine learning (ML) tasks in a database management system (DBMS) is a new paradigm for conventional database systems as it enables advanced data analytics on top of well-established capabilities of DBMSs. However, the integration of ML in DBMSs introduces new challenges in traditional CPU-based systems because of its higher computational demands and bigger data bandwidth requirements. To address this, hardware acceleration has become even more important in database systems, and the computational storage device (CSD), placing an accelerator near storage, is considered an effective solution due to its high processing power with no extra data movement cost. In this paper, we propose Trinity, an end-to-end database system that enables an in-database, in-storage platform that accelerates advanced analytics queries invoking trained ML models along with complex data operations. By designing a full stack from the DBMS's internal software components to the hardware accelerator, Trinity enables in-database ML pipelines on the CSD. On the software side, we extend the internals of conventional DBMSs to utilize the accelerator in the SmartSSD. Our extended analyzer evaluates the compatibility of the current query with our hardware accelerator and compresses compatible queries into a 24-byte numeric format for efficient hardware processing. Furthermore, the predictor is extended to integrate our performance cost models to always offload queries onto the optimal hardware backend. The proposed SmartSSD cost model mathematically models our hardware, including host operations, data transfers, and FPGA kernel execution time, and the CPU cost model uses polynomial regression ML models to predict complex CPU latency. On the hardware side, we introduce the in-database processing accelerator (i-DPA), a custom FPGA-based accelerator. i-DPA includes a database page decoder to fully exploit the bandwidth benefit of near-storage processing. It also employs dynamic tuple binding to enhance overall parallelism and hardware utilization. i-DPA's architecture, with heterogeneous computing units and a reconfigurable on-chip interconnect, also allows seamless data streaming, enabling task-level pipelining across different computing units. Finally, our evaluation shows that Trinity improves the end-to-end performance of analytics queries by $15.21\times$ on average and up to $57.18\times$ compared to the conventional CPU-based DBMS platform. We also show that Trinity's performance can scale linearly with multiple SmartSSDs, achieving a nearly $200\times$ speedup over the baseline with four SmartSSDs.

Electrical engineering. Electronics. Nuclear engineering
arXiv Open Access 2023
Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems

Oluwatosin Ogundare, Srinath Madasu, Nathanial Wiggins

Large Language Models (LLMs) have shown great potential in solving complex problems in various fields, including oil and gas engineering and other industrial engineering disciplines such as factory automation and PLC programming. However, automatic identification of strong and weak solutions to the fundamental physics equations governing several industrial processes remains a challenging task. This paper identifies the limitations of current LLM approaches, particularly ChatGPT, on selected practical problems native to, but not exclusive to, oil and gas engineering. The performance of ChatGPT in solving complex problems in oil and gas engineering is discussed, and the areas where LLMs are most effective are presented.

en cs.CL
arXiv Open Access 2023
Divide and Conquer the EmpiRE: A Community-Maintainable Knowledge Graph of Empirical Research in Requirements Engineering

Oliver Karras, Felix Wernlein, Jil Klünder et al.

[Background.] Empirical research in requirements engineering (RE) is a constantly evolving topic, with a growing number of publications. Several papers address this topic using literature reviews to provide a snapshot of its "current" state and evolution. However, these papers have never built on or updated earlier ones, resulting in overlap and redundancy. The underlying problem is the unavailability of data from earlier works. Researchers need technical infrastructures to conduct sustainable literature reviews. [Aims.] We examine the use of the Open Research Knowledge Graph (ORKG) as such an infrastructure to build and publish an initial Knowledge Graph of Empirical research in RE (KG-EmpiRE) whose data is openly available. Our long-term goal is to continuously maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. [Method.] We conduct a literature review using the ORKG to build and publish KG-EmpiRE which we evaluate against competency questions derived from a published vision of empirical research in software (requirements) engineering for 2020 - 2025. [Results.] From 570 papers of the IEEE International Requirements Engineering Conference (2000 - 2022), we extract and analyze data on the reported empirical research and answer 16 out of 77 competency questions. These answers show a positive development towards the vision, but also the need for future improvements. [Conclusions.] The ORKG is a ready-to-use and advanced infrastructure to organize data from literature reviews as knowledge graphs. The resulting knowledge graphs make the data openly available and maintainable by research communities, enabling sustainable literature reviews.

en cs.SE, cs.DL
arXiv Open Access 2023
Dipole-Spread Function Engineering for 6D Super-Resolution Microscopy

Tingting Wu, Matthew D. Lew

Fluorescent molecules are versatile nanoscale emitters that enable detailed observations of biophysical processes with nanoscale resolution. Because they are well-approximated as electric dipoles, imaging systems can be designed to visualize their 3D positions and 3D orientations, so-called dipole-spread function (DSF) engineering, for 6D super-resolution single-molecule orientation-localization microscopy (SMOLM). We review fundamental image-formation theory for fluorescent dipoles, as well as how phase and polarization modulation can be used to change the image of a dipole emitter produced by a microscope, called its DSF. We describe several methods for designing these modulations for optimum performance, as well as compare recently developed techniques, including the double-helix, tetrapod, crescent, and DeepSTORM3D learned point-spread functions (PSFs), in addition to the tri-spot, vortex, pixOL, raPol, CHIDO, and MVR DSFs. We also cover common imaging system designs and techniques for implementing engineered DSFs. Finally, we discuss recent biological applications of 6D SMOLM and future challenges for pushing the capabilities and utility of the technology.

en physics.optics, eess.IV
arXiv Open Access 2023
Nuclear Reactions in Evolving Stars

Friedrich-Karl Thielemann, Thomas Rauscher

This chapter will go through the important nuclear reactions in stellar evolution and explosions, passing through the individual stellar burning stages and also explosive burning conditions. To follow the changes in the composition of nuclear abundances requires the knowledge of the relevant nuclear reaction rates. For light nuclei (entering in early stellar burning stages) the resonance density is generally quite low and the reactions are determined by individual resonances, which are best obtained from experiments. For intermediate mass and heavy nuclei the level density is typically sufficient to apply statistical model approaches. For this reason, while we discuss all burning stages and explosive burning, focusing on the reactions of importance, we will for light nuclei refer to the chapters by M. Wiescher, deBoer & Reifarth (Experimental Nuclear Astrophysics) and P. Descouvement (Theoretical Studies of Low-Energy Nuclear Reactions), which display many examples, experimental methods utilized, and theoretical approaches for predicting nuclear reaction rates for light nuclei. For nuclei with sufficiently high level densities we discuss statistical model methods used in present predictions of nuclear reaction cross sections and thermonuclear rates across the nuclear chart, including also the application to nuclei far from stability and fission modes.

en astro-ph.SR, nucl-th
DOAJ Open Access 2022
Comparative Study of CUDA GPU Implementations in Python With the Fast Iterative Shrinkage-Thresholding Algorithm for LASSO

Younsang Cho, Jaeoh Kim, Donghyeon Yu

A general-purpose GPU (GPGPU) is employed in a variety of domains, including accelerating the spread of deep neural network models; however, further research into its effective implementation is needed. When using the compute unified device architecture (CUDA), which has recently gained popularity, the situation is analogous for using the GPUs and their memory space, due to the lack of a gold standard for selecting the most efficient approach for CUDA GPU parallel computation. In contrast, as solving the least absolute shrinkage and selection operator (LASSO) regression consists entirely of basic linear algebra operations, computation using the GPGPU is more effective than for other models. Additionally, its optimization problem often requires fast and efficient calculations. The purpose of this study is to provide brief introductions to the implementation approaches and to numerically compare the computational efficiency of GPU parallel computation with the fast iterative shrinkage-thresholding algorithm for LASSO. This study contributes gold standards for CUDA GPU parallel computation, considering both computational efficiency and ease of implementation. Based on our comparison results, we recommend implementing the CUDA GPU parallel computation in Python, with either a dynamic-link library or PyTorch for the iterative algorithms.
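As a CPU reference for the algorithm being benchmarked, the fast iterative shrinkage-thresholding algorithm (FISTA) for LASSO can be sketched in a few lines of NumPy. This is a minimal sketch of the standard algorithm, not code from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(X, y, lam, n_iter=500):
    # Minimise (1/2)||y - X b||^2 + lam * ||b||_1 via FISTA.
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    z, t = b.copy(), 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth part, then proximal (shrinkage) step
        b_new = soft_threshold(z - X.T @ (X @ z - y) / L, lam / L)
        # Nesterov momentum update of the auxiliary point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = b_new + ((t - 1) / t_new) * (b_new - b)
        b, t = b_new, t_new
    return b
```

The inner loop is dominated by the dense matrix-vector products `X @ z` and `X.T @ (...)`, which is precisely the structure that makes the algorithm a good fit for the GPU implementations the study compares.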

Electrical engineering. Electronics. Nuclear engineering
arXiv Open Access 2022
Exploring Opportunities in Usable Hazard Analysis Processes for AI Engineering

Nikolas Martelaro, Carol J. Smith, Tamara Zilovic

Embedding artificial intelligence into systems introduces significant challenges to modern engineering practices. Hazard analysis tools and processes have not yet been adequately adapted to the new paradigm. This paper describes initial research and findings regarding current practices in AI-related hazard analysis and on the tools used to conduct this work. Our goal with this initial research is to better understand the needs of practitioners and the emerging challenges of considering hazards and risks for AI-enabled products and services. Our primary research question is: Can we develop new structured thinking methods and systems engineering tools to support effective and engaging ways for preemptively considering failure modes in AI systems? The preliminary findings from our review of the literature and interviews with practitioners highlight various challenges around integrating hazard analysis into modern AI development processes and suggest opportunities for exploration of usable, human-centered hazard analysis tools.

en cs.SE
arXiv Open Access 2022
A longitudinal case study on the effects of an evidence-based software engineering training

Sebastián Pizard, Diego Vallespir, Barbara Kitchenham

Context: Evidence-based software engineering (EBSE) can be an effective resource to bridge the gap between academia and industry by balancing research of practical relevance and academic rigor. To achieve this, it seems necessary to investigate EBSE training and its benefits for practice. Objective: We sought both to develop an EBSE training course for university students and to investigate its effects on the attitudes and behaviors of the trainees. Method: We conducted a longitudinal case study of our EBSE course and its effects. For this, we collected data at the end of each EBSE course (2017, 2018, and 2019) and in two follow-up surveys (one 7 months after the last course finished, and a second after 21 months). Results: Our EBSE courses seem to have taught students adequately and consistently. Half of the respondents to the surveys report making use of the new skills from the course. The most-reported effects in both surveys indicated that EBSE concepts increase awareness of the value of research and evidence, and that EBSE methods improve information-gathering skills. Conclusions: As suggested by research in other areas, training appears to play a key role in the adoption of evidence-based practice. Our results indicate that our training method provides an introduction to EBSE suitable for undergraduates. However, we believe it is necessary to continue investigating EBSE training and its impact on software engineering practice.

arXiv Open Access 2022
Nuclear Weak Rates and Nuclear Weak Processes in Stars

Toshio Suzuki

Nuclear weak rates in stellar environments are obtained by shell-model calculations including Gamow-Teller (GT) and spin-dipole transitions, and applied to nuclear weak processes in stars. The important roles of accurate weak rates for the study of astrophysical processes are pointed out. The weak rates in the $sd$-shell are used to study the evolution of ONeMg cores in stars with 8-10 M$_{\odot}$. Cooling of the core by nuclear Urca processes, and the heating by double e-captures on $^{20}$Ne, are studied. In particular, the e-capture rates for a second-forbidden transition in $^{20}$Ne are evaluated with the multipole expansion method of Walecka and Behrens-Bühring, and the final fate of the cores, core-collapse or thermonuclear explosion, is discussed. The weak rates in the $pf$-shell are applied to nucleosynthesis of iron-group elements in Type Ia supernovae. The over-production problem of neutron-rich iron isotopes compared with the solar abundances is now reduced to within a factor of two. The weak rates for the nuclear Urca pair with $A$=31 in the island of inversion are evaluated with the effective interaction obtained by the extended Kuo-Krenciglowa method. The transition strengths and e-capture rates in $^{78}$Ni, important for core-collapse processes, are evaluated with the $pf$-$sdg$ shell, and compared with those obtained by the random-phase approximation and an effective rate formula. $β$-decay rates of $N$=126 isotones are evaluated with both the GT and first-forbidden transitions. The half-lives are found to be shorter than those obtained by standard models. Neutrino-nucleus reaction cross sections on $^{13}$C, $^{16}$O and $^{40}$Ar are obtained with new shell-model Hamiltonians. Implications for nucleosynthesis, neutrino detection, neutrino oscillations and the neutrino mass hierarchy are discussed.

en nucl-th, astro-ph.SR
DOAJ Open Access 2021
Recurrent Neural Networks Based Online Behavioural Malware Detection Techniques for Cloud Infrastructure

Jeffrey C. Kimmel, Andrew D. Mcdole, Mahmoud Abdelsalam et al.

Several organizations are utilizing cloud technologies and resources to run a range of applications. These services help businesses save on hardware management, scalability, and maintainability concerns of the underlying infrastructure. Key cloud service providers (CSPs) like Amazon, Microsoft, and Google offer Infrastructure as a Service (IaaS) to meet the growing demand of such enterprises. This increased utilization of cloud platforms has made them an attractive target for attackers, thereby making the security of cloud services a top priority for CSPs. In this respect, malware has been recognized as one of the most dangerous and destructive threats to cloud infrastructure (IaaS). In this paper, we study the effectiveness of Recurrent Neural Network (RNN)-based deep learning techniques for detecting malware in cloud Virtual Machines (VMs). We focus on two major RNN architectures: Long Short-Term Memory RNNs (LSTMs) and Bidirectional RNNs (BIDIs). These models learn the behavior of malware over time based on fine-grained run-time process system features such as CPU, memory, and disk utilization. We evaluate our approach on a dataset of 40,680 malicious and benign samples. The process-level features were collected using real malware running in an open online cloud environment with no restrictions, which is important to emulate practical cloud provider settings and to capture the true behaviour of stealthy and sophisticated malware. Both our LSTM and BIDI models achieve high detection rates of over 99% across different evaluation metrics. In addition, an analysis study is conducted to understand the significance of input data representations. Our results suggest that in particular cases, input ordering does have some effect on the performance of the trained RNN models.

Electrical engineering. Electronics. Nuclear engineering

Page 12 of 442,205