N. Jennings
Results for "Industrial engineering. Management engineering"
Showing 20 of ~11,140,432 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
A. Orman, E. Aarts, J. Lenstra
M. Guicherd, M. Ben Khaled, M. Gueroult et al.
Mark Looi
The rapid advance of Generative AI into software development prompts this empirical investigation of its perceptual effects on practice. We study the usage patterns of 147 professional developers, examining perceived correlates of AI tool use, the resulting productivity and quality outcomes, and developer readiness for emerging AI-enhanced development. We describe a virtuous adoption cycle in which frequency and breadth of AI tool use are the strongest correlates of both Perceived Productivity (PP) and quality, with frequency the strongest. The study finds no perceptual support for the Quality Paradox and shows that PP is positively correlated with Perceived Code Quality (PQ) improvement. Developers thus report both productivity and quality gains. High current usage, breadth of application, frequent use of AI tools for testing, and ease of use correlate strongly with future intended adoption, though security concerns remain a moderate and statistically significant barrier to adoption. Moreover, the adoption of AI testing tools lags that of coding tools, opening a Testing Gap. We identify three developer archetypes (Enthusiasts, Pragmatists, Cautious) that align with an innovation diffusion process in which the virtuous adoption cycle serves as the individual engine of progression. Our findings reveal that organizational adoption of AI tools follows such a process: Enthusiasts push ahead with the tools, creating organizational successes that convert Pragmatists. The Cautious are held in organizational stasis: without early-adopter examples, they do not enter the virtuous adoption cycle, never accumulate the usage frequency that drives intent, and never attain high efficacy. Policy itself does not predict individuals' intent to increase usage but functions as a marker of maturity, formalizing the successful diffusion of adoption by Enthusiasts while acting as a gateway that the Cautious group has yet to reach.
Jingyue Li, André Storhaug
With the advancement of Agentic AI, researchers are increasingly leveraging autonomous agents to address challenges in software engineering (SE). However, the large language models (LLMs) that underpin these agents often function as black boxes, making it difficult to justify the superiority of Agentic AI approaches over baselines. Furthermore, missing information in the evaluation design description frequently renders the reproduction of results infeasible. To synthesize current evaluation practices for Agentic AI in SE, this study analyzes 18 papers on the topic, published or accepted by ICSE 2026, ICSE 2025, FSE 2025, ASE 2025, and ISSTA 2025. The analysis identifies prevailing approaches and their limitations in evaluating Agentic AI for SE, both in current research and potential future studies. To address these shortcomings, this position paper proposes a set of guidelines and recommendations designed to empower reproducible, explainable, and effective evaluations of Agentic AI in software engineering. In particular, we recommend that Agentic AI researchers make their Thought-Action-Result (TAR) trajectories and LLM interaction data, or summarized versions of these artifacts, publicly accessible. Doing so will enable subsequent studies to more effectively analyze the strengths and weaknesses of different Agentic AI approaches. To demonstrate the feasibility of such comparisons, we present a proof-of-concept case study that illustrates how TAR trajectories can support systematic analysis across approaches.
Bahman Ghazanfari
A hybrid method for the numerical solution of systems of delayed linear fuzzy mixed Volterra-Fredholm integral equations (FMDVFIES) is introduced. Using a hybrid of Bernstein polynomials and block-pulse functions (HBBFs), an approximate solution of the equation system is provided. First, the HBBFs and their operational matrices are introduced, and some of their characteristics are described. Then, by applying the operational matrices, the FMDVFIES is converted into a system of algebraic equations, and the numerical solution is obtained by solving this algebraic system. Finally, convergence is investigated and some numerical examples are presented to show the effectiveness of the method.
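For readers unfamiliar with hybrid basis functions of this kind, the usual construction (following the standard hybrid block-pulse/Bernstein definition; the notation below is ours, not necessarily the paper's) divides [0, 1) into N subintervals and places a Bernstein polynomial of degree M on each:

```latex
% Hybrid block-pulse/Bernstein functions on [0,1):
% n = 1,...,N indexes the subinterval, m = 0,...,M the Bernstein degree.
\[
b_{nm}(t) =
\begin{cases}
B_{m,M}\!\left(N t - n + 1\right), & t \in \left[\dfrac{n-1}{N}, \dfrac{n}{N}\right),\\[4pt]
0, & \text{otherwise,}
\end{cases}
\qquad
B_{m,M}(x) = \binom{M}{m}\, x^{m} (1-x)^{M-m}.
\]
```

An unknown function is then approximated as a finite sum over these basis elements, which is what turns the integral equations into an algebraic system via the operational matrices.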
Дмитро АНРДЄЄВ, Олексій ЛИГУН, Андрій ДРОЗД et al.
Critical infrastructures are fundamental to the seamless operation of modern societies, encompassing sectors such as energy, healthcare, transportation, and communications. Ensuring their reliability, performance, continuous operation, safety, maintenance, and protection is a national priority for countries worldwide. Digital twins play a crucial role in critical infrastructure, as they enhance security, resilience, reliability, maintenance, continuity, and operational efficiency across all sectors. Among the benefits offered by digital twins are intelligent and autonomous decision-making, process optimization, improved traceability, interactive visualization, and real-time monitoring, analysis, and prediction. Furthermore, the study revealed that digital twins have the capability to bridge the gap between physical and virtual environments, can be used in combination with other technologies, and can be integrated into various contexts and industries. The use of digital twins was explored as the foundation for developing a modern monitoring system for critical infrastructure facilities, enabling multi-level assessment of asset conditions in real time and ensuring precise threat detection, anomaly identification, and timely decision-making. Integration with artificial intelligence and big data technologies allows not only the collection and analysis of large volumes of information but also the creation of adaptive behavioral models for systems in emergency situations. Special attention was given to the method of optimizing critical IT infrastructure using digital twins, which combines virtual modeling, predictive algorithms, and automated management. The proposed approach enhances the reliability of digital systems, minimizes downtime, optimizes maintenance costs, and strengthens cybersecurity. This system is especially relevant in the context of growing risks and increasing demands for the stability of strategically important infrastructure assets.
The application of digital twins for monitoring and optimizing critical infrastructure demonstrates considerable potential for improving its resilience, safety, and operational efficiency. The approaches discussed in the study confirm the relevance of implementing digital models as tools for timely risk identification, failure prediction, and informed decision-making. By integrating such technologies, organizations can reduce operational costs, minimize downtime, and improve the overall stability of infrastructure operations. Therefore, digital twins represent a vital step toward the digital transformation and modernization of mission-critical systems across various sectors.
Evariste Sindani, Simon Ntumba Badibanga, Pierre Kafunda Katalay et al.
This study proposes an integrated approach to digitalizing human resources (HR) in African public institutions by developing a performance optimization model. Based on five key variables—processing time, operational cost, service quality, degree of automation, and employee satisfaction—this model aims to enhance the overall efficiency of HR processes. The study is applied to the case of the National Office for Population Identification (ONIP) in the Democratic Republic of Congo and highlights substantial improvements in human resource management. Theoretically, the approach contributes to the digital transformation field through modeling, and practically, by offering a reproducible and adaptable framework for other public organizations with limited resources. Keywords: Digitalization, HR process optimization, ONIP, HR performance, HRIS.
Roberto Verdecchia, Justus Bogner
From its first adoption in the late 80s, qualitative research has slowly but steadily made a name for itself in what was, and perhaps still is, the predominantly quantitative software engineering (SE) research landscape. As part of our regular column on empirical software engineering (ACM SIGSOFT SEN-ESE), we reflect on the state of qualitative SE research with a focus group of experts. Among other things, we discuss why qualitative SE research is important, how it evolved over time, common impediments faced while practicing it today, and what the future of qualitative SE research might look like. Joining the conversation are Rashina Hoda (Monash University, Australia), Carolyn Seaman (University of Maryland, United States), and Klaas Stol (University College Cork, Ireland). The content of this paper is a faithful account of our conversation from October 25, 2025, which we moderated and edited for our column.
Mohammed Latif Siddiq, Arvin Islam-Gomes, Natalie Sekerak et al.
Reproducibility is a cornerstone of scientific progress, yet its state in large language model (LLM)-based software engineering (SE) research remains poorly understood. This paper presents the first large-scale, empirical study of reproducibility practices in LLM-for-SE research. We systematically mined and analyzed 640 papers published between 2017 and 2025 across premier software engineering, machine learning, and natural language processing venues, extracting structured metadata from publications, repositories, and documentation. Guided by four research questions, we examine (i) the prevalence of reproducibility smells, (ii) how reproducibility has evolved over time, (iii) whether artifact evaluation badges reliably reflect reproducibility quality, and (iv) how publication venues influence transparency practices. Using a taxonomy of seven smell categories: Code and Execution, Data, Documentation, Environment and Tooling, Versioning, Model, and Access and Legal, we manually annotated all papers and associated artifacts. Our analysis reveals persistent gaps in artifact availability, environment specification, versioning rigor, and documentation clarity, despite modest improvements in recent years and increased adoption of artifact evaluation processes at top SE venues. Notably, we find that badges often signal artifact presence but do not consistently guarantee execution fidelity or long-term reproducibility. Motivated by these findings, we provide actionable recommendations to mitigate reproducibility smells and introduce a Reproducibility Maturity Model (RMM) to move beyond binary artifact certification toward multi-dimensional, progressive evaluation of reproducibility rigor.
Zulfikar Jati Aliansyah, Firdaus Firdaus
Fly ash, a by-product of coal combustion in thermal power plants, is utilized as a substitute for Portland cement in concrete due to its pozzolanic properties. In particular, class F fly ash—produced from the combustion of anthracite coal at approximately 1560˚C (according to SK SNI S15-1990-F)—contains less than 10% lime (CaO) and exhibits significant pozzolanic or filler properties. This research investigates the impact of varying fly ash proportions (0%, 10%, 15%, and 20%) on the compressive strength of concrete. The experimental setup included 108 cylindrical test specimens (10 cm in diameter and 20 cm in height), representing different brands of cement and fly ash mixtures, tested at 7, 14, and 28 days. The study was conducted at the PT. Waskita Beton Precast Plant in Sadang, Purwakarta, West Java, targeting a concrete compressive strength of 40 MPa. Results indicate that at 0% fly ash, the compressive strength using Garuda Cement reached 76.66 MPa after 28 days. However, this strength decreased with increasing fly ash content, measuring 72.35 MPa, 69.07 MPa, and 68.13 MPa for 10%, 15%, and 20% fly ash, respectively. These findings highlight the influence of fly ash content on the structural integrity of concrete, suggesting a potential trade-off between sustainability and mechanical performance. Keywords: Fly Ash, Compressive Strength of Concrete, Portland Cement
Oka Sudana, Ngurah Adi, Agung Cahyawan
Preserving Balinese cultural heritage is crucial for sustaining community identity. In Bali, temples (pura) are central to spiritual and cultural life. However, younger generations, especially the temple caretakers of Pemerajan Agung Sakti Padangsambian, are increasingly losing knowledge of these sacred spaces, weakening the sense of belonging needed to preserve cultural traditions. Current media efforts have failed to engage this demographic. This research addresses the challenge by developing an application that integrates images compiled into books with Android-based AR technology. The application was developed using a user-centered design approach involving analysis, design, development, testing, and evaluation phases. Results show that AR effectively bridges the knowledge gap, with strong usability scores and a significant 42.43% increase in user knowledge. This research demonstrates AR's potential for preserving and transmitting cultural heritage, including the reconstruction of damaged historical objects through 3D modeling with marker detection technology, to ensure seamless integration between the real and virtual worlds.
Ebube Alor, Ahmad Abdellatif, SayedHassan Khatoonabadi et al.
Software engineering (SE) chatbots are increasingly gaining attention for their role in enhancing development processes. At the core of chatbots are Natural Language Understanding platforms (NLUs), which enable them to comprehend user queries but require labeled data for training. However, acquiring such labeled data for SE chatbots is challenging due to the scarcity of high-quality datasets, as training requires specialized vocabulary and phrases not found in typical language datasets. Consequently, developers often resort to manually annotating user queries -- a time-consuming and resource-intensive process. Previous approaches require human intervention to generate rules, called labeling functions (LFs), that categorize queries based on specific patterns. To address this issue, we propose an approach to automatically generate LFs by extracting patterns from labeled user queries. We evaluate our approach on four SE datasets and measure performance improvement from training NLUs on queries labeled by the generated LFs. The generated LFs effectively label data with AUC scores up to 85.3% and NLU performance improvements up to 27.2%. Furthermore, our results show that the number of LFs affects labeling performance. We believe that our approach can save time and resources in labeling users' queries, allowing practitioners to focus on core chatbot functionalities rather than manually labeling queries.
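The pattern-extraction idea behind automatically generated labeling functions can be pictured with a small sketch. This is not the authors' implementation: the tokenization, the frequency threshold, and the keyword-uniqueness rule below are our illustrative assumptions of how label-specific patterns might be mined from labeled queries and turned into query-to-label rules.

```python
import re
from collections import Counter, defaultdict

def extract_patterns(labeled_queries, min_count=2):
    """Collect tokens that frequently co-occur with each intent label."""
    counts = defaultdict(Counter)
    for query, label in labeled_queries:
        for tok in re.findall(r"[a-z']+", query.lower()):
            counts[label][tok] += 1
    # Keep tokens seen at least min_count times for a label, then drop
    # tokens that are also frequent for some other label.
    frequent = {lbl: {t for t, c in ctr.items() if c >= min_count}
                for lbl, ctr in counts.items()}
    return {lbl: toks - set().union(*(v for k, v in frequent.items() if k != lbl))
            if len(frequent) > 1 else toks
            for lbl, toks in frequent.items()}

def make_lf(label, keywords):
    """Turn a keyword set into a labeling function: query -> label or None."""
    def lf(query):
        toks = set(re.findall(r"[a-z']+", query.lower()))
        return label if toks & keywords else None
    return lf

labeled = [
    ("how do I clone a repository", "vcs"),
    ("clone fails with auth error", "vcs"),
    ("build is broken on ci", "build"),
    ("ci build timed out again", "build"),
]
lfs = [make_lf(lbl, kws) for lbl, kws in extract_patterns(labeled).items()]
print([lf("why does clone keep failing") for lf in lfs])
```

A query matched by an LF receives that LF's label; queries matched by no LF stay unlabeled, which mirrors how weak-supervision rules typically abstain rather than guess.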
Lei Zhang, Zhi Pei
In the present paper, the online valet driving problem (OVDP) is studied. In this problem, customers request a valet driving service through the platform; valets then arrive on e-bikes at the designated pickup location and drive the vehicle to the destination. The key task is to assign valets to driving orders effectively so as to minimize the overall cost. To that end, we first propose a new online scheduling strategy that divides the planning horizon into several rounds of fixed length, where each round consists of pooling time and scheduling time. By incorporating the features of online scheduling and the power level of the e-bikes, the OVDP becomes more practical yet more challenging. To solve the OVDP, we formulate it as a set partitioning model and design a branch-and-price (B&P) algorithm. To improve computational efficiency, a label setting algorithm is incorporated to address the pricing subproblem, accelerated via a heuristic pricing method. As an essential part of the algorithm design, an artificial column technique and a greedy-based constructive heuristic are implemented to obtain the initial solution. Numerical analysis on instances of various scales verifies that the proposed B&P algorithm is not only effective in finding optimal solutions but also highly efficient compared with off-the-shelf commercial solvers. Furthermore, we explore the impact of pooling and scheduling time on the OVDP and discover a bowl-shaped trend of the objective value with respect to the two time lengths.
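The greedy constructive heuristic used to seed the B&P algorithm can be sketched for a single round: orders pooled during the round are assigned one by one to the cheapest valet whose e-bike battery can still reach the pickup. The `Order`/`Valet` fields, Euclidean travel cost, and battery model below are our simplifying assumptions, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class Order:
    pickup: tuple  # (x, y) pickup location

@dataclass
class Valet:
    pos: tuple     # current (x, y) position of the valet's e-bike
    battery: float # remaining power, consumed one unit per unit distance

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def greedy_assign(orders, valets):
    """Assign each pooled order to the cheapest feasible free valet."""
    plan, free = [], list(valets)
    for o in sorted(orders, key=lambda o: o.pickup):  # deterministic order
        feasible = [v for v in free if v.battery >= dist(v.pos, o.pickup)]
        if not feasible:
            plan.append((o, None))  # deferred to a later round
            continue
        v = min(feasible, key=lambda v: dist(v.pos, o.pickup))
        v.battery -= dist(v.pos, o.pickup)
        v.pos = o.pickup
        free.remove(v)  # one order per valet within a round
        plan.append((o, v))
    return plan
```

In the actual algorithm, such an assignment only provides initial columns for the set partitioning model; the label setting pricing routine then generates improving columns.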
Sameh Alsaqoor, Ahmad Alqatamin, Ali Alahmer et al.
This study examines the impact of incorporating phase change material (PCM) in photovoltaic thermal (PVT) systems on their electrical and thermal performance. Although PVT systems have shown effectiveness in converting solar energy into both electricity and heat, further studies are needed to investigate how integrating PCMs can enhance performance. The study also aims to explore the effect of solar irradiation and coolant mass flow rate on the electrical and thermal output of both PVT and PVT-PCM systems. A graphical user interface was developed in MATLAB Simulink under the weather conditions of Amman, Jordan. The results show that the incorporation of PCM in PVT systems significantly reduces solar cell temperature and increases electrical efficiency. The highest electrical efficiency of a PVT system with PCM was found to be 14%, compared to 13.75% in a PVT system without PCM. Furthermore, the maximum achievable electrical power in a PVT system with PCM was 21 kW, while in the PVT system without PCM it was 18 kW. The study also found that increasing the coolant mass flow rate in a PVT system with PCM further reduced PV cell temperature and increased electrical efficiency, while the electrical efficiency of both the PVT and PVT-PCM systems decreases as solar incident radiation flux increases, owing to a significant rise in cell temperature. As solar radiation increases from 500 W/m² to 1000 W/m², the electrical efficiency of the PVT configuration decreases from 13.75% to 11.1%, while that of the PVT-PCM configuration falls from 14% to 12%. The findings of this study indicate that the use of PCM in PVT systems can lead to significant improvements in energy production and cooling processes. The results provide valuable information for designing and optimizing PVT-PCM systems.
Mehrdad Jalali, A. D. Dinga Wonanke, Christof Wöll
Kaushik Roy, Manas Gaur, Misagh Soltani et al.
Virtual Mental Health Assistants (VMHAs) are utilized in health care to provide patient services such as counseling and suggestive care. They are not used for patient diagnostic assistance because they cannot adhere to safety constraints and specialized clinical process knowledge (ProKnow) used to obtain clinical diagnoses. In this work, we define ProKnow as an ordered set of information that maps to evidence-based guidelines or categories of conceptual understanding familiar to experts in a domain. We also introduce a new dataset of diagnostic conversations guided by safety constraints and the ProKnow that healthcare professionals use (ProKnow-data). We develop a method for natural language question generation (NLG) that collects diagnostic information from the patient interactively (ProKnow-algo). We demonstrate the limitations of using state-of-the-art large-scale language models (LMs) on this dataset. ProKnow-algo incorporates the process knowledge by explicitly modeling safety, knowledge capture, and explainability. As computational metrics for evaluation do not directly translate to clinical settings, we involve expert clinicians in designing evaluation metrics that test four properties: safety, logical coherence, and knowledge capture for explainability, while minimizing the standard cross-entropy loss to preserve distribution-semantics-based similarity to the ground truth. LMs with ProKnow-algo generated 89% safer questions in the depression and anxiety domain (tested property: safety). Further, without ProKnow-algo, the generated questions did not adhere to the clinical process knowledge in ProKnow-data (tested property: knowledge capture). In comparison, ProKnow-algo-based generations yield a 96% reduction in our metrics to measure knowledge capture. The explainability of the generated questions is assessed by computing similarity with concepts in depression and anxiety knowledge bases.
Overall, irrespective of the type of LMs, ProKnow-algo achieved an averaged 82% improvement over simple pre-trained LMs on safety, explainability, and process-guided question generation. For reproducibility, we will make ProKnow-data and the code repository of ProKnow-algo publicly available upon acceptance.
Lvyang Yang, Jiankang Zhang, Huaiqiang Li et al.
The digitization of engineering drawings is crucial for efficient reuse, distribution, and archiving. Existing computer vision approaches for digitizing engineering drawings typically assume the input drawings have high quality. However, in reality, engineering drawings are often blurred and distorted due to improper scanning, storage, and transmission, which may jeopardize the effectiveness of existing approaches. This paper focuses on restoring and recognizing low-quality engineering drawings, where an end-to-end framework is proposed to improve the quality of the drawings and identify the graphical symbols on them. The framework uses K-means clustering to classify different engineering drawing patches into simple and complex texture patches based on their gray level co-occurrence matrix statistics. Computer vision operations and a modified Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) model are then used to improve the quality of the two types of patches, respectively. A modified Faster Region-based Convolutional Neural Network (Faster R-CNN) model is used to recognize the quality-enhanced graphical symbols. Additionally, a multi-stage task-driven collaborative learning strategy is proposed to train the modified ESRGAN and Faster R-CNN models to improve the resolution of engineering drawings in the direction that facilitates graphical symbol recognition, rather than human visual perception. A synthetic data generation method is also proposed to construct quality-degraded samples for training the framework. Experiments on real-world electrical diagrams show that the proposed framework achieves an accuracy of 98.98% and a recall of 99.33%, demonstrating its superiority over previous approaches. Moreover, the framework is integrated into a widely-used power system software application to showcase its practicality.
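The patch-classification step described above can be illustrated with a rough sketch: compute texture statistics from a gray level co-occurrence matrix (GLCM) per patch, then cluster the feature vectors into "simple" and "complex" groups. The single horizontal offset, the contrast/homogeneity features, and the plain 2-means below are our assumptions; the abstract does not specify the paper's exact features or settings.

```python
import numpy as np

def glcm_stats(patch, levels=8):
    """Contrast and homogeneity of a horizontal-offset GLCM."""
    q = (patch.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count horizontally adjacent level pairs
    glcm /= glcm.sum()           # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()
    return np.array([contrast, homogeneity])

def kmeans2(features, iters=20, seed=0):
    """Plain 2-means on feature rows; returns a cluster index per row."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), 2, replace=False)]
    for _ in range(iters):
        d = ((features[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = features[labels == k].mean(0)
    return labels

rng = np.random.default_rng(1)
smooth = rng.integers(100, 110, (16, 16))  # low-contrast "simple" patch
noisy = rng.integers(0, 256, (16, 16))     # high-contrast "complex" patch
feats = np.array([glcm_stats(p) for p in (smooth, noisy)])
print(kmeans2(feats))
```

In the framework itself, the two resulting groups are routed to different restoration paths (classical computer vision operations versus the modified ESRGAN); the sketch only covers the routing decision.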
Elizabeth Bjarnason, Mirko Morandini, Markus Borg et al.
The RET (Requirements Engineering and Testing) workshop series provides a meeting point for researchers and practitioners from the two separate fields of Requirements Engineering (RE) and Testing. The goal is to improve the connection and alignment of these two areas through an exchange of ideas, challenges, practices, experiences and results. The long term aim is to build a community and a body of knowledge within the intersection of RE and Testing, i.e. RET. The 2nd workshop was held in co-location with ICSE 2015 in Florence, Italy. The workshop continued in the same interactive vein as the 1st one and included a keynote, paper presentations with ample time for discussions, and a group exercise. For true impact and relevance this cross-cutting area requires contribution from both RE and Testing, and from both researchers and practitioners. A range of papers were presented from short experience papers to full research papers that cover connections between the two fields. One of the main outputs of the 2nd workshop was a categorization of the presented workshop papers according to an initial definition of the area of RET which identifies the aspects RE, Testing and coordination effect.
Alejandro Bermúdez Cifuentes, Paula Alejandra Ramírez Lugo, Yeison Camilo Herrera Mosquera et al.
This project seeks to propose a drone system to improve security in Bogotá, given that insecurity rates have increased significantly in recent years, and even more so during the COVID-19 pandemic. The proposal is based on an integrated system with special characteristics for facing different situations: the drones would be equipped with non-lethal weapons, photo capture for facial and vehicle scanning, real-time location transmission, and aerial patrolling of areas with more frequent criminal activity, among other functions.
Page 10 of 557022