In recent years, the increasing complexity of industrial cyber-physical systems such as autonomous vehicles has tightened real-time constraints, yet conventional model-based development (MBD) does not always exploit embedded multi/many-core processors. Obstacles include the need to manually implement ROS 2 I/O and node structure, the requirement in ROS 2 to handle execution timing such as event-driven and timer-driven callbacks and multi-input synchronization, and the need to repeat parallelization for each target hardware configuration. Although Model-based Parallelizer (MBP) can generate task-parallel code from Simulink models, its support for MATLAB/Simulink Toolbox blocks (e.g., ROS Toolbox) is limited, and without data parallelism, speedup is hard to obtain. This paper proposes a method for generating parallel ROS 2 code from Simulink models that include Toolbox blocks. The proposed method preserves Toolbox-equivalent functionality while preventing excessive block reduction from degrading parallelism, and automatically generates ROS 2 C++ nodes that support event-driven and timer-driven execution. Functional correctness is validated through split-merge-based verification and back-to-back tests using identical rosbag2 inputs. Beyond single-node execution time, the method measures end-to-end pipeline execution time across multiple nodes and ROS 2 latency; variability is quantified using WCET (the maximum over 1,000 runs), jitter, and variance. Autoware Universe-derived ROS 2 nodes are evaluated on Raspberry Pi 4, WSL2, and the Coolidge platform. The evaluation demonstrates up to a 15.92-fold speedup on a 16-core Coolidge configuration, while ROS 2 communication and scheduling latency remains below approximately 1% of a 100 ms cycle. These results demonstrate practical, reproducible high-performance ROS 2 deployment on embedded platforms using Toolbox-based MBD.
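The variability metrics named in this abstract (WCET as the maximum over repeated runs, jitter, and variance) can be sketched as follows; the function name and the synthetic sample data are illustrative assumptions, not taken from the paper:

```python
import statistics

def timing_summary(samples_ms):
    """Summarize per-run execution times: WCET (max), jitter (max - min), variance."""
    wcet = max(samples_ms)
    jitter = wcet - min(samples_ms)
    var = statistics.pvariance(samples_ms)
    return {"wcet_ms": wcet, "jitter_ms": jitter, "variance": var}

# illustrative: 1,000 synthetic execution times around a 10 ms nominal cost
samples = [10.0 + 0.001 * (i % 50) for i in range(1000)]
summary = timing_summary(samples)
```

In practice the samples would come from instrumented runs of the generated ROS 2 nodes rather than a synthetic list.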
Matolwandile M. Mtotywa, Jeri-Lee J. Mowers, Wavhudi Ndou
et al.
The integration of artificial intelligence (AI) in literature reviews aims to transform research by potentially automating processes, enhancing rigour, and improving quality. The study proposes a structured step-by-step approach to integrate AI tools into the literature review synthesis process. The developed methodological approach has five steps. The first step, planning and readiness, involves scoping, understanding practices, and defining boundaries of AI use. Next is selecting AI tools and aligning their capabilities with the literature needs through a matrix. The third step focuses on using AI to conduct the review, followed by validation and cross-referencing of AI-generated results. The final step is disclosing AI use in line with ethical and reporting standards. The approach is demonstrated through five scenarios: emerging or fragmented literature, large or saturated fields, interdisciplinary domains, methodologically diverse studies, and under-researched topics. This approach is designed to enhance transparency, potentially reduce bias, and support reproducibility by aligning AI functions with research goals. It also addresses ethical considerations and promotes human–AI collaboration. For researchers and academics, it aims to provide a practical roadmap for the responsible adoption of AI in literature reviews, supporting efficiency, ethical tool use, transparency, and the balance between machine assistance and academic judgment.
Mateus Martinez de Lucena, Josafat Leal Ribeiro, Matheus Wagner
et al.
Industrial data acquisition systems can be sources of vast amounts of data. The seismic surveys conducted by oil and gas companies result in enormous datasets, often exceeding terabytes of data. The resulting storage and communication demands can only be met through compression, and careful consideration must be given to minimizing the reconstruction error introduced by lossy compression. This paper investigates the combination of principal component analysis (PCA), discrete wavelet transform (DWT), thresholding, quantization, and entropy encoding to compress such datasets. The proposed method is a lossy compression algorithm tuned by evaluating the reconstruction error in frequency ranges of interest, namely 0–20 Hz and 15–65 Hz. The PCA compression and decompression act as a noise filter while the DWT drives the compression. The proposed method can be tuned through threshold and quantization percentages and the number of principal components to achieve compression rates of up to 31:1 with reconstruction residue energy of less than 4% in the frequency ranges of 0–20 Hz, 15–65 Hz, and 60–105 Hz.
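As a rough illustration of the PCA stage described above (the DWT, thresholding, quantization, and entropy-encoding stages are omitted), here is a truncated-SVD sketch; the synthetic low-rank data and the chosen rank are assumptions for illustration, not the paper's seismic traces:

```python
import numpy as np

def pca_compress(X, k):
    # center the data and keep the top-k principal components via SVD
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, U[:, :k] * s[:k], Vt[:k]          # mean, scores, basis

def pca_decompress(mu, scores, basis):
    # reconstruct the (denoised) approximation of the original array
    return mu + scores @ basis

# illustrative: 100 "traces" of 64 samples, nearly rank-2 plus small noise
rng = np.random.default_rng(0)
base = rng.standard_normal((2, 64))
X = rng.standard_normal((100, 2)) @ base + 0.01 * rng.standard_normal((100, 64))

mu, scores, basis = pca_compress(X, 2)
Xr = pca_decompress(mu, scores, basis)
residual_energy = np.sum((X - Xr) ** 2) / np.sum(X ** 2)
```

For near-low-rank data the residual energy stays far below the 4% budget the abstract cites; real seismic data would need the remaining stages to reach the reported compression rates.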
Andreas Kapshammer, Severin Huemer-Kals, Kepa Zulueta
et al.
This study introduces a methodology for characterizing and modeling the viscosity and specific volume–pressure–temperature (pvT) behavior of sheet molding compound (SMC) materials, based on the use of specialized testing equipment. Conventional rheometers are inadequate for such materials due to the presence of long fibers, necessitating the use of specialized equipment like squeeze flow rheometers and pvT dilatometers. Our findings demonstrate that traditional oscillatory rheometer measurements underestimate the viscosity of CF-SMCs, highlighting the need for advanced, albeit non-standardized, testing methods. Additionally, we found that standard Tait models failed to capture the temperature-dependent porosity of CF-SMCs at low pressures, whereas models based on thermodynamic state variables (TSVs) provided accurate predictions across a broader range of conditions. The study also addressed the complexities introduced by fiber–flow coupling and fiber orientation in measuring the viscosity, revealing limitations in conventional modeling approaches. The numerical analysis showed that a power law-based anisotropic viscosity model (PL-IISO) combined with a TSV model offered the best predictive performance in finite volume flow simulations, especially for thick-walled regions. However, current modeling approaches have limited predictive capabilities for fiber orientation in thin-walled regions. This research underscores the challenges in accurately modeling the fiber orientation of CF-SMC materials, whereas the compression forces required of the pressing machine could be predicted within an average error of 6.5% in the squeeze flow experiments.
Lekshmi Murali Rani, Faezeh Mohammadi, Robert Feldt
et al.
Incorporating responsible practices into software engineering (SE) for AI is essential to ensure ethical principles, societal impact, and accountability remain at the forefront of AI system design and deployment. This study investigates the ethical challenges and complexities inherent in responsible software engineering (RSE) for AI, underscoring the need for practical, scenario-driven operational guidelines. Given the complexity of AI and the relative inexperience of professionals in this rapidly evolving field, continuous learning and market adaptation are crucial. Through qualitative interviews with seven practitioners (conducted until saturation), quantitative surveys of 51 practitioners, and static validation of results with four industry experts in AI, this study explores how personal values, emerging roles, and awareness of AI's societal impact influence responsible decision-making in RSE for AI. A key finding is the gap between the current state of the art and actual practice in RSE for AI, particularly the failure to operationalize ethical and responsible decision-making within the software engineering life cycle for AI. While ethical issues in RSE for AI largely mirror those found in broader SE processes, the study highlights a distinct lack of operational frameworks and resources to guide RSE practices for AI effectively. The results reveal that current ethical guidelines are insufficiently implemented at the operational level, reinforcing the complexity of embedding ethics throughout the software engineering life cycle. The study concludes that interdisciplinary collaboration, H-shaped competencies (ethical-technical dual competence), and a strong organizational culture of ethics are critical for fostering RSE practices for AI, with a particular focus on transparency and accountability.
Sebe Vanbrabant, Gustavo Rovelo Ruiz, Davy Vanacken
While the increased integration of AI technologies into interactive systems enables them to solve an increasing number of tasks, the black-box problem of AI models continues to spread throughout the interactive system as a whole. Explainable AI (XAI) techniques can make AI models more accessible by employing post-hoc methods or transitioning to inherently interpretable models. While this makes individual AI models clearer, the overarching system architecture remains opaque. This challenge not only pertains to standard XAI techniques but also to human examination and conversational XAI approaches that need access to model internals to interpret them correctly and completely. To this end, we propose conceptually representing such interactive systems as sequences of structural building blocks. These include the AI models themselves, as well as control mechanisms grounded in literature. The structural building blocks can then be explained through complementary explanatory building blocks, such as established XAI techniques like LIME and SHAP. The flow and APIs of the structural building blocks form an unambiguous overview of the underlying system, serving as a communication basis for both human and automated agents, thus aligning human and machine interpretability of the embedded AI models. In this paper, we present our flow-based approach and a selection of building blocks as MATCH: a framework for engineering Multi-Agent Transparent and Controllable Human-centered systems. This research contributes to the field of (conversational) XAI by facilitating the integration of interpretability into existing interactive systems.
Large Language Models (LLMs) have transformed software engineering, but their application to physical engineering domains remains underexplored. This paper evaluates LLMs' capabilities in high-powered rocketry design through RocketBench, a benchmark connecting LLMs to high-fidelity rocket simulations. We test models on two increasingly complex design tasks: target altitude optimization and precision landing challenges. Our findings reveal that while state-of-the-art LLMs demonstrate strong baseline engineering knowledge, they struggle to iterate on their designs when given simulation results and ultimately plateau below human performance levels. However, when enhanced with reinforcement learning (RL), we show that a 7B parameter model outperforms both SoTA foundation models and human experts. This research demonstrates that RL-trained LLMs can serve as effective tools for complex engineering optimization, potentially transforming engineering domains beyond software development.
Research objectives and hypothesis/research questions
The aim is to critically analyze the challenges and inequalities in the management of the financing of the tasks of local government units (LGUs) in Poland, with particular emphasis on the impact of legislative, political, and financial factors on the effectiveness of their tasks.
Research questions:
1. Does the presence of councilors employed in units subordinate to local government units lead to a conflict of interest, which negatively impacts the transparency and independence of financial decisions made?
2. Does the amount of subsidies and grants awarded depend solely on the economic situation of municipalities, or is it also influenced by political links between local authorities and the ruling party at the central level?
3. As a result of underestimating the educational subsidy, are local government units forced to redirect their funds to finance educational tasks at the expense of other public activity areas?
4. Do the currently used algorithms for the distribution of subsidies reflect the real needs of local government units and, as a result, ensure an optimal allocation of public funds?
5. Is there equal access for local government units to European and national funds?
Research methods
1. Analysis of empirical data: Examination of data from local government units (LGUs) between 2019 and 2023.
2. Comparative analysis: Evaluation of financial indicators for LGUs based on their size, own revenues, and political affiliations.
3. Statistical analysis: Investigation of differences in the allocation of financial resources to identify disparities.
4. Analysis of source documents: Review of legal documents, Supreme Audit Office (NIK) reports, and local budget data from LGUs.
5. Case study: Analysis of municipalities in the Radomsko area, focusing on underestimated educational subsidies and conflicts of interest.
6. Critical literature review: Examination of domestic and international literature to provide context and identify relevant issues.
Main results
1. The amount of subsidies and grants awarded often depends on the political affiliations of local authorities with the ruling party.
2. Educational subsidies fall short of covering actual educational costs, straining resources for other public responsibilities.
3. Councilors employed by subordinate LGU units cause conflicts of interest, harming transparency and financial independence.
4. Under governmental support programs, grant allocation processes lacked transparency and clear criteria, enabling abuses and discretionary fund distribution.
5. Financial support was unevenly distributed, worsening inequalities between wealthier and poorer regions.
Implications for theory and practice
For theory: the research brought a new perspective to the analysis of decentralization and self-government, showing the impact of political, legislative, and financial factors on the functioning of local governments. In particular, the results confirm the importance of political distribution theory, pointing to the practice of favoring individuals associated with the ruling party, reflecting the phenomenon of political allocation of resources. The problems of unequal allocation of resources and underestimation of education subsidies bring new elements to the theory of distributive justice, highlighting the imbalance in access to public resources between regions.
For practice: the research indicates an urgent need for legislative reforms aimed at simplifying and stabilizing the regulations governing the activities of local government units. It recommends introducing more transparent mechanisms for allocating public funds and emphasizes the importance of supporting less developed local government units, which would reduce regional inequalities and make more efficient use of available funds.
Management. Industrial management, Management information systems
Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara
et al.
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., for compliance achievement and cost reduction. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide.
The article presents an information and analytical platform for working capital management, developed according to a systematic approach and considering the specifics of textile production and modern requirements of preventive crisis management. The project is presented, and an effective algorithm for preventive crisis management of working capital is adapted on the basis of an information and analytical environment using progressive analysis techniques. The author's approach regulates the formation of analytical information in the designated block of anti-crisis management, integrated into the analytical environment of the enterprise through the interrelations and interdependencies of quantitative and qualitative parameters of working capital use efficiency. The anti-crisis management project for an organization's working capital, based on a systematic approach, made it possible to identify disproportions in individual parameters of working capital management efficiency and typical financial problems associated with long operational, production, and financial cycles, which together pose a real threat to the sustainable development of the enterprise (its innovation, investment, market, and financial stability). In addition, the project made it possible to develop a set of interrelated measures to improve the efficiency of working capital management, formulated as optimization tasks in regulating stocks of raw materials, finished products, and accounts receivable. The results of the analytical study indicate the need for innovative approaches to managing working capital turnover in the production and financial cycles, with access to high-margin design solutions for the sustainable development of the enterprise.
In the social and organizational sciences, accountability has been linked to the efficient operation of organizations. However, it has received limited attention in software engineering (SE) research, in spite of its central role in the most popular software development methods (e.g., Scrum). In this article, we explore the mechanisms of accountability in SE environments. We investigate the factors that foster software engineers' individual accountability within their teams through an interview study with 12 people. Our findings recognize two primary forms of accountability shaping software engineers' individual sense of accountability: institutionalized and grassroots. While the former is directed by formal processes and mechanisms, like performance reviews, grassroots accountability arises organically within teams, driven by factors such as peers' expectations and intrinsic motivation. This organic form cultivates a shared sense of collective responsibility, emanating from shared team standards and individual engineers' inner commitment to their personal and professional values and self-set standards. While institutionalized accountability relies on traditional "carrot and stick" approaches, such as financial incentives or denial of promotions, grassroots accountability operates on reciprocity with peers and intrinsic motivations, like maintaining one's reputation in the team.
This paper introduces reAnalyst, a framework designed to facilitate the study of reverse engineering (RE) practices through the semi-automated annotation of RE activities across various RE tools. By integrating tool-agnostic data collection of screenshots, keystrokes, active processes, and other types of data during RE experiments with semi-automated data analysis and generation of annotations, reAnalyst aims to overcome the limitations of traditional RE studies that rely heavily on manual data collection and subjective analysis. The framework enables more efficient data analysis, which will in turn allow researchers to explore the effectiveness of protection techniques and strategies used by reverse engineers more comprehensively and efficiently. Experimental evaluations validate the framework's capability to identify RE activities from a diverse range of screenshots with varied complexities. Observations on past experiments with our framework as well as a survey among reverse engineers provide further evidence of the acceptability and practicality of our approach.
Context: Experiment replications play a central role in the scientific method. Although software engineering experimentation has matured a great deal, the number of experiment replications is still relatively small. Software engineering experiments are composed of complex concepts, procedures and artefacts. Laboratory packages are a means of transferring knowledge among researchers to facilitate experiment replications. Objective: This paper investigates the experiment replication process to find out what information is needed to successfully replicate an experiment. Our objective is to propose the content and structure of laboratory packages for software engineering experiments. Method: We evaluated seven replications of three different families of experiments. Each replication had a different experimenter who was, at the time, unfamiliar with the experiment. During the first iterations of the study, we identified experimental incidents and then proposed a laboratory package structure that addressed these incidents, including document usability improvements. We used the later iterations to validate and generalize the laboratory package structure for use in all software engineering experiments. We aimed to solve a specific problem, while at the same time looking at how to contribute to the body of knowledge on laboratory packages. Results: We generated a laboratory package for three different experiments. These packages eased the replication of the respective experiments. The evaluation that we conducted shows that the laboratory package proposal is acceptable and reduces the effort currently required to replicate experiments in software engineering. Conclusion: We think that the content and structure that we propose for laboratory packages can be useful for other software engineering experiments.
This paper introduces a novel approach to simulating random variates from two probability distributions, the neutrosophic uniform distribution and the neutrosophic Weibull distribution. The primary objective of this research is to present a methodology for generating random variates by leveraging the accept-reject simulation method, particularly in the context of managing and addressing uncertainty. In addition to introducing the simulation methodology, this work provides comprehensive algorithms tailored to the proposed methods. These algorithms are essential for implementing the simulation techniques and will be instrumental in their practical applications. Furthermore, this study explores the relationship between the level of indeterminacy and the resulting random variates. By investigating how varying degrees of indeterminacy affect the generated variates, we gain valuable insights into the dynamics of these distributions under different uncertainty conditions. Preliminary results suggest that random variates tend to decrease as indeterminacy levels increase, shedding light on the interplay between indeterminacy and random variate generation.
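The accept-reject method referenced above can be illustrated for a classical (crisp, non-neutrosophic) Weibull target with a uniform proposal; the shape and scale parameters, the truncation bound, and the function names below are illustrative assumptions, not the paper's neutrosophic algorithms:

```python
import math
import random

def weibull_pdf(x, k, lam):
    # Weibull density for x >= 0, shape k, scale lam
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def accept_reject_weibull(n, k=2.0, lam=1.0, upper=4.0, seed=42):
    """Sample n Weibull(k, lam) variates via accept-reject with a U(0, upper) proposal."""
    rng = random.Random(seed)
    # envelope: the density maximum occurs at the mode (for k > 1),
    # so u <= f(x) / f_max gives the accept condition
    mode = lam * ((k - 1) / k) ** (1 / k)
    f_max = weibull_pdf(mode, k, lam)
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, upper)
        if rng.random() <= weibull_pdf(x, k, lam) / f_max:
            out.append(x)
    return out

samples = accept_reject_weibull(20000)
```

A neutrosophic version would, roughly speaking, carry an indeterminacy interval through the parameters rather than a single crisp value for each.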
Computer engineering. Computer hardware, Information technology
Large Language Models (LLMs), such as ChatGPT, have transformed the field of natural language processing with their capacity for language comprehension and generation of human-like, fluent responses for many downstream tasks. Despite their impressive capabilities, they often fall short in domain-specific and knowledge-intensive domains due to a lack of access to relevant data. Moreover, most state-of-the-art LLMs lack transparency, as they are often accessible only through APIs. Furthermore, their application in critical real-world scenarios is hindered by their proclivity to produce hallucinated information and their inability to leverage external knowledge sources. To address these limitations, we propose a system that enhances LLMs by integrating them with an external knowledge management module. The system allows LLMs to utilize data stored in vector databases, providing them with relevant information for their responses. Additionally, it enables them to retrieve information from the Internet, further broadening their knowledge base. This approach circumvents the need to retrain LLMs, which can be a resource-intensive process; instead, it focuses on making more efficient use of existing models. Preliminary results indicate that the system holds promise for improving the performance of LLMs in domain-specific and knowledge-intensive tasks. By equipping LLMs with real-time access to external data, their language generation capabilities can be harnessed more effectively, without the need to continually strive for larger models.
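The vector-database retrieval step described above can be sketched as follows; the bag-of-words embedding is a stand-in for the learned embeddings a real system would use, and the document texts are invented for illustration:

```python
import numpy as np

def embed(text, vocab):
    # toy bag-of-words embedding, L2-normalized; a production system
    # would use a learned sentence-embedding model instead
    v = np.array([text.lower().split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

# illustrative in-memory "vector database"
docs = [
    "ros2 nodes exchange messages over topics",
    "the weibull distribution models failure times",
    "vector databases index embeddings for similarity search",
]
vocab = sorted(set(" ".join(docs).lower().split()))
index = np.stack([embed(d, vocab) for d in docs])

def retrieve(query, k=1):
    # on unit vectors, cosine similarity reduces to a dot product
    sims = index @ embed(query, vocab)
    return [docs[i] for i in np.argsort(-sims)[:k]]
```

The retrieved passages would then be prepended to the LLM prompt so the model can ground its response in them.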
Communications in the mmWave spectrum have gained relevance in recent years as a promising candidate to cope with the increasing demand for throughput and latency in different use cases. To date, several efforts have been made to characterize the propagation medium of these signals so that the corresponding communication protocols can be designed accordingly, and a wide variety of both outdoor and indoor locations have already been studied. However, very few works address industrial scenarios, which are particularly demanding due to their stringent requirements in terms of reliability, determinism, and latency. This work provides insight into the propagation of 60 GHz mmWave signals in a typical industrial workshop in order to explore the particularities of this kind of scenario. To this end, an extensive measurement campaign has been carried out in this environment, and a stochastic channel model has been proposed and validated.
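A common building block of stochastic channel models like the one mentioned above is the log-distance path-loss law with log-normal shadowing; the 60 GHz-style parameter values below are illustrative assumptions, not the fitted values from this measurement campaign:

```python
import math
import random

def path_loss_db(d_m, pl0_db=68.0, n_exp=2.1, d0_m=1.0, sigma_db=3.0, rng=None):
    """Log-distance path loss: PL(d) = PL0 + 10*n*log10(d/d0) + X_sigma.

    PL0 is the loss at reference distance d0, n_exp the path-loss exponent,
    and X_sigma a zero-mean Gaussian shadowing term (omitted when rng is None).
    """
    shadowing = rng.gauss(0.0, sigma_db) if rng else 0.0
    return pl0_db + 10.0 * n_exp * math.log10(d_m / d0_m) + shadowing

# deterministic mean path loss at a few distances (no shadowing term)
losses = [path_loss_db(d) for d in (1.0, 10.0, 100.0)]
```

A measurement campaign like the one in the abstract would fit PL0, the exponent, and the shadowing variance to the recorded data rather than assume them.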
This position paper for an invited talk on the "Future of eScience" discusses the Research Software Engineering Movement and where it might be in 2030. Because of the authors' experiences, it is aimed globally but with examples that focus on the United States and United Kingdom.
Markus Borg, Elizabeth Bjarnason, Michael Unterkalmsteiner
et al.
The RET (Requirements Engineering and Testing) workshop series provides a meeting point for researchers and practitioners from the two separate fields of Requirements Engineering (RE) and Testing. The long term aim is to build a community and a body of knowledge within the intersection of RE and Testing, i.e., RET. The 4th workshop was co-located with the 25th International Requirements Engineering Conference (RE'17) in Lisbon, Portugal and attracted about 20 participants. In line with the previous workshop instances, RET 2017 offered an interactive setting with a keynote, an invited talk, paper presentations, and a concluding hands-on exercise.