Large language models (LLMs) demonstrate strong generative capabilities but remain vulnerable to hallucination and unreliable reasoning under adversarial prompting. Existing safety approaches, such as reinforcement learning from human feedback (RLHF) and output filtering, primarily operate at the behavioral level and may lack explicit architectural mechanisms for enforcing reasoning-process integrity. This paper proposes the Box Maze framework, a conceptual process-control architecture that decomposes LLM reasoning into three explicit layers: memory grounding, structured inference, and boundary enforcement. We introduce a preliminary simulation-based evaluation involving progressive boundary-erosion scenarios across multiple heterogeneous LLM systems (DeepSeek-V3, Doubao, Qwen). Results from n=50 adversarial scenarios suggest that explicit cognitive control layers may improve consistency in boundary maintenance, with architectural constraints reducing boundary failure rates from approximately 40% (baseline RLHF) to below 1% under adversarial conditions. While current validation is simulation-based, these preliminary results indicate that process-level control may offer a promising direction for improving reliability in large language model reasoning.
Swarm coverage by unmanned underwater vehicles (UUVs) is essential for inspection, environmental monitoring, and search operations, but remains challenging in three-dimensional domains under limited sensing and communication. Pheromone-based stigmergic coordination provides a low-bandwidth alternative to explicit communication, yet conventional single-field models are susceptible to depth-dependent sensing inconsistencies and multi-source signal interference. This paper introduces a dual-trail stigmergic coordination framework in which a virtual pheromone field encodes short-term motion cues while an auxiliary coverage trail records the accumulated exploration effort. UUV motion is guided by the combined gradients of these two fields, enabling more consistent behavior across depth layers and mitigating ambiguities caused by overlapping pheromone sources. At the macroscopic level, swarm evolution is modeled by a coupled system of partial differential equations (PDEs) describing vehicle density, pheromone concentration, and the coverage trail. A Lyapunov functional is constructed to derive sufficient conditions under which perturbations around the uniform coverage equilibrium decay exponentially. Numerical simulations in three-dimensional underwater domains demonstrate that the proposed framework reduces coverage holes, limits redundant overlap, and improves robustness relative to a single-pheromone baseline and a potential-field-based controller. These results indicate that dual-field stigmergic control is a promising and scalable approach for UUV coverage in constrained underwater environments.
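The combined-gradient guidance described in this abstract can be sketched generically as follows; the two field functions, the gains `alpha` and `beta`, and the finite-difference gradient estimate are illustrative assumptions, not the paper's actual controller:

```python
import numpy as np

def combined_gradient_step(pos, pheromone, coverage, h=1.0, alpha=1.0, beta=0.5):
    """One guidance step: move along alpha*grad(pheromone) - beta*grad(coverage).

    pheromone, coverage: callables mapping an R^3 position to a scalar field value.
    Gradients are estimated by central differences with spacing h.
    alpha and beta are illustrative weighting gains (assumptions, not from the paper).
    """
    def grad(f, p):
        g = np.zeros(3)
        for i in range(3):
            e = np.zeros(3)
            e[i] = h
            g[i] = (f(p + e) - f(p - e)) / (2 * h)  # central difference
        return g

    # Attracted up the pheromone gradient, repelled from already-covered regions.
    step = alpha * grad(pheromone, pos) - beta * grad(coverage, pos)
    n = np.linalg.norm(step)
    return pos + step / n if n > 1e-9 else pos  # unit step, or stay put at a flat point
```

For example, with a pheromone field peaked at a target and a flat coverage trail, a vehicle at the origin takes a unit step straight toward the target.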
Tight-formation control is a key technology for unmanned surface vehicle (USV) swarms in harbor navigation, cooperative berthing, and operations in hazardous environments, yet achieving reliable obstacle avoidance while maintaining formation stability remains highly challenging. Although multi-agent reinforcement learning has shown strong potential in cooperative systems, parallel policy structures in many existing methods still struggle to achieve synchronized coordination in tight formations, leading to behavioral inconsistencies and unstable formation keeping. To address these challenges, an action-aware multi-agent soft actor–critic (AAMASAC) algorithm is proposed that introduces a hierarchical, action-aware decision mechanism. Within each time step, upper-layer actions are propagated as prior signals to lower-layer policies, establishing an ordered, intent-aligned decision flow that mitigates temporal inconsistency and enhances coordination efficiency. The architecture explicitly encodes inter-layer dependencies via a decision priority hierarchy and real-time behavioral information channels, enabling more accurate credit assignment and more stable value estimation and policy optimization. Across three representative validation scenarios, the AAMASAC algorithm significantly outperforms baseline methods in average reward, path-tracking accuracy, formation stability, and obstacle-avoidance performance. These results indicate that introducing a hierarchical model and action awareness effectively improves control accuracy and coordination in a USV swarm.
Software Architecture Descriptions (SADs) are essential for managing the inherent complexity of modern software systems. They enable high-level architectural reasoning, guide design decisions, and facilitate effective communication among diverse stakeholders. However, in practice, SADs are often missing, outdated, or poorly aligned with the system's actual implementation. Consequently, developers are compelled to derive architectural insights directly from source code, a time-intensive process that increases cognitive load, slows new developer onboarding, and contributes to the gradual degradation of clarity over the system's lifetime. To address these issues, we propose a semi-automated generation of SADs from source code by integrating reverse engineering (RE) techniques with a Large Language Model (LLM). Our approach recovers both static and behavioral architectural views by extracting a comprehensive component diagram, filtering architecturally significant elements (core components) via prompt engineering, and generating state machine diagrams to model component behavior based on the underlying code logic with few-shot prompting. The resulting view representation offers a scalable and maintainable alternative to traditional manual architectural documentation. This methodology, demonstrated using C++ examples, highlights the capability of LLMs to: 1) abstract the component diagram, thereby reducing the reliance on human expert involvement, and 2) accurately represent complex software behaviors, especially when enriched with domain-specific knowledge through few-shot prompting. These findings suggest a viable path toward significantly reducing manual effort while enhancing system understanding and long-term maintainability.
Hashini Gunatilake, John Grundy, Rashina Hoda, et al.
Empathy plays a crucial role in software engineering (SE), influencing collaboration, communication, and decision-making. While prior research has highlighted the importance of empathy in SE, there is limited understanding of how empathy manifests in SE practice, what motivates SE practitioners to demonstrate empathy, and the factors that influence empathy in SE work. Our study explores these aspects through 22 interviews and a large-scale survey with 116 software practitioners. Our findings provide insights into the expression of empathy in SE, the drivers behind empathetic practices, SE activities where empathy is perceived as useful or not, and the other factors that influence empathy. In addition, we offer practical implications for SE practitioners and researchers, offering a deeper understanding of how to effectively integrate empathy into SE processes.
In response to recent FIA regulations reducing Formula 1 team wind tunnel hours (from 320 hours for last-place teams to 200 hours for championship leaders) and imposing a strict budget cap of 135 million USD per year, teams need more efficient aerodynamic development tools. Conventional computational fluid dynamics (CFD) simulations, though offering high-fidelity results, require large computational resources, with typical simulation durations of 8-24 hours per configuration analysis. This article proposes a Physics-Informed Neural Network (PINN) for the fast prediction of Formula 1 front wing aerodynamic coefficients. The proposed methodology combines CFD simulation data from SimScale with first principles of fluid dynamics through a hybrid loss function that enforces both data fidelity and physical adherence based on the Navier-Stokes equations. Trained on force and moment data from 12 aerodynamic features, the PINN model achieves coefficient of determination (R-squared) values of 0.968 for drag coefficient and 0.981 for lift coefficient prediction while reducing computation time. The physics-informed framework keeps predictions adherent to fundamental aerodynamic principles, offering F1 teams an efficient tool for fast design-space exploration within regulatory constraints.
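The hybrid PINN loss described in this abstract combines a data-fidelity term with a physics-residual term penalized at collocation points. A minimal sketch follows, using a toy ODE residual in place of the Navier-Stokes equations; the quadratic surrogate, the collocation points, and the weight `lam` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hybrid_loss(params, x_data, y_data, x_col, lam=0.1):
    """Data-fidelity + physics-residual loss for a toy surrogate u(x) = a + b*x + c*x^2.

    The physics constraint here is the toy ODE u'(x) = 2x (so u = x^2 + const),
    standing in for the Navier-Stokes residual used in the paper.
    x_col are collocation points where the residual is penalized; lam weights
    the physics term against the data term.
    """
    a, b, c = params
    u = lambda x: a + b * x + c * x**2          # surrogate model
    du = lambda x: b + 2 * c * x                # its analytic derivative
    data_loss = np.mean((u(x_data) - y_data) ** 2)
    physics_loss = np.mean((du(x_col) - 2 * x_col) ** 2)
    return data_loss + lam * physics_loss
```

A parameter set that both fits the data and satisfies the toy ODE drives the loss to zero, while a data-only fit that violates the physics is penalized.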
Submerged floating tunnels (SFTs) may be subjected to sudden explosive loads during operation, such as internal vehicle explosions, terrorist attacks, and external explosions. Based on the Arbitrary Lagrangian–Eulerian (ALE) method, a locally truncated SFT model and a fluid–structure interaction model of the internal air and external water are established. Spherical explosives are used to simulate the destructive impact of internal explosions at different positions on the road inside the SFT and at key positions at the bottom of the road. The results show that the peak accelerations at the monitoring points caused by explosions of vehicles on the road decay rapidly within a range of three times the radius of the SFT, and circularly distributed damage appears on the explosion-facing side of the road surface. Longitudinal extensional damage occurs at the junction of the road surface and the SFT wall as well as the bottom supporting wall, and longitudinal cracks appear on the SFT wall. The peak accelerations at the monitoring points of the internal road caused by a concealed bomb at the bottom of the SFT decay rapidly within a range of twice the radius of the SFT, and the damage to the SFT is mainly concentrated on the road surface and the supporting wall. The most dangerous direction of external underwater explosion is determined to be directly below the SFT. When the scaled distance of the explosion is less than 0.543 m/kg<sup>1/3</sup>, the accelerations at the monitoring points of the internal road show a single-peak trend with rapid rise and decay, and circumferential through-cracks appear on the SFT wall. The supporting wall connecting the SFT wall and the internal road transmits stress to the road, causing extensive damage.
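The scaled distance cited in this abstract follows the standard Hopkinson-Cranz cube-root law relating standoff distance and charge mass. As a quick reference (generic formula, not tied to the paper's specific model):

```python
def scaled_distance(standoff_m, charge_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3).

    standoff_m: distance R from charge to target (m).
    charge_kg: TNT-equivalent charge mass W (kg).
    """
    return standoff_m / charge_kg ** (1.0 / 3.0)
```

For example, a 1.086 m standoff from an 8 kg charge gives the threshold value Z = 0.543 m/kg^(1/3) mentioned above.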
Ehidiame Ibazebo, Vimal Savsani, Arti Siddhpura, et al.
Boat collisions pose severe threats to maritime safety, economic activity, and environmental sustainability. Conventional risk assessment methods—such as Failure Mode and Effects Analysis, and Fault Tree Analysis—are widely applied but remain inadequate for addressing the uncertainty, subjectivity, and interdependency of risk factors in complex maritime environments. This study proposes a fuzzy Multi-Criteria Decision-Making framework for the risk assessment of boat collisions. The model integrates fuzzy logic with Analytic Hierarchy Process for criterion weighting and the Technique for Order Preference by Similarity to the Ideal Solution for risk ranking. Fuzzy logic is employed to capture linguistic expert judgments and to manage vague or incomplete data, which are common challenges in marine operations. Key collision risk factors—human error, boat engine system failure, environmental conditions, and intentional threats—are identified through literature review, incident data analysis, and expert consultation. A comparative analysis with a baseline non-fuzzy model demonstrates the added value of the fuzzy-integrated framework, showing improved capacity to handle imprecision and uncertainty. The model outputs not only prioritise risk rankings but also support the identification of critical control actions and effective safety measures. A case study of Nigerian waters illustrates the practicality of the framework in guiding risk mitigation strategies and informing policy decisions under uncertainty.
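The TOPSIS ranking stage described in this abstract can be sketched as follows, once fuzzy expert judgments have been defuzzified to crisp scores; the weights would come from the AHP step. This is generic textbook TOPSIS, not the authors' implementation:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix: (m alternatives x n criteria) crisp scores (here, defuzzified values).
    weights: criterion weights (e.g., from AHP), summing to 1.
    benefit: boolean per criterion (True = larger is better).
    Returns closeness coefficients in [0, 1]; higher = closer to the ideal solution.
    """
    m = matrix / np.linalg.norm(matrix, axis=0)          # vector-normalize each column
    v = m * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)            # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)             # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every benefit criterion receives closeness 1, and one dominated on every criterion receives 0.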
Muhammad Alì, Francesca Razzano, Sergio Vitale, et al.
Typically, the detection of marine debris relies on in-situ campaigns that require substantial human effort and offer limited spatial coverage. Given the need for a rapid solution for detecting floating plastic, methods based on remote sensing data have recently been proposed. Their main limitation is the lack of a common reference for evaluating performance. Recently, the Marine Debris Archive (MARIDA) was released as a standard dataset for developing and evaluating Machine Learning (ML) algorithms for the detection of Marine Plastic Debris. The MARIDA dataset was created to simplify the comparison of detection solutions, with the aim of stimulating research in the field of marine environment preservation. In this work, an assessment of spectral-based solutions is proposed by evaluating their performance on the MARIDA dataset. The outcome highlights the need for a precise reference for fair evaluation.
In this practice paper, we propose a framework for integrating AI into disciplinary engineering courses and curricula. The use of AI within engineering is an emerging but growing area and the knowledge, skills, and abilities (KSAs) associated with it are novel and dynamic. This makes it challenging for faculty who are looking to incorporate AI within their courses to create a mental map of how to tackle this challenge. In this paper, we advance a role-based conception of competencies to assist disciplinary faculty with identifying and implementing AI competencies within engineering curricula. We draw on prior work related to AI literacy and competencies and on emerging research on the use of AI in engineering. To illustrate the use of the framework, we provide two exemplary cases. We discuss the challenges in implementing the framework and emphasize the need for an embedded approach where AI concerns are integrated across multiple courses throughout the degree program, especially for teaching responsible and ethical AI development and use.
Bohui Zhang, Valentina Anita Carriero, Katrin Schreiberhuber, et al.
Ontology engineering (OE) in large projects poses a number of challenges arising from the heterogeneous backgrounds of the various stakeholders and domain experts, and their complex interactions with ontology designers. This multi-party interaction often creates systematic ambiguities and biases in the elicitation of ontology requirements, which directly affect design and evaluation, and may jeopardise the intended reuse. Meanwhile, current OE methodologies strongly rely on manual activities (e.g., interviews, discussion pages). After collecting evidence on the most crucial OE activities, we introduce \textbf{OntoChat}, a framework for conversational ontology engineering that supports requirement elicitation, analysis, and testing. By interacting with a conversational agent, users can steer the creation of user stories and the extraction of competency questions, while receiving computational support to analyse the overall requirements and test early versions of the resulting ontologies. We evaluate OntoChat by replicating the engineering of the Music Meta Ontology and collecting preliminary metrics on the effectiveness of each component from users. We release all code at https://github.com/King-s-Knowledge-Graph-Lab/OntoChat.
Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient matching for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient matching. In this paper, we report extensive analysis and validation of four search methods: bag of visual words (BoVW), Yottixel, SISH, and RetCCL, along with some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets ($1269$ patients) and three public datasets ($1207$ patients), totaling more than $200,000$ patches from $38$ different classes/subtypes across five primary sites. Certain search engines, for example BoVW, exhibit notable efficiency and speed but suffer from low accuracy. Conversely, search engines like Yottixel demonstrate efficiency and speed, providing moderately accurate results. Recent proposals, including SISH, display inefficiency and yield inconsistent outcomes, while alternatives like RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual aspects of accuracy and minimal storage requirements in histopathological image search.
This study aims to improve the accuracy of bathymetry predicted by the gravity-geologic method (GGM) using the optimal machine learning model selected from several machine learning techniques. The candidate models were compared on their performance in predicting depth from gravity anomalies. In addition, a tuning density contrast calculated from satellite altimetry-derived free-air gravity anomalies (FAGAs) was applied to estimate enhanced bathymetry. The accuracy of the bathymetry estimated using satellite altimetry-derived FAGAs and machine learning was evaluated by comparison with shipborne depth measurements. The findings reveal that the bathymetry predicted by the optimal machine learning model (Gaussian process regression) combined with the GGM and a tuning density contrast achieves an RMSE of 82.64 m against shipborne depth measurements, an improvement of 67.40%. Although the tuning density contrast is larger than 1.67 g/cm<sup>3</sup>, bathymetry derived from satellite altimetry FAGAs and machine learning can be effectively improved with higher accuracy.
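The GGM referenced in this abstract conventionally relates the residual (short-wavelength) gravity anomaly to depth through a Bouguer slab approximation. A minimal sketch under that standard assumption follows; the function name, reference depth, and unit handling are illustrative, not the authors' implementation:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def ggm_depth(residual_mgal, density_contrast_gcc, ref_depth=0.0):
    """Invert a residual gravity anomaly to depth via the Bouguer slab relation.

    The slab model gives delta_g = 2*pi*G*delta_rho*(h - ref_depth), hence
    h = ref_depth + delta_g / (2*pi*G*delta_rho).
    residual_mgal: residual anomaly in mGal (1 mGal = 1e-5 m/s^2).
    density_contrast_gcc: density contrast in g/cm^3 (the "tuning" parameter).
    """
    dg = residual_mgal * 1e-5                  # mGal -> m/s^2
    drho = density_contrast_gcc * 1000.0       # g/cm^3 -> kg/m^3
    return ref_depth + dg / (2 * np.pi * G * drho)
```

The tuning density contrast directly scales the inferred relief, which is why calibrating it against known depths improves the estimate.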
Shahab Rouhi, Setare Sadeqi, Nikolaos I. Xiros, et al.
The primary goal of this study is to develop and test a small-scale horizontal-axis underwater Ocean Current Turbine (OCT) by creating a mathematical model of the coupled dynamics, supported by an experimental approach integrated with Blade Element Momentum (BEM) simulation. This research is motivated by the urgent need for sustainable energy sources and the vast potential of ocean currents. By integrating mathematical modeling with the experimental testing of scaled model OCTs, this study aims to evaluate performance accurately. The experimental setup involves encapsulating a 3D-printed turbine model within a watertight nacelle equipped with sensors for comprehensive data recording during towing tank tests. Through these experiments, we seek to establish correlations between the generated power, force, and rotational speed of the turbine’s Permanent Magnet DC (PMDC) motor, which determines the turbine’s capability to extract energy from the incoming flow. Moreover, this research aims to provide valuable insights into the accuracy and applicability of theoretical predictions in real-world scenarios by comparing the experimental results with BEM simulations. This combined approach not only advances our understanding of hydrokinetic energy conversion, but also contributes to the development of reliable and efficient renewable energy technologies that address global energy challenges while mitigating environmental impacts.
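The power extraction quantified in such towing-tank tests follows the standard hydrokinetic relation P = 0.5 * rho * A * v^3 * Cp. A minimal sketch of this textbook formula (the parameter values in the example are illustrative, not the study's measurements):

```python
import numpy as np

def turbine_power(rho, radius, v, cp):
    """Hydrokinetic power extracted by a horizontal-axis turbine.

    P = 0.5 * rho * A * v^3 * Cp, with swept area A = pi * radius^2.
    rho: water density (kg/m^3), radius: rotor radius (m),
    v: current speed (m/s), cp: power coefficient (Betz limit ~0.593).
    """
    area = np.pi * radius**2
    return 0.5 * rho * area * v**3 * cp
```

For instance, a 0.5 m-radius rotor in seawater (1025 kg/m^3) at 1.5 m/s with Cp = 0.35 extracts roughly 475 W, illustrating the cubic sensitivity to current speed.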
The sliding mode controller stands out for its exceptional stability, even when the system experiences noise or undergoes time-varying parameter changes. However, designing a sliding mode controller requires precise knowledge of the plant’s exact model, which is often unattainable in practical scenarios. Furthermore, if the sliding control law’s amplitude becomes excessive, it can lead to undesirable chattering near the sliding surface. This article presents a new method that uses a Radial Basis Function Neural Network (RBFNN) to rapidly approximate the complex nonlinear relationships in a robot’s control system. This approximation is combined with Sliding Mode Control, and Fuzzy Logic is used to scale the magnitude of the control action, while system stability is guaranteed via Lyapunov stability theory. We tested this method on a three-degree-of-freedom robot manipulator, showing that it can handle complex, multiple-input, multiple-output systems. In addition, applying a linear parameter-varying (LPV) model combined with Kalman filtering helps reduce noise, and the system operates more stably. The manipulator’s response under this controller exhibits controlled overshoot (rad), with a rise time of approximately 5 seconds (±3%) and a settling error of around 1%. These control results are validated through simulations conducted in MATLAB/Simulink version 2022b. This research contributes to the advancement of control strategies for robotic manipulators, offering improved stability and adaptability in scenarios where precise system modeling is challenging.
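The chattering mitigation mentioned in this abstract is commonly achieved by replacing the discontinuous sign function with a saturation inside a boundary layer around the sliding surface. A minimal generic sketch (the gains are illustrative, and the paper's fuzzy and RBFNN components are omitted):

```python
import numpy as np

def smc_control(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    """Sliding-mode control with a boundary layer to reduce chattering.

    Sliding surface: s = e_dot + lam * e.
    Control: u = -k * sat(s / phi), where sat (a clipped linear function)
    replaces the discontinuous sign function for |s| < phi.
    lam, k, phi are illustrative gains, not values from the paper.
    """
    s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)  # smooth inside the boundary layer
    return -k * sat
```

On the sliding surface the control is zero; far from it, the control saturates at ±k, recovering classical sign-based switching behavior without the high-frequency chatter.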
Marco Bertolino, Carlo Cerrano, Giorgio Bavestrello, et al.
During scientific expeditions in Indonesia and Vietnam, several sponge specimens belonging to the genus <i>Cladocroce</i> were collected. The integration of morphological and molecular analyses, incorporating species delimitation models (ABGD, ASAP, and bPTP) and phylogenetic approaches using three molecular markers (COI, 28S, and 18S–ITS1–5.8S–ITS2–28S), allowed us to discriminate three congeneric species. Two of these species (<i>C. burapha</i> and <i>C. pansinii</i> sp. nov.) were supported by morphological and molecular data, whereas a third species (<i>C. lamellata</i> sp. nov.) was delimited by morphological data only. We formally describe two new species, <i>C</i>. <i>pansinii</i> sp. nov. and <i>C. lamellata</i> sp. nov. <i>C. aculeata</i> is a newly recorded species for Indonesia and the first documented finding after the original description. The re-examination of the type material of <i>C. burapha,</i> and indirectly the molecular approach, allowed us to confirm that <i>C. burapha</i> lives in sympatry with <i>C</i>. <i>pansinii</i> sp. nov. in Vietnam and with <i>C. lamellata</i> in Indonesia. Thanks to these findings, we relocated the paratype of <i>C. burapha</i> to the new species described here, i.e., <i>C. pansinii</i> sp. nov.
Recruiting participants for software engineering research has been a primary concern of the human factors community. This is particularly true for quantitative investigations that require a minimum sample size to avoid being statistically underpowered. Traditional data collection techniques, such as mailing lists, are of questionable reliability due to self-selection bias. The introduction of crowdsourcing platforms allows researchers to select informants meeting the exact requirements foreseen by the study design, gather data in a short time frame, compensate their work with fair hourly pay, and, most importantly, maintain a high degree of control over the entire data collection process. This experience report discusses our experience conducting sample studies using Prolific, an academic crowdsourcing platform. Topics discussed include the types of studies, selection processes, and power computation.
Ontologies serve as one of the formal means to represent and model knowledge in computer science, electrical engineering, systems engineering, and other related disciplines. Ontologies within requirements engineering may be used for the formal representation of system requirements. In the Internet of Things, ontologies may be used to represent sensor knowledge and describe the semantics of acquired data. Designing an ontology comprehensive enough, with an appropriate level of knowledge expressiveness, to serve multiple purposes, from system requirements specifications to modeling knowledge based on data from IoT sensors, is a great challenge. This paper proposes an approach towards ontology-based requirements engineering for well-being, aging and health supported by the Internet of Things. This ontology design does not aim at creating a new ontology, but at extending an appropriate existing one, SAREF4EHAW, in order to align with the well-being, aging and health concepts and to structure the knowledge within the domain. Other contributions include a conceptual formulation for Well-Being, Aging and Health and a related taxonomy, as well as a concept of One Well-Being, Aging and Health. New attributes and relations have been proposed for the new ontology extension, along with an updated list of use cases and particular ontological requirements not covered by the original ontology. Future work envisions a full specification of the new ontology extension, as well as structuring system requirements and sensor measurement parameters to follow description logic.
Analytics corresponds to a relevant and challenging phase of Big Data. The generation of knowledge from extensive data sets (the petabyte era) of varying types, at a speed able to serve decision makers, draws on multiple areas of knowledge, such as computing, statistics, and data mining. In the Big Data domain, Analytics is also considered a process capable of adding value to organizations. Beyond demonstrating value, Analytics should also provide operational tools and models to support decision making. To add value, Analytics is also presented as part of several Big Data value chains, such as the Information Value Chain presented by NIST, which are detailed in this article. Several maturity models are also presented, since they represent important structures for the continuous implementation of Analytics for Big Data using specific technologies, techniques, and methods. Hence, through in-depth research drawing on specific literature references and use cases, we seek to outline an approach to Analytical Engineering for Big Data Analytics built on four pillars (Data, Models, Tools, and People) and three process groups (Acquisition, Retention, and Revision), in order to make feasible and define an organization, possibly designated an Analytics Organization, responsible for generating knowledge from data in the field of Big Data Analytics.