Analytical methods underpin geotechnical engineering practice, yet their implementation remains fragmented across error-prone spreadsheets and opaque proprietary software. While Large Language Models (LLMs) offer transformative potential for streamlining engineering workflows, their statistical nature fundamentally conflicts with the strict determinism required for safety-critical calculations. Their tendency to hallucinate formulas, misinterpret units, or alter methodologies between sessions creates a critical trust gap. This paper introduces GeoMCP, a framework built to bridge this gap via a key insight: engineering methods should be represented as structured data, not embedded code. GeoMCP captures analytical methods as "method cards", declarative JSON files defining formulas, units, applicability limits, and literature citations. A constrained symbolic engine executes these cards with verified dimensional consistency, while structured "Agent Skills" guide LLMs to apply engineering judgment and orchestrate the analysis. By exposing these verified capabilities through the Model Context Protocol (MCP), GeoMCP shifts the role of the AI from an unreliable calculator to an intelligent orchestrator. Validated against an official JRC Eurocode 7 worked example, the framework demonstrates computational parity with traditional approaches while ensuring complete mathematical transparency. Ultimately, GeoMCP provides a blueprint for transitioning the industry from isolated legacy software to an interoperable, AI-ready ecosystem where engineers can leverage modern AI without surrendering professional responsibility.
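To make the method-card idea concrete, the sketch below shows what a declarative card and a constrained evaluator might look like. Everything here is an illustrative assumption, not the actual GeoMCP format: the schema, the field names, the bearing-capacity formula, and the limit values are invented for the example.

```python
# Illustrative sketch of a declarative "method card" and a constrained
# evaluator. The card schema and the bearing-capacity formula below are
# hypothetical examples, not the actual GeoMCP card format.
import json
import math

CARD = json.loads("""
{
  "name": "bearing_capacity_strip_footing_example",
  "formula": "c * Nc + q * Nq + 0.5 * gamma * B * Ngamma",
  "inputs": {
    "c":      {"unit": "kPa",   "min": 0},
    "q":      {"unit": "kPa",   "min": 0},
    "gamma":  {"unit": "kN/m3", "min": 10, "max": 25},
    "B":      {"unit": "m",     "min": 0.1},
    "Nc":     {"unit": "-",     "min": 0},
    "Nq":     {"unit": "-",     "min": 0},
    "Ngamma": {"unit": "-",     "min": 0}
  },
  "output_unit": "kPa",
  "citation": "illustrative placeholder, not a real card"
}
""")

def run_card(card, params):
    # Reject inputs outside the card's declared applicability limits.
    for name, spec in card["inputs"].items():
        value = params[name]
        if value < spec.get("min", -math.inf) or value > spec.get("max", math.inf):
            raise ValueError(f"{name}={value} outside applicability limits")
    # Evaluate the formula with no builtins: only the declared inputs are
    # visible, so the "engine" cannot execute arbitrary code.
    return eval(card["formula"], {"__builtins__": {}}, dict(params))

qu = run_card(CARD, {"c": 20, "q": 18, "gamma": 19, "B": 2.0,
                     "Nc": 17.69, "Nq": 7.44, "Ngamma": 3.64})
print(round(qu, 1))
```

A real engine would additionally carry units through the computation and verify dimensional consistency symbolically; this sketch only shows the data-not-code separation and the applicability-limit check.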
Nenad Petrovic, Yurui Zhang, Moaad Maaroufi
et al.
Multimodal summarization, integrating information from diverse data modalities, presents a promising solution for aiding the understanding of information within various processes. However, the application and advantages of multimodal summarization have not received much attention in model-based engineering (MBE), which has become a cornerstone in the design and development of complex systems, leveraging formal models to improve understanding, validation, and automation throughout the engineering lifecycle. UML and EMF diagrams in model-based engineering contain a large amount of multimodal information and intricate relational data. Hence, our study explores the application of multimodal large language models within the domain of model-based engineering to evaluate their capacity for understanding and identifying the relationships, features, and functionalities embedded in UML and EMF diagrams. We aim to demonstrate both the potential benefits and the limitations of multimodal summarization in improving productivity and accuracy in MBE practices. The proposed approach is evaluated in the context of automotive software development, taking several promising state-of-the-art models into account.
This proposal discusses the growing challenges in reverse engineering modern software binaries, particularly those compiled from newer system programming languages such as Rust, Go, and Mojo. Traditional reverse engineering techniques, developed with a focus on C and C++, fall short when applied to these newer languages due to their reliance on outdated heuristics and failure to fully utilize the rich semantic information embedded in binary programs. These challenges are exacerbated by the limitations of current data-driven methods, which are susceptible to generating inaccurate results, commonly referred to as hallucinations. To overcome these limitations, we propose a novel approach that integrates probabilistic binary analysis with fine-tuned large language models (LLMs). Our method systematically models the uncertainties inherent in reverse engineering, enabling more accurate reasoning about incomplete or ambiguous information. By incorporating LLMs, we extend the analysis beyond traditional heuristics, allowing for more creative and context-aware inferences, particularly for binaries from diverse programming languages. This hybrid approach not only enhances the robustness and accuracy of reverse engineering efforts but also offers a scalable solution adaptable to the rapidly evolving landscape of software development.
Edson OliveiraJr, Fernanda Madeiral, Alcemir Rodrigues Santos
et al.
Open Science aims to foster openness and collaboration in research, leading to more significant scientific and social impact. However, practicing Open Science comes with several challenges and is currently not properly rewarded. In this paper, we share our vision for addressing those challenges through a conceptual framework that connects essential building blocks for a change in the Software Engineering community, both culturally and technically. The idea behind this framework is that Open Science is treated as a first-class requirement for better Software Engineering research, practice, recognition, and relevant social impact. There is a long road for us, as a community, to truly embrace and gain from the benefits of Open Science. Nevertheless, we shed light on the directions for promoting the necessary culture shift and empowering the Software Engineering community.
Muhammad Hamza, Dominik Siemon, Muhammad Azeem Akbar
et al.
This paper investigates the dynamics of human-AI collaboration in software engineering, focusing on the use of ChatGPT. Through a thematic analysis of a hands-on workshop in which 22 professional software engineers collaborated for three hours with ChatGPT, we explore the transition of AI from a mere tool to a collaborative partner. The study identifies key themes such as the evolving nature of human-AI interaction, the capabilities of AI in software engineering tasks, and the challenges and limitations of integrating AI in this domain. The findings show that while AI, particularly ChatGPT, improves the efficiency of code generation and optimization, human oversight remains crucial, especially in areas requiring complex problem solving and security considerations. This research contributes to the theoretical understanding of human-AI collaboration in software engineering and provides practical insights for effectively integrating AI tools into development processes. It highlights the need for clear role allocation, effective communication, and balanced human-AI collaboration to realize the full potential of AI in software engineering.
This paper has a dual character, combining a philosophical ontological exploration with a conceptual modeling approach in systems and software engineering. Such duality is already practiced in software engineering, in which the current dominant modeling thesis is object orientation. This work embraces an antithesis that centers solely on the process rather than emphasizing the object. The approach is called occurrence-only modeling, in which an occurrence means an event or process, where a process is defined as an orchestrated net of events that form a semantic whole. In contrast to object orientation, in occurrence-only modeling objects are nothing more than long events. We apply this paradigm to (1) a UML/BPMN inventory system in simulation engineering and (2) an event-based system that represents medical occurrences on a timeline. The aim of such a venture is to enhance the field of conceptual modeling by adding yet another alternative methodology and clarifying the differences among approaches. Conceptual modeling's importance has been recognized in many research areas. An active research community in simulation engineering demonstrates the growing interest in conceptual modeling. In clinical domains, temporal information elucidates the occurrence of medical events (e.g., visits, laboratory tests). These applications give an opportunity to propose a new approach that includes (a) a Stoic ontology that has two types of being, existence and subsistence; (b) Thinging machines that limit activities to five generic actions; and (c) Lupascian logic, which handles negative events. With such a study, we aim to substantiate the assertion that the occurrence-only approach is a genuine philosophical base for conceptual modeling. The results in this paper seem to support such a claim.
Majid Haghparast, Tommi Mikkonen, Jukka K. Nurminen
et al.
Despite the increasing interest in quantum computing, progress toward cost-effective and reliable quantum software applications has been slow. One barrier is the software engineering of quantum programs, which can be approached from two directions. On the one hand, many software engineering practices, debugging in particular, are bound to classical computing. On the other hand, quantum programming is closely associated with the phenomena of quantum physics, and consequently the way we express programs resembles the early days of programming. Moreover, much of today's software engineering research focuses on agile development, where computing cycles are cheap and new software can be rapidly deployed and tested, whereas in the quantum context, executions may consume substantial energy, and test runs may require considerable work to interpret. In this paper, we aim to bridge this gap by starting from the quantum computing workflow and mapping existing software engineering research onto this workflow. Based on the mapping, we then identify directions for software engineering research for quantum computing.
Aiming at the current problems of theory-heavy, practice-light teaching and the lack of innovation ability in postgraduate software engineering courses, a multi-stage feedback teaching mode for software engineering postgraduates based on competition-project-driven learning is proposed. The model is driven by competition projects, and implementation suggestions are given in terms of the stage allocation of software engineering course tasks and ability cultivation, competition case design, and process evaluation improvement. Through the implementation of this teaching mode, students' enthusiasm and initiative are expected to be stimulated, and the overall development of students' professional skills and comprehensive abilities would be improved to meet society's demand for software engineering technical talent.
One of the main challenges that developers face when testing their systems lies in engineering test cases that are good enough to reveal bugs. And while our body of knowledge on software testing and automated test case generation is already quite significant, in practice, developers are still the ones responsible for engineering test cases manually. Therefore, understanding the developers' thought- and decision-making processes while engineering test cases is a fundamental step in making developers better at testing software. In this paper, we observe 13 developers thinking-aloud while testing different real-world open-source methods, and use these observations to explain how developers engineer test cases. We then challenge and augment our main findings by surveying 72 software developers on their testing practices. We discuss our results from three different angles. First, we propose a general framework that explains how developers reason about testing. Second, we propose and describe in detail the three different overarching strategies that developers apply when testing. Third, we compare and relate our observations with the existing body of knowledge and propose future studies that would advance our knowledge on the topic.
C. Spitaleri, C. A. Bertulani, L. Fortunato
et al.
Accurate measurements of nuclear reactions of astrophysical interest within, or close to, the Gamow peak show evidence of an unexpected effect attributed to the presence of atomic electrons in the target. Interpreting these experiments requires an effective "screening" potential to explain the enhancement of the cross sections at the lowest measurable energies. Despite various theoretical studies conducted over the past 20 years and numerous experimental measurements, no theory has yet been found that can explain the exceedingly high values of the screening potential needed to explain the data. In this letter we show that, instead of an atomic physics solution to the "electron screening puzzle", the large screening potential values are in fact due to clusterization effects in nuclear reactions, in particular for reactions involving light nuclei.
Citadel is an advanced information-stealing malware which targets financial information. This malware poses a real threat against the confidentiality and integrity of personal and business data. A joint operation was recently conducted by the FBI and the Microsoft Digital Crimes Unit in order to take down Citadel command-and-control servers. The operation caused some disruption in the botnet but has not stopped it completely. Due to the complex structure and advanced anti-reverse engineering techniques, the Citadel malware analysis process is both challenging and time-consuming. This allows cyber criminals to carry on with their attacks while the analysis is still in progress. In this paper, we present the results of the Citadel reverse engineering and provide additional insight into the functionality, inner workings, and open source components of the malware. In order to accelerate the reverse engineering process, we propose a clone-based analysis methodology. Citadel is an offspring of a previously analyzed malware called Zeus; thus, using the former as a reference, we can measure and quantify the similarities and differences of the new variant. Two types of code analysis techniques are provided in the methodology, namely assembly to source code matching and binary clone detection. The methodology can help reduce the number of functions requiring manual analysis. The analysis results prove that the approach is promising in Citadel malware analysis. Furthermore, the same approach is applicable to similar malware analysis scenarios.
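As an illustration of the binary clone detection side of such a methodology, the minimal sketch below scores function similarity by the Jaccard overlap of instruction-mnemonic trigrams. The instruction sequences are hand-made toy data, not real Zeus or Citadel disassembly, and production clone detectors normalize operands and use far richer features; this only shows the basic shape of the comparison.

```python
# Toy binary clone detection: compare two functions by the Jaccard
# similarity of their mnemonic n-grams. Mnemonic lists are invented
# examples, not real malware disassembly.
def ngrams(mnemonics, n=3):
    """Set of overlapping n-grams over an instruction mnemonic sequence."""
    return {tuple(mnemonics[i:i + n]) for i in range(len(mnemonics) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of the two functions' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# A hypothetical "reference" function and a near-clone that flips one branch.
zeus_fn    = ["push", "mov", "sub", "mov", "call", "test", "jz",  "mov", "ret"]
citadel_fn = ["push", "mov", "sub", "mov", "call", "test", "jnz", "mov", "ret"]
unrelated  = ["xor", "lea", "cmp", "ja", "add", "pop", "ret"]

print(round(jaccard(zeus_fn, citadel_fn), 2))  # high overlap: likely a clone
print(round(jaccard(zeus_fn, unrelated), 2))   # no overlap: needs manual analysis
```

Ranking Citadel functions by such a similarity score against already-analyzed Zeus functions is one way the paper's goal of reducing the number of functions requiring manual analysis could be approximated.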
This paper examines the use of Bayesian Networks to tackle one of the tougher problems in requirements engineering: translating user requirements into system requirements. The approach taken is to model domain knowledge as Bayesian Network fragments that are glued together to form a complete view of the domain-specific system requirements. User requirements are introduced as evidence, and the propagation of belief is used to determine which system requirements are indicated by the user requirements. This concept has been demonstrated in the development of a system specification, and the results are presented here.
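A minimal sketch of the idea, with the network, variable names, and probabilities all invented for illustration: domain knowledge is encoded as a tiny Bayesian Network fragment, user requirements are entered as evidence, and brute-force enumeration stands in for belief propagation to yield posterior support for a system requirement.

```python
# Toy Bayesian Network for requirements translation. Two user-requirement
# variables (u1, u2) are parents of one system-requirement variable (s).
# All probabilities are hypothetical; inference is exact enumeration.
from itertools import product

# u1 = "user needs 24/7 access", u2 = "user needs fast response",
# s  = "system requirement: redundant servers".
P_u1 = {1: 0.5, 0: 0.5}
P_u2 = {1: 0.5, 0: 0.5}
# Conditional probability table P(s=1 | u1, u2), a hand-made noisy-OR-like table.
P_s1 = {(1, 1): 0.95, (1, 0): 0.80, (0, 1): 0.60, (0, 0): 0.05}

def posterior_s(evidence):
    """P(s=1 | evidence) by enumerating the full joint distribution."""
    num = den = 0.0
    for u1, u2, s in product((0, 1), repeat=3):
        assign = {"u1": u1, "u2": u2, "s": s}
        if any(assign[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the observed user requirements
        p_s = P_s1[(u1, u2)] if s == 1 else 1 - P_s1[(u1, u2)]
        p = P_u1[u1] * P_u2[u2] * p_s
        den += p
        if s == 1:
            num += p
    return num / den

# Belief in the system requirement rises as user requirements are observed.
print(round(posterior_s({"u1": 1}), 3))
print(round(posterior_s({"u1": 1, "u2": 1}), 3))
```

Gluing fragments together, as the paper describes, would amount to sharing variables between several such CPT fragments; enumeration is then replaced by a proper propagation algorithm for tractability.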
Mehrdad Pakmehr, Timothy Wang, Romain Jobredeaux
et al.
A control software verification framework for gas turbine engines is developed. A stability proof is presented for a gain-scheduled closed-loop engine system based on global linearization and linear matrix inequality (LMI) techniques. Using convex optimization tools, a single quadratic Lyapunov function is computed for multiple linearizations near equilibrium points of the closed-loop system. From the computed stability matrices, ellipsoid invariant sets are constructed and used efficiently for stability analysis of the DGEN turbofan engine control code. A verifiable linear gain-scheduled controller for the DGEN engine is then developed based on formal methods and tested on the engine's virtual test bench. Simulation results show that the developed verifiable gain-scheduled controller is capable of regulating the engine in a stable fashion with proper tracking performance.
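The core LMI feasibility problem behind such an approach, stated here in its standard textbook form (the paper's exact formulation may differ), is to find one quadratic Lyapunov function $V(x) = x^{\top} P x$ valid for all the linearizations $A_i$:

```latex
\text{find } P = P^{\top} \succ 0
\quad \text{such that} \quad
A_i^{\top} P + P A_i \prec 0, \qquad i = 1, \dots, N .
```

The ellipsoid invariant sets mentioned in the abstract are then sublevel sets $\{x : x^{\top} P x \le c\}$, inside which trajectories remain and $V$ decreases along every linearized dynamics.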
The information on pygmy resonances reveals new aspects of the isospin dynamics of the nucleus, with important astrophysical consequences. In this connection, precise knowledge of nuclear response functions plays a key role in the determination of photonuclear reaction cross sections, which are of importance for the synthesis of heavy neutron-rich elements. For that purpose, a theoretical method based on density functional theory and a multi-phonon approach is applied to investigations of nuclear excitations with different multipolarities and energies in stable and exotic nuclei. The possible relation of low-energy modes to the properties of neutron or proton skins is systematically investigated for isotonic and isotopic chains. Our studies of dipole and quadrupole response functions and the corresponding transition densities indicate new pygmy dipole and pygmy quadrupole resonances describing oscillations of the nuclear skin. The presence of skins is also found to affect the magnetic response of nuclei.
Particles with resonant short-range interactions have universal properties that do not depend on the details of their structure or their interactions at short distances. In the three-body system, these properties include the existence of a geometric spectrum of three-body Efimov states and a discrete scaling symmetry leading to log-periodic dependence of observables on the scattering length. Similar universal properties appear in the four-body system and possibly higher-body systems as well. For example, universal four-body states have recently been predicted and observed in experiment. These phenomena are often referred to as "Efimov Physics". We review their theoretical description and discuss applications in different areas of physics with a special emphasis on nuclear and particle physics.
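For three identical bosons, the geometric spectrum and discrete scaling symmetry described above take a well-known quantitative form, with $s_0 \approx 1.00624$:

```latex
\frac{E_T^{(n+1)}}{E_T^{(n)}} = e^{-2\pi/s_0} \approx \frac{1}{515},
\qquad
\lambda = e^{\pi/s_0} \approx 22.7 ,
```

so successive Efimov trimer binding energies are related by a universal factor of about 515, and lengths (such as the scattering-length positions of loss-rate features) by a factor of about 22.7.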
We evaluate the antikaon and hyperon spectral functions in a self-consistent and covariant many-body approach. The computation is based on coupled-channel dynamics derived from the chiral SU(3) Lagrangian. A novel subtraction scheme is developed that avoids kinematical singularities and medium-induced power divergences altogether. Scalar and vector mean fields are used to model nuclear binding and saturation. The effect of the latter is striking for the antikaon spectral function, which becomes significantly narrower at small momenta. Attractive mass shifts of about 30 and 40 MeV are predicted for the Lambda(1405) and Sigma(1385) resonances. Once scalar and vector mean fields for the nucleon are switched on, the Lambda(1520) resonance dissolves almost completely in nuclear matter. Altogether, only moderate attraction is predicted for nuclear antikaon systems at saturation density. However, at larger densities we predict a sizable population of soft antikaon modes that arise from the coupling of the antikaon to a highly collective Lambda(1115) nucleon-hole state. This may lead to the formation of exotic nuclear systems with strangeness and antikaon condensation in compact stars at moderate densities.