The quantum threat to cybersecurity has accelerated the standardization of Post-Quantum Cryptography (PQC). Migrating legacy software to these quantum-safe algorithms is not a simple library swap, but a new software engineering challenge: existing vulnerability detection, refactoring, and testing tools are not designed for PQC's probabilistic behavior, side-channel sensitivity, and complex performance trade-offs. To address these challenges, this paper outlines a vision for a new class of tools and introduces the Automated Quantum-safe Adaptation (AQuA) framework, with a three-pillar agenda for PQC-aware detection, semantic refactoring, and hybrid verification, thereby motivating Quantum-Safe Software Engineering (QSSE) as a distinct research direction.
Large Language Models, particularly decoder-only generative models such as GPT, are increasingly used to automate Software Engineering tasks. These models are primarily guided through natural language prompts, making prompt engineering a critical factor in system performance and behavior. Despite their growing role in SE research, prompt-related decisions are rarely documented in a systematic or transparent manner, hindering reproducibility and comparability across studies. To address this gap, we conducted a two-phase empirical study. First, we analyzed nearly 300 papers published at the top-3 SE conferences since 2022 to assess how prompt design, testing, and optimization are currently reported. Second, we surveyed 105 program committee members from these conferences to capture their expectations for prompt reporting in LLM-driven research. Based on the findings, we derived a structured guideline that distinguishes essential, desirable, and exceptional reporting elements. Our results reveal significant misalignment between current practices and reviewer expectations, particularly regarding version disclosure, prompt justification, and threats to validity. We present our guideline as a step toward improving transparency, reproducibility, and methodological rigor in LLM-based SE research.
A novel group-drag-anchor system (GDAS), comprising a Delta anchor and a four-tooth anchor, was developed to enhance mooring capacity for floating offshore wind turbines in soft clay seabeds. This study focuses on the influence of the installation method on the embedment performance and dynamic response of the GDAS. Large-deformation finite element analyses were conducted using the coupled Eulerian–Lagrangian (CEL) technique to simulate the installation process under different configurations. A dedicated subroutine was implemented to monitor the evolution of excess pore pressure around the GDAS during the subsequent dynamic loading. Results show that asynchronous installation yields significantly deeper embedment than synchronous installation, especially in seabeds with steep strength gradients. The dynamic response of the GDAS under wave-only, combined wave–current, and mooring-line-failure loading scenarios was further investigated. The asynchronously installed GDAS exhibits considerably more stable long-term performance and lower risk of progressive failure under extreme environmental conditions. This superiority is most evident in clays with a relatively steep strength gradient. These findings provide valuable guidance for the optimal design and installation sequencing of GDASs in engineering practice.
Qiaolin Qin, Ronnie de Souza Santos, Rodrigo Spinola
Context. The rise of generative AI (GenAI) tools like ChatGPT and GitHub Copilot has transformed how software is learned and written. In software engineering (SE) education, these tools offer new opportunities for support, but also raise concerns about over-reliance, ethical use, and impacts on learning. Objective. This study investigates how undergraduate SE students use GenAI tools, focusing on the benefits, challenges, ethical concerns, and instructional expectations that shape their experiences. Method. We conducted a survey with 130 undergraduate students from two universities. The survey combined structured Likert-scale items and open-ended questions to investigate five dimensions: usage context, perceived benefits, challenges, ethical perceptions, and instructional perceptions. Results. Students most often use GenAI for incremental learning and advanced implementation, reporting benefits such as brainstorming support and confidence-building. At the same time, they face challenges including unclear rationales and difficulty adapting outputs. Students highlight ethical concerns around fairness and misconduct, and call for clearer instructional guidance. Conclusion. GenAI is reshaping SE education in nuanced ways. Our findings underscore the need for scaffolding, ethical policies, and adaptive instructional strategies to ensure that GenAI supports equitable and effective learning.
Mauro Marcelino, Marcos Alves, Bianca Trinkenreich
et al.
[Context] An evidence briefing is a concise and objective transfer medium that can present the main findings of a study to software engineers in the industry. Although practitioners and researchers have deemed evidence briefings useful, their production requires manual labor, which may be a significant challenge to their broad adoption. [Goal] The goal of this registered report is to describe an experimental protocol for evaluating LLM-generated evidence briefings for secondary studies in terms of content fidelity, ease of understanding, and usefulness, as perceived by researchers and practitioners, compared to human-made briefings. [Method] We developed a RAG-based LLM tool to generate evidence briefings. We used the tool to automatically generate two evidence briefings that had been manually produced in previous research efforts. We designed a controlled experiment to evaluate how the LLM-generated briefings compare to the human-made ones regarding perceived content fidelity, ease of understanding, and usefulness. [Results] To be reported after the experimental trials. [Conclusion] Depending on the experiment results.
Systems engineering (SE) is evolving with the availability of generative artificial intelligence (AI) and the demand for a systems-of-systems perspective, formalized under the purview of mission engineering (ME) in the US Department of Defense. Formulating ME problems is challenging because they are open-ended exercises that involve translation of ill-defined problems into well-defined ones that are amenable to engineering development. It remains to be seen to what extent AI could assist problem formulation objectives. To that end, this paper explores the quality and consistency of multi-purpose Large Language Models (LLMs) in supporting ME problem formulation tasks, specifically focusing on stakeholder identification. We identify a relevant reference problem, a NASA space mission design challenge, and document ChatGPT-3.5's ability to perform stakeholder identification tasks. We execute multiple parallel attempts and qualitatively evaluate LLM outputs, focusing on both their quality and variability. Our findings portray a nuanced picture. We find that the LLM performs well in identifying human-focused stakeholders but poorly in recognizing external systems and environmental factors, despite explicit efforts to account for these. Additionally, LLMs struggle with preserving the desired level of abstraction and exhibit a tendency to produce solution-specific outputs that are inappropriate for problem formulation. More importantly, we document great variability among parallel threads, highlighting that LLM outputs should be used with caution, ideally by adopting a stochastic view of their abilities. Overall, our findings suggest that, while ChatGPT could reduce some expert workload, its lack of consistency and domain understanding may limit its reliability for problem formulation tasks.
To address the computational challenge posed by the “curse of dimensionality” inherent in traditional branch-and-bound algorithms for large-scale power grid unit commitment problems, an optimization method for iterative path search of unit commitment states is proposed. To prevent loss of the optimal solution due to simplification of the problem and reduction of the feasible region, the determination of the unit state scheme is divided into a two-stage process of depth traversal and breadth iteration. Based on an initial solution, a dynamic unit priority list is used as the search direction for the unit state iteration path. In the depth traversal stage, the redundant units to be shut down and their corresponding shutdown times are determined. Breadth iteration is then used to expand the feasible region of the problem and improve the optimality of the solution. The results of a comparative case study conducted on the IEEE 118 system and the ACTIVSg10k system indicate that the proposed method effectively reduces the scale of the problem, minimizes the number of unit state attempts, and achieves efficient search and iteration of unit states, exhibiting fast computation and high efficiency and demonstrating practical applicability to large-scale unit commitment problems.
This study addresses structural distortions in the IMO Carbon Intensity Indicator (<i>CII</i>) for short-voyage training vessels and proposes corrective strategies combining denominator adjustments with controllable pitch propeller (CPP) mode optimization. Using 2024 operational data from a training ship, we computed monthly and annual <i>CII</i> values, identifying significant inflation when time-at-sea fractions are low due to extensive port stays. Two correction methods were evaluated: a hybrid denominator approach converting port-stay <i>CO</i><sub>2</sub> to equivalent distance, and a Braidotti functional correction. The CPP operating maps for combination and fixed modes revealed a crossover point at approximately 12 kn (~50% engine load), where the combination mode shows superior efficiency at low speeds and the fixed mode at higher speeds. The hybrid correction effectively stabilized <i>CII</i> values across varying operational conditions, while the speed-band CPP optimization provided additional reductions. Results demonstrate that combining optimized CPP mode selection with hybrid <i>CII</i> correction achieves compliance with required standards, attaining a B rating. The integrated framework offers practical solutions for <i>CII</i> management in short-voyage operations, addressing regulatory fairness while improving operational efficiency for training vessels and similar ship types.
As the former International Chair of IMDC, the initiator of the continuing series of IMDC State of the Art (SoA) Reports, and the lead author of most IMDC SoA Reports on design methodology from 1997 to 2018, the author has both pioneered and observed an increasingly broad scope in the practice of the design of particularly complex vessels. The paper commences by reviewing some key publications, not just from recent IMDCs, that have tracked the manner in which “ship design” (in the broadest sense) has become more sophisticated – especially in the crucial early stages of design. The diversity of ship design practice, not just due to computer-based methods, is readily observable. Moreover, the impact of computer-aided design in ship design has not just been to better analyse ship performance (e.g., in hydrodynamics, strength, and ship infrastructure systems behaviour) but also to increase the use of graphical tools and design methods that enable “better ship design”. In a growing number of, mainly, academic centres, but also in some government agencies and design consultancies, there is a clear desire to better understand how to design “ships” and to manage the ship design process, especially for the most complex and novel classes of vessels. In particular, the objectives being sought when conceptualising and synthesising a range of ship options (as part of the requirement elucidation approach), within an ever-increasing scope of relevant issues, amount to developing a more holistic approach. This is not just due to an increasingly complex ship acquisition and ownership environment, but also due to environmental and socio-economic (especially system safety) concerns. Overlaying all this are the opportunities, or the spectre, of Artificial Intelligence (or perhaps more immediately Machine Learning) and its likely impact on engineering practice as well as on the other professions in the “marine design enterprise”.
The paper concludes by emphasising that, while ship design has distinct differences when compared with most other large-scale engineering design practice, the lead ship design profession of the naval architect must somehow deal with this expanding scope in the practice of “ship design”. This means the thrust of education and on-the-job development must broaden if the ship design profession is not to be side-lined into acting as mere hull engineers. It is argued that such a narrowed role will be more vulnerable, in an increasingly Machine Learning dominated future, than the holistic ship-creating and systems-architectural alternative. Finally, an ambitious vision for future ship designers is given alongside a summary of the author's specific main contributions to ship design methodology.
Diana Robinson, Christian Cabrera, Andrew D. Gordon
et al.
What if end users could own the software development lifecycle from conception to deployment using only requirements expressed in language, images, video or audio? We explore this idea, building on the capabilities that generative Artificial Intelligence brings to software generation and maintenance techniques. How could designing software in this way better serve end users? What are the implications of this process for the future of end-user software engineering and the software development lifecycle? We discuss the research needed to bridge the gap between where we are today and these imagined systems of the future.
Piotr Sowinski, Ignacio Lacalle, Rafael Vano
et al.
The landscape of computing technologies is changing rapidly, straining existing software engineering practices and tools. The growing need to produce and maintain increasingly complex multi-architecture applications makes it crucial to effectively accelerate and automate software engineering processes. At the same time, artificial intelligence (AI) tools are expected to work hand-in-hand with human developers. It therefore becomes critical to model the software accurately, so that AI and humans can share a common understanding of the problem. In this contribution, firstly, an in-depth overview of these interconnected challenges faced by modern software engineering is presented. Secondly, to tackle them, a novel architecture based on the emerging WebAssembly technology and the latest advancements in neuro-symbolic AI, autonomy, and knowledge graphs is proposed. The presented system architecture is based on the concept of dynamic, knowledge graph-based WebAssembly Twins, which model the software throughout all stages of its lifecycle. The resulting systems are intended to possess advanced autonomous capabilities, with full transparency and controllability for the end user. The concept takes a leap beyond current software engineering approaches, addressing some of the most urgent issues in the field. Finally, the efforts towards realizing the proposed approach, as well as future research directions, are summarized.
Unsupervised representation learning presents new opportunities for advancing Quantum Architecture Search (QAS) on Noisy Intermediate-Scale Quantum (NISQ) devices. QAS is designed to optimize quantum circuits for Variational Quantum Algorithms (VQAs). Most QAS algorithms tightly couple the search space and search algorithm, typically requiring the evaluation of numerous quantum circuits, resulting in high computational costs and limiting scalability to larger quantum circuits. Predictor-based QAS algorithms mitigate this issue by estimating circuit performance based on structure or embedding. However, these methods often demand time-intensive labeling to optimize gate parameters across many circuits, which is crucial for training accurate predictors. Inspired by the classical neural architecture search algorithm Arch2vec, we investigate the potential of unsupervised representation learning for QAS without relying on predictors. Our framework decouples unsupervised architecture representation learning from the search process, enabling the learned representations to be applied across various downstream tasks. Additionally, it integrates an improved quantum circuit graph encoding scheme, addressing the limitations of existing representations and enhancing search efficiency. This predictor-free approach removes the need for large labeled datasets. During the search, we employ REINFORCE and Bayesian Optimization to explore the latent representation space and compare their performance against baseline methods. We further validate our approach by executing the best-discovered MaxCut circuits on IBM's ibm_sherbrooke quantum processor, confirming that the architectures retain optimal performance even under real hardware noise. Our results demonstrate that the framework efficiently identifies high-performing quantum circuits with fewer search iterations.
Sander Schulhoff, Michael Ilie, Nishant Balepur
et al.
Generative Artificial Intelligence (GenAI) systems are increasingly being deployed across diverse industries and research domains. Developers and end-users interact with these systems through the use of prompting and prompt engineering. Although prompt engineering is a widely adopted and extensively researched area, its relatively recent emergence means it suffers from conflicting terminology and a fragmented ontological understanding of what constitutes an effective prompt. We establish a structured understanding of prompt engineering by assembling a taxonomy of prompting techniques and analyzing their applications. We present a detailed vocabulary of 33 terms, a taxonomy of 58 LLM prompting techniques, and 40 techniques for other modalities. Additionally, we provide best practices and guidelines for prompt engineering, including advice for prompting state-of-the-art (SOTA) LLMs such as ChatGPT. We further present a meta-analysis of the entire literature on natural language prefix-prompting. As a culmination of these efforts, this paper presents the most comprehensive survey on prompt engineering to date.
This work presents a new model for surf and swash zone morphology evolution induced by nonlinear waves. Wave transformation in the surf and swash zones is computed by a nonlinear wave model based on the higher-order Boussinesq equations for breaking and non-breaking waves. Regarding sediment transport, the model builds on previous research by the authors and incorporates the latest update of a well-founded sediment transport formula. The wave and morphology evolution model is validated against two sets of experiments on beach profile change and is afterwards used to test the performance of a widely adopted erosion/accretion criterion. The innovation of this work is the validation of a new Boussinesq-type morphology model under both erosive and accretive conditions at the foreshore (accretion is rarely examined in similar studies), which the model reproduces very well without modification of the empirical coefficients of the sediment transport formula used; furthermore, the model confirms the empirical erosion/accretion criterion even for conditions beyond those for which it was developed, and without imposing any model constraints. The presented set of applications highlights the model's capabilities in simulating swash morphodynamics, as well as its suitability for coastal erosion mitigation and beach restoration design.