As LLM-based multi-agent systems (MAS) become more autonomous, their free-form interactions increasingly dominate system behavior. However, scaling the number of agents often amplifies context pressure, coordination errors, and system drift. It is well known that building robust MAS requires more than prompt tuning or increased model intelligence. It necessitates engineering discipline focused on architecture to manage complexity under uncertainty. We characterize agentic software by a core property: \emph{runtime generation and evolution under uncertainty}. Drawing upon and extending software engineering experience, especially object-oriented programming, this paper introduces \emph{Loosely-Structured Software (LSS)}, a new class of software systems that shifts the engineering focus from constructing deterministic logic to managing the runtime entropy generated by View-constructed programming, semantic-driven self-organization, and endogenous evolution. To make this entropy governable, we introduce design principles under a three-layer engineering framework: \emph{View/Context Engineering} to manage the execution environment and maintain task-relevant Views, \emph{Structure Engineering} to organize dynamic binding over artifacts and agents, and \emph{Evolution Engineering} to govern the lifecycle of self-rewriting artifacts. Building on this framework, we develop LSS design patterns as semantic control blocks that stabilize fluid, inference-mediated interactions while preserving agent adaptability. Together, these abstractions improve the \emph{designability}, \emph{scalability}, and \emph{evolvability} of agentic infrastructure. We provide basic experimental validation of key mechanisms, demonstrating the effectiveness of LSS.
Introduction: Artificial intelligence (AI) has been widely used to detect faults and failures in photovoltaic (PV) systems, particularly those that conventional protection devices fail to identify. However, previous AI-based approaches still face major limitations, including neglecting critical detection conditions, relying on large and complex datasets, and lacking simultaneous and accurate multi-fault detection and classification.
Methods: To address these challenges, a novel PV fault detection framework is proposed by combining a fuzzy logic (FL) system with a particle swarm optimization (PSO) algorithm. An initial dataset is generated from the current–voltage (I–V) curve of a PV array. Manhattan distance (MD) and Chebyshev distance (CD) features are extracted from the I–V characteristics. A wide set of machine-learning classifiers is evaluated, and the FL system nominates the most reliable models based on mean accuracy, F1-score, and standard deviation. PSO is then used to determine the optimal subset of classifiers and to assign optimized weights for ensemble prediction. Several output-combining techniques are also examined to obtain the most accurate final classification.
Results: Model verification is performed using a dataset that includes normal operation as well as line-to-line (LL), open-circuit (OC), and degradation (DEG) faults under various environmental (irradiance, temperature) and electrical (mismatch, impedance) conditions. The proposed FL+PSO-based model achieves outstanding accuracy in detecting and classifying multiple PV faults and outperforms recent state-of-the-art approaches.
Discussion: The integration of distance-based feature extraction, fuzzy-driven classifier selection, and PSO-optimized weighting significantly enhances robustness and reduces sensitivity to environmental variations. These improvements enable reliable multi-fault detection even when fault signatures closely resemble normal conditions.
Conclusion: The proposed FL and PSO-based ensemble provides a highly accurate and reliable solution for multi-fault detection in PV arrays. Its performance surpasses existing approaches, making it a strong candidate for practical implementation in real PV monitoring systems.
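The pipeline above lends itself to a compact sketch. The following Python fragment is illustrative only: the dataset, the classifier pool, and all names are synthetic stand-ins for the paper's setup, the FL nomination step is omitted, and the PSO is a bare-bones global-best variant tuning ensemble weights.

```python
# Illustrative sketch (NOT the paper's code): Manhattan/Chebyshev features
# from an I-V curve plus a minimal PSO search over ensemble weights.
import numpy as np

rng = np.random.default_rng(0)

def iv_distance_features(i_meas, i_ref):
    """Manhattan (L1) and Chebyshev (L-inf) distances between a measured
    and a reference current curve sampled on the same voltage grid."""
    md = np.sum(np.abs(i_meas - i_ref))   # Manhattan distance
    cd = np.max(np.abs(i_meas - i_ref))   # Chebyshev distance
    return np.array([md, cd])

# Toy stand-in for the nominated classifiers: each block holds one
# classifier's class-probability predictions on a validation set.
n_clf, n_samples, n_classes = 4, 200, 4   # e.g. normal, LL, OC, DEG
probs = rng.dirichlet(np.ones(n_classes), size=(n_clf, n_samples))
y_true = rng.integers(0, n_classes, n_samples)

def ensemble_accuracy(w):
    w = np.clip(w, 0, None)
    w = w / (w.sum() + 1e-12)
    fused = np.tensordot(w, probs, axes=1)   # weighted soft vote
    return np.mean(fused.argmax(axis=1) == y_true)

# Minimal global-best particle swarm over the weight vector.
n_part, iters = 20, 50
x = rng.random((n_part, n_clf)); v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([ensemble_accuracy(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([ensemble_accuracy(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()
print("best validation accuracy:", ensemble_accuracy(gbest))
```

The fused prediction is a weighted soft vote over classifier probability outputs; in the actual framework the weights (and the classifier subset) would be selected by PSO against the validation metrics named above rather than the single toy accuracy objective used here.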
A conceptual model of intermittent joints is introduced into laboratory cyclic shear testing to explore the effects of loading parameters on shear behavior under cyclic loading. The results show that the loading parameters (initial normal stress, normal stiffness, and shear velocity) determine the propagation paths of the wing and secondary cracks in rock bridges during the initial shear cycle, creating different morphologies of macroscopic step-path rupture surfaces and of the asperities on them. The differences in stress state and rupture surface induce different cyclic shear responses. High initial normal stress accelerates asperity degradation, raises shear resistance, and promotes compression of intermittent joints. In addition, high normal stiffness provides higher normal stress and shear resistance during the initial cycles and inhibits both dilation and compression of intermittent joints. High shear velocity results in higher shear resistance, greater dilation, and greater compression. Shear strength is most sensitive to initial normal stress, followed by shear velocity and normal stiffness, whereas the average dilation angle is most sensitive to initial normal stress, followed by normal stiffness and shear velocity. During the shear cycles, the friction coefficient is affected by asperity degradation, backfilling of rock debris, and frictional area, exhibiting non-monotonic behavior.
Engineering geology. Rock mechanics. Soil mechanics. Underground construction
Irina IGNATESCU-MANEA, Oana NECULAI, Ana-Maria TOMA
Nowadays, the quality of human life in terms of the human-nature connection is greatly diminished because daily activities take place mainly inside buildings. The authors aim to offer a solution for kindergarten children in temperate climatic regions, especially in Romania, to spend more time in contact with nature. Thus, the designed kindergarten aims to meet the essential requirements in construction (mechanical resistance and stability; fire safety; hygiene, health and environment; safety and accessibility in operation; noise protection; energy saving and thermal insulation; sustainable use of natural resources) while also offering beneficiaries direct contact with the land and green plants, regardless of season or climatic conditions.
Architectural engineering. Structural engineering of buildings. Engineering design
Large Language Models (LLMs) have shown prominent performance in various downstream tasks, and prompt engineering plays a pivotal role in optimizing LLMs' performance. This paper not only serves as an overview of current prompt engineering methods but also aims to highlight the limitations of designing prompts based on an anthropomorphic assumption that expects LLMs to think like humans. From our review of 50 representative studies, we demonstrate that a goal-oriented prompt formulation, which guides LLMs to follow established human logical thinking, significantly improves the performance of LLMs. Furthermore, we introduce a novel taxonomy that categorizes goal-oriented prompting methods into five interconnected stages, and we demonstrate the broad applicability of our framework. With four future directions proposed, we hope to further emphasize the power and potential of goal-oriented prompt engineering in all fields.
Due to developments in the field of fast transportation and increases in permitted speed and load capacity, moving loads can have significant effects on the dynamic forces of bridges. To account for this dynamic effect in structural design, the dynamic impact factor is introduced as the ratio of the dynamic response to the static response. Accurate evaluation of these coefficients supports safe and economical designs for new bridges. However, evaluating the dynamic impact factor is difficult because of vehicle-bridge interaction and the many parameters that affect it, including the dynamic characteristics of the bridge and the vehicle, road surface conditions, vehicle speed, and traffic conditions. In this research, dynamic analysis under moving loads is carried out by applying vehicle live loads step by step and performing time-history analysis. Three types of cable-stayed bridges with different spans and cable layouts are investigated using two-dimensional models. This study analyzes the impact coefficients of the bending and shear forces of deck components and pylons, as well as the axial forces of cables, and compares the results with the coefficients proposed in design regulations. The effect of load passing speed on the dynamic impact factor of cable-stayed bridges is also evaluated. This research aims to improve and optimize the design and performance of cable-stayed bridges in the face of dynamic changes and increasing load speeds.
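As a worked statement of the definition above (the notation is generic, not taken from the paper):

\[
\mathrm{DIF} \;=\; \frac{R_{\mathrm{dyn}}}{R_{\mathrm{stat}}}, \qquad \mathrm{IM} \;=\; \mathrm{DIF} - 1,
\]

where \(R_{\mathrm{dyn}}\) and \(R_{\mathrm{stat}}\) are the peak dynamic and static values of the same response quantity (deck bending moment or shear, pylon forces, or cable axial force) under the moving load. Design regulations typically prescribe the impact allowance in one of these two equivalent forms, which is what the computed coefficients are compared against.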
Exploiting the recent advancements in artificial intelligence, showcased by ChatGPT and DALL-E, in real-world applications necessitates vast, domain-specific, and publicly accessible datasets. Unfortunately, the scarcity of such datasets poses a significant challenge for researchers aiming to apply these breakthroughs in engineering design. Synthetic datasets emerge as a viable alternative. However, practitioners are often uncertain about generating high-quality datasets that accurately represent real-world data and are suitable for the intended downstream applications. This study aims to fill this knowledge gap by proposing comprehensive guidelines for generating, annotating, and validating synthetic datasets. The trade-offs and methods associated with each of these aspects are elaborated upon. Further, the practical implications of these guidelines are illustrated through the creation of a turbo-compressors dataset. The study underscores the importance of thoughtful sampling methods to ensure the appropriate size, diversity, utility, and realism of a dataset. It also highlights that design diversity does not equate to performance diversity or realism. By employing test sets that represent uniform, real, or task-specific samples, the influence of sample size and sampling strategy is scrutinized. Overall, this paper offers valuable insights for researchers intending to create and publish synthetic datasets for engineering design, thereby paving the way for more effective applications of AI advancements in the field. The code and data for the dataset and methods are made publicly accessible at https://github.com/cyrilpic/radcomp.
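One of the sampling trade-offs discussed above can be made concrete with a short sketch (assumptions: a hypothetical three-parameter design space on the unit cube, not the paper's turbo-compressor parametrization). Space-filling designs such as Latin hypercube sampling typically cover a design space more evenly than i.i.d. uniform draws of the same size:

```python
# Illustrative sketch (hypothetical 3-parameter design space): comparing
# i.i.d. uniform random draws with Latin hypercube sampling for coverage.
import numpy as np
from scipy.stats import qmc

d, n = 3, 256
uniform = np.random.default_rng(0).random((n, d))   # i.i.d. uniform draws
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)     # stratified design

# Discrepancy: lower values indicate more even coverage of the unit cube.
print("uniform discrepancy:", qmc.discrepancy(uniform))
print("LHS discrepancy:   ", qmc.discrepancy(lhs))
```

Low discrepancy is only a coverage proxy, though: as noted above, design diversity achieved this way does not by itself guarantee performance diversity or realism.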
In this paper, the adoption patterns of Generative Artificial Intelligence (AI) tools within software engineering are investigated. Influencing factors at the individual, technological, and societal levels are analyzed using a mixed-methods approach for a comprehensive understanding of AI adoption. Initial structured interviews were conducted with 100 software engineers, employing the Technology Acceptance Model (TAM), the Diffusion of Innovations theory (DOI), and the Social Cognitive Theory (SCT) as guiding theories. A theoretical model named the Human-AI Collaboration and Adaptation Framework (HACAF) was derived using the Gioia Methodology, characterizing AI adoption in software engineering. The model's validity was subsequently tested through Partial Least Squares - Structural Equation Modeling (PLS-SEM), using data collected from 183 software professionals. The results indicate that the adoption of AI tools in these early integration stages is primarily driven by their compatibility with existing development workflows. This finding counters traditional theories of technology acceptance. Contrary to expectations, the influence of perceived usefulness, social aspects, and personal innovativeness on adoption appeared less significant. This paper yields significant insights for the design of future AI tools and supplies a structure for devising effective strategies for organizational implementation.
For the optimal design and accurate prediction of structural behavior, the nonlinear analysis of large deformation of elastic beams has broad applications in various engineering fields. In this study, the nonlinear equation of flexure of an elastic beam, also known as an elastica, was solved by the Galerkin method for a highly accurate solution. The numerical results showed that the third-order solution of the rotation angle at the free end of the beam is more accurate and efficient in comparison with results of other approximate methods, and is perfectly consistent with the exact solution in elliptic functions. A general procedure with the Galerkin method is demonstrated for efficient solutions of nonlinear differential equations with the potential for adoption and implementation in more applications.
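A minimal sketch of the setup, assuming the classical cantilever elastica of length \(L\) with a transverse tip load \(P\) (the paper's exact load case and trial functions may differ). With arc length \(s\) and cross-section rotation \(\theta(s)\), the governing equation and boundary conditions are

\[
EI\,\theta''(s) + P\cos\theta(s) = 0, \qquad \theta(0) = 0, \qquad \theta'(L) = 0.
\]

Writing \(\theta_n(s) = \sum_{i=1}^{n} a_i\,\varphi_i(s)\) with trial functions \(\varphi_i\) satisfying the essential boundary condition, the Galerkin method requires the residual to be orthogonal to each trial function,

\[
\int_0^L \Bigl[\, EI\,\theta_n''(s) + P\cos\theta_n(s) \Bigr]\,\varphi_j(s)\,\mathrm{d}s = 0, \qquad j = 1,\dots,n,
\]

which yields a small nonlinear algebraic system in the coefficients \(a_i\); the third-order solution (\(n = 3\)) is the one reported above to agree with the exact elliptic-function solution for the rotation at the free end.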
Cryo-electron microscopy (cryo-EM) is unique among tools in structural biology in its ability to image large, dynamic protein complexes. Key to this ability are image processing algorithms for heterogeneous cryo-EM reconstruction, including recent deep learning-based approaches. The state-of-the-art method cryoDRGN uses a Variational Autoencoder (VAE) framework to learn a continuous distribution of protein structures from single-particle cryo-EM imaging data. While cryoDRGN can model complex structural motions, the Gaussian prior distribution of the VAE fails to match the aggregate approximate posterior, which prevents generative sampling of structures, especially for multi-modal distributions (e.g. compositional heterogeneity). Here, we train a diffusion model as an expressive, learnable prior in the cryoDRGN framework. Our approach learns a high-quality generative model over molecular conformations directly from cryo-EM imaging data. We show the ability to sample from the model on two synthetic and two real datasets, where samples accurately follow the data distribution, unlike samples from the VAE prior distribution. We also demonstrate how the diffusion model prior can be leveraged for fast latent space traversal and interpolation between states of interest. By learning an accurate model of the data distribution, our method unlocks tools in generative modeling, sampling, and distribution analysis for heterogeneous cryo-EM ensembles.
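A minimal PyTorch sketch of the idea follows, under loud assumptions: this is a generic denoising-diffusion model trained over latent vectors, not cryoDRGN's actual code; the latent dimension, network, and noise schedule are placeholders, and real usage would feed per-particle encoder latents rather than the random stand-ins used here.

```python
# Illustrative sketch: DDPM prior over VAE latents (placeholders throughout).
import torch
import torch.nn as nn

zdim, T = 8, 1000                        # latent size, diffusion steps
betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

eps_net = nn.Sequential(                 # predicts the injected noise
    nn.Linear(zdim + 1, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, zdim),
)
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

z_data = torch.randn(4096, zdim)         # stand-in for encoder latents

for step in range(200):                  # simplified training loop
    z0 = z_data[torch.randint(0, len(z_data), (256,))]
    t = torch.randint(0, T, (256,))
    ab = alphas_bar[t].unsqueeze(1)
    eps = torch.randn_like(z0)
    zt = ab.sqrt() * z0 + (1 - ab).sqrt() * eps   # forward noising
    inp = torch.cat([zt, t.float().unsqueeze(1) / T], dim=1)
    loss = ((eps_net(inp) - eps) ** 2).mean()     # epsilon-matching loss
    opt.zero_grad(); loss.backward(); opt.step()

# Ancestral sampling: draw latents from the learned prior.
z = torch.randn(16, zdim)
with torch.no_grad():
    for t in reversed(range(T)):
        a, ab = 1.0 - betas[t], alphas_bar[t]
        inp = torch.cat([z, torch.full((16, 1), t / T)], dim=1)
        mean = (z - betas[t] / (1 - ab).sqrt() * eps_net(inp)) / a.sqrt()
        z = mean + (betas[t].sqrt() * torch.randn_like(z) if t > 0 else 0)
```

In the actual framework, z_data would hold the latent embeddings produced by the cryoDRGN encoder, and each sampled z would be decoded into a 3-D density map; the sketch only shows the prior-learning and ancestral-sampling mechanics.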
Andre T. Beck, Rubia M. Bosse, Isabela D. Rodrigues
In the Performance-Based Engineering (PBE) framework, uncertainties in system parameters, or modelling uncertainties, have been shown to have significant effects on capacity fragilities and annual collapse rates of buildings. Yet, since modelling uncertainties are non-ergodic variables, their consideration in failure rate calculations violates the Poisson assumption of independent crossings. This problem has been addressed in the literature, and the errors found negligible for small annual collapse failure rates. However, the errors can be significant for serviceability limit states, and when failure rates are integrated in time to provide lifetime failure probabilities. Herein, we present a novel formulation that fully avoids the error in the integration of non-ergodic variables. The proposed product-of-lognormals formulation is fully compatible with popular fragility modelling approaches in the PBE context. Moreover, we address collapse limit states of realistic reinforced concrete buildings, and find errors of the order of 5 to 8% for 50-year lifetimes, and up to 14% for 100 years. Computation of accurate lifetime failure probabilities in a PBE context is clearly important, as it allows comparison with lifetime target reliability values for other structural analysis formulations.
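The underlying difficulty admits a compact statement (a textbook-style illustration, not the paper's product-of-lognormals derivation). With \(\mathbf{X}\) collecting the non-ergodic modelling variables and \(\lambda(\mathbf{X})\) the conditional annual failure rate, the Poisson assumption of independent crossings holds only conditionally on \(\mathbf{X}\), so the lifetime failure probability is

\[
P_f(T) \;=\; \mathbb{E}_{\mathbf{X}}\!\left[\,1 - e^{-\lambda(\mathbf{X})\,T}\right]
\;\neq\;
1 - e^{-\mathbb{E}_{\mathbf{X}}\left[\lambda(\mathbf{X})\right]\,T}.
\]

Since \(1 - e^{-\lambda T}\) is concave in \(\lambda\), Jensen's inequality shows that the right-hand expression, which averages the rate before exponentiating, overestimates \(P_f(T)\); the discrepancy grows over the lifetimes considered, consistent with the 5-8% (50-year) and up-to-14% (100-year) errors reported above.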
Shape memory alloys (SMA) have the fascinating characteristic of recovering apparent permanent deformations of up to 10% and more. Moreover, they are metals and exhibit the typical characteristics of metals, such as strength, stiffness, and workability. The combination of all these properties makes it easy to understand why these materials are attractive to the field of engineering and gave rise to a new way of thinking about the design of mechanical systems. Some SMA have the remarkable property of being almost perfectly biocompatible, and since medical structural systems are usually simple, healthcare was the field where shape memory alloys first came into extensive use. There are many innovative ideas for the application of shape memory alloys and, in general, shape memory materials, and the number of assessed products is growing. Research worldwide is trying to better understand their behavior and characteristics to enable better and wider utilization.
Bearing units in lifting machines and in products of the construction, road, aviation, space, and other branches of technology are critical structural elements, since the failure of even one bearing can cause the failure of the entire product. The results of an experimental verification of a theoretical model of bearing operation under combined loading conditions are presented. In the most general case, the behavior of bearing units under load can be represented by a sequence of five design schemes, expressed in the form of five statically indeterminate beams. The purpose of the experiments was to test this model under real loading conditions. The experiments were based on analysis of the geometric shape of the curved elastic line that the shaft of the bearing assembly acquires under load. The experimental results confirmed the validity of the model and showed that the previously accepted model of a two-support beam is not realized in practice. The conclusion is confirmed that in critical lifting machines, as well as in critical products of the construction, road, aviation, space, and other branches of technology, it is inadvisable to calculate bearings according to the traditional method, since an erroneous value of bearing durability can be obtained, overestimated from 28.37 to 26.663.9 times.
Architectural engineering. Structural engineering of buildings
Leidy Barón, Jocilene Otila da Costa, Francisco Soares, et al.
This paper identifies and analyzes variables that influence pedestrian safety based on models of pedestrian crash frequency defined for urban areas in Portugal. It considers three groups of explanatory variables, namely: (i) built environment; (ii) pedestrian infrastructure; and (iii) road infrastructure, as well as exposure variables combining pedestrian and vehicular traffic volumes. Data on the 16 variables considered were gathered from locations in the counties of Braga and Guimarães. The inclusion of pedestrian infrastructure variables in studies of this type is an innovation that allows for measuring the impacts of the dimensions recommended for this type of infrastructure and assessing the implementation of policies to support the mobility of vulnerable users, especially pedestrians. Examples of such variables are the unobstructed space for pedestrian mobility and the recommended distance separating regulated crossings. Zero-Truncated Negative Binomial (ZTNB) regression models and Generalized Estimating Equations (GEE) are used to develop the crash prediction models. Results show that, in addition to variables identified in similar studies such as carriageway width, other statistically significant variables, such as longitudinal slope and distance between crosswalks, have a negative influence on pedestrian safety, whereas on-street parking places, one-way streets, and the existence of raised medians contribute positively to safety.
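For readers unfamiliar with these model classes, the following Python fragment sketches the GEE side of the setup. All data and variable names are synthetic placeholders, not the Braga/Guimarães dataset, and a Poisson family is used here for simplicity, whereas the paper fits negative binomial and zero-truncated negative binomial specifications.

```python
# Illustrative sketch: GEE count model of crash frequency with sites
# clustered into zones (synthetic data; variable names are placeholders).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
carriageway_width = rng.uniform(6, 14, n)    # m
slope = rng.uniform(0, 8, n)                 # % longitudinal slope
crosswalk_dist = rng.uniform(50, 400, n)     # m between crossings
exposure = rng.lognormal(8, 0.5, n)          # ped x veh volume proxy
zone = rng.integers(0, 20, n)                # 20 zones acting as clusters

# Synthetic crash counts drawn from a Poisson log-link ground truth.
eta = -9 + 0.9 * np.log(exposure) + 0.05 * slope + 0.002 * crosswalk_dist
y = rng.poisson(np.exp(eta))

X = sm.add_constant(np.column_stack(
    [np.log(exposure), carriageway_width, slope, crosswalk_dist]))
model = sm.GEE(y, X, groups=zone,
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

The cov_struct argument is what distinguishes GEE from an ordinary count regression: it models within-cluster correlation among sites in the same zone instead of treating all observations as independent.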
Abbas Ghalandarzadeh, Mahmood Reza Abdi, Leila Shafiei Chafi
Soil tensile strength is important in different geotechnical structures such as earth dams, roads, airports, landfills, and retaining walls. There are several experiments to study the tensile behavior of soils with different advantages and disadvantages. One of the methods used to investigate soil tensile behavior is tensile hollow cylinder apparatus, which has seldom been used. In this research, a hollow cylindrical device to measure the tensile properties of soil was built and operated. This device can apply tensile stress evenly to the entire soil sample so that stress concentration does not occur at any point in the sample. After designing, manufacturing, and assembling the apparatus, validation tests were performed to ensure the device was operating well. The results of the repeatability tests show the accurate performance of the device. Also, in this study, the effect of plasticity index (PI) on the tensile behavior of kaolinite clay was investigated. Clayey soils with plasticity indices of 10 and 24% were selected. The results show that for clays with a similar mineral, the tensile strength increases and the tensile failure strain decreases with increasing the plasticity index.
Issam Jebreen, Robert Wellington, Stephen G. MacDonell
Small to medium-sized business enterprises (SMEs) generally thrive because they have successfully done something unique within a niche market. For this reason, SMEs may seek to protect their competitive advantage by avoiding any standardization encouraged by the use of packaged software (PS). Packaged software implementation at SMEs therefore presents challenges relating to how best to respond to misfits between the functionality offered by the packaged software and each SME's business needs. An important question relates to which processes small software enterprises - or Small to Medium-Sized Software Development Companies (SMSSDCs) - apply in order to identify and then deal with these misfits. To explore the processes of packaged software (PS) implementation, an ethnographic study was conducted to gain in-depth insights into the roles played by analysts in two SMSSDCs. The purpose of the study was to understand PS implementation in terms of requirements engineering (or 'PSIRE'). Data collected during the ethnographic study were analyzed using an inductive approach. Based on our analysis of the cases, we constructed a theoretical model explaining the requirements engineering process for PS implementation, and named it the PSIRE Parallel Star Model. The Parallel Star Model shows that during PSIRE, more than one RE process can be carried out at the same time. The Parallel Star Model has few constraints, because not only can processes be carried out in parallel, but they do not always have to be followed in a particular order. This paper therefore offers a novel investigation and explanation of RE practices for packaged software implementation, approaching the phenomenon from the viewpoint of the analysts, and offers the first extensive study of packaged software implementation RE (PSIRE) in SMSSDCs.
Background: Research software is software developed by and/or used by researchers, across a wide variety of domains, to perform their research. Because of the complexity of research software, developers cannot conduct exhaustive testing. As a result, researchers have lower confidence in the correctness of the output of the software. Peer code review, a standard software engineering practice, has helped address this problem in other types of software. Aims: Peer code review is less prevalent in research software than it is in other types of software. In addition, the literature does not contain any studies about the use of peer code review in research software. Therefore, through analyzing developers' perceptions, the goal of this work is to understand the current practice of peer code review in the development of research software, identify challenges and barriers associated with peer code review in research software, and present approaches to improve peer code review in research software. Method: We conducted interviews and a community survey of research software developers to collect information about their current peer code review practices, difficulties they face, and how they address those difficulties. Results: We received 84 unique responses from the interviews and surveys. The results show that while research software teams review a large amount of their code, they lack formal process, proper organization, and adequate people to perform the reviews. Conclusions: Use of peer code review is promising for improving the quality of research software and thereby improving the trustworthiness of the underlying research results. In addition, by using peer code review, research software developers produce more readable and understandable code, which will be easier to maintain.
Hardi M. Mohammed, Zrar Kh. Abdul, Tarik A. Rashid, et al.
Purpose: The use of metaheuristic algorithms by researchers has increased, with extensive application in business, science, and engineering. One common metaheuristic optimization algorithm is Grey Wolf Optimization (GWO), which works by imitating grey wolves' searching and attacking processes. The main purpose of this paper is to overcome GWO's tendency to become trapped in local optima. Design/Methodology/Approach: In this paper, the K-means clustering algorithm is used to enhance the performance of the original Grey Wolf Optimization by dividing the population into different parts. The proposed algorithm is called K-means clustering Grey Wolf Optimization (KMGWO). Findings: Results illustrate that the efficiency of KMGWO is superior to that of GWO. To evaluate its performance, KMGWO was applied to solve 10 CEC2019 benchmark test functions. KMGWO was also compared to Cat Swarm Optimization (CSO), Whale Optimization Algorithm-Bat Algorithm (WOA-BAT), and WOA, achieving first rank in terms of performance, and statistical tests showed that its improvements over the compared algorithms are significant. In addition, KMGWO was used to solve a pressure vessel design problem, where it also achieved superior results. Originality/Value: KMGWO is a novel K-means-based variant of GWO that outperforms GWO, CSO, WOA-BAT, and WOA on the CEC2019 benchmarks and on a classical engineering design problem.
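As a toy illustration of the mechanism (a sketch under stated assumptions: the sphere test function, k-means from scikit-learn, and one plausible reading of "dividing the population into different parts", namely that each cluster follows its own alpha/beta/delta leaders; this is not the authors' reference implementation):

```python
# Illustrative sketch of the KMGWO idea: standard GWO position updates,
# but applied per k-means cluster of the wolf population.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, n_wolves, iters, k = 10, 30, 200, 3
f = lambda x: np.sum(x**2, axis=-1)            # sphere objective (minimize)

X = rng.uniform(-10, 10, (n_wolves, dim))
for t in range(iters):
    a = 2 * (1 - t / iters)                    # a decreases linearly 2 -> 0
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X)
    for c in range(k):
        idx = np.where(labels == c)[0]
        sub = X[idx]
        leaders = sub[np.argsort(f(sub))[:3]]  # cluster alpha, beta, delta
        if len(leaders) < 3:                   # pad very small clusters
            leaders = np.resize(leaders, (3, dim))
        new = np.zeros_like(sub)
        for Xl in leaders:                     # standard GWO update rules
            A = 2 * a * rng.random(sub.shape) - a
            C = 2 * rng.random(sub.shape)
            D = np.abs(C * Xl - sub)
            new += Xl - A * D
        X[idx] = new / 3                       # average of the 3 pulls
best = X[np.argmin(f(X))]
print("best fitness:", f(best))
```

Restricting the leader hierarchy to each cluster keeps sub-populations searching around distinct basins, which is one way the k-means step can reduce the risk of the whole pack collapsing onto a single local optimum.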