The adoption of Generative AI (GenAI) suggests major changes for software engineering, affecting not only technical aspects but also the human aspects of the professionals involved. One of these aspects is how individuals perceive themselves in relation to their work, i.e., their work identity, and the processes through which they form, adapt, and reject these identities, i.e., identity work. Existing studies provide evidence of such identity work among software professionals triggered by the adoption of GenAI; however, they do not consider differences among roles, such as developers and testers. In this paper, we argue for considering the role as a factor shaping the identity work of software professionals. To support this claim, we review studies on different roles as well as recent studies on adopting GenAI in software engineering. We then propose a research agenda to better understand how role influences the identity work of software professionals triggered by the adoption of GenAI and, based on that, to propose new artifacts to support this adoption. We also discuss the potential practical implications of the expected results.
Iyad Alumari, Muhanad D. Hashim Almawlawe, Zainab Al-Araji
et al.
In recent years, antenna arrays have been widely used in applications that require high directivity, whether military, medical, or otherwise. Arrays are highly directional, unlike single antennas. In this work, we analyze the effect of the dielectric constant of the insulating substrate on a 2x2 microstrip patch antenna array at 2.4 GHz, a frequency suitable for WLAN applications. CST Microwave Studio Version 2019 was used to design, simulate, and analyze the proposed model. A microstrip line feeding technique was used to feed the array, and a set of mathematical relationships was used to calculate the dimensions of the array. We simulated and analyzed the array on three dielectric substrates: FR-4 with a dielectric constant of 4.3, polycarbonate with a dielectric constant of 2.9, and porcelain with a dielectric constant of 6. The 2x2 array was then fabricated on the FR-4 substrate, and the measured results were compared with the simulated ones, showing good agreement.
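The abstract mentions using a set of mathematical relationships to size the patches. A common starting point is the transmission-line model; the sketch below computes the width and length of a single rectangular patch from it. This is an illustrative stand-in, not the authors' exact design relations (which the abstract does not reproduce), and the 1.6 mm substrate height is an assumed value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def patch_dimensions(f_hz, eps_r, h_m):
    """Transmission-line-model estimate of a rectangular microstrip
    patch's width and length for resonance frequency f_hz on a
    substrate with relative permittivity eps_r and height h_m."""
    # Patch width for efficient radiation
    w = C / (2 * f_hz) * math.sqrt(2 / (eps_r + 1))
    # Effective dielectric constant, accounting for fringing fields
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    # Length extension due to fringing at each radiating edge
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w / h_m + 0.8))
    # Physical patch length
    l = C / (2 * f_hz * math.sqrt(eps_eff)) - 2 * dl
    return w, l

# 2.4 GHz patch on FR-4 (eps_r = 4.3) with an assumed 1.6 mm substrate
w, l = patch_dimensions(2.4e9, 4.3, 1.6e-3)
print(f"W = {w*1e3:.1f} mm, L = {l*1e3:.1f} mm")
```

Repeating the calculation with eps_r = 2.9 (polycarbonate) and eps_r = 6 (porcelain) shows how the substrate's dielectric constant drives the patch size, which is the effect the paper studies.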
Inadequate use of mobile phones from an early age, coupled with fewer hours dedicated to developing practical skills, has led to a decline in spatial skills at secondary level and, consequently, in the spatial skills of engineering graduates. Many students enrolling in engineering have limited experience in graphics. The decrease in time allocated to basic engineering courses has forced the material taught in them to be condensed. Descriptive geometry is one such course, and this paper presents a method to bridge this gap by providing skills that increase working-memory capacity, allowing the mental construction to be maintained after the visual stimulus has disappeared. A question arises: “Can exercises in visualizing the intersection of a polyhedron and a plane influence the development of spatial skills?” Visibility rules are the main knowledge used to solve such exercises. A comparison of results from two samples on spatial visualization and spatial perception is analysed. The data analysis, based on averages, shows better results for male students than for female students in both spatial visualization and spatial perception.
Architectural engineering. Structural engineering of buildings, Engineering design
Abstract A continuous beam-type connection has been proposed as a new type of concrete-filled steel tubular beam-to-column connection that requires no diaphragm and exhibits good seismic performance. However, in a 2-way configuration, the performance of the connection in the direction orthogonal to the continuous beam may be suboptimal because the orthogonal beam must be split. In this study, a new detail for a split orthogonal beam joint was proposed, and four specimens were constructed and tested under quasi-static conditions. All specimens attained the full-plastic flexural strength of the beams and exhibited stable inelastic behavior. The specimen in which the depth of the orthogonal continuous beam was 100 mm greater than that of the split beam exhibited good inelastic cyclic behavior, whereas two other specimens with a split beam exhibited slipping and pinching. Minor fractures were observed in the tube wall; however, the strength and stiffness did not deteriorate significantly, though slightly pinched hysteresis was observed. In addition, a finite element analysis was conducted. The split beam was largely rotationally constrained by the bearing pressure of the concrete above and/or below the beam flange. When the split beam's flange was inserted sufficiently far into the connection panel, stiffness and strength were ensured.
Architecture, Architectural engineering. Structural engineering of buildings
Refactoring is the process of restructuring existing code without changing its external behavior while improving its internal structure. Refactoring engines are integral components of modern Integrated Development Environments (IDEs) and can automate or semi-automate this process to enhance code readability, reduce complexity, and improve the maintainability of software products. Like traditional software systems such as compilers, refactoring engines may also contain bugs that lead to unexpected behaviors. In this paper, we propose a novel approach called RETESTER, an LLM-based framework for automated refactoring engine testing. Specifically, using input program structure templates extracted from historical bug reports and error-prone input program characteristics, we design chain-of-thought (CoT) prompts to perform refactoring-preserving transformations. The generated variants are then tested on the latest version of each refactoring engine using differential testing. We evaluate RETESTER on two of the most popular modern refactoring engines (i.e., ECLIPSE and INTELLIJ IDEA). It revealed 18 new bugs in their latest versions. At the time of submission, seven of them had been confirmed by the developers and three had been fixed.
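The differential-testing step described above can be sketched as follows: apply the same refactoring to the same input program in every engine and flag pairs of engines whose outputs disagree. This is a minimal illustration of the idea, not RETESTER itself; the callables standing in for ECLIPSE and INTELLIJ IDEA automation are invented for the example.

```python
# Minimal sketch of differential testing across refactoring engines.
# `engines` maps an engine name to a callable that applies a named
# refactoring to a source string -- toy stand-ins for real IDE drivers.

def differential_test(program: str, refactoring: str, engines: dict):
    """Return pairs of engine names whose refactored outputs disagree."""
    outputs = {}
    for name, apply_refactoring in engines.items():
        try:
            outputs[name] = apply_refactoring(program, refactoring)
        except Exception as exc:  # an engine crash is itself a finding
            outputs[name] = f"<crash: {exc}>"
    names = sorted(outputs)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if outputs[a] != outputs[b]]

# Toy stand-ins: one "engine" renames correctly, the other drops a space.
eclipse = lambda src, _: src.replace("old", "new")
intellij = lambda src, _: src.replace("old ", "new")
diffs = differential_test("int old = 1;", "rename",
                          {"eclipse": eclipse, "intellij": intellij})
print(diffs)  # any disagreement signals a potential engine bug
```

In the real framework, the LLM-generated program variants play the role of `program`, and any disagreement (or crash) becomes a candidate bug report for manual triage.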
Marina Araújo, Júlia Araújo, Romeu Oliveira
et al.
[Context] Domain knowledge is recognized as a key component for the success of Requirements Engineering (RE), as it provides the conceptual support needed to understand the system context, ensure alignment with stakeholder needs, and reduce ambiguity in requirements specification. Despite its relevance, the scientific literature still lacks a systematic consolidation of how domain knowledge can be effectively used and operationalized in RE. [Goal] This paper addresses this gap by offering a comprehensive overview of existing contributions, including methods, techniques, and tools to incorporate domain knowledge into RE practices. [Method] We conducted a systematic mapping study using a hybrid search strategy that combines database searches with iterative backward and forward snowballing. [Results] In total, we found 75 papers that met our inclusion criteria. The analysis highlights the main types of requirements addressed, the most frequently considered quality attributes, and recurring challenges in the formalization, acquisition, and long-term maintenance of domain knowledge. The results provide support for researchers and practitioners in identifying established approaches and unresolved issues. The study also outlines promising directions for future research, emphasizing the development of scalable, automated, and sustainable solutions to integrate domain knowledge into RE processes. [Conclusion] The study contributes by providing a comprehensive overview that helps to build a conceptual and methodological foundation for knowledge-driven requirements engineering.
Since tremendous resources are consumed in the architecture, engineering, and construction (AEC) industry, the sustainability and efficiency of this field have received increasing concern in the past few decades. With the advent and development of computational tools and information technologies, structural optimization based on mathematical computation has become one of the most commonly used methods for sustainable and efficient design in civil engineering. However, despite wide attention from researchers, there has not yet been a critical review of recent research progress on structural optimization. Therefore, the main objective of this paper is to comprehensively review previous research on structural optimization, provide a thorough analysis of the optimization objectives and their temporal and spatial trends and of the optimization process, and summarize the current research limitations and recommendations for future work. The paper first introduces the significance of sustainability and efficiency in the AEC industry as well as the background of this review. Then, relevant articles are retrieved and selected, followed by a statistical analysis of the selected articles. Thereafter, the selected articles are analyzed with respect to the optimization objectives and their temporal and spatial trends. The four major steps in the structural optimization process, namely structural analysis and modelling, formulation of optimization problems, optimization techniques, and computational tools and design platforms, are also reviewed and discussed in detail based on the collected articles. Finally, research gaps in current work and potential directions for future work are proposed. This paper critically reviews the achievements and limitations of current research on structural optimization, providing guidelines for future research on structural optimization in the field of civil engineering.
Designing efficient and sustainable buildings is an essential focus in architecture and engineering, often tackled through various optimization methods. However, the complexity of the design process and its multiple criteria pose challenges. In this sense, multi-objective optimization has emerged as a valuable tool for integrating these diverse requirements. This study synthesizes research on multi-objective optimization in building design, identifying key characteristics and evaluating the correlation between its variables and objectives. A literature review was conducted to assess the main studies on this type of optimization, determining the most common variables and objectives and their combinations. The results show that although multiple objectives exist, they often belong to the same field. The findings reveal that volume shape and structural shape are standard variables in optimization, linking architecture, structure, and environmental impact. Overall, the review showed the opportunities for exploring multi-objective optimization to integrate these three fields.
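The core output of a multi-objective optimization is a set of non-dominated trade-off designs rather than a single optimum. A minimal sketch of that idea, assuming all objectives are minimized (the two objectives and the candidate designs below are invented for illustration):

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is
    minimized. A point p dominates q if p <= q in all objectives
    and p < q in at least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy trade-off: (embodied carbon, cost) for four candidate designs.
designs = [(10, 5), (8, 7), (12, 4), (9, 9)]
print(pareto_front(designs))  # (9, 9) is dominated by (8, 7) and drops out
```

Practical building-design studies wrap a filter like this around simulation-driven evaluations of each candidate (energy, daylight, structural weight, and so on), typically inside an evolutionary algorithm such as NSGA-II.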
Reducing concrete consumption is important as part of the global effort to fight climate change, and specifically in concrete flat slabs, as these are among the largest consumers of cement. In this study, we formulate an efficient gradient-based optimization of column locations that minimizes slab thickness under constraints on deflections, bending moments, and shear stresses, while accounting for architectural considerations. The results show that the optimal column locations are not trivial and that the slab thickness is very sensitive to the exact column locations. Thus, concrete savings in slabs of up to 20% are possible with minor modifications to traditional column layouts, and up to 50% with more pronounced changes, which emphasizes the importance of early collaboration between architects and engineers. The results indicate the critical trade-off between structural efficiency and architectural freedom and demonstrate the potential of formal optimization in structural design.
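The gradient-based idea can be illustrated with a deliberately tiny stand-in: place one interior column along a beam strip so that deflections stay small. This is not the authors' formulation (their model constrains deflections, moments, shear, and architectural layout over a full slab); here deflection is simply taken as proportional to span**4 and the objective is smoothed into a sum.

```python
def optimize_column(L=10.0, x0=2.0, lr=1e-3, steps=500):
    """Toy gradient descent on f(x) = x**4 + (L - x)**4, a smooth proxy
    for deflection on the two spans created by an interior column at x
    on a strip of length L. Illustrative stand-in only."""
    x = x0
    for _ in range(steps):
        grad = 4 * x**3 - 4 * (L - x)**3  # df/dx
        x -= lr * grad
    return x

x_opt = optimize_column()
print(f"optimal column position: {x_opt:.3f} m")  # symmetric case -> midspan
```

Even this toy shows the paper's qualitative point: the objective grows with the fourth power of span, so small shifts in column position change the governing deflection (and hence required thickness) sharply.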
Recently, the employment of topology optimization (TO) in structural engineering design has delivered significant gains in structural performance, and TO is also employed by designers to develop aesthetic and efficient buildings. In this work, TO is employed to design novel, rigid structures for communication towers. The TO algorithm intelligently creates the 3D model while preserving architectural features and trading off stiffness against weight. The present work focuses on investigating the creation of optimal self-supported communication towers. Key observations from the optimization analyses and the potential benefits of TO in designing telecommunication tower lattices are presented.
Abstract: In her extensive experience working on civil and industrial construction sites, the author has noticed frequent errors in adapting the sizes of sanitary fixtures to bathroom dimensions, as well as in the positioning of sinks, bidets, bathtubs, or showers in the designated spaces. In this article, the author uses functions available in AutoCAD software to present calculation formulas that can assist architects and users in making informed decisions when designing, constructing, or renovating a bathroom.
Architectural engineering. Structural engineering of buildings, Engineering design
Artificial intelligence (AI) permeates all fields of life, which has resulted in new challenges in requirements engineering for artificial intelligence (RE4AI), e.g., the difficulty of specifying and validating requirements for AI, or the need to consider new quality requirements arising from emerging ethical implications. It is currently unclear whether existing RE methods are sufficient or whether new ones are needed to address these challenges. Therefore, our goal is to provide a comprehensive overview of RE4AI to researchers and practitioners: what has been achieved so far, i.e., what practices are available, and what research gaps and challenges still need to be addressed? To achieve this, we conducted a systematic mapping study combining query string search and extensive snowballing. The extracted data was aggregated, and results were synthesized using thematic analysis. Our selection process led to the inclusion of 126 primary studies. Existing RE4AI research focuses mainly on requirements analysis and elicitation, with most practices applied in these areas. Furthermore, we identified requirements specification, explainability, and the gap between machine learning engineers and end-users as the most prevalent challenges, along with a few others. Additionally, we proposed seven potential research directions to address these challenges. Practitioners can use our results to identify and select suitable RE methods for working on their AI-based systems, while researchers can build on the identified gaps and research directions to push the field forward.
Juan M. Murillo, Jose Garcia-Alonso, Enrique Moguel
et al.
As quantum computers advance, the complexity of the software they can execute increases as well. To ensure this software is efficient, maintainable, reusable, and cost-effective (key qualities of any industry-grade software), mature software engineering practices must be applied throughout its design, development, and operation. However, the significant differences between classical and quantum software make it challenging to directly apply classical software engineering methods to quantum systems. This challenge has led to the emergence of Quantum Software Engineering as a distinct field within the broader software engineering landscape. In this work, a group of active researchers analyse in depth the current state of quantum software engineering research. From this analysis, the key areas of quantum software engineering are identified and explored in order to determine the most relevant open challenges that should be addressed in the coming years. These challenges help identify necessary breakthroughs and future research directions for advancing Quantum Software Engineering.
Ranim Khojah, Mazen Mohamad, Philipp Leitner
et al.
Large Language Models (LLMs) are frequently discussed in academia and by the general public as support tools for virtually any use case that relies on the production of text, including software engineering. Currently there is much debate, but little empirical evidence, regarding the practical usefulness of LLM-based tools such as ChatGPT for engineers in industry. We conduct an observational study of 24 professional software engineers who used ChatGPT in their jobs over a period of one week, and qualitatively analyse their dialogues with the chatbot as well as their overall experience (as captured by an exit survey). We find that, rather than expecting ChatGPT to generate ready-to-use software artifacts (e.g., code), practitioners more often use ChatGPT to receive guidance on how to solve their tasks or to learn about a topic in more abstract terms. We also propose a theoretical framework for how (i) the purpose of the interaction, (ii) internal factors (e.g., the user's personality), and (iii) external factors (e.g., company policy) together shape the experience (in terms of perceived usefulness and trust). We envision that our framework can be used by future research to further the academic discussion on LLM usage by software engineering practitioners, and to serve as a reference point for the design of future empirical LLM research in this domain.
With the advent of large language models (LLMs) in the artificial intelligence (AI) area, the field of software engineering (SE) has also witnessed a paradigm shift. These models, by leveraging the power of deep learning and massive amounts of data, have demonstrated an unprecedented capacity to understand, generate, and operate on programming languages. They can assist developers in completing a broad spectrum of software development activities, encompassing software design, automated programming, and maintenance, potentially saving substantial human effort. Integrating LLMs within the SE landscape (LLM4SE) has become a burgeoning trend, necessitating exploration of this emergent landscape's challenges and opportunities. The paper aims to revisit the software development life cycle (SDLC) under LLMs and to highlight the challenges and opportunities of the new paradigm. The paper first summarizes the overall process of LLM4SE and then elaborates on the current challenges based on a thorough discussion. The discussion was held among more than 20 participants from academia and industry, specializing in fields such as software engineering and artificial intelligence. Specifically, we identify 26 key challenges across seven aspects, including software requirements & design, coding assistance, testing code generation, code review, code maintenance, software vulnerability management, and data, training, and evaluation. We hope the identified challenges will benefit future research in the LLM4SE field.
ABSTRACT Digital analytical tools combined with 3D documentation are increasingly used in building rehabilitation during the conservation-state analysis process. In the last decade, owing to advancements in the Architecture, Engineering and Construction (AEC) industry, the application of BIM methods in heritage building conservation has become increasingly attractive to specialists and practitioners. In light of the latest concepts in data management at city level, arising from the discussion about smart city representations, a shared digital environment that caters to technical studies of conservation analysis, building provenance, structural changes, and urban context transformations can reduce the time, improve the quality, and lower the cost of city management for all domain experts and city stakeholders. This paper explores the benefits of multi-scale, multi-discipline digitization for the restoration of heritage buildings, highlighting the potential impact of innovative data integration, methods, and workflows on architectural renovation and energy upgrades. Specifically, it focuses on the integration of conservation information for heritage buildings with large-scale environmental analysis data for historic clusters in modern cities.
Aref Maksoud, A. Hussien, Emad S. N. Mushtaha
et al.
Virtual reality was investigated together with various computational design approaches to improve users’ ability to communicate, share, and grasp the design’s requirements and thus better conceptualize ideas during the various design and review stages. The study aims to show how computational design and virtual reality can be utilized to forecast challenges, address design problems and limitations in a specific study space, and validate results. A case study of the main Architectural Engineering department building at the University of Sharjah (UoS) campus in Sharjah, United Arab Emirates, was considered. The study focused on indoor daylight intake, ventilation, functionality, user comfort, structural integrity, coherency and consistency, and performance optimization as factors for evaluating and selecting the optimal design. Innovative computational design tools were then used in the study’s methodology to assess the alternatives offered, such as altering and fabricating the building’s skin to deal with the challenges described above and improving the selected room’s visual and environmental conditions, for example through optimal daylighting and assured user comfort. The users’ immersive experience resulted in more accurate visualization of, and navigation around, the to-be-built environment, allowing deeper analysis and comprehension that further validated the results obtained. The chosen case study thus demonstrated the potential of computational design and mixed-reality techniques and strategies to enable an efficient process that verifies the approaches taken toward a more optimal solution through better visualization and contextualization.
The assessment of the seismic behaviour of historic residential buildings and the estimation of their possible losses in the event of an earthquake are essential for defining strategic mitigation plans to prevent irreplaceable heritage losses. In this study, an integrated performance-based probabilistic risk assessment methodology is developed. An archival study and a field survey allow the identification of architectural and construction characteristics of heritage residential buildings in urban areas and the determination of realistic structural models. These are analysed using a limit state approach, coded in the FaMIVE method and considering different construction hypotheses, to produce capacity curves that support the identification of a discrete number of typologies representative of the entire building stock in the area. Their fragility functions are then derived using the modified N2 method. Because of the difficulty of quantifying the expected probable losses in purely economic terms, given the heritage value of these assets, losses are computed in terms of damaged floor surface area and mean damage ratio. These have been obtained through the earthquake loss estimation platform SELENA, considering different possible seismic scenarios. The procedure is applied to masonry residential buildings in the Pla del Remei area of Valencia, Spain, built between the end of the 19th Century and the end of the Spanish Civil War (1939). This neighbourhood embodies the cultural values, construction techniques, and historic legacy of a new and brief era of modernity, inspired by the new urban theories and architectural styles of Eclecticism and Modernism. Despite Valencia being located in an area of low to moderate seismicity, the results show that the maximum percentage of built damaged area ranges from 5.8 to 11.6% for a 475-year return period, increasing to 33.59–51.59% for a 975-year return period.
The high level of resolution of the study allows mapping and identifying the structures at higher risk and is therefore a valuable tool to support sensitive and targeted retrofitting policies.
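The mean damage ratio used above is, at its core, a probability-weighted average of per-damage-state damage ratios. A minimal sketch of that calculation follows; the damage-state probabilities and central damage ratios are invented for illustration, not the study's values (which come from FaMIVE capacity curves, the modified N2 method, and SELENA scenario runs).

```python
def mean_damage_ratio(state_probs, state_ratios):
    """Expected damage ratio: the probability-weighted average of the
    central damage ratio assigned to each damage state."""
    assert abs(sum(state_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * d for p, d in zip(state_probs, state_ratios))

# Hypothetical scenario: P(none, slight, moderate, extensive, collapse)
# and an assumed central damage ratio for each state.
probs = [0.40, 0.30, 0.18, 0.08, 0.04]
ratios = [0.00, 0.05, 0.20, 0.55, 1.00]
mdr = mean_damage_ratio(probs, ratios)
print(f"mean damage ratio: {mdr:.3f}")
```

Multiplying such a ratio by the floor surface area of each building typology gives the damaged-floor-area metric the study reports, aggregated over the building stock for each seismic scenario.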
The study analyzed the structural characteristics of carbon nanomaterials obtained at different synthesis time parameters, based on X-ray diffractometry, Raman spectroscopy, and scanning microscopy. According to the Raman spectroscopy and X-ray scattering data, the crystallite size of the nanotubes is estimated to be in the range of 9 to 38 nm. At a synthesis time of 90 minutes, the nanotube crystallite size remains minimal in comparison with the other samples, which is confirmed by several diagnostic methods. Based on the X-ray diffraction data, the Lc and La crystallite sizes (longitudinal and perpendicular to the direction of the carbon layers) were calculated using the Selyakov-Scherrer formula. As the synthesis time increases, the nanotube crystallite sizes lie in the range of 9-12 nm in the longitudinal direction and 22-38 nm in the perpendicular direction. The diffraction patterns of the samples do not indicate the presence of a significant amount of graphite; the intensity is concentrated predominantly in the (002) and (004) peaks, which are characteristic of nanotubes. As a result of using nanotubes with a synthesis duration of 40 to 90 minutes as a modifier component, an increase in composite performance of up to 20-25% relative to the control sample is observed.
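The Selyakov-Scherrer relation mentioned above estimates crystallite size from X-ray line broadening as L = K*lambda / (beta * cos(theta)). The sketch below applies it with common defaults (Cu K-alpha wavelength, shape factor K = 0.9); the peak position and width in the example are illustrative values, not the study's measured data.

```python
import math

def scherrer_size(beta_rad, theta_rad, wavelength_nm=0.15406, k=0.9):
    """Crystallite size (nm) from X-ray line broadening via the
    (Selyakov-)Scherrer formula. beta_rad is the peak FWHM in radians
    (instrument-corrected), theta_rad the Bragg angle; Cu K-alpha
    wavelength and K = 0.9 are common defaults."""
    return k * wavelength_nm / (beta_rad * math.cos(theta_rad))

# Example: a (002) reflection near 2-theta = 26 deg with 0.01 rad FWHM.
L = scherrer_size(beta_rad=0.01, theta_rad=math.radians(13.0))
print(f"crystallite size: {L:.1f} nm")
```

Applying the formula separately to the (002)-family peaks and to in-plane reflections is what yields the Lc and La values (along and across the carbon layers) reported in the abstract.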
Architectural engineering. Structural engineering of buildings
Urbanization has profound effects on administrative boundaries, resulting in the expansion of urban areas, particularly at the periphery. This rapid growth leads to significant changes in landcover and land use, as agricultural and natural open areas are progressively transformed into densely populated urban landscapes characterized by housing, commercial infrastructure, and transportation systems.
The capital city of Jordan, Amman, faces exceptional urban growth, with its population surpassing 4.5 million people. This unprecedented expansion has given rise to extensive urban landscapes, presenting challenges for planners who lack a holistic understanding of the wide-ranging impacts.
To address these complexities and make well-informed decisions, planners urgently require comprehensive, up-to-date information on the causes, chronology, and consequences of urbanization. Integrating high-precision satellite imagery, geoinformatics data, and topographic insights offers a promising avenue to develop comprehensive inventories of urban change and growth. Such knowledge acts as a vital resource, enabling accurate assessments of expanding built-up areas and their associated implications.
Combining high-geometric-resolution satellite imagery, geoinformatics data, topographic information, and GIS can provide the information needed to build such an urban change and growth inventory, yielding a distinctive signature of built-up area changes.
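The change-inventory idea above can be sketched as a per-pixel comparison of two classified landcover rasters. This minimal illustration uses nested lists of class codes as a stand-in for real georeferenced satellite classifications processed inside a GIS; the class codes and toy rasters are invented.

```python
def builtup_change(before, after, builtup_class=1):
    """Per-pixel change signature between two classified landcover
    rasters (nested lists of class codes). Returns the count of newly
    built-up pixels and their share of the study area."""
    new_builtup = total = 0
    for row_b, row_a in zip(before, after):
        for b, a in zip(row_b, row_a):
            total += 1
            if b != builtup_class and a == builtup_class:
                new_builtup += 1
    return new_builtup, new_builtup / total

# Toy 3x3 rasters: 0 = open/agricultural land, 1 = built-up.
before = [[0, 0, 1],
          [0, 1, 1],
          [0, 0, 0]]
after  = [[0, 1, 1],
          [1, 1, 1],
          [0, 1, 0]]
pixels, share = builtup_change(before, after)
print(f"{pixels} new built-up pixels ({share:.0%} of area)")
```

Scaled to multi-date classified imagery of a city such as Amman, the same per-pixel logic, run per epoch and cross-referenced with topographic layers, produces the chronology of built-up expansion that planners need.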