The proliferation of AI-powered search engines has shifted information discovery from traditional link-based retrieval to direct answer generation with selective source citation, creating new challenges for content visibility. While existing Generative Engine Optimization (GEO) approaches focus primarily on semantic content modification, the role of structural features in influencing citation behavior remains underexplored. In this paper, we propose GEO-SFE, a systematic framework for structural feature engineering in generative engine optimization. Our approach decomposes content structure into three hierarchical levels: macro-structure (document architecture), meso-structure (information chunking), and micro-structure (visual emphasis), and models their impact on citation probability across different generative engine architectures. We develop architecture-aware optimization strategies and predictive models that preserve semantic integrity while improving structural effectiveness. Experimental evaluation across six mainstream generative engines demonstrates consistent improvements in citation rate (17.3 percent) and subjective quality (18.5 percent), validating the effectiveness and generalizability of the proposed framework. This work establishes structural optimization as a foundational component of GEO, providing a data-driven methodology for enhancing content visibility in LLM-powered information ecosystems.
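The architecture-aware citation-probability modelling summarized above can be illustrated with a minimal logistic sketch; the feature set, the weights, and the `citation_probability` function below are illustrative assumptions for exposition, not the paper's fitted model:

```python
import math

# Hypothetical structural features of a content chunk. The features and
# coefficients are assumptions chosen for illustration only.
def citation_probability(heading_depth, list_ratio, bold_count, weights, bias):
    """Logistic model mapping structural features to a citation probability."""
    features = (heading_depth, list_ratio, bold_count)
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Compare a flat paragraph against a structured, chunked version of the
# same content under the same (assumed) coefficients.
weights = (0.4, 1.2, 0.15)
flat = citation_probability(1, 0.0, 0, weights, bias=-1.0)
structured = citation_probability(3, 0.6, 4, weights, bias=-1.0)
print(flat < structured)  # structured markup scores higher under this toy model
```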
Empirical research in reverse engineering (RE) and software protection is crucial for evaluating the efficacy of methods designed to protect software against unauthorized access and tampering. However, conducting such studies with professional reverse engineers presents significant challenges, including access to professionals and affordability. This paper explores the use of students as participants in empirical reverse engineering experiments, examining their suitability and the necessary training; the design of appropriate challenges; strategies for ensuring the rigor and validity of the research and its results; ways to maintain students' privacy, motivation, and voluntary participation; and data collection methods. We present a systematic literature review of existing reverse engineering experiments and user studies, a discussion of related work from the broader domain of software engineering that applies to reverse engineering experiments, an extensive discussion of our own experience running experiments in the context of a master's-level software hacking and protection course, and recommendations based on this experience. Our findings aim to guide future empirical studies in RE, balancing practical constraints with the need for meaningful, reproducible results.
Software Engineering (SE) faces simultaneous pressure from AI automation (reducing code production costs) and hardware-energy constraints (amplifying failure costs). We position that SE must redefine itself around human discernment (intent articulation, architectural control, and verification) rather than code construction. This shift introduces accountability collapse as a central risk and requires fundamental changes to research priorities, educational curricula, and industrial practices. We argue that SE, as traditionally defined around code construction and process management, is no longer sufficient. Instead, the discipline must be redefined around intent articulation, architectural control, and systematic verification. This redefinition shifts SE from a production-oriented field to one centered on human judgment under automation, with profound implications for research, practice, and education.
The discussion around AI-Engineering, that is, Software Engineering (SE) for AI-enabled systems, cannot ignore a crucial class of software systems that are increasingly becoming AI-enhanced: those used to enable or support the SE process, such as Computer-Aided SE (CASE) tools and Integrated Development Environments (IDEs). In this paper, we study the energy efficiency of these systems. As AI becomes seamlessly available in these tools and, in many cases, is active by default, we are entering a new era with significant implications for energy consumption patterns throughout the Software Development Lifecycle (SDLC). We focus on advanced Machine Learning (ML) capabilities provided by Large Language Models (LLMs). Our proposed approach combines Retrieval-Augmented Generation (RAG) with Prompt Engineering Techniques (PETs) to enhance both the quality and energy efficiency of LLM-based code generation. We present a comprehensive framework that measures real-time energy consumption and inference time across diverse model architectures ranging from 125M to 7B parameters, including GPT-2, CodeLlama, Qwen 2.5, and DeepSeek Coder. These LLMs, chosen for practical reasons, are sufficient to validate the core ideas and provide a proof of concept for more in-depth future analysis.
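The core measurement idea, real-time energy alongside inference time, can be sketched as follows; the average-power constant and the `fake_generate` stand-in are assumptions replacing a real power meter (e.g. RAPL counters) and a real LLM call, not the paper's instrumentation:

```python
import time

# Assumed average package power draw (watts) during inference; a real
# setup would sample hardware counters instead of using a constant.
AVG_POWER_WATTS = 45.0

def fake_generate(prompt: str) -> str:
    """Stand-in for an LLM call; sleeps to mimic model latency."""
    time.sleep(0.05)
    return prompt + " ..."

def measure(prompt: str):
    """Time one inference and estimate its energy as E = P * t."""
    start = time.perf_counter()
    output = fake_generate(prompt)
    elapsed = time.perf_counter() - start
    energy_j = AVG_POWER_WATTS * elapsed
    return output, elapsed, energy_j

_, t, e = measure("def fib(n):")
print(f"inference: {t:.3f}s, est. energy: {e:.2f} J")
```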
Cláudio Lúcio do Val Lopes, João Marcus Pitta, Fabiano Belém, et al.
The integration of Artificial Intelligence (AI) into clinical settings presents a software engineering challenge, demanding a shift from isolated models to robust, governable, and reliable systems. However, industrial applications are often plagued by brittle, prototype-derived architectures and a lack of systemic oversight, creating a "responsibility vacuum" where safety and accountability are compromised. This paper presents an industry case study of the "Maria" platform, a production-grade AI system in primary healthcare that addresses this gap. Our central hypothesis is that trustworthy clinical AI is achieved through the holistic integration of four foundational engineering pillars. We present a synergistic architecture that combines Clean Architecture for maintainability with an event-driven architecture for resilience and auditability. We introduce the Agent as the primary unit of modularity, each possessing its own autonomous MLOps lifecycle. Finally, we show how a Human-in-the-Loop governance model is technically integrated not merely as a safety check, but as a critical, event-driven data source for continuous improvement. We present the platform as a reference architecture, offering practical lessons for engineers building maintainable, scalable, and accountable AI-enabled systems in high-stakes domains.
Esteban Parra, Sonia Haiduc, Preetha Chatterjee, et al.
Peer review is the main mechanism by which the software engineering community assesses the quality of scientific results. However, the rapid growth of paper submissions in software engineering venues has outpaced the availability of qualified reviewers, creating a growing imbalance that risks constraining and negatively impacting the long-term growth of the Software Engineering (SE) research community. Our vision of the Future of the SE research landscape involves a more scalable, inclusive, and resilient peer review process that incorporates additional mechanisms for: 1) attracting and training newcomers to serve as high-quality reviewers, 2) incentivizing more community members to serve as peer reviewers, and 3) cautiously integrating AI tools to support a high-quality review process.
Heterogeneity has historically existed in the organization of software engineering (SE) research, i.e., the coexistence of the funded research model and the hands-on model, and this heterogeneity has helped make SE a thriving interdisciplinary field over the last 50 years. However, the funded research model has recently become dominant in SE research, indicating that this heterogeneity is seriously and systematically threatened. In this essay, we first explain why heterogeneity is needed in the organization of SE research, then present the current trend in SE research, as well as its consequences and potential futures. The choice is in our hands, and we urge our community to seriously consider maintaining the heterogeneity in the organization of software engineering research.
The evolution of tall buildings has been shaped by distinct architectural styles, beginning around 1875 and progressing through various stylistic architectural movements. These changes were driven by advancements in structural engineering and digital design technologies, leading to greater experimentation with form and function. Energy and resource conservation concerns of the late 20th century instigated a noteworthy focus on sustainability. Beyond that, the early 21st century saw a significant shift toward a new breed of tall buildings with an architectural vocabulary suited to “high-performance” design, in which sustainability with a focus on energy efficiency is joined with the performance of other active and passive functional systems. This paper presents an overview of high-performance tall buildings by exploring key technologies, materials, innovations, safety, durability, and indoor environmental quality. Strategies that have emerged to address skyscrapers’ environmental and economic challenges are also crucial in such buildings. It highlights the importance of optimizing and integrating building systems, improving energy efficiency, minimizing resource consumption, and ensuring long-term occupant health and productivity. Furthermore, this study identifies five key dimensions (structural materials and systems, energy-efficient design, high-performance façades, performance monitoring, and integrating building services systems), demonstrating how these factors contribute to environment-conscious urban development and resilient architectural and engineering design. It is concluded that these buildings are poised to redefine urban environments by leveraging advanced technologies, AI-driven management, IoT interconnectivity, health-focused elements, and climate resilience. Also, tall, high-performance buildings will be increasingly automated to a degree not yet known, and AI will play a prominent role in their future.
The structural system is an essential component in engineering and architecture, determining the stability, strength, and functionality of buildings. This study addresses the integration of geophysical data obtained through techniques such as Multichannel Analysis of Surface Waves (MASW), microtremors, and seismic refraction in architectural and structural design, with special attention to its application in expanding urban areas and vulnerable communities. These methods allow for the characterization of the soil’s dynamic properties, identifying critical vibration periods that influence structural behavior, especially in sandy soils near rock outcrops up to 30 m deep. The discrepancy between soil vibration periods and structural periods can induce resonance phenomena, highlighting the need to incorporate geophysical analyses in the design to avoid structural failures. By using adapted equations, the dimensions of load-bearing elements like columns are optimized, considering stiffness, mass, and local seismic conditions. The results obtained through computational tools validate the effectiveness of this approach, ensuring safer and more sustainable designs. This study emphasizes the importance of merging geophysical and dynamic knowledge to optimize structural performance and promote resilience in complex geophysical environments. Incorporating soil vibration analysis not only improves building safety but also contributes to sustainable urban development, especially in regions prone to seismic events.
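The period-matching check behind the resonance concern above can be sketched with textbook approximations; the soil-column formula T_s = 4H/Vs and the empirical building period T_b ≈ 0.1N per storey are standard simplifications used here for illustration, not the paper's adapted equations:

```python
# Fundamental period of a uniform soil column over bedrock:
# T_s = 4H / Vs, with H the depth to rock and Vs the shear-wave velocity.
def soil_period(depth_m: float, vs_m_s: float) -> float:
    return 4.0 * depth_m / vs_m_s

# Rough empirical building period: ~0.1 s per storey for a frame building.
def building_period(storeys: int) -> float:
    return 0.1 * storeys

def resonance_risk(t_soil: float, t_bldg: float, tol: float = 0.2) -> bool:
    """Flag designs whose period falls within +/- tol of the site period."""
    return abs(t_soil - t_bldg) <= tol * t_soil

# Sandy soil, 30 m to rock outcrop, assumed Vs = 300 m/s -> T_s = 0.4 s.
t_s = soil_period(30.0, 300.0)
print(resonance_risk(t_s, building_period(4)))  # a 4-storey frame matches the site period
```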
Resilience has emerged as a critical focus in architecture, addressing the pressing challenges posed by climate change, rapid urbanization, and socio‐economic disruptions. This bibliometric study provides a comprehensive analysis of resilience research in architecture, leveraging data from Scopus and Web of Science to explore trends, influential contributions, and thematic clusters. Key findings highlight the evolution of resilience concepts, including adaptive reuse, climate adaptation, and socio‐ecological integration, and underscore the interdisciplinary nature of the field, bridging architecture, engineering, and social sciences. Emerging technologies, such as artificial intelligence and digital twins, are identified as transformative tools for resilience planning, with potential to enhance adaptability and robustness in architectural and urban systems. The study also maps resilience challenges across scales, from individual buildings to city‐wide systems, illustrating the interplay between environmental, structural, and social dimensions. Comparative analysis of global and local resilience strategies emphasizes the need for region‐specific approaches, particularly in underrepresented contexts like low‐income and vernacular settings. By identifying gaps, such as the lack of standardized metrics and harmonized evaluation frameworks, this research proposes future directions to support a more resilient built environment. This work contributes actionable insights for architects, urban planners, and policymakers, providing a foundation for integrating resilience into sustainable development practices. It calls for collaborative, interdisciplinary efforts to advance resilience research and address the complexities of contemporary urban challenges.
Saanchi S. Kaushal, Yishuang Wang, Shayla Hines, et al.
During an off-season tornado outbreak in December 2021, multiple states in the Midwestern United States were affected. As a result of this outbreak, multiple tornadoes formed with intensities ranging from EF-2 to EF-4 on the Enhanced Fujita Scale, causing widespread damage to the built environment. Many of the impacted buildings were listed in the National Register of Historic Places, highlighting their historical significance and connection to the area’s heritage. Given the historic relevance of these structures, this article focuses on developing an exhaustive dataset of them. Data was gathered through both on-site and virtual investigations, capturing structural details, damage assessments, and any retrofits that may have been implemented. This unique dataset serves as a valuable resource for researchers in disaster science, architectural engineering, and cultural heritage, providing insights into the impact of tornadoes on historic buildings.
Architectural design holds a pivotal role in shaping the development of university campuses. This study critically examines the extent to which the architectural plans of four campuses correspond with the conceptual framework of an educational campus. The evaluation of design quality is crucial, as it significantly influences both academic and non-academic environments, thereby contributing to the overall enhancement of higher education quality. Adopting a mixed methods research (MMR) approach, the initial phase employs the Analytic Hierarchy Process (AHP) to establish the Priority Design Principles for Educational Campuses. Subsequently, the quantitative phase focuses on formulating a decision-making model to assess the quality of the educational atmosphere through the application of a fuzzy logic system. The findings of this research reveal that the top three design criteria, identified as having the highest priority, serve as the basis for developing a comprehensive model to evaluate campus designs grounded in the educational campus paradigm. These three key criteria function as benchmarks for determining whether a campus’s architectural design satisfies the essential standards of an educational campus. An empirical assessment conducted across four university campuses indicates that only the campus design of Malikussaleh University falls short of meeting the educational campus design criteria. In contrast, the remaining three universities have successfully adhered to the principles underpinning the educational campus concept.
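The AHP prioritization step can be illustrated in miniature: derive priority weights for three hypothetical design criteria from a reciprocal pairwise comparison matrix via power iteration. The matrix values and criteria labels below are assumptions for demonstration, not the study's elicited judgments:

```python
def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a positive pairwise
    comparison matrix by repeated multiplication and normalization."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Pairwise judgments on Saaty's 1-9 scale for three hypothetical
# criteria A, B, C (reciprocal matrix: m[j][i] = 1 / m[i][j]).
pairwise = [
    [1.0,   3.0,   5.0],   # A vs A, B, C
    [1 / 3, 1.0,   2.0],   # B vs A, B, C
    [1 / 5, 1 / 2, 1.0],   # C vs A, B, C
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # priorities sum to 1, A ranked highest
```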
Agile software development relies on self-organized teams, underlining the importance of individual responsibility. How developers take responsibility and build ownership is influenced by external factors such as architecture and development methods. This paper examines the existing literature on ownership in software engineering and in psychology, and argues that a more comprehensive view of ownership in software engineering has great potential to improve software teams' work. Initial positions on the issue are offered for discussion and to lay foundations for further research.
Nasir U. Eisty, Jeffrey C. Carver, Johanna Cohoon, et al.
In the evolving landscape of scientific and scholarly research, effective collaboration between Research Software Engineers (RSEs) and Software Engineering Researchers (SERs) is pivotal for advancing innovation and ensuring the integrity of computational methodologies. This paper presents ten strategic guidelines aimed at fostering productive partnerships between these two distinct yet complementary communities. The guidelines emphasize the importance of recognizing and respecting the cultural and operational differences between RSEs and SERs, proactively initiating and nurturing collaborations, and engaging within each other's professional environments. They advocate for identifying shared challenges, maintaining openness to emerging problems, ensuring mutual benefits, and serving as advocates for one another. Additionally, the guidelines highlight the necessity of vigilance in monitoring collaboration dynamics, securing institutional support, and defining clear, shared objectives. By adhering to these principles, RSEs and SERs can build synergistic relationships that enhance the quality and impact of research outcomes.
Large Language Models (LLMs) are increasingly integrated into software applications, giving rise to a broad class of prompt-enabled systems, in which prompts serve as the primary 'programming' interface for guiding system behavior. Building on this trend, a new software paradigm, promptware, has emerged, which treats natural language prompts as first-class software artifacts for interacting with LLMs. Unlike traditional software, which relies on formal programming languages and deterministic runtime environments, promptware is based on ambiguous, unstructured, and context-dependent natural language and operates on LLMs as runtime environments, which are probabilistic and non-deterministic. These fundamental differences introduce unique challenges in prompt development. In practice, prompt development remains largely ad hoc and relies heavily on time-consuming trial-and-error, a challenge we term the promptware crisis. To address this, we propose promptware engineering, a new methodology that adapts established Software Engineering (SE) principles to prompt development. Drawing on decades of success in traditional SE, we envision a systematic framework encompassing prompt requirements engineering, design, implementation, testing, debugging, evolution, deployment, and monitoring. Our framework re-contextualizes emerging prompt-related challenges within the SE lifecycle, providing principled guidance beyond ad-hoc practices. Without the SE discipline, prompt development is likely to remain mired in trial-and-error. This paper outlines a comprehensive roadmap for promptware engineering, identifying key research directions and offering actionable insights to advance the development of prompt-enabled systems.
The rapid emergence of generative AI models such as Large Language Models (LLMs) has demonstrated their utility across various activities, including Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. However, existing literature provides limited guidance on how prompt engineering can be leveraged specifically for RE activities. The objective of this study is to explore the applicability of existing prompt engineering guidelines for the effective usage of LLMs within RE. To achieve this goal, we began by conducting a systematic review of primary literature to compile a non-exhaustive list of prompt engineering guidelines. Then, we conducted interviews with RE experts to present the extracted guidelines and gain insights into the advantages and limitations of their application within RE. Our literature review indicates a shortage of prompt engineering guidelines for domain-specific activities, specifically for RE. Our proposed mapping contributes to addressing this shortage. We conclude our study by identifying an important future line of research within this field.
Large language model-specific inference engines (in short, LLM inference engines) have become a fundamental component of modern AI infrastructure, enabling the deployment of LLM-powered applications (LLM apps) across cloud and local devices. Despite their critical role, LLM inference engines are prone to bugs due to the immense resource demands of LLMs and the complexities of cross-platform compatibility. However, a systematic understanding of these bugs remains lacking. To bridge this gap, we present the first empirical study on bugs in LLM inference engines. We mine the official repositories of 5 widely adopted LLM inference engines, constructing a comprehensive dataset of 929 real-world bugs. Through a rigorous open coding process, we analyze these bugs to uncover their symptoms, root causes, commonality, fix effort, fix strategies, and temporal evolution. Our findings reveal six bug symptom types and a taxonomy of 28 root causes, shedding light on the key challenges in bug detection and localization within LLM inference engines. Based on these insights, we propose a series of actionable implications for researchers, inference engine vendors, and LLM app developers, along with general guidelines for developing LLM inference engines.
Space efficiency in Singaporean tall buildings results from a complex interplay of historical, architectural, engineering, technological, socioeconomic, and environmental factors. The city-state’s innovative and adaptive approach has enabled it to overcome the challenges associated with skyscraper construction, leading to the development of some of the most advanced and sustainable high-rise structures in the world. However, there is currently a lack of detailed analysis of space utilization in Singaporean high-rise buildings. This study addresses this gap by examining 63 cases. The main findings of this research are as follows: (1) residential functions, central core layouts, and prismatic shapes are the most frequent; (2) concrete with a shear-walled frame system is the preferred structural choice; (3) average spatial efficiency is 80%, and the core-to-GFA (Gross Floor Area) ratio averages 17%, with these metrics ranging from minima of 68% and 5% to maxima of 91% and 32%, respectively. These insights offer valuable guidance for Singaporean construction professionals, particularly architects, helping them make informed design decisions for high-rise projects.
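The two metrics reported above are simple area ratios; this minimal sketch shows the computation on hypothetical floor-plate numbers (all area values are illustrative, chosen to land on the study's reported averages):

```python
# Spatial efficiency: share of the gross floor area that is usable space.
def spatial_efficiency(usable_area: float, gross_floor_area: float) -> float:
    return usable_area / gross_floor_area

# Core-to-GFA ratio: share of the gross floor area taken by the service core.
def core_to_gfa(core_area: float, gross_floor_area: float) -> float:
    return core_area / gross_floor_area

gfa = 1500.0     # m^2, gross floor area of one typical floor (assumed)
core = 255.0     # m^2, service core footprint (assumed)
usable = 1200.0  # m^2, net usable area (assumed)
print(f"efficiency: {spatial_efficiency(usable, gfa):.0%}")  # 80%
print(f"core/GFA:   {core_to_gfa(core, gfa):.0%}")           # 17%
```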
Masonry buildings constitute a large part of the European building heritage. This building stock often presents plan or vertical irregularity, generally caused by the architectural and structural modifications undergone over time. In the context of historical city centres, the most recurrent irregularity is the vertical one, due to sudden variations in the mass, stiffness (and strength) of walls along the building height. In particular, in the case of Florence’s city centre (Italy), vertical irregularity is caused by the removal of large portions of masonry walls at the ground floor as a consequence of the changed use of these parts of the building; the functional modification of the openings scheme at the different levels of the building due to the internal renovation of the flats; and rooftop additions. In this paper, vertical irregularity in historical masonry buildings is investigated through the analysis of single masonry walls. A simplified numerical procedure is adopted in order to evaluate the influence of vertical irregularity on the seismic response of masonry walls along the building height. The masonry structure is modelled as an assemblage of rigid and infinitely strong blocks, linked to one another and to the soil by means of deformable joints. Numerical results demonstrate that this simplified procedure is able to predict the behaviour of masonry walls both before and after the typical structural modifications which particularly affected the historical buildings of Florence’s city centre. This simplified procedure is suggested as a useful tool for both research purposes and professional practice.