Covalent organic frameworks (COFs) have emerged as promising candidates for electrocatalysis and photocatalysis applications due to their structurally ordered architectures and tunable physicochemical properties. In COFs, organic building blocks are linked via covalent bonds, and the structural and electronic characteristics of COFs are critically governed by their linkage chemistry. These linkages influence essential material attributes including surface area, crystallinity, hydrophobicity, chemical stability, and optoelectronic behavior (e.g., photoelectron separation efficiency, electron conductivity, and reductive activity), which collectively determine catalytic performance in energy conversion systems. A systematic understanding of linkage engineering in COFs not only advances synthetic methodologies but also provides innovative solutions to global energy and environmental challenges, thereby accelerating the development of sustainable technologies for clean energy production and environmental remediation.
Diego Firmenich, Leandro Antonelli, Bruno Pazos
et al.
User stories are one of the most widely used artifacts in the software industry to define functional requirements. In parallel, the use of high-fidelity mockups facilitates end-user participation in defining their needs. In this work, we explore how combining these techniques with large language models (LLMs) enables agile and automated generation of user stories from mockups. To this end, we present a case study that analyzes the ability of LLMs to extract user stories from high-fidelity mockups, both with and without the inclusion of a Language Extended Lexicon (LEL) glossary in the prompts. Our results demonstrate that incorporating the LEL significantly enhances the accuracy and suitability of the generated user stories. This approach represents a step forward in the integration of AI into requirements engineering, with the potential to improve communication between users and developers.
Replication packages are crucial for enabling transparency, validation, and reuse in software engineering (SE) research. While artifact sharing is now a standard practice and even expected at premier SE venues such as ICSE, the practical usability of these replication packages remains underexplored. In particular, there is a marked lack of studies that comprehensively examine the executability and reproducibility of replication packages in SE research. In this paper, we aim to fill this gap by evaluating 100 replication packages published in ICSE proceedings over the past decade (2015–2024). We assess the (1) executability of the replication packages, (2) efforts and modifications required to execute them, (3) challenges that prevent executability, and (4) reproducibility of the original findings for those that are executable. We spent approximately 650 person-hours in total to execute the artifacts and reproduce the study findings. Our analysis shows that only 40 of the 100 evaluated artifacts were fully executable. Among these, 32.5% ran without any modification. However, even executable artifacts required varying levels of effort: 17.5% required low effort, while 82.5% required moderate to high effort to execute successfully. We identified five common types of modifications and 13 challenges that lead to execution failure, encompassing environmental, documentation, and structural issues. Among the executable artifacts, only 35% (14 out of 40) reproduced the original results. These findings highlight a notable gap between artifact availability, executability, and reproducibility. Our study proposes three actionable guidelines to improve the preparation, documentation, and review of research artifacts, thereby strengthening the rigor and sustainability of open science practices in SE research.
This research analyzes the structural vulnerability and seismic influence of self-built buildings in the Los Constructores Human Settlement in Nuevo Chimbote, with the aim of identifying the factors that increase structural risk during seismic events. An applied methodology was used that integrated technical inspections, geotechnical tests, and structural modeling using ETABS, evaluating both soil conditions and the dynamic behavior of representative dwellings. Field and laboratory results determined that the foundation soil corresponds to a rigid S2 type soil, which rules out failures due to bearing capacity and concentrates vulnerability in the structure. The vulnerability assessment revealed a medium level of vulnerability, mainly associated with geometric deficiencies and the absence of adequate confinement elements. The structural analysis confirmed that insufficient lateral stiffness in the X direction is the critical mechanism that triggers high drifts and brittle structural behavior under seismic demand. It is concluded that the self-built buildings in the study area present a significant risk in the event of earthquakes, due to the low density of walls and the lack of technical criteria during their construction, which highlights the need to apply specific structural reinforcement strategies.
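The drift check underlying findings like the one above reduces to a simple calculation: inter-story drift ratios computed from lateral story displacements and compared against a code limit. The story heights, displacements, and the 0.007 reinforced-concrete drift limit in the sketch below are illustrative assumptions, not values taken from the study.

```python
# Illustrative inter-story drift check (not the study's actual model).
# The 0.007 limit corresponds to the reinforced-concrete drift limit in
# Peru's E.030 seismic design standard; heights/displacements are made up.
story_heights = [2.6, 2.6, 2.6]                 # m, base to roof (assumed)
displacements = [0.0, 0.012, 0.031, 0.055]      # m, lateral, per floor (assumed)

LIMIT = 0.007                                   # allowable drift ratio
drifts = [(displacements[i + 1] - displacements[i]) / h
          for i, h in enumerate(story_heights)]
violations = [i + 1 for i, d in enumerate(drifts) if d > LIMIT]
```

With these assumed values, the upper two stories exceed the limit, which is the kind of result that flags insufficient lateral stiffness in one direction.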
Piero Colajanni, Laura Inzerillo, Alessandro Pisciotta
et al.
Achieving reliable interoperability between architectural and structural models remains one of the main challenges in BIM-based design workflows. Despite the widespread adoption of Building Information Modeling, the automatic transfer of information between modeling software and FEM analysis tools continues to generate inconsistencies, information loss, and the need for manual interventions. This study examines these issues through the case study of a reinforced-concrete residential building located in Palermo, used to evaluate BIM-to-FEM exchanges between Revit®, Robot Structural Analysis®, PRO_SAP®, and JASP®. The interoperability tests highlight significant limitations in both native and IFC-based workflows. The direct Revit–Robot link ensures good geometric consistency but still requires manual correction of analytical axes, connections, and boundary conditions. Indirect transfers via IFC exhibit greater instability: both IFC2x3 Coordination View 2.0 and IFC4 Reference View show difficulties in correctly interpreting structural elements and do not adequately preserve analytical relationships, resulting in unconnected slabs, disconnected nodes, and missing constraint information. In PRO_SAP®, several elements are also absent after IFC import. To address these issues, the study proposes a workflow based on the integration of Revit® and JASP® aimed at generating a reliable federated model. This model was further validated in Navisworks®, Solibri Anywhere®, BIM Vision®, and Enscape® to assess its correct interpretation across different software environments. This approach enhances interdisciplinary coordination, supports clash detection, facilitates immersive VR-based review, and centralizes architectural, structural, and MEP models into a unified environment. The results show that structured workflows and careful management of native and IFC transfers significantly improve model reliability and reduce design inconsistencies.
The 5th Industrial Revolution (5IR) is reshaping the global business landscape by integrating artificial intelligence, robotics, and the Internet of Things with a renewed focus on human-centered innovation. Talent management (TM), traditionally regarded as a human resources function, must re-envision itself within this paradigm. This paper develops a conceptual framework that applies systems thinking and design thinking to talent management in the context of the 5IR, enabling organizations to remain agile, innovative, and resilient. Systems thinking offers a holistic perspective on understanding the interconnections within the talent ecosystem, while design thinking promotes creative, empathetic, and human-centered solutions. Drawing on recent research on coopetition in SMEs, project-based talent development, global talent practices, and digital readiness in the public sector, the framework highlights the importance of upskilling, leadership support, and the responsible adoption of AI. The outcomes suggest that organizations should adopt holistic and adaptive talent management practices to address skills gaps, foster innovation, and maintain a competitive advantage in the rapidly evolving global environment.
Context: Jupyter Notebook has emerged as a versatile tool that transforms how researchers, developers, and data scientists conduct and communicate their work. As the adoption of Jupyter notebooks continues to rise, so does the interest from the software engineering research community in improving the software engineering practices for Jupyter notebooks. Objective: The purpose of this study is to analyze trends, gaps, and methodologies used in software engineering research on Jupyter notebooks. Method: We selected 146 relevant publications from the DBLP Computer Science Bibliography up to the end of 2024, following established systematic literature review guidelines. We explored publication trends, categorized them based on software engineering topics, and reported findings based on those topics. Results: The most popular venues for publishing software engineering research on Jupyter notebooks are related to human-computer interaction instead of traditional software engineering venues. Researchers have addressed a wide range of software engineering topics on notebooks, such as code reuse, readability, and execution environment. Although reusability is one of the research topics for Jupyter notebooks, only 64 of the 146 studies can be reused based on their provided URLs. Additionally, most replication packages are not hosted on permanent repositories for long-term availability and adherence to open science principles. Conclusion: Solutions specific to notebooks for software engineering issues, including testing, refactoring, and documentation, are underexplored. Future research opportunities exist in automatic testing frameworks, refactoring clones between notebooks, and generating group documentation for coherent code cells.
Bianca Trinkenreich, Fabio Calefato, Geir Hanssen
et al.
The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to fundamentally disrupt how research is conducted in the field. While perspectives on this transformation range from viewing LLMs as mere productivity tools to considering them revolutionary forces, we argue that the SE research community must proactively engage with and shape the integration of LLMs into research practices, emphasizing human agency in this transformation. As LLMs rapidly become integral to SE research - both as tools that support investigations and as subjects of study - a human-centric perspective is essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility, and driving advancements in the field. Drawing from discussions at the 2nd Copenhagen Symposium on Human-Centered AI in SE, this position paper employs McLuhan's Tetrad of Media Laws to analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities through accelerated ideation and automated processes, make some traditional research practices obsolete, retrieve valuable aspects of historical research approaches, and risk reversal effects when taken to extremes. Our analysis reveals opportunities for innovation and potential pitfalls that require careful consideration. We conclude with a call to action for the SE research community to proactively harness the benefits of LLMs while developing frameworks and guidelines to mitigate their risks, to ensure continued rigor and impact of research in an AI-augmented future.
Hydro-Science and Engineering (Hydro-SE) is a critical and irreplaceable domain that secures human water supply, generates clean hydropower energy, and mitigates flood and drought disasters. Featuring multiple engineering objectives, Hydro-SE is an inherently interdisciplinary domain that integrates scientific knowledge with engineering expertise. This integration necessitates extensive expert collaboration in decision-making, which poses challenges for intelligent decision support. With the rapid advancement of large language models (LLMs), their potential application in the Hydro-SE domain is being increasingly explored. However, the knowledge and application abilities of LLMs in Hydro-SE have not been sufficiently evaluated. To address this issue, we propose the Hydro-SE LLM evaluation benchmark (Hydro-SE Bench), which contains 4,000 multiple-choice questions. Hydro-SE Bench covers nine subfields and enables evaluation of LLMs in terms of basic conceptual knowledge, engineering application ability, and reasoning and calculation ability. The evaluation results on Hydro-SE Bench show that accuracy ranges from 0.74 to 0.80 for commercial LLMs and from 0.41 to 0.68 for small-parameter LLMs. While LLMs perform well in subfields closely related to the natural and physical sciences, they struggle with domain-specific knowledge such as industry standards and hydraulic structures. Model scaling mainly improves reasoning and calculation abilities, but there is still great potential for LLMs to better handle problems in practical engineering applications. This study highlights the strengths and weaknesses of LLMs for Hydro-SE tasks, providing model developers with clear training targets and Hydro-SE researchers with practical guidance for applying LLMs.
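Scoring a multiple-choice benchmark of this kind reduces to per-subfield accuracy over predicted answers. The sketch below shows one way to compute it; the items, subfield names, and answers are illustrative and not drawn from Hydro-SE Bench.

```python
# Minimal per-subfield accuracy scoring for a multiple-choice benchmark.
# Item contents are hypothetical; Hydro-SE Bench questions are not reproduced.
from collections import defaultdict

def accuracy_by_subfield(items, predictions):
    """items: dicts with 'subfield' and gold 'answer'; predictions: letters."""
    correct, total = defaultdict(int), defaultdict(int)
    for item, pred in zip(items, predictions):
        total[item["subfield"]] += 1
        if pred == item["answer"]:
            correct[item["subfield"]] += 1
    return {s: correct[s] / total[s] for s in total}

items = [{"subfield": "hydrology", "answer": "B"},
         {"subfield": "hydrology", "answer": "D"},
         {"subfield": "hydraulic structures", "answer": "A"}]
scores = accuracy_by_subfield(items, ["B", "A", "A"])
```

Aggregating per subfield, rather than over all 4,000 questions at once, is what exposes the gap between science-adjacent subfields and domain-specific ones such as hydraulic structures.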
Metallic yielding devices have been widely used to improve the seismic performance of buildings. However, metallic dampers currently in use are often attached to structural systems through brace components, potentially causing conflicts with architectural requirements. In this study, a metallic damper that utilizes the angular deformation generated at the beam-column connection under lateral loads is proposed. The seismic input energy is dissipated through inelastic deformations of hyperbolic-shaped steel bars. First, this paper introduces the configuration and design concept of the newly proposed rotation-based metallic damper (RMD). Then, to investigate the hysteretic behavior and failure modes of the proposed devices, a total of twelve RMD specimens were fabricated and quasi-static tests were conducted. Subsequently, the influences of the physical characteristics of the hyperbolic-shaped steel bars on the energy dissipation performance of the RMD were studied. Finally, finite element analysis was conducted based on detailed models of the RMD specimens, and the results showed good agreement with the experimental data. The results demonstrate that the RMD exhibits sound energy dissipation capacity. Owing to its low space requirements, it is replaceable and flexible in architectural arrangement, which makes it well suited to engineering practice.
Victor ADIR, Nicoleta Elisabeta PASCU, George ADIR
et al.
This paper explains the importance of general and special principles in logo design. The study examined a large set of logos to understand when a designer can apply these principles to create compelling graphic representations: symmetry, asymmetry, proportion, rhythm and harmony, substitution, juxtaposition, the use of different geometric shapes, lines, curves, silhouettes and stylizations, mirror and illustrative representation, and so on. We explain how to use graphic representations in various fields of activity and how to choose the best symbol for a company, university, etc. A significant part of the paper is devoted to redesign work. The paper closes with interpretations and conclusions concerning the design principles applied to logos.
Architectural engineering. Structural engineering of buildings, Engineering design
Starting from works that combine art and architecture, this essay discusses ways in which contemporary perception can be blurred and dazzled when dealing with historical structures of broad subjective, social, and political effects. Two works produced at the beginning of the 21st century, promoting experiences characteristic of an era of indeterminacy, address such effects allegorically: the immersive Blur Building, by Elizabeth Diller and Ricardo Scofidio, and The Weather Project, by Olafur Eliasson. In the face of the unbridled instability of the contemporary, the essay thus seeks to consider its possible manifestations through works that, charged with ambiguities, form allegories that may help decipher a prevailing and expansive system in a global context.
Architectural engineering. Structural engineering of buildings
Understanding how cooperation emerges in public goods games is crucial for addressing societal challenges. While optional participation can establish cooperation without identifying cooperators, it relies on specific assumptions -- that individuals abstain and receive a non-negative payoff, or that non-participants damage the public good -- which limits our understanding of its broader role. We generalize this mechanism by considering non-participants' payoffs and their potential direct influence on public goods, allowing us to examine how various strategic motives for non-participation affect cooperation. Using replicator dynamics, we find that cooperation thrives only when non-participants are motivated by individualistic or prosocial values, with individualistic motivations yielding optimal cooperation. These findings are robust to mutation, which slightly enlarges the region where cooperation can be maintained through cyclic dominance among strategies. Our results suggest that while optional participation can benefit cooperation, its effectiveness is limited, underscoring the shortcomings of bottom-up schemes in supporting public goods.
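As a rough illustration of the kind of replicator dynamics involved, the sketch below simulates a mean-field optional public goods game with cooperators, defectors, and non-participating loners who earn a fixed payoff sigma. The payoff structure and parameter values are simplifying assumptions for illustration, not the generalized model analyzed in the paper.

```python
import numpy as np

# Mean-field sketch: cooperators (C) pay cost c into a public good with
# multiplication factor r; defectors (D) free-ride; loners (L) opt out for
# a fixed payoff sigma. All values here are illustrative assumptions.
def payoffs(x, r=3.0, c=1.0, sigma=0.3):
    xc, xd, xl = x
    part = xc + xd                      # fraction of the population that plays
    if part < 1e-12:                    # nobody participates: all earn sigma
        return np.array([sigma, sigma, sigma])
    p = xc / part                       # cooperator share among participants
    benefit = r * c * p                 # per-capita return of the public good
    return np.array([benefit - c, benefit, sigma])

def replicator_step(x, dt=0.01, **kw):
    f = payoffs(x, **kw)
    fbar = x @ f                        # population-average payoff
    x = x + dt * x * (f - fbar)         # Euler step of replicator dynamics
    return x / x.sum()                  # renormalize onto the simplex

# Iterate from a mixed initial state.
x = np.array([0.3, 0.4, 0.3])
for _ in range(20000):
    x = replicator_step(x)
```

In this stripped-down version defectors always out-earn cooperators by the cost c, so cooperation collapses and loners take over; reproducing the cyclic dominance discussed in the paper requires the richer group-sampling payoffs that the abstract generalizes.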
Óscar Pedreira, Félix García, Mario Piattini
et al.
Gamification has been applied in software engineering to improve quality and results by increasing people's motivation and engagement. A systematic mapping has identified research gaps in the field, one of them being the difficulty of creating an integrated gamified environment comprising all the tools of an organization, since most existing gamified tools are custom developments or prototypes. In this paper, we propose a gamification software architecture that allows us to transform the work environment of a software organization into an integrated gamified environment, i.e., the organization can maintain its tools, and the rewards obtained by the users for their actions in different tools will accumulate. We developed a gamification engine based on our proposal, and we carried out a case study in which we applied it in a real software development company. The case study shows that the gamification engine has allowed the company to create a gamified workplace by integrating custom-developed tools and off-the-shelf tools such as Redmine, TestLink, or JUnit with the gamification engine. Two main advantages can be highlighted: (i) our solution allows the organization to maintain its current tools, and (ii) the rewards for actions in any tool accumulate in a centralized gamified environment.
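The core aggregation idea (tool-agnostic events mapped to points in one central store, so rewards from Redmine, TestLink, JUnit, etc. accumulate in one place) can be sketched as follows. The class, rule names, and point values are hypothetical illustrations, not the engine's actual interface.

```python
# Hypothetical sketch of a centralized gamification engine: tool-specific
# events are mapped by rules to points that accumulate per user, regardless
# of which tool produced them. Rules and point values are illustrative.
from collections import defaultdict

class GamificationEngine:
    def __init__(self, rules):
        self.rules = rules                      # (tool, action) -> points
        self.scores = defaultdict(int)          # user -> accumulated points

    def record_event(self, user, tool, action):
        points = self.rules.get((tool, action), 0)
        self.scores[user] += points
        return points

engine = GamificationEngine({
    ("Redmine", "close_issue"): 10,
    ("TestLink", "pass_test"): 5,
    ("JUnit", "fix_failing_build"): 20,
})
engine.record_event("alice", "Redmine", "close_issue")
engine.record_event("alice", "JUnit", "fix_failing_build")
engine.record_event("bob", "TestLink", "pass_test")
```

Because the rule table is the only tool-specific part, new tools can be integrated by adding rules rather than by modifying the engine, which is what lets the organization keep its existing toolchain.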
Bayesian Optimization (BO) is a foundational strategy in engineering design optimization for efficiently handling black-box functions with many constraints and expensive evaluations. This paper introduces a fast and accurate BO framework that leverages Pre-trained Transformers for Bayesian Optimization (PFN4sBO) to address constrained optimization problems in engineering. Unlike traditional BO methods that rely heavily on Gaussian Processes (GPs), our approach utilizes Prior-data Fitted Networks (PFNs), a type of pre-trained transformer, to infer constraints and optimal solutions without requiring any iterative retraining. We demonstrate the effectiveness of PFN-based BO through a comprehensive benchmark of fifteen test problems, encompassing synthetic, structural, and engineering design challenges. Our findings reveal that PFN-based BO is an order of magnitude faster than Constrained Expected Improvement and penalty-based GP methods while also outperforming them in the accuracy of identifying feasible, optimal solutions. This work showcases the potential of integrating machine learning with optimization techniques for solving complex engineering challenges, and it opens the path to using PFN-based BO for other challenging problems, such as user-guided interactive BO, adaptive experiment design, and multi-objective design optimization. Additionally, we establish a benchmark for evaluating BO algorithms in engineering design, offering a robust platform for future research and development in the field. This benchmark framework will be published at https://github.com/rosenyu304/BOEngineeringBenchmark.
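For context, the classical GP baseline mentioned above (Constrained Expected Improvement, where an expected-improvement score is weighted by the probability that the constraint is satisfied) can be sketched on a toy 1-D problem. This illustrates the baseline the PFN approach is compared against, not PFN4sBO itself; the objective, constraint, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy constrained BO: minimize f(x) subject to g(x) <= 0 on [0, 2],
# using EI weighted by the GP probability of feasibility.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + x**2          # black-box objective (assumed)
g = lambda x: 0.5 - x                       # constraint: feasible when x >= 0.5

X = rng.uniform(0, 2, size=(6, 1))          # initial design
for _ in range(15):
    gp_f = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, f(X).ravel())
    gp_g = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, g(X).ravel())

    cand = np.linspace(0, 2, 200).reshape(-1, 1)
    mu_f, sd_f = gp_f.predict(cand, return_std=True)
    mu_g, sd_g = gp_g.predict(cand, return_std=True)

    feas = g(X).ravel() <= 0
    best = f(X).ravel()[feas].min() if feas.any() else f(X).ravel().min()
    z = (best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)   # expected improvement
    pof = norm.cdf((0 - mu_g) / np.maximum(sd_g, 1e-9))     # P(g <= 0)
    X = np.vstack([X, cand[np.argmax(ei * pof)]])           # next evaluation

feas = g(X).ravel() <= 0
best_x = X[feas][np.argmin(f(X[feas]).ravel())]             # best feasible point
```

Each iteration refits both surrogate GPs from scratch; the speed advantage claimed for PFNs comes precisely from replacing this per-iteration refitting with a single forward pass of a pre-trained transformer.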
Artificial construction of the tendon-to-bone interface remains a formidable challenge in tissue engineering owing to its structural complexity. In this work, bioinspired calcium silicate nanowire and alginate composite hydrogels are utilized as building blocks to construct multiscale hierarchical bioactive scaffolds for versatile tissue engineering from tendon to bone. By integrating 3D printing technology and mechanical stretch post-treatment under confined conditions, the obtained composite hydrogels possess bioinspired reinforcement architectures from the nano- to the submicron- to the microscale, with significantly enhanced mechanical properties. The biochemical and topographical cues of the composite hydrogel scaffolds provide a much more favorable microenvironment for rabbit bone mesenchymal stem cells and rabbit tendon stem cells, leading to ordered alignment and improved differentiation. The composite hydrogels markedly promote in vivo tissue regeneration from bone to tendon, especially of the fibrocartilage transitional tissue. Therefore, such calcium silicate nanowire/alginate composite hydrogels with multiscale hierarchical structures have potential application in tissue regeneration from tendon to bone. This work provides an innovative strategy to construct multiscale hierarchical architecture-based scaffolds for tendon/bone engineering.
Conspectus
Excitons are the molecular-scale currency of electronic energy. Control over excitons enables energy to be directed and harnessed for light harvesting, electronics, and sensing. Excitonic circuits achieve such control by arranging electronically active molecules to prescribe desired spatiotemporal dynamics. Photosynthetic solar energy conversion is a canonical example of the power of excitonic circuits, where chromophores are positioned in a protein scaffold to perform efficient light capture, energy transport, and charge separation. Synthetic systems that aim to emulate this functionality include self-assembled aggregates, molecular crystals, and chromophore-modified proteins. While the potential of this approach is clear, these systems lack the structural precision to control excitons or even test the limits of their power. In recent years, DNA origami has emerged as a designer material that exploits biological building blocks to construct nanoscale architectures. The structural precision afforded by DNA origami has enabled the pursuit of naturally inspired organizational principles in a highly precise and scalable manner. In this Account, we describe recent developments in DNA-based platforms that spatially organize chromophores to construct tunable excitonic systems. The high fidelity of DNA base pairing enables the formation of programmable nanoscale architectures, and sequence-specific placement allows for the precise positioning of chromophores within the DNA structure. The integration of a wide range of chromophores across the visible spectrum introduces spectral tunability.
These excitonic DNA-chromophore assemblies not only serve as model systems for light harvesting, solar conversion, and sensing but also lay the groundwork for the integration of coupled chromophores into larger-scale nucleic acid architectures. We have used this approach to generate DNA-chromophore assemblies of strongly coupled delocalized excited states through both sequence-specific self-assembly and the covalent attachment of chromophores. These strategies have been leveraged to independently control excitonic coupling and system-bath interaction, which together control energy transfer. We then extended this framework to identify how scaffold configurations can steer the formation of symmetry-breaking charge transfer states, paving the way toward the design of dual light-harvesting and charge separation DNA machinery. In an orthogonal application, we used the programmability of DNA chromophore assemblies to change the optical emission properties of strongly coupled dimers, generating a series of fluorophore-modified constructs with separable emission properties for fluorescence assays. Upcoming advances in the chemical modification of nucleotides, design of large-scale DNA origami, and predictive computational methods will aid in constructing excitonic assemblies for optical and computing applications. Collectively, the development of DNA-chromophore assemblies as a platform for excitonic circuitry offers a pathway to identifying and applying design principles for light harvesting and molecular electronics.
Building information modeling (BIM) mandates are becoming more widespread because BIM allows design and construction teams to operate more productively and also enables them to collect the data they generate during the process for use in operations and maintenance tasks. As a result, professionals in the architecture, engineering and construction (AEC) industries are expected to possess excellent BIM expertise. Although the developing world has largely not adopted BIM, many studies have been conducted on BIM usage, awareness, drivers and barriers with a focus on the developing world. Numerous studies have pointed to the professionals' lack of BIM expertise in the developing world's AEC sector as a major barrier to BIM deployment. Nevertheless, no research has been conducted to assess the variables impacting the level of BIM expertise among professionals. After a detailed review of the literature, the study developed five study hypotheses and created a conceptual model to help assess the variables impacting the level of BIM expertise of professionals in the AEC industry in the developing world. A questionnaire survey was then carried out to collect data from 103 seasoned professionals in the Ghanaian construction industry. Statistical analyses, including the nonparametric Kruskal–Wallis, pairwise post hoc Dunn and Mann–Whitney tests, together with Pearson's correlation and partial least squares structural equation modeling (PLS-SEM), were adopted to assess the relationships between the level of BIM expertise of professionals (BE) and the following variables: (1) profession (P), (2) the frequency of BIM use by professionals (BF), (3) the highest dimension of BIM adopted by AEC firms and companies (BD), (4) professionals' perception of BIM (PB) and (5) the BIM implementation barriers (BIMIBs). P, BF, BD and PB were found to have a substantial impact on the level of BIM expertise acquired by professionals.
With regard to professionals' perception of the BIM software and process, only one item (PB3: BIM is not useful to our company at the moment) out of ten was found to have a significant impact on BE, highlighting the influence of employers on the level of BIM expertise of professionals. In addition, the study found that any attempt to tackle the insufficient level of BIM expertise among professionals would prove futile without significant effort from the higher education sector (HES) of the developing world and of the world at large. The study's conceptual, empirical, managerial and theoretical implications and findings can serve as a roadmap for researchers, professionals and academics in developing nations as they seek further ways of increasing BIM expertise among professionals and of encouraging BIM usage throughout the project lifecycle.