The wick structure is a core component of a heat pipe (HP) and one of the critical factors dictating HP performance. Consequently, this paper proposes a three-segmented composite wick structure manufactured from 80–100 mesh, 60–80 mesh, and 40–60 mesh copper powder. The influences of the filling ratio and the testing direction on the heat transport performance of an HP equipped with the segmented composite wick are explored experimentally, and the results are compared against those of HPs featuring single wicks. The testing direction has a notable effect on the thermal behavior of the HP. The maximum heat transfer capacities of the HP using 40–60 mesh copper powder in the evaporation section (S1-P2) and of that using 80–100 mesh copper powder in the evaporation section (S1-P1) are >90 W and 45 W, respectively. Compared with S1-P1, S1-P2 raises the maximum heat transfer capacity by 100%, while its average thermal resistance is no more than 0.028 °C/W. Compared with single wicks, the three-segmented composite wick design effectively lowers the thermal resistance of the HP while boosting the heat transfer capacity. This research provides a valuable reference for optimizing HP performance.
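The thermal resistance figure quoted above follows the standard definition R = (T_evap − T_cond)/Q. A minimal sketch of that calculation (the temperature and power values below are illustrative, not measurements from the paper):

```python
def thermal_resistance(t_evap_c, t_cond_c, heat_load_w):
    """Heat pipe thermal resistance: R = (T_evap - T_cond) / Q, in deg C/W."""
    return (t_evap_c - t_cond_c) / heat_load_w

# Illustrative: a 2.5 deg C evaporator-condenser drop at a 90 W heat load
r = thermal_resistance(52.5, 50.0, 90.0)
print(f"R = {r:.4f} deg C/W")  # on the order of the reported 0.028 deg C/W
```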
Pablo Merino-Muñoz, Felipe Hermosilla-Palma, Nicolás Gómez-Álvarez
et al.
<b>Background/Objectives</b>: Groin and hip injuries are common in sport, and muscle weakness has been identified as an intrinsic risk factor, making assessment of hip musculature strength important. To date, no hip adductor isometric strength test performed on force platforms has been described. This study aims to analyze the intra-test reliability of a hip adductor strength test using force platforms. <b>Methods:</b> The study sample comprised 13 male professional soccer players with an average age of 22.3 ± 3 years, body mass of 75.8 ± 5.4 kg, and height of 1.8 ± 0.1 m. Assessments were conducted on a uniaxial force platform. The variables analyzed were peak force (PF), rate of force development (RFD), and impulse. Intra-test reliability was evaluated using the coefficient of variation (CV), the intraclass correlation coefficient (ICC), and Bland–Altman plots. <b>Results:</b> Acceptable absolute reliability was identified only for peak force, with CV values of 5.7% for the dominant (D) and 5.4% for the non-dominant (ND) profile. Furthermore, moderate and good relative reliability were observed in peak force for the dominant (ICC = 0.706) and non-dominant (ICC = 0.819) profiles, respectively. The time-related variables, RFD and impulse, did not achieve acceptable absolute reliability (CV > 10%) and displayed poor to moderate relative reliability. <b>Conclusions</b>: PF during the hip adductor isometric strength test demonstrated acceptable absolute and commendable relative reliability, whereas the time-related variables, RFD and impulse, yielded unsatisfactory absolute and relative reliability.
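The reliability statistics named above have standard definitions. A minimal sketch using the Shrout–Fleiss ICC(2,1) form, which assumes a subjects × trials matrix; the abstract does not state which ICC form the study used, and the peak-force numbers below are made up for illustration:

```python
import numpy as np

def coefficient_of_variation(x):
    """CV (%) = 100 * SD / mean, with sample SD (ddof=1)."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    X: (n subjects) x (k trials) matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between trials
    ss_total = ((X - grand) ** 2).sum()
    ms_e = (ss_total - (n - 1) * ms_r - (k - 1) * ms_c) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative: 4 players x 2 trials of adductor peak force (N)
forces = np.array([[220., 225.], [198., 210.], [240., 236.], [205., 202.]])
print("ICC(2,1) =", round(icc_2_1(forces), 3))
print("CV trial 1 =", round(coefficient_of_variation(forces[:, 0]), 1), "%")
```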
Mechanics of engineering. Applied mechanics, Descriptive and experimental mechanics
Giuseppe Altieri, Sabina Laveglia, Mahdi Rashvand
et al.
This study aims to evaluate and classify the ripening stages of yellow-fleshed kiwifruit by integrating spectral and physicochemical data collected from the pre-harvest phase through 60 days of storage. A portable near-infrared (NIR) spectrometer (900–1700 nm) was used to develop predictive models for soluble solids content (SSC) and firmness (FF), testing multiple preprocessing methods within a Partial Least Squares Regression (PLSR) framework. SNV preprocessing achieved the best predictions for FF (R<sup>2</sup>P = 0.74, RMSEP = 12.342 ± 0.274 N), while the Raw-PLS model showed optimal performance for SSC (R<sup>2</sup>P = 0.93, RMSEP = 1.142 ± 0.022 °Brix). SSC was more robustly predicted than FF, as reflected by RPD values of 2.6 and 1.7, respectively. For ripening stage classification, an Artificial Neural Network (ANN) outperformed other models, correctly classifying 97.8% of samples (R<sup>2</sup> = 0.95, RMSE = 0.08, MAE = 0.03). These results demonstrate the potential of combining NIR spectroscopy with AI techniques for non-destructive quality assessment and accurate ripeness discrimination. The integration of regression and classification models further supports the development of intelligent decision-support systems to optimize harvest timing and postharvest handling.
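The SNV (Standard Normal Variate) preprocessing mentioned above simply standardizes each spectrum by its own mean and standard deviation, removing additive and multiplicative baseline effects before PLSR. A minimal sketch, with synthetic spectra (this is not the authors' pipeline):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: per-spectrum standardization.
    spectra: (n_samples, n_wavelengths); each row is one NIR spectrum."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

# Two synthetic "spectra" differing only in baseline scale
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [10.0, 20.0, 30.0, 40.0]])
corrected = snv(raw)
# After SNV the two rows coincide: the scale difference is removed
```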
Artificial intelligence (AI), including large language models and generative AI, is emerging as a significant force in software development, offering developers powerful tools that span the entire development lifecycle. Although software engineering research has extensively studied AI tools in software development, the specific types of interactions between developers and these AI-powered tools have only recently begun to receive attention. Understanding and improving these interactions has the potential to enhance productivity, trust, and efficiency in AI-driven workflows. In this paper, we propose a taxonomy of interaction types between developers and AI tools, identifying eleven distinct interaction types, such as auto-complete code suggestions, command-driven actions, and conversational assistance. Building on this taxonomy, we outline a research agenda focused on optimizing AI interactions, improving developer control, and addressing trust and usability challenges in AI-assisted development. By establishing a structured foundation for studying developer-AI interactions, this paper aims to stimulate research on creating more effective, adaptive AI tools for software development.
Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. However, more benchmarks are needed to systematically evaluate automation agents from an industrial perspective, for example, in Civil Engineering. Therefore, we propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision, a representative task in civil engineering. DrafterBench contains twelve types of tasks summarized from real-world drawing files, with 46 customized functions/tools and 1920 tasks in total. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions, leveraging prior knowledge, and adapting to dynamic instruction quality via implicit policy awareness. The toolkit comprehensively assesses distinct capabilities in structured data comprehension, function execution, instruction following, and critical reasoning. DrafterBench offers detailed analysis of task accuracy and error statistics, aiming to provide deeper insight into agent capabilities and identify improvement targets for integrating LLMs in engineering applications. Our benchmark is available at https://github.com/Eason-Li-AIS/DrafterBench, with the test set hosted at https://huggingface.co/datasets/Eason666/DrafterBench.
Many organisational problems are addressed through systemic change and re-engineering of existing Information Systems rather than radical new design. In the face of widespread IT project failure, devising effective ways to tackle this type of change remains an open challenge. This work discusses the motivation, theoretical foundation, characteristics and evaluation of a novel framework - referred to as POE-Δ - which is rooted in design and engineering and is aimed at providing systematic support for representing, structuring and exploring change problems of a socio-technical nature, including implementing their solutions when they exist. We generalise an existing framework of greenfield design as problem solving for application to change problems. From a theoretical perspective, POE-Δ is a strict extension of its parent framework, allowing the seamless integration of greenfield and brownfield design to tackle change problems. A Design Science Research methodology was applied over a decade to define and evaluate POE-Δ, with significant case study research conducted to evaluate the framework in its application to real-world change problems of varying criticality and complexity. The results show that POE-Δ exhibits desirable characteristics of a design approach to organisational change and can bring tangible benefits when applied in practice as a holistic and systematic approach to change in socio-technical contexts.
Sebastian Baltes, Florian Angermeir, Chetan Arora
et al.
Large Language Models (LLMs) are now ubiquitous in software engineering (SE) research and practice, yet their non-determinism, opaque training data, and rapidly evolving models threaten the reproducibility and replicability of empirical studies. We address this challenge through a collaborative effort of 22 researchers, presenting a taxonomy of seven study types that organizes the landscape of LLM involvement in SE research, together with eight guidelines for designing and reporting such studies. Each guideline distinguishes requirements (must) from recommended practices (should) and is contextualized by the study types it applies to. Our guidelines recommend that researchers: (1) declare LLM usage and role; (2) report model versions, configurations, and customizations; (3) document the tool architecture beyond the model; (4) disclose prompts, their development, and interaction logs; (5) validate LLM outputs with humans; (6) include an open LLM as a baseline; (7) use suitable baselines, benchmarks, and metrics; and (8) articulate limitations and mitigations. We complement the guidelines with an applicability matrix mapping guidelines to study types and a reporting checklist for authors and reviewers. We maintain the study types and guidelines online as a living resource for the community to use and shape (llm-guidelines.org).
In response to the challenge of single navigation methods failing to meet the high precision requirements for unmanned aerial vehicle (UAV) navigation in complex environments, a novel algorithm that integrates Global Navigation Satellite System/Inertial Navigation System (GNSS/INS) navigation information is proposed to enhance the positioning accuracy and robustness of UAV navigation systems. First, the fundamental principles of Kalman filtering and its application in navigation are introduced. Second, the basic principles of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks and their applications in the navigation domain are elaborated. Subsequently, an algorithm based on a CNN and LSTM-assisted Kalman filtering fusion navigation is proposed. Finally, the feasibility and effectiveness of the proposed algorithm are validated through experiments. Experimental results demonstrate that the Kalman filtering fusion navigation algorithm assisted by a CNN and LSTM significantly improves the positioning accuracy and robustness of UAV navigation systems in highly interfered complex environments.
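The Kalman filter at the core of this fusion scheme can be illustrated with a minimal one-dimensional position/velocity example. This sketch omits the paper's CNN/LSTM correction stage, and the constant-velocity motion model, noise parameters, and measurements below are illustrative assumptions, not values from the study:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: measurement (e.g. GNSS position)."""
    # Predict (INS-style propagation of the state)
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the GNSS measurement
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity model (assumed)
H = np.array([[1.0, 0.0]])                   # GNSS observes position only
Q = 0.01 * np.eye(2)                         # process noise (assumed)
R = np.array([[1.0]])                        # measurement noise (assumed)

x, P = np.zeros(2), 10.0 * np.eye(2)         # uncertain initial state
for z in [1.1, 1.9, 3.2, 4.0]:               # noisy positions along a ~1 unit/s track
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
# x[0] tracks position; x[1] converges toward the true velocity of ~1 unit/s
```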
Marcos G. Alberti, Alejandro Enfedaque, Duarte M. V. Faria
et al.
Material optimization was one of the challenges for achieving cost-competitive solutions when concrete was introduced in construction, leading to new structural shapes for both civil works and buildings. As concrete construction became dominant, saving material was given less significance, and the selection of the structural typology was mostly influenced by construction or architectural considerations. Simple and non-time-consuming methods for building thus arose as the dominant criteria for design, and this led to the construction of less efficient structures. Currently, the awareness of the environmental footprint in concrete construction has brought the focus again to the topic of structural efficiency and material optimization. In addition, knowledge of material technology is pushing the use of cements and binders with lower environmental impact. Within this framework, Fiber-Reinforced Concrete (FRC) has been identified as a promising evolution of ordinary concrete construction. In this paper, a discussion is presented on the structural properties required for efficient design, focusing on the toughness and deformation capacity of the material. By means of several examples, the benefits and potential application of limit analysis to design at the Ultimate Limit State with FRC are shown. On this basis, the environmental impact of a tailored mix design and structural typology is investigated for the case of slabs in buildings, showing the significant impact that might be expected (potentially reducing CO<sub>2</sub>-eq emissions to half or even less in slabs when compared to ordinary solutions).
With the development of modern industry, enhanced tubes are finding increasingly widespread application across engineering domains. When water is used as the working fluid for thermal energy utilization, substantial fouling is likely to accumulate on the surfaces of the enhanced tubes. This paper establishes a mathematical model of the local fouling deposition of calcium carbonate in enhanced tubes. The model is used to simulate and compare the local fouling characteristics of CaCO<sub>3</sub> in different tubes. In addition, the effects of different inlet flow rates, water temperatures, and calcium carbonate concentrations on the local fouling resistance of bellows tubes are studied. The results show that both the average and the local fouling resistance of enhanced tubes are smaller than those of circular tubes, with the bellows tubes yielding the best scale-inhibition effect. The local fouling resistance is also found to change periodically along the length of the tube; it decreases with increasing inlet flow rate and wall temperature, and increases with increasing calcium carbonate solution concentration.
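The abstract does not give the governing equations of the deposition model. A commonly used starting point for asymptotic fouling behaviour is the classic Kern–Seaton model, R_f(t) = R_f*·(1 − e^(−t/τ)), where R_f* is the asymptotic fouling resistance and τ a time constant; this is a standard textbook form, not necessarily the model used in the paper, and the parameter values below are illustrative:

```python
import math

def kern_seaton(t_hours, r_f_star, tau_hours):
    """Kern-Seaton asymptotic fouling resistance, in m^2*K/W."""
    return r_f_star * (1.0 - math.exp(-t_hours / tau_hours))

# Illustrative: asymptotic resistance 4e-4 m^2*K/W, time constant 100 h
for t in (0, 50, 100, 300, 1000):
    print(f"t = {t:4d} h  R_f = {kern_seaton(t, 4e-4, 100.0):.2e} m^2*K/W")
# The resistance rises steeply at first, then levels off toward R_f*
```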
In the last two decades, several researchers provided snapshots of the "current" state and evolution of empirical research in requirements engineering (RE) through literature reviews. However, these literature reviews were not sustainable, as none built on or updated previous works due to the unavailability of the extracted and analyzed data. KG-EmpiRE is a Knowledge Graph (KG) of empirical research in RE, based on scientific data extracted from (currently) 680 papers published in the IEEE International Requirements Engineering Conference (1994-2022). KG-EmpiRE is maintained in the Open Research Knowledge Graph (ORKG), making all data openly and long-term available according to the FAIR data principles. Our long-term goal is to constantly maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. Besides KG-EmpiRE, we provide its analysis with all supplementary materials in a repository. This repository contains all files with instructions for replicating and (re-)using the analysis locally or via executable environments and for repeating the research approach. Since its first release based on 199 papers (2014-2022), KG-EmpiRE and its analysis have been updated twice, currently covering over 650 papers. KG-EmpiRE and its analysis demonstrate how innovative infrastructures, such as the ORKG, can be leveraged to make data from literature reviews FAIR, openly available, and maintainable for the research community in the long term. In this way, we can enable replicable, (re-)usable, and thus sustainable literature reviews to ensure the quality, reliability, and timeliness of their research results.
Natural Language Processing (NLP) is now a cornerstone of requirements automation. One compelling factor behind the growing adoption of NLP in Requirements Engineering (RE) is the prevalent use of natural language (NL) for specifying requirements in industry. NLP techniques are commonly used for automatically classifying requirements, extracting important information, e.g., domain models and glossary terms, and performing quality assurance tasks, such as ambiguity handling and completeness checking. With so many different NLP solution strategies available and the possibility of applying machine learning alongside, it can be challenging to choose the right strategy for a specific RE task and to evaluate the resulting solution in an empirically rigorous manner. In this chapter, we present guidelines for the selection of NLP techniques as well as for their evaluation in the context of RE. In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods. Our ultimate hope for this chapter is to serve as a stepping stone, assisting newcomers to NLP4RE in quickly initiating themselves into the NLP technologies most pertinent to the RE field.
Benjamin Decardi-Nelson, Abdulelah S. Alshehri, Akshay Ajagekar
et al.
This article explores how emerging generative artificial intelligence (GenAI) models, such as large language models (LLMs), can enhance solution methodologies within process systems engineering (PSE). These cutting-edge GenAI models, particularly foundation models (FMs), which are pre-trained on extensive, general-purpose datasets, offer versatile adaptability for a broad range of tasks, including responding to queries, image generation, and complex decision-making. Given the close relationship between advancements in PSE and developments in computing and systems technologies, exploring the synergy between GenAI and PSE is essential. We begin our discussion with a compact overview of both classic and emerging GenAI models, including FMs, and then dive into their applications within key PSE domains: synthesis and design, optimization and integration, and process monitoring and control. In each domain, we explore how GenAI models could potentially advance PSE methodologies, providing insights and prospects for each area. Furthermore, the article identifies and discusses potential challenges in fully leveraging GenAI within PSE, including multiscale modeling, data requirements, evaluation metrics and benchmarks, and trust and safety, thereby deepening the discourse on effective GenAI integration into systems analysis, design, optimization, operations, monitoring, and control. This paper provides a guide for future research focused on the applications of emerging GenAI in PSE.
In the telecom industry, predicting customer churn is crucial for improving customer retention. The literature has focused predominantly on single classifiers. Customer data are complex, however, exhibiting class imbalance and multiple factors with nonlinear dependencies. In such scenarios, single classifiers may be unable to fully exploit the available information to capture the underlying interactions. In contrast, ensemble learning, which combines various base classifiers, enables a more thorough data analysis and improved prediction performance. In this paper, a heterogeneous ensemble model is proposed for churn prediction in the telecom industry. The model involves exploratory data analysis, data pre-processing, and data resampling to handle class imbalance. Multiple trained base classifiers with different characteristics are integrated through a stacking ensemble technique; specifically, a convolutional neural network, logistic regression, a decision tree, and a Support Vector Machine (SVM) serve as the base classifiers. The proposed stacking ensemble exploits the unique strengths of each base classifier and leverages their collective knowledge through a meta-learner to improve prediction performance. The efficacy of the proposed model is assessed on a real-world dataset, Cell2Cell. The empirical results demonstrate the superiority of the proposed model in churn prediction, with an F1-score of 62.4% and a recall of 60.62%.
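The stacking scheme described in this abstract, out-of-fold base-classifier predictions feeding a meta-learner, can be sketched in plain NumPy. The two toy base learners below (a gradient-descent logistic regression and a nearest-centroid classifier) merely stand in for the paper's CNN/LR/DT/SVM ensemble, and the churn data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

class LogisticRegressionGD:
    """Tiny logistic regression trained by batch gradient descent."""
    def __init__(self, lr=0.5, epochs=300):
        self.lr, self.epochs = lr, epochs
    def fit(self, X, y):
        X = np.column_stack([np.ones(len(X)), X])  # add bias column
        self.w = np.zeros(X.shape[1])
        for _ in range(self.epochs):
            p = 1.0 / (1.0 + np.exp(-X @ self.w))
            self.w -= self.lr * X.T @ (p - y) / len(y)
        return self
    def predict_proba(self, X):
        X = np.column_stack([np.ones(len(X)), X])
        return 1.0 / (1.0 + np.exp(-X @ self.w))

class NearestCentroid:
    """Scores by distance to class centroids, squashed into a probability."""
    def fit(self, X, y):
        self.c0, self.c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        return self
    def predict_proba(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return np.exp(-d1) / (np.exp(-d0) + np.exp(-d1))

def stack_fit_predict(X, y, X_test, base_factories, n_folds=5):
    """Stacking: out-of-fold base predictions become meta-features,
    and a logistic-regression meta-learner combines them."""
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    meta_train = np.zeros((len(X), len(base_factories)))
    for j, make in enumerate(base_factories):
        for fold in folds:
            mask = np.ones(len(X), bool)
            mask[fold] = False
            meta_train[fold, j] = make().fit(X[mask], y[mask]).predict_proba(X[fold])
    meta_test = np.column_stack([make().fit(X, y).predict_proba(X_test)
                                 for make in base_factories])
    meta = LogisticRegressionGD().fit(meta_train, y)
    return (meta.predict_proba(meta_test) >= 0.5).astype(int)

# Synthetic two-feature "churn" data (illustrative only)
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 2)) + 1.5 * y[:, None]
X_test, y_test = X[300:], y[300:]
pred = stack_fit_predict(X[:300], y[:300], X_test,
                         [LogisticRegressionGD, NearestCentroid])
print("accuracy:", (pred == y_test).mean())
```

The out-of-fold construction matters: training the meta-learner on in-sample base predictions would leak labels and overstate performance.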
Mechanics of engineering. Applied mechanics, Technology