This article presents the findings of an action-research study conducted with 17 first-year Master's students in "Computer Modeling of Knowledge and Reasoning" (MICR) at Dr. Moulay Tahar University in Saïda, Algeria. The research investigates the effectiveness of educational podcasting as a multi-modal mediation tool designed to facilitate the complex transition from oral comprehension to advanced academic writing within a French for Academic Purposes (FAP/FOU) framework. Theoretically grounded in Sweller’s Cognitive Load Theory and Bruner’s scaffolding principles, the experimental protocol implements a structured pedagogical circuit. This process integrates segmented audio reception, translanguaging interactions, and analytical reading, ultimately leading to the production of rigorous scientific syntheses. The results demonstrate that aligning language instruction with core disciplinary content, specifically Artificial Intelligence, significantly boosts intrinsic motivation and mitigates linguistic insecurity among learners. A key finding of this analysis is that the strategic mobilization of translanguaging resources, particularly technical English, serves as a vital "cognitive bridge." This allows students to stabilize complex expert knowledge before successfully restructuring and articulating it in academic French. Ultimately, the study suggests that the educational podcast is a robust instrument for rehabilitating literacy skills in engineering students, effectively transforming a traditional language course into a high-level, specialized seminar tailored to the demands of the global scientific community.
Dectot--Le Monnier de Gouville Esteban, Mohammad Hamdaqa, Moataz Chouchen
YARA has established itself as the de facto standard for "Detection as Code," enabling analysts and DevSecOps practitioners to define signatures for malware identification across the software supply chain. Despite its pervasive use, the open-source YARA ecosystem remains characterized by ad-hoc sharing and opaque quality. Practitioners currently rely on public repositories without empirical evidence regarding the ecosystem's structural characteristics, maintenance and diffusion dynamics, or operational reliability. We conducted a large-scale mixed-method study of 8.4 million rules mined from 1,853 GitHub repositories. Our pipeline integrates repository mining to map supply chain dynamics, static analysis to assess syntactic quality, and dynamic benchmarking against 4,026 malware and 2,000 goodware samples to measure operational effectiveness. We reveal a highly centralized structure where 10 authors drive 80% of rule adoption. The ecosystem functions as a "static supply chain": repositories show a median inactivity of 782 days and a median technical lag of 4.2 years. While static quality scores appear high (mean = 99.4/100), operational benchmarking uncovers significant noise (false positives) and low recall. Furthermore, coverage is heavily biased toward legacy threats (Ransomware), leaving modern initial access vectors (Loaders, Stealers) severely underrepresented. These findings expose a systemic "double penalty": defenders incur high performance overhead for decayed intelligence. We argue that public repositories function as raw data dumps rather than curated feeds, necessitating a paradigm shift from ad-hoc collection to rigorous rule engineering. We release our dataset and pipeline to support future data-driven curation tools.
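The inactivity and technical-lag figures are simple repository-mining statistics. A minimal sketch of the inactivity computation, using hypothetical last-commit dates rather than the paper's dataset:

```python
from datetime import date
from statistics import median

def median_inactivity(last_commit_dates, today):
    """Median number of days since each repository's most recent commit --
    the 'inactivity' notion used when mining rule repositories."""
    return median((today - d).days for d in last_commit_dates)

# Hypothetical last-commit dates for three rule repositories.
repos = [date(2020, 1, 15), date(2023, 6, 1), date(2019, 3, 10)]
print(median_inactivity(repos, today=date(2024, 6, 1)))
```

The same pattern extends to technical lag by replacing commit dates with the dates of the upstream rule versions a repository tracks.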
Abstract: The 2024 SITA Baggage IT Insights report highlights advancements and persistent challenges in baggage handling within the aviation industry. Despite a drop in mishandled bags from 7.6 to 6.9 per 1,000 passengers in 2023, the increasing number of passengers translates to approximately 40 million mishandled bags in 2024. Delayed bags constitute 77% of these, with transfer bags being the primary type affected, especially with the rise in long-haul flights. Automation efforts, including self-service bag drop technologies and IATA Resolution 753 for luggage tracking, are helping reduce mishandling, supported by RFID technology improvements.
Simultaneously, aviation faces pressure to reduce greenhouse gas emissions by 70% by 2050, as mandated by EU climate protection goals. Shifting short-haul traffic to rail is a strategic focus, with existing rail-airport integrations supporting this transition. Programs like “Check-in at the Train Station” in Switzerland showcase potential efficiencies.
Poland's planned CPK airport may pioneer similar initiatives, marking baggage with RFID to ensure tracking amidst complex logistics. Successful tests by HADATAP and implementations at Polish airports demonstrate the viability of such systems. Introducing more logistics checkpoints could exacerbate delays unless effectively managed, emphasizing RFID's role in sustaining the system’s integrity and passenger experience. The integration of these technologies suggests a promising path forward, aligning with both operational efficiency and environmental sustainability.
Keywords: RFID; IATA; Baggage; High speed rail; CPK; Mishandled bags
Highway engineering. Roads and pavements, Bridge engineering
The anchor cables of slopes are affected by long-term environmental corrosion, geotechnical creep, and adverse weather, resulting in gradual loss of tensile force, which can lead to structural failure and subsequent safety accidents. The authors of this paper conducted research based on the magnetic induction density distribution characteristics of permanent magnets, including model derivation, theoretical simulation, and indoor experiments, aiming to propose a new anchor cable force monitoring technology with high sensitivity, strong applicability, and good stability. Based on the molecular circulation model and the Biot–Savart law, the analytical expression of the spatial magnetic field distribution of a rectangular permanent magnet was derived and, combined with the stress–strain relationship characteristics of anchor cables, a theoretical model for the relationship between anchor cable tensile force and magnetic induction density was established. MATLAB (R2018b) was used to simulate and analyze the spatial magnetic field distribution and the force–magnetism relationship. The analysis showed that the magnetic induction density along the central axis of the permanent magnet approximately exhibited a symmetrical quadratic curve distribution, and its value was significantly negatively correlated with the anchor cable force. Based on this, a new anchor cable force monitoring technology was proposed, and an indoor experimental platform was established. The indoor experimental studies further confirmed the negative correlation between force and magnetism (i.e., as the tensile force increases, the magnetic induction density decreases, and as the tensile force decreases, the magnetic induction density increases). The fitting results of the force–magnetism curve show that a quadratic function can better describe the correspondence between magnetic induction density and anchor cable force.
Reproducibility analysis of the experimental data showed low dispersion in magnetic induction values under various design loads, along with good stability, validating the effectiveness and applicability of the proposed anchor cable force monitoring technology.
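The quadratic force–magnetism fit described above amounts to an ordinary least-squares fit of B = aF² + bF + c. Below is a self-contained sketch that solves the 3×3 normal equations directly; the calibration data are hypothetical (constructed to show the reported negative correlation), not the paper's measurements:

```python
def fit_quadratic(F, B):
    """Least-squares fit of B = a*F**2 + b*F + c via the normal equations."""
    n = len(F)
    S1 = sum(F)
    S2 = sum(f ** 2 for f in F)
    S3 = sum(f ** 3 for f in F)
    S4 = sum(f ** 4 for f in F)
    T0 = sum(B)
    T1 = sum(f * b for f, b in zip(F, B))
    T2 = sum(f * f * b for f, b in zip(F, B))
    M = [[S4, S3, S2], [S3, S2, S1], [S2, S1, n]]
    v = [T2, T1, T0]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # [a, b, c]

# Hypothetical calibration data: tensile force F (kN) vs. magnetic
# induction density B (mT); B decreases as F increases.
F = [0.0, 50.0, 100.0, 150.0, 200.0, 250.0]
B = [12.0, 11.25, 10.0, 8.25, 6.0, 3.25]
a, b, c = fit_quadratic(F, B)
```

With real calibration data the fitted coefficients would come from the indoor experimental platform; the structure of the fit is the same.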
Aiming at the shortcomings of the gazelle optimization algorithm (GOA), such as the imbalance between exploration and exploitation and insufficient information exchange within the population, this paper proposes a multi-strategy improved gazelle optimization algorithm (MSIGOA). To address these issues, MSIGOA introduces an iteration-based updating framework that switches between exploitation and exploration as the optimization proceeds, which effectively enhances the balance between local exploitation and global exploration and improves the convergence speed. Two adaptive parameter tuning strategies improve the applicability of the algorithm and promote a smoother optimization process. A dominant-population-based restart strategy enhances the algorithm's ability to escape from local optima and avoid premature convergence. These enhancements significantly improve the exploration and exploitation capabilities of MSIGOA, yielding superior convergence and efficiency on complex problems. The parameter sensitivity, strategy effectiveness, convergence, and stability of the proposed method are evaluated on two benchmark test sets, CEC2017 and CEC2022. Test results and statistical tests show that MSIGOA outperforms the basic GOA and other advanced algorithms. On the CEC2017 and CEC2022 test sets, the proportion of functions on which MSIGOA is not worse than GOA is 92.2% and 83.3%, respectively, and the proportion of functions on which MSIGOA is not worse than the other algorithms is 88.57% and 87.5%, respectively. Finally, the extensibility of MSIGOA is further verified on several engineering design optimization problems.
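To make the iteration-based switching idea concrete, here is a toy population search (not MSIGOA or GOA themselves) that explores with large random steps early in the run and exploits around the incumbent best with a smaller, adaptively shrinking step later:

```python
import random

def switching_search(fitness, dim, bounds, pop_size=20, iters=200, seed=0):
    """Toy iteration-based switching framework: global exploration in the
    first half of the run, local exploitation around the best solution in
    the second half, with an adaptive step size. Illustrative only."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for t in range(iters):
        explore = t < iters // 2                    # iteration-based switch
        scale = 0.5 if explore else 0.05            # phase-dependent parameter
        step = (hi - lo) * scale * (1 - t / iters)  # adaptively shrinks
        for i, x in enumerate(pop):
            base = x if explore else best           # exploit near the best
            cand = [min(hi, max(lo, v + rng.uniform(-step, step))) for v in base]
            if fitness(cand) < fitness(x):          # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=fitness)
    return best

sphere = lambda x: sum(v * v for v in x)
best = switching_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

The real algorithm adds population-level information exchange and a restart strategy on top of this basic switch.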
Florian Felten, Gabriel Apaza, Gerhard Bräunlich, et al.
Engineering design optimization seeks to automatically determine the shapes, topologies, or parameters of components that maximize performance under given conditions. This process often depends on physics-based simulations, which are difficult to install, computationally expensive, and require domain-specific expertise. To mitigate these challenges, we introduce EngiBench, the first open-source library and datasets spanning diverse domains for data-driven engineering design. EngiBench provides a unified API and a curated set of benchmarks -- covering aeronautics, heat conduction, photonics, and more -- that enable fair, reproducible comparisons of optimization and machine learning algorithms, such as generative or surrogate models. We also release EngiOpt, a companion library offering a collection of such algorithms compatible with the EngiBench interface. Both libraries are modular, letting users plug in novel algorithms or problems, automate end-to-end experiment workflows, and leverage built-in utilities for visualization, dataset generation, feasibility checks, and performance analysis. We demonstrate their versatility through experiments comparing state-of-the-art techniques across multiple engineering design problems, an undertaking that was previously prohibitively time-consuming to perform. Finally, we show that these problems pose significant challenges for standard machine learning methods due to highly sensitive and constrained design manifolds.
Modern engineering, spanning electrical, mechanical, aerospace, civil, and computer disciplines, stands as a cornerstone of human civilization and the foundation of our society. However, engineering design poses a fundamentally different challenge for large language models (LLMs) compared with traditional textbook-style problem solving or factual question answering. Although existing benchmarks have driven progress in areas such as language understanding, code synthesis, and scientific problem solving, real-world engineering design demands the synthesis of domain knowledge, navigation of complex trade-offs, and management of the tedious processes that consume much of practicing engineers' time. Despite these shared challenges across engineering disciplines, no benchmark currently captures the unique demands of engineering design work. In this work, we introduce EngDesign, an Engineering Design benchmark that evaluates LLMs' abilities to perform practical design tasks across nine engineering domains. Unlike existing benchmarks that focus on factual recall or question answering, EngDesign uniquely emphasizes LLMs' ability to synthesize domain knowledge, reason under constraints, and generate functional, objective-oriented engineering designs. Each task in EngDesign represents a real-world engineering design problem, accompanied by a detailed task description specifying design goals, constraints, and performance requirements. EngDesign pioneers a simulation-based evaluation paradigm that moves beyond textbook knowledge to assess genuine engineering design capabilities and shifts evaluation from static answer checking to dynamic, simulation-driven functional verification, marking a crucial step toward realizing the vision of engineering Artificial General Intelligence (AGI).
It is our pleasure to publish the October issue (4th issue) of Vol. 2 of the International Journal of Bridge Engineering, Management and Research. You can find detailed information about the journal in its inaugural issue of September 2004 or at www.ijbemr.org. In this issue of the journal, we are pleased to bring to you the following six papers in innovative areas of bridge engineering:
1. Clustering-Based Framework for Multi-Sensor Data Fusion in Bridge Deck Condition Assessment
2. Influence of Nose Position of Edge Fairing on Aerodynamic Characteristics of Box Girder Bridge Deck
3. Seismic Isolation in Newly Built Bridges in Italy: Historical Development, Regulations, and Recent Applications
4. Smart Acoustic Sounding for Automated Delamination Detection in Concrete Bridge Decks
5. Dynamic investigations before and after the strengthening of a masonry arch bridge
6. History of Bridges: Materials and Structural Types of a Monument to Progress
This study proposes two curves that depict the vehicle–bridge contact force in a novel transportation system named AERORail, a lightweight cable-supported structure in which the rails and the prestressed cable form the load-bearing system. Based on the contact force identified from a full-scale AERORail system, single- and double-valley curves were obtained as idealized contact force models for large- and small-span AERORail systems, respectively, using Bezier curves and the least squares method. The proposed curves were verified against a moving load model from a previous study under various spans and speeds. Moreover, the structural response of the AERORail structure under high-speed vehicle passage was explored using the idealized contact force model. The simulation results show that the proposed contact force model can predict the displacement response of 5 m and 15 m spans with a relative error of less than 5%, proving that the model can be used for the dynamic analysis of AERORail.
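The Bezier/least-squares idealization can be illustrated in one dimension: fix the end control points of a cubic Bezier to the end samples and solve for the two inner control points by least squares. The sampled "contact force" below is synthetic (a single-valley shape), not data from the paper:

```python
def bezier(t, p):
    """Cubic Bezier curve value at parameter t for control points p."""
    y0, p1, p2, y3 = p
    return ((1 - t) ** 3 * y0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * y3)

def fit_cubic_bezier(ys):
    """Least-squares fit of a cubic Bezier to samples ys assumed uniformly
    spaced in t, with the end control points fixed to the end samples."""
    n = len(ys)
    ts = [i / (n - 1) for i in range(n)]
    y0, y3 = ys[0], ys[-1]
    b1 = [3 * (1 - t) ** 2 * t for t in ts]   # basis of inner control point 1
    b2 = [3 * (1 - t) * t ** 2 for t in ts]   # basis of inner control point 2
    r = [y - ((1 - t) ** 3 * y0 + t ** 3 * y3) for y, t in zip(ys, ts)]
    # 2x2 normal equations for the two free control points.
    a11 = sum(x * x for x in b1)
    a12 = sum(x * y for x, y in zip(b1, b2))
    a22 = sum(x * x for x in b2)
    c1 = sum(x * y for x, y in zip(b1, r))
    c2 = sum(x * y for x, y in zip(b2, r))
    det = a11 * a22 - a12 * a12
    p1 = (c1 * a22 - a12 * c2) / det
    p2 = (a11 * c2 - a12 * c1) / det
    return (y0, p1, p2, y3)

# Synthetic single-valley 'contact force' samples and the recovered fit.
samples = [bezier(i / 10, (1.0, -0.5, -0.5, 1.0)) for i in range(11)]
fitted = fit_cubic_bezier(samples)
```

A double-valley model, as used for small spans, would concatenate two such segments; the fitting machinery is unchanged.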
Abstract: To improve the energy dissipation and self-resetting ability of bridge structures under strong earthquakes, a new buckling-restrained SMA bar-based friction damper (SFD) is proposed. The damper is composed of buckling-restrained super-elastic SMA bars, friction pads, and a steel frame. The buckling-restrained SMA bars provide self-reset capability, while the friction pads provide additional energy dissipation capacity. Firstly, the configuration, working mechanism, and restoring force model of the SMA bar-based friction damper are introduced. Secondly, a specimen of the damper is made, and the pseudo-static test is carried out. Finally, the experimental results are analyzed based on the Abaqus finite element model. The results indicate that the damper has better self-resetting ability and energy dissipation capacity.
Slip-form jointed plain concrete pavement (JPCP) has many years of proven performance. An alternative to slip-form concrete is roller-compacted concrete (RCC). The RCC mixture contains a significantly larger proportion of fine aggregates, which makes the mix stiff enough (no-slump) to be compacted by rollers. RCC has the strength and performance of conventional concrete or even higher. Owing to these advantages, the use of RCC pavement in industrial areas and on low-volume rural roads is very beneficial. An experimental test section of an RCC pavement structure with a cement- and special-additive-stabilized base (CTB) was installed on local road No. 130 in Lithuania, which was reconstructed in 2021. The main objective of this study is to learn about the environmental impact on the pavement structure. To reach this aim, temperature and humidity sensors and a strain gauge were installed under the RCC layer and the CTB during the reconstruction of the local road. During the lifetime of the pavement structure, temperature and humidity data were collected daily, and bearing capacity was measured during the spring thaw. In addition, an artificial wheel load simulation using a falling weight deflectometer was performed at the location of the installed strain gauge to analyse deflections and to calculate stresses under the RCC layer. The stresses under the RCC layer calculated from the strain gauge were also compared with the theoretical stress calculated at the design stage of the pavement structure to learn more about the performance of the pavement structure. The results showed that slight changes in humidity at the bottom of the CTB had no significant influence on the deformations at the bottom of the RCC layer. Comparison of stresses under the RCC layer showed that the stresses calculated from the strain gauge were 1.80 times lower than those calculated theoretically.
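The comparison between measured and theoretical stresses reduces to elastic stress recovery from the strain-gauge reading; a toy calculation with assumed values (the modulus and strain below are hypothetical, not the study's data):

```python
# Hypothetical values -- illustrative only, not the study's measurements.
E_RCC = 30_000.0     # assumed elastic modulus of the RCC layer, MPa
strain = 40e-6       # assumed strain-gauge reading under the FWD load

sigma_measured = E_RCC * strain            # stress at the bottom of RCC, MPa
sigma_theoretical = 1.80 * sigma_measured  # design-stage value, per the
                                           # reported 1.80x ratio
```

The study's point is precisely this gap: the design-stage calculation was conservative by a factor of about 1.80 relative to the gauge-derived stress.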
This study focuses on the common key technologies of “environmentally friendly and resource-saving” asphalt pavement. Reactive asphalt deodorizers react with volatile chemicals with irritating odors in asphalt under high-temperature conditions, converting them into stable, non-volatile macromolecules and thus achieving a deodorizing effect. The goal is to develop clean asphalt pavement materials characterized by “low consumption, low emissions, low pollution, high efficiency”. In this experimental research, we used gas-emission detection devices and methods to detect and evaluate odor concentration, SO<sub>2</sub>, NO, volatile organic compounds, and other gases and volatile substances during the production and construction of clean asphalt and mixtures. By combining rheological experiments, mechanical experiments, and other means, this study investigates the effects of deodorizers on the penetration, ductility, softening point, high-temperature rheological properties, and construction workability of warm-mix asphalt and mixtures. Furthermore, infrared spectroscopy experiments are used to study the deodorizing mechanism of the deodorizers in depth. The results indicate that the addition of deodorizers has little effect on the penetration and softening point of asphalt and maintains the basic performance stability of asphalt. In terms of high-temperature rheological properties and construction workability, the addition of warm-mix agents has a significant impact on the high-temperature failure temperature and rotational viscosity of asphalt, while the influence of deodorizers is relatively small. At higher temperatures, the rotational viscosity increases with the amount of deodorizer added. Functional group analysis shows that the newly added materials have little effect on the essential properties and chemical composition of asphalt.
In addition, during the experiments it was found that coupling effects and other chemical reactions between the deodorizer and the warm-mix agent can effectively improve the degradation of harmful gases. After the coupling action of the deodorizer and the warm-mix agent, the degradation rate of harmful gases can be increased by 5–20%, while the performance of the asphalt remains stable. Powder deodorizers perform better than liquid deodorizers, and increasing the deodorizer dosage enhances the degradation effect. This study provides an important basis for a deeper understanding of the performance of warm-mix and odorless modified asphalt.
A shaking table test of a 1/60-scale cross-fault bridge model considering the effects of soil–bridge interactions was designed and implemented, in which the bridge model was placed in two individual soil boxes to simulate a bridge across a strike-slip fault. Three seismic ground motion time histories with permanent displacements were selected as input excitations to investigate the influence of ground motions with different frequency characteristics on the seismic response of the soil–bridge model. The one-side input method was used to simulate the seismic response of bridges across faults. The seismic responses of the soil and bridge in terms of acceleration, strain, and displacement were analyzed. The test results show that the one-side input method can simulate well the displacement response of the main girder and the displacements and strains of the piers and piles of a bridge structure spanning a fault. The strain responses at near-fault pile foundations are much larger than those farther from the fault. Compared with other bridges, a cross-fault bridge is more prone to torsional and displacement responses during earthquakes. Surface fault rupture can lead to permanent inclination of the bridge piers, which deserves particular attention in practical bridge design. Soil–bridge interactions can suppress the amplification effect of soil on ground motions. The test results can provide a reference for future research and the design of cross-fault bridges.
The available tools for damage identification in civil engineering structures are known to be computationally expensive and data-demanding. This paper proposes a comprehensive machine learning based damage identification (CMLDI) method that integrates modal analysis and dynamic analysis strategies. The proposed approach is applied to a real structure, the KW51 railway bridge in Leuven. CMLDI combines signal processing, machine learning (ML), and structural analysis techniques to achieve a fast damage identification solver that relies on minimal monitoring data. CMLDI considers modal analysis inputs and features extracted from acceleration responses to inform damage identification based on long-term and short-term monitoring data. Results of operational modal analysis, obtained from the long-term monitoring data, are analyzed using pre-trained k-nearest neighbor (kNN) classifiers to identify damage existence, location, and magnitude. A well-crafted assembly of signal processing and ML methods is used to analyze acceleration time histories: stacked gated recurrent unit (stacked GRU) networks identify damage existence, kNN classifiers identify damage magnitude, and convolutional neural networks (CNNs) identify damage location. The damage identification results for the KW51 bridge demonstrate the approach's high accuracy, efficiency, and robustness. In this work, the training data is retrieved from the sensors of the KW51 bridge as well as a numerical finite element model (FEM). The proposed approach presents a systematic path to generating training data using a validated FEM: the data generation relies on modeling combinations of damage locations and magnitudes along the bridge.
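The kNN step on modal features can be sketched in a few lines of pure Python; the "identified natural frequencies" below are hypothetical, not KW51 monitoring data, and the stacked-GRU and CNN components are omitted:

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Plain k-nearest-neighbour majority vote over feature vectors,
    e.g. natural frequencies identified by operational modal analysis."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical features: [1st, 2nd] natural frequency in Hz; damage
# typically lowers the identified frequencies.
train = [[2.51, 5.10], [2.50, 5.08], [2.31, 4.80], [2.30, 4.78]]
labels = ["healthy", "healthy", "damaged", "damaged"]
state = knn_classify(train, labels, [2.32, 4.81])
```

In the paper the classifiers are pre-trained on FEM-generated combinations of damage locations and magnitudes; this sketch only shows the classification mechanics.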
We study the continual pretraining recipe for scaling language models' context lengths to 128K, with a focus on data engineering. We hypothesize that long context modeling, in particular \textit{the ability to utilize information at arbitrary input locations}, is a capability that is mostly already acquired through large-scale pretraining, and that this capability can be readily extended to contexts substantially longer than seen during training~(e.g., 4K to 128K) through lightweight continual pretraining on an appropriate data mixture. We investigate the \textit{quantity} and \textit{quality} of the data for continual pretraining: (1) for quantity, we show that 500 million to 5 billion tokens are enough to enable the model to retrieve information anywhere within the 128K context; (2) for quality, our results equally emphasize \textit{domain balance} and \textit{length upsampling}. Concretely, we find that naively upsampling longer data from certain domains like books, a common practice in existing work, gives suboptimal performance, and that a balanced domain mixture is important. We demonstrate that continual pretraining of the full model on 1B-5B tokens of such data is an effective and affordable strategy for scaling the context length of language models to 128K. Our recipe outperforms strong open-source long-context models and closes the gap to frontier models like GPT-4 128K.
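The interplay of length upsampling and domain balance can be sketched as a weighting scheme: boost long documents, then renormalize within each domain so that every domain keeps its original token share. This is a hedged illustration of the idea, not the paper's exact recipe:

```python
from collections import defaultdict

def sampling_weights(docs, long_threshold=4096, boost=4.0):
    """Per-document sampling weights: long documents are upsampled by
    `boost`, then weights are rescaled per domain so each domain's total
    weight equals its original share of tokens. Illustrative sketch."""
    dom_tokens = defaultdict(float)  # original token mass per domain
    dom_raw = defaultdict(float)     # boosted mass per domain
    raw = []
    for d in docs:
        w = d["tokens"] * (boost if d["tokens"] >= long_threshold else 1.0)
        raw.append(w)
        dom_tokens[d["domain"]] += d["tokens"]
        dom_raw[d["domain"]] += w
    total = sum(dom_tokens.values())
    return [w * (dom_tokens[d["domain"]] / total) / dom_raw[d["domain"]]
            for w, d in zip(raw, docs)]

docs = [{"domain": "web", "tokens": 1000},
        {"domain": "web", "tokens": 5000},
        {"domain": "books", "tokens": 6000}]
w = sampling_weights(docs)
```

The per-domain renormalization is what prevents length upsampling from silently skewing the mixture toward long-document domains like books, which is exactly the failure mode the abstract warns about.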
Ion channels play a vital role in regulating the flow of ions across cell membranes to maintain physiological functions. Mimicking this biological process and fabricating artificial nano-/micro-channels with similar functions are expected to solve challenges involving ion transport in fields such as energy, the environment, and human health. As a flexible and controllable preparation technology, supramolecular self-assembly is a powerful tool for designing biomimetic channels for specific purposes. Although various artificial channels have been reported, the ever-increasing research interest in their application potential calls for a bridge between design principles and engineering applications. In this Perspective, we summarize the recent advances in this new field and analyze the working mechanisms based on the grounded theory of supramolecular chemistry and nanofluidic systems. To promote progress in this field, we highlight opportunities and key challenges for future research and applications.
Tadas Tamošiūnas, Gintaras Žaržojus, Šarūnas Skuodis
Simplified methods based on cone penetration test (CPT) results are commonly used to determine the soil deformation modulus, depending on the engineering geological and geotechnical conditions and the complexity of the computational approach. This paper reviews empirical equations based on CPT results and gives recommendations for the assessment of Young's modulus, the oedometric modulus, and the residual modulus from CPT results, according to the Lithuanian technical requirements and other standards. Theoretical interpretations of the results are presented together with practical examples for coarse and fine soils, and the limits of applicability of the empirical equations are explained.
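Many of the reviewed correlations share the simple form E = α · q_c, with the coefficient α depending on soil type and the governing standard. A minimal sketch (the α value below is illustrative only, not a recommendation from the paper or any standard):

```python
def modulus_from_cpt(qc_mpa, alpha):
    """Empirical CPT correlation of the common form E = alpha * qc.
    alpha is soil-dependent and must be taken from the applicable
    standard; it is NOT a universal constant."""
    if qc_mpa <= 0:
        raise ValueError("cone resistance must be positive")
    return alpha * qc_mpa

# Hypothetical: qc = 12 MPa in a coarse soil, alpha assumed to be 3.0.
E = modulus_from_cpt(qc_mpa=12.0, alpha=3.0)  # MPa
```

In practice, α and the validity range of the equation would be selected from the standard matching the site's soil classification, as the paper's recommendations describe.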
Using models for requirements engineering (RE) is uncommon in systems engineering, despite the widespread use of model-based engineering in general. One reason for this lack of use is that formal models do not match well the trend toward agile development methods. While existing work investigates challenges in the adoption of requirements modeling and agile methods in systems engineering, there is a lack of work studying successful applications of requirements modeling in agile systems engineering. To address this gap, we conducted a case study investigating the application of requirements models at Ericsson AB, a Swedish telecommunications company. We studied a department using requirements models to bridge agile and plan-driven development aspects. We find that models are used to understand how requirements relate to each other and to keep track of the product's evolution. To cope with the effort of maintaining models over time, study participants suggest relying on text-based notations that bring the models closer to developers and allow integration into existing software development workflows. This results in tool trade-offs, e.g., losing the ability to control diagram layout.
Abstract: Floods, bridge scour, and flood-associated loads have caused over sixty percent of bridge failures in the U.S. Current practices for the vulnerability assessment of instream bridges under such floods largely rely on qualitative methods, such as visual inspection, without considering uncertainties associated with structural behaviors and flood loads. Recently, numerical methods have been investigated to quantitatively consider such uncertainty effects by adapting the fragility analysis concept that is well established in earthquake engineering. However, river hydraulics, geotechnical uncertainties of the foundation, variable scour-depth effects, and their significance for the structural fragility of bridges have rarely been systematically investigated. This study proposes a comprehensive fragility analysis framework that effectively incorporates both flow hydraulics and geotechnical uncertainties, in addition to the components commonly considered in flood-fragility analysis of bridges. The significance of flow hydraulics and geotechnical uncertainties is demonstrated through a real-bridge case study. Conventional fragility curves based on maximum scour depth may not represent actual vulnerability during floods, as the scour may not reach the maximum in many cases. Therefore, a fragility surface with two intensity measures, i.e., flow discharge and scour depth, is introduced for real-time vulnerability assessment during floods.
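A fragility surface can be sketched with the standard lognormal fragility form evaluated over a grid of the two intensity measures; combining the marginal fragilities under independence, as below, is a simplifying assumption for illustration, not the framework's actual structural analysis:

```python
import math

def lognormal_fragility(im, median, beta):
    """P(damage state exceeded | IM = im) under the standard lognormal
    fragility model with median `median` and dispersion `beta`."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

def fragility_surface(discharges, scour_depths, medians, betas):
    """Toy surface over (flow discharge, scour depth): marginal fragilities
    combined under an assumed independence -- an illustration only."""
    return [[1.0 - (1.0 - lognormal_fragility(q, medians[0], betas[0]))
                  * (1.0 - lognormal_fragility(s, medians[1], betas[1]))
             for s in scour_depths]
            for q in discharges]

# Hypothetical medians/dispersions: discharge 100 m^3/s, scour depth 2 m.
surf = fragility_surface([50.0, 100.0, 200.0], [1.0, 2.0, 4.0],
                         medians=(100.0, 2.0), betas=(0.4, 0.5))
```

Reading the observed discharge and scour depth off such a surface is what enables the real-time vulnerability assessment the abstract describes.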