Annex A of EN 1992-1-1:2023, recently revised and amended in the context of the Second Generation of Eurocodes, introduces a method to adjust partial safety factors on the resistance side, alongside a set of factors for different conditions and design situations, for both new and existing structures. The method proposed in Annex A is complemented by a set of stochastic models for relevant basic variables and forms a rather simple and objective format for adjusting the partial safety factors from the default values offered in EN 1990:2023. Yet, over the last few years, advanced reliability-based methods aligned with modern computational tools have proven to enable robust and efficient structural reliability assessments. A thorough comparative analysis is needed to understand how distinct reliability-based methods can be applied to adjust partial safety factors in the design of new structural components made of steel-reinforced concrete. This analysis sheds light on the use of different methods to derive partial safety factors for common engineering problems and draws inferences about possible implications for the safety and economic efficiency of design solutions.
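As a point of reference for the abstract above, the design-value format of EN 1990 (Annex C) gives a closed-form partial factor for a lognormal resistance variable; the sketch below uses the conventional sensitivity factor α_R = 0.8 and target reliability index β = 3.8, and is only a minimal illustration, not the Annex A adjustment procedure itself.

```python
import math

def gamma_resistance(v_r: float, beta: float = 3.8, alpha_r: float = 0.8,
                     k_char: float = 1.645) -> float:
    """Partial safety factor for a lognormal resistance variable.

    gamma_M = R_k / R_d = exp((alpha_r * beta - k_char) * v_r),
    where R_k is the 5% characteristic fractile and R_d the design value
    (EN 1990 Annex C design-value format; defaults are the standard values).
    """
    return math.exp((alpha_r * beta - k_char) * v_r)

# Example: coefficient of variation of resistance V_R = 0.15
print(f"gamma_M = {gamma_resistance(0.15):.3f}")  # ~1.23
```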
This study presents a comparative evaluation of three nonlinear state estimation filters, the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and the Particle Filter (PF), for the task of 3D facial landmark tracking. Using a publicly available dataset, we assess each filter's performance under both deterministic (noise-free) and stochastic (noisy) conditions. Metrics such as mean squared error (MSE), convergence rates of state and covariance estimates, and consistency over time are used to quantify tracking performance. Results show that the EKF consistently outperforms the UKF and PF, achieving faster convergence and lower estimation error, particularly in scenarios characterized by mild nonlinearity. Heatmap analyses under varying noise conditions further highlight the EKF's robustness and accuracy, especially in low-noise regimes, while PF performance deteriorates with increased process noise. Our findings suggest that while the UKF and PF offer advantages in highly nonlinear or non-Gaussian environments, the EKF provides the best trade-off between computational efficiency and estimation accuracy for the mildly nonlinear facial tracking task studied here.
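For readers unfamiliar with the filters compared above, the following minimal sketch shows one generic EKF predict/update cycle in NumPy; the process and measurement functions and their Jacobians are placeholders for whichever landmark motion and observation models one adopts, and this is not the paper's exact implementation.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF predict/update cycle for a generic nonlinear model."""
    # Predict: propagate the state and linearize the dynamics at x
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: fuse the measurement through the linearized observation model
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: 2D point with identity dynamics and a mildly nonlinear sensor
f = lambda x: x
F_jac = lambda x: np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1])])            # range-only sensor
H_jac = lambda x: (x / np.hypot(x[0], x[1])).reshape(1, 2)
x, P = np.array([1.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.5]), f, F_jac, h, H_jac,
                Q=0.01 * np.eye(2), R=np.array([[0.1]]))
print(x)
```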
Felipe Baesler, Oscar Cornejo, Carlos Obreque et al.
This study presents a novel approach that integrates Discrete Event Simulation (DES) with Design of Experiments (DOE) techniques, framed within a stochastic optimization context and guided by a multi-objective goal programming methodology. The focus is on enhancing the operational efficiency of an emergency department (ED), illustrated through a real-world case study conducted in a Chilean hospital. The methodology employs Response Surface Methodology (RSM) to explore and optimize the impact of four critical resources: physicians, nurses, rooms, and radiologists. The response variable, formulated as a goal programming function, captures the aggregated patient flow time across four representative care tracks. The optimization process proceeded iteratively: early stages relied on linear approximations to identify promising improvement directions, while later phases applied a central composite design to model nonlinear interactions through a quadratic response surface. This progression revealed complex interdependencies among resources, ultimately leading to a local optimum. The proposed approach achieved a 50% reduction in the aggregated objective function and improved individual patient flow times by 7% to 26%. Compared to traditional metaheuristic methods, this simulation–optimization framework offers a computationally efficient alternative, particularly valuable when the simulation model is complex and resource-intensive. These findings underscore the value of combining simulation, RSM, and multi-objective optimization to support data-driven decision-making in complex healthcare settings. The methodology not only improves ED performance but also offers a flexible and scalable framework adaptable to other clinical environments seeking resource optimization and operational improvement.
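As a hedged illustration of the response variable described above, one common goal-programming form penalizes weighted deviations of simulated flow times from per-track targets; the numbers, targets, and weights below are hypothetical, since the abstract does not specify them.

```python
def goal_programming_objective(flow_times, targets, weights):
    """Aggregate weighted over-target deviations across care tracks.

    flow_times : simulated mean patient flow time per track (from a DES run)
    targets    : goal value per track
    weights    : relative importance per track
    Only positive deviations (exceeding the goal) are penalized here.
    """
    return sum(w * max(t - g, 0.0)
               for t, g, w in zip(flow_times, targets, weights))

# Hypothetical example with four care tracks (minutes)
print(goal_programming_objective(
    flow_times=[182.0, 95.0, 240.0, 130.0],
    targets=[150.0, 100.0, 200.0, 120.0],
    weights=[1.0, 1.0, 1.5, 1.0],
))
```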
Earthquakes are a significant cause of damage to buildings and pose a serious threat to human life, particularly when structures have irregularities that affect their ability to resist seismic forces. The present study evaluates the performance of irregular reinforced concrete buildings under seismic loading. Nonlinear static analysis (pushover analysis) was conducted in accordance with the procedures prescribed by the Applied Technology Council (ATC-40) and the Federal Emergency Management Agency (FEMA-356). Models were developed and analysed using ETABS software, focusing on buildings with symmetric and asymmetric plan configurations along with vertical irregularities in stiffness, mass, and a combination of both. The study compares vertically regular and irregular buildings by introducing different types of irregularities, which in turn helps in understanding the behaviour of these buildings during earthquakes. The behaviour of these buildings under seismic loading was evaluated through key parameters including base shear, lateral displacement, bending moments, and storey shear forces at critical corners of the structural system. Results indicated that buildings with combined stiffness and mass irregularities exhibit the most critical seismic behaviour, showing higher displacements and lower base shear capacity than other configurations. To mitigate this critical seismic response, shear walls were provided at different locations for buildings with different structural configurations.
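As one concrete anchor for the FEMA-356 procedure cited above, the coefficient method estimates the roof target displacement for a pushover analysis; the sketch below implements the well-known FEMA-356 formula δt = C0·C1·C2·C3·Sa·(Te²/4π²)·g with purely hypothetical input values.

```python
import math

def target_displacement(c0, c1, c2, c3, sa_g, te):
    """FEMA-356 coefficient method for the target displacement.

    c0..c3 : modification factors (tabulated in FEMA-356)
    sa_g   : spectral acceleration at the effective period, in units of g
    te     : effective fundamental period (s)
    Returns the roof target displacement in metres.
    """
    g = 9.81  # m/s^2
    return c0 * c1 * c2 * c3 * sa_g * g * te**2 / (4.0 * math.pi**2)

# Hypothetical values for illustration only
print(f"{target_displacement(1.3, 1.0, 1.0, 1.0, sa_g=0.6, te=1.2):.3f} m")
```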
Quality management is a constant and significant concern in enterprises. Effectively determining correct solutions to complex problems helps avoid increased backtesting costs. This study proposes an intelligent quality control method for manufacturing processes based on a human–cyber–physical (HCP) knowledge graph: a systematic method that encompasses data management and classification based on HCP ternary data, HCP ontology construction, knowledge extraction for constructing an HCP knowledge graph, and the comprehensive application of quality control based on HCP knowledge. The proposed method implements case retrieval, automatic analysis, and assisted decision making based on an HCP knowledge graph, enabling quality monitoring, inspection, diagnosis, and maintenance strategies for quality control. In practical applications, the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of the shareability and reusability of the acquired knowledge. Moreover, the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making. The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process, and its effectiveness was verified by the deployed application system. Furthermore, the proposed method can be extended to other manufacturing process quality control tasks.
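To make the knowledge-graph idea more tangible, here is a minimal toy sketch of an HCP-style graph and a naive case-retrieval query using networkx; the node names, layers, and relation types are invented for illustration and do not reflect the paper's ontology.

```python
import networkx as nx

# Toy HCP knowledge graph: human, cyber, and physical entities as nodes,
# quality events and their causes as typed edges (illustrative schema only).
g = nx.MultiDiGraph()
g.add_node("operator_A", layer="human")
g.add_node("spindle_07", layer="physical")
g.add_node("vibration_sensor_3", layer="cyber")
g.add_node("defect:surface_roughness", layer="quality_event")
g.add_edge("spindle_07", "defect:surface_roughness", relation="causes")
g.add_edge("vibration_sensor_3", "spindle_07", relation="monitors")
g.add_edge("defect:surface_roughness", "operator_A", relation="assigned_to")

def retrieve_cases(graph, defect):
    """Simple case retrieval: walk in-edges to find suspected causes."""
    return [(u, d["relation"]) for u, _, d in graph.in_edges(defect, data=True)
            if d["relation"] == "causes"]

print(retrieve_cases(g, "defect:surface_roughness"))  # [('spindle_07', 'causes')]
```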
In this paper, the growing significance of data analysis in manufacturing environments is exemplified through a review of relevant literature and a generic framework to aid the adoption of regression-based supervised learning in manufacturing environments. To validate the practicality of the framework, several regression learning techniques are applied to an open-source multi-stage continuous-flow manufacturing process data set to typify the inference-driven decision-making that informs the selection of regression learning methods for adoption in real-world manufacturing environments. The investigated regression learning techniques are evaluated in terms of their training time (TT), prediction speed (PS), predictive accuracy (R-squared value, R²), and mean squared error (MSE). In terms of TT, k-NN20 (k-Nearest Neighbour with 20 neighbours) ranks first over 50 independent runs, with average and median values of 4.8 ms and 4.9 ms for the first stage and 4.2 ms and 4.3 ms for the second stage of the predictive modeling. In terms of PS, DTR (decision tree regressor) ranks first over 50 independent runs, with average and median values of 5.6784 × 10⁶ and 4.8691 × 10⁶ observations per second (ob/s) for the first stage and 4.9929 × 10⁶ and 5.8806 × 10⁶ ob/s for the second stage. In terms of R², BR (bagging regressor) ranks first for the first stage, with average and median values of 0.728 and 0.728 over 50 independent runs, and RFR (random forest regressor) ranks first for the second stage, with average and median values of 0.746 and 0.746.
In terms of MSE, BR ranks first for the first stage, with average and median values of 2.7 and 2.7 over 50 independent runs, and RFR ranks first for the second stage, with average and median values of 3.5 and 3.5. All methods are further ranked inferentially using the statistics of their performance metrics to identify the best method(s) for each stage of the predictive modeling, and a Wilcoxon rank-sum test is then used to statistically verify the inference-based rankings. DTR and k-NN20 are identified as the most suitable regression learning techniques for the multi-stage continuous-flow manufacturing process data used for experimentation.
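A minimal sketch of the kind of comparison reported above, using scikit-learn regressors, the four metrics (TT, PS, R², MSE), and a Wilcoxon rank-sum test from SciPy; it runs on synthetic data as a stand-in for the open-source data set and performs a single run rather than 50.

```python
import time
import numpy as np
from scipy.stats import ranksums
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the continuous-flow manufacturing data set
X, y = make_regression(n_samples=5000, n_features=20, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "k-NN20": KNeighborsRegressor(n_neighbors=20),
    "DTR": DecisionTreeRegressor(random_state=0),
    "BR": BaggingRegressor(random_state=0),
    "RFR": RandomForestRegressor(random_state=0),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    tt = time.perf_counter() - t0                    # training time (TT)
    t0 = time.perf_counter()
    y_hat = model.predict(X_te)
    ps = len(X_te) / (time.perf_counter() - t0)      # prediction speed (ob/s)
    print(name, tt, ps, r2_score(y_te, y_hat), mean_squared_error(y_te, y_hat))

# Wilcoxon rank-sum test on per-sample squared errors of two candidates
err_br = (models["BR"].predict(X_te) - y_te) ** 2
err_rfr = (models["RFR"].predict(X_te) - y_te) ** 2
print(ranksums(err_br, err_rfr))
```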
Engineering machinery, tools, and implements, Technological innovations. Automation
Fabrizia Cilento, Barbara Palmieri, Giovangiuseppe Giusto et al.
In the aerospace sector, structural and non-structural composite components are usually subjected to a wide range of environmental conditions. Among these, moisture can seriously degrade these materials' performance, reducing their mechanical, thermal, electrical, and physical properties as well as their service life. Lightweight protective barrier coatings capable of reducing the diffusion of gases and/or liquids into a material can improve its resistance in humid environments. In this work, nanolamellar nanocomposites characterized by a high in-plane orientation of nanoplatelets were employed as protective coatings for Kevlar sandwich panels, reproducing the construction of an engine nacelle. The effectiveness of the protection against water uptake of nanocomposites reinforced with graphite nanoplatelets (GNPs) at high filler contents (70, 80, and 90 wt%) was investigated using moisture uptake and Ground-Air-Ground (GAG) tests in an environmental chamber. GNP coatings effectively act as barriers by generating highly tortuous paths for molecule diffusion. Results showed a dependence of the absorption on the coating composition and inner structure. Films at 70 wt% GNPs offered the best protection against moisture uptake, delaying the phenomenon and reducing the absorption by 80% after 3 days and 35% after 41 days.
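The tortuous-path mechanism mentioned above is often quantified with the Nielsen model for aligned platelets; the sketch below evaluates the predicted relative permeability, with volume fractions and aspect ratio chosen purely for illustration (converting the 70 to 90 wt% GNP contents to volume fractions would require the component densities).

```python
def nielsen_relative_permeability(phi, aspect_ratio):
    """Nielsen model for platelet-filled barrier films.

    P_c / P_m = (1 - phi) / (1 + (alpha / 2) * phi)
    phi          : filler volume fraction of the platelets
    aspect_ratio : platelet width-to-thickness ratio (alpha)
    Assumes platelets fully aligned in-plane, perpendicular to diffusion.
    """
    return (1.0 - phi) / (1.0 + 0.5 * aspect_ratio * phi)

# Hypothetical GNP volume fractions and aspect ratio, for illustration only
for phi in (0.4, 0.5, 0.6):
    print(phi, round(nielsen_relative_permeability(phi, aspect_ratio=100), 4))
```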
Urban water bodies have a cooling effect and alter the local urban thermal environment. However, current research is unclear regarding the relationships between factors such as the spatial density, area proportion, and distribution pattern of water bodies and the cooling effect of water on the local thermal environment. To clarify these relationships, it is critical to quantify and evaluate the influence these factors have on the cooling effect of water in the urban landscape. Therefore, we analyzed the cooling effect of different water bodies on the local thermal environment at the microscale by comparing their area proportions and distribution patterns using numerical simulations. Furthermore, we analyzed the day–night variation in the cooling effect of urban water bodies with different areas and distribution patterns. We used the area proportion, separation index (SI), and landscape shape index (LSI) to indicate the layouts of water bodies. The results showed that the cooling effect of a water body was higher during the day than at night. These results also showed that area proportion and LSI were positively correlated with the water body’s cooling effect. However, the efficiency of the cooling effect gradually decreased with increasing area proportion. When the LSI increased, more areas within the region displayed larger cooling effect values, but the uniformity of the regional cooling diminished. Additional results showed that the cooling effect had no significant positive correlation with SI. A moderate SI could enhance the uniformity of the cooling effect in the region and link the cooling effect between water patches.
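For concreteness, the two layout descriptors used above can be computed as follows; the sketch uses the common FRAGSTATS-style definition LSI = 0.25·E/√A, which may differ in detail from the paper's formulation, and the pond layout is hypothetical.

```python
import math

def landscape_shape_index(total_edge, total_area):
    """LSI for a raster landscape (FRAGSTATS form): LSI = 0.25 * E / sqrt(A).

    total_edge : total perimeter of all water patches (m)
    total_area : total water area (m^2)
    LSI = 1 for a single square patch; it grows as shapes become more
    elongated or the water is split across more patches.
    """
    return 0.25 * total_edge / math.sqrt(total_area)

def area_proportion(water_area, region_area):
    """Share of the study region covered by water."""
    return water_area / region_area

# Hypothetical layout: four 50 m x 50 m ponds in a 500 m x 500 m block
print(landscape_shape_index(total_edge=4 * 200.0, total_area=4 * 2500.0))  # 2.0
print(area_proportion(4 * 2500.0, 500.0**2))                               # 0.04
```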
The use of deep machine learning methods for semantic classification of city mesh models is one of the current trends in geoscience development. Thanks to the thriving development of Convolutional Neural Networks (CNNs), it is now feasible to fully automate the construction of such 3D models by means of photogrammetric techniques and to supplement them with additional semantic information obtained by Artificial Intelligence (AI) algorithms. To guarantee the comprehensiveness of this information, it is essential to use an extensive range of 3D data, including oblique aerial imagery and aerial laser scanning (ALS). Such comprehensive 3D mesh models may later be implemented in many Digital Twin class solutions, additionally supported by modern GIS systems and their algorithms. To prove the validity of this thesis, the article showcases the results of research conducted using deep learning based solutions tested on two datasets: real-world data in the form of oblique aerial images and ALS point clouds acquired in Bordeaux, France using the novel Leica CityMapper-1 multi-sensor system, and the large-scale dataset from SUM: A Benchmark Dataset of Semantic Urban Meshes. Both sub-algorithms use CNNs at their core. To perform accurate classification of oblique aerial scenes, the PSP-Net architecture accelerated by transfer learning techniques was used. The second algorithm, intended for ALS point clouds, also utilizes a CNN, but in this case the implementation is based on a proprietary architecture. The results of the experiments demonstrate that using these two mutually complementary solutions to extract new semantic information for city mesh models in the proposed manner achieves classification performance competitive with state-of-the-art methods.
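A minimal sketch of a transfer-learning setup for the oblique-imagery branch, using the segmentation_models_pytorch implementation of PSP-Net as a stand-in; the encoder choice, class count, and freezing strategy are assumptions, not the paper's configuration.

```python
import torch
import segmentation_models_pytorch as smp

# PSP-Net with an ImageNet-pretrained encoder; class count and encoder
# choice are placeholders (e.g. six urban classes as in the SUM benchmark).
model = smp.PSPNet(
    encoder_name="resnet50",
    encoder_weights="imagenet",   # transfer learning: reuse ImageNet features
    in_channels=3,
    classes=6,
)

# Freeze the pretrained encoder and train only the decoder head at first
for p in model.encoder.parameters():
    p.requires_grad = False

x = torch.randn(1, 3, 512, 512)   # one oblique aerial tile
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # expected torch.Size([1, 6, 512, 512])
```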
Ahmed Sule, Zulkarnain Abdul Latiff, Mohammed Azman Abbas et al.
Global gas emissions have increased rapidly due to the higher combustion of fossil fuels arising from a growing world population, which has led to greater numbers of manufacturing industries and 'on-road vehicle (ORV)' users. Researchers have attributed global warming to gas emissions, which in turn drive climate change with devastating repercussions. Climate change is now a universal issue, and world leaders have been tasked with cutting down emissions of gases that directly affect the ecosystem and influence climate change. Biodiesel, an alternative to fossil fuels, faces many challenges; to tackle some of its limitations, researchers blend biodiesels with diesel fuel in various proportional ratios. This paper therefore concentrates on reviewing the recent use of additives, specifically nano-additives, to alter and boost desired characteristics in diesel-biodiesel fuel; it also examines the synthesis of nano-additives, the challenges involved, and the advances made. The paper further analyses, reviews, and compares recent results from nano-additive use with respect to emissions, fuel consumption, brake thermal efficiency, and engine power, establishes the merits and demerits of diverse nano-additives, and finally presents a conclusive opinion on nano-additive usage with diesel fuels in diesel engines.
Mechanical engineering and machinery, Mechanics of engineering. Applied mechanics
The thumb is the most important finger of the human hand and has a great influence on grasp manipulations. However, the extent to which joints other than the thumb joints affect the grasp, and thus which joints should be included in a prosthetic hand, remains an open issue. In this paper, we focus on the metacarpophalangeal joints of the four fingers other than the thumb, which can generate flexion/extension and abduction/adduction motions. The contribution of these joints to grasping was evaluated in four aspects: grasp size, grasp force, grasp quality, and grasp success rate. Six subjects participated in experiments on maximum grasp size and grasp force. The results show that abduction mobility of the metacarpophalangeal joints can increase the grasp size by 4.67 ± 1.93 mm and the grasp force by 5.27 ± 4.25 N. The grasp quality and success rate were then tested in a simulation platform and with a robotic hand, respectively. The results show that grasp quality improved by 76.7% in the simulated environment with abduction mobility compared to without it, whereas the grasp success rate improved by 68.3%. We believe that the results of this work can benefit the understanding of hand function and prosthetic hand design.
The use of fire safety engineering and performance-based techniques continues to grow in prominence as building designs become more ambitious and complex. National fire safety enforcement agencies are tasked with evaluating and approving the resulting fire strategies, which have likewise become more advanced and specialist. To assist with the evaluation of fire strategies, this paper introduces a methodology dedicated to sustainable building fire safety level simulations. The methodology derives from ideas originally introduced in British Standard Specification PAS 911 in 2007 and combines a visual representation of fire strategies with a semi-quantitative approach to allow for their evaluation. The concept can be applied to a range of industrial fire safety assessments and can be modified for specific needs relative to different industries.
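As a hedged illustration of what a semi-quantitative evaluation can look like, the sketch below aggregates expert scores for fire strategy components into a single weighted index; the components, weights, and scores are hypothetical and are not taken from PAS 911 or the proposed methodology.

```python
# Illustrative semi-quantitative scoring of a fire strategy, in the spirit of
# the PAS 911-derived approach described above; all values are hypothetical.
STRATEGY = {
    # component: (score 0-5 from expert judgement, relative weight)
    "means_of_escape":      (4, 0.30),
    "compartmentation":     (3, 0.25),
    "detection_and_alarm":  (5, 0.20),
    "suppression":          (2, 0.15),
    "firefighting_access":  (4, 0.10),
}

def strategy_score(components):
    """Weighted average of component scores, normalized to 0-1."""
    total = sum(score * weight for score, weight in components.values())
    max_possible = 5 * sum(weight for _, weight in components.values())
    return total / max_possible

print(f"fire safety level: {strategy_score(STRATEGY):.2f}")  # 0.73
```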
As time passes, some elements of structures are affected by local damage, though initially insignificant. With the development and expansion of the damage, these structures may be completely destroyed and impose high socioeconomic costs. This study investigated the damage identification problem in a column under axial load using modal data (frequencies and mode shapes) and wavelet transform analysis. The results showed that as the axial load increases in proportion to the critical buckling load, the frequency decreases in all modes for both healthy and damaged states. Additionally, under the same axial load, the frequency of the damaged sample is lower than that of the healthy one, and as the damage severity increases, the rate of frequency reduction increases. The wavelet transform input signal was defined on the basis of the angle between the healthy and damaged mode shapes. The output signal details showed perturbations at the damage locations, so that in all the studied modes and at different ratios of the critical load, the damage locations were identified with high accuracy. Moreover, the results showed that the perturbations at different damage locations are independent of each other and are affected only by the severity of the damage at that location; the axial load has no effect on the damage detection sensitivity of the wavelet transform algorithm. Furthermore, the sum of the wavelet coefficients of the damaged locations over several damage cases equals the wavelet coefficients of the damaged locations for the sum of the damage cases.
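A minimal sketch of the detection idea using PyWavelets: a signal built from the difference between healthy and damaged mode shapes is decomposed, and the detail coefficients spike near the local perturbation; the mode shape, damage model, and angle-based signal construction are simplified stand-ins for the paper's exact definitions.

```python
import numpy as np
import pywt

def damage_signal(phi_healthy, phi_damaged):
    """Difference between unit-normalized healthy and damaged mode shapes.

    A simple proxy for the angle-based signal described in the abstract;
    the paper's exact construction may differ.
    """
    a = phi_healthy / np.linalg.norm(phi_healthy)
    b = phi_damaged / np.linalg.norm(phi_damaged)
    return a - b

# Hypothetical first mode shape of a pinned column with a local stiffness loss
x = np.linspace(0.0, 1.0, 256)
phi_h = np.sin(np.pi * x)
phi_d = phi_h.copy()
phi_d[100:110] *= 0.97          # small local perturbation near x ~ 0.4

signal = damage_signal(phi_h, phi_d)
_, detail = pywt.dwt(signal, "db4")   # detail coefficients spike at the damage
print(int(np.argmax(np.abs(detail))) * 2, "~ damaged sample index")
```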