Ibrahima Ka, Ansoumana Noumou Djité, Seynabou Anna Chimére Diop
et al.
The global shift toward electric mobility represents a cornerstone of sustainable energy transitions; however, developing countries face distinct structural, economic, and infrastructural challenges that constrain their participation in this transformation. This paper examines the conditions, policy frameworks, and infrastructural requirements necessary for a successful electric vehicle (EV) transition in developing countries, with particular attention to the interplay between energy access, transportation policy, and grid readiness. Using a mixed-methods approach that integrates policy analysis, partial life-cycle assessment (LCA) incorporating the second-hand vehicle market, and case studies across sub-Saharan Africa and South Asia, the study evaluates the implications of limited electricity access, unreliable power grids, and the dominance of informal transport systems for EV adoption. The findings reveal that, while EVs offer significant potential for reducing emissions and improving urban air quality, their deployment depends critically on coordinated investments in renewable-based electricity generation, charging infrastructure, and supportive regulatory frameworks. Policy strategies such as fiscal incentives, public–private partnerships, and decentralized charging networks can accelerate uptake when aligned with energy-access goals. The paper argues that the EV transition in developing economies must be policy-driven and context-adapted, integrating mobility electrification with broader agendas of energy justice, rural electrification, and industrial development. Ultimately, the research provides a roadmap for aligning electric mobility policies with sustainable infrastructure development to ensure that the global EV revolution becomes both inclusive and equitable.
Mechanical engineering and machinery, Machine design and drawing
Cities are transforming from Industry 4.0, defined by automation and advanced technologies, to Industry 5.0, which emphasises human–machine collaboration and a human-centric approach. While Industry 4.0 prioritises technological efficiency, it often overlooks the human scale, whereas Industry 5.0 fosters inclusivity, resilience and adaptability, making it essential for agile urban development. This study aims to provide guidelines for agile cities, using a qualitative approach that combines an online survey of diverse stakeholders (i.e., urban planners, policymakers, and citizens) with a review of literature and reports. Drawing on Egypt’s experience transitioning to Industry 4.0, our findings offer insights and directions relevant to the shift toward Industry 5.0. The results underscore the role of innovation and excellence in education, driven by integrated efforts in governance, curriculum design, professional development, inclusivity, emerging technologies, and entrepreneurship. This study contributes by investigating stakeholders’ insights from Egyptian experiences to inform the development of agile cities.
Pruthwiraj Santhosh, Darrell Robinette, Daniel Knopp
et al.
This paper presents an optimized vehicular reordering methodology designed to minimize energy consumption within heterogeneous cohorts operating at constant velocity on limited-access highways. The approach addresses the challenge of optimizing vehicle sequencing by considering both aerodynamic drag reduction benefits and the energy costs of reconfiguring a cohort from a stochastic initial state. This study provides empirical validation through on-road vehicle tests, demonstrating significant energy savings, achieving up to 10% reduction in axle energy for optimally configured cohorts compared to independent operation. A System of Systems (SoS) simulation environment, integrating micro-traffic, validated powertrain, and aerodynamic drag reduction models, was developed to simulate complex reconfiguration maneuvers and quantify associated energy expenditures. The methodology examines how powertrain characteristics influence optimal arrangements and quantifies the impact of individual vehicle placement on overall cohort efficiency. Findings indicate that while reconfiguration incurs a minor energy cost (typically <0.45% of total trip energy for a 20 km trip), the net energy savings over relevant travel distances are substantial. The study also highlights the sensitivity of drag reduction estimators for heterogeneous platoons and the current limitations in available models. Ultimately, a predictive optimization framework is proposed that leverages connectivity-enabled information to select the most energy-efficient cohort configuration, considering factors such as distance to destination and reconfiguration energy, thereby offering a practical strategy for enhancing fuel economy in future connected and automated transportation systems.
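The selection logic the abstract describes — weigh each candidate ordering's drag savings against the energy needed to reconfigure into it from the stochastic initial state — can be sketched in miniature. All coefficients, energy values, and the flat per-move reconfiguration cost below are illustrative placeholders, not the paper's validated powertrain or drag models:

```python
from itertools import permutations

def trip_energy(order, initial, base_kwh_per_km, trip_km, reconfig_kwh_per_move=0.5):
    """Net axle energy (kWh) for one candidate cohort ordering.

    All coefficients here are illustrative placeholders only.
    """
    # Illustrative position factors: the leader saves little, trailers save more.
    position_saving = [0.01, 0.08, 0.10, 0.10]
    # Illustrative per-class sensitivity to drafting.
    class_gain = {"sedan": 1.0, "suv": 1.2, "truck": 1.5}
    drag_energy = sum(
        base_kwh_per_km[vid] * trip_km * (1.0 - position_saving[i] * class_gain[cls])
        for i, (vid, cls) in enumerate(order)
    )
    # Energy spent reordering from the stochastic initial state:
    # charge a flat cost per vehicle that must change position.
    moves = sum(1 for a, b in zip(order, initial) if a != b)
    return drag_energy + reconfig_kwh_per_move * moves

def best_order(initial, base_kwh_per_km, trip_km):
    """Exhaustively pick the ordering minimizing drag energy plus reconfiguration cost."""
    return min(permutations(initial),
               key=lambda o: trip_energy(o, initial, base_kwh_per_km, trip_km))
```

For short trips the reconfiguration cost can outweigh the drag savings, so the chosen order depends on the distance to destination — the same trade-off the paper's predictive framework exploits.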
Mechanical engineering and machinery, Machine design and drawing
Alejandro Velez-Arce, Jesus Caraballo, Marinka Zitnik
Existing biomedical benchmarks do not provide end-to-end infrastructure for training, evaluation, and inference of models that integrate multimodal biological data and a broad range of machine learning tasks in therapeutics. We present PyTDC, an open-source machine-learning platform providing streamlined training, evaluation, and inference software for multimodal biological AI models. PyTDC unifies distributed, heterogeneous, continuously updated data sources and model weights and standardizes benchmarking and inference endpoints. This paper discusses the components of PyTDC's architecture and, to our knowledge, the first-of-its-kind case study on the introduced single-cell drug-target nomination ML task. We find state-of-the-art methods in graph representation learning and domain-specific methods from graph theory perform poorly on this task. Though we find a context-aware geometric deep learning method that outperforms the evaluated SoTA and domain-specific baseline methods, the model is unable to generalize to unseen cell types or incorporate additional modalities, highlighting PyTDC's capacity to facilitate an exciting avenue of research developing multimodal, context-aware, foundation models for open problems in biomedical AI.
Ahmet Bilal Arıkan, Şener Özönder, Mustafa Taha Koçyiğit
et al.
We present an integrated machine learning framework that transforms how manufacturing cost is estimated from 2D engineering drawings. Unlike traditional quotation workflows that require labor-intensive process planning, our approach extracts about 200 geometric and statistical descriptors directly from 13,684 DWG drawings of automotive suspension and steering parts spanning 24 product groups. Gradient-boosted decision tree models (XGBoost, CatBoost, LightGBM) trained on these features achieve nearly 10% mean absolute percentage error across groups, demonstrating robust scalability beyond part-specific heuristics. By coupling cost prediction with explainability tools such as SHAP, the framework identifies geometric design drivers including rotated dimension maxima, arc statistics and divergence metrics, offering actionable insights for cost-aware design. This end-to-end CAD-to-cost pipeline shortens quotation lead times, ensures consistent and transparent cost assessments across part families and provides a deployable pathway toward real-time, ERP-integrated decision support in Industry 4.0 manufacturing environments.
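The headline metric — mean absolute percentage error aggregated per product group — is simple to state. A minimal sketch (group names and cost values hypothetical):

```python
def mape(actual, predicted):
    """Mean absolute percentage error (%), the metric reported across part groups."""
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def mape_by_group(groups):
    """groups: {product_group: (actual_costs, predicted_costs)} -> per-group MAPE."""
    return {g: mape(a, p) for g, (a, p) in groups.items()}
```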
The purpose of this paper is twofold. On a technical side, we propose an extension of the Hausdorff distance from metric spaces to spaces equipped with asymmetric distance measures. Specifically, we focus on the family of Bregman divergences, which includes the popular Kullback--Leibler divergence (also known as relative entropy). As a proof of concept, we use the resulting Bregman--Hausdorff divergence to compare two collections of probabilistic predictions produced by different machine learning models trained using the relative entropy loss. The algorithms we propose are surprisingly efficient even for large inputs with hundreds of dimensions. In addition to the introduction of this technical concept, we provide a survey. It outlines the basics of Bregman geometry, as well as computational geometry algorithms. We focus on algorithms that are compatible with this geometry and are relevant for machine learning.
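For readers unfamiliar with the construction, a naive version of the directed divergence is easy to state: replace the metric in the directed Hausdorff distance with a Bregman divergence such as KL. This brute-force O(|A||B|) sketch is for illustration only; the paper's algorithms are far more efficient:

```python
import math

def kl(p, q):
    """Kullback–Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def directed_bregman_hausdorff(A, B, div=kl):
    """max over a in A of min over b in B of div(a, b): the divergence from the
    'worst matched' prediction in A to its best match in B."""
    return max(min(div(a, b) for b in B) for a in A)
```

Because KL is asymmetric, the two directed quantities between A and B generally differ — which is exactly why the construction extends, rather than instantiates, the metric Hausdorff distance.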
Drift chambers have long been central to collider tracking, but future machines like a Higgs factory motivate higher granularity and cluster counting for particle ID, posing new data processing challenges. Machine learning (ML) at the "edge", or in cell-level readout, can dramatically reduce the off-detector data rate for high-granularity drift chambers by performing cluster counting at-source. We present machine learning algorithms for cluster counting in real-time readout of future drift chambers. These algorithms outperform traditional derivative-based techniques based on achievable pion-kaon separation. When synthesized to FPGA resources, they can achieve latencies consistent with real-time operation in a future Higgs factory scenario, thus advancing both R&D for future collider detectors as well as hardware-based ML for edge applications in high energy physics.
Tian Zheng, Subashree Venkatasubramanian, Shuolin Li
et al.
Machine learning has been increasingly applied in climate modeling for accelerating system emulation, data-driven parameter inference, forecasting, and knowledge discovery, addressing challenges such as physical consistency, multi-scale coupling, data sparsity, robust generalization, and integration with scientific workflows. This paper analyzes a series of case studies from applied machine learning research in climate modeling, with a focus on design choices and workflow structure. Rather than reviewing technical details, we aim to synthesize workflow design patterns across diverse projects in ML-enabled climate modeling: from surrogate modeling, ML parameterization, probabilistic programming, to simulation-based inference, and physics-informed transfer learning. We unpack how these workflows are grounded in physical knowledge, informed by simulation data, and designed to integrate observations. We aim to offer a framework for ensuring rigor in scientific machine learning through more transparent model development, critical evaluation, informed adaptation, and reproducibility, and to contribute to lowering the barrier for interdisciplinary collaboration at the interface of data science and climate modeling.
Gökhan Özbulak, Oscar Jimenez-del-Toro, Maíra Fatoretto
et al.
The evaluation of fairness models in Machine Learning involves complex challenges, such as defining appropriate metrics and balancing trade-offs between utility and fairness, and gaps remain at this stage. This work presents a novel multi-objective evaluation framework that enables the analysis of utility-fairness trade-offs in Machine Learning systems. The framework was developed using criteria from Multi-Objective Optimization that collect comprehensive information regarding this complex evaluation task. The assessment of multiple Machine Learning systems is summarized, both quantitatively and qualitatively, in a straightforward manner through a radar chart and a measurement table encompassing various aspects such as convergence, system capacity, and diversity. The framework's compact representation of performance facilitates the comparative analysis of different Machine Learning strategies for decision-makers in real-world applications with single or multiple fairness requirements. In particular, this study focuses on the medical imaging domain, where fairness considerations are crucial due to the potential impact of biased diagnostic systems on patient outcomes. The proposed framework enables a systematic evaluation of multiple fairness constraints, helping to identify and mitigate disparities among demographic groups while maintaining diagnostic performance. The framework is model-agnostic and flexible enough to be adapted to any kind of Machine Learning system, black- or white-box, and to any kind and quantity of evaluation metrics, including multidimensional fairness criteria. The functionality and effectiveness of the proposed framework are shown with different simulations and an empirical study conducted on three real-world medical imaging datasets with various Machine Learning systems. Our evaluation framework is publicly available at https://pypi.org/project/fairical.
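The utility–fairness trade-off analysis rests on standard Multi-Objective Optimization notions such as Pareto dominance, which can be sketched as follows (system names and scores below are hypothetical; the actual framework is the fairical package cited above):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized):
    a is no worse on every objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(systems):
    """Keep the systems whose (utility, fairness) scores are not dominated
    by any other evaluated system."""
    return {name: obj for name, obj in systems.items()
            if not any(dominates(other, obj) for other in systems.values() if other != obj)}
```

Systems on the front represent genuinely different utility–fairness compromises; a decision-maker then selects among them according to the application's fairness requirements.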
Generative Artificial Intelligence (Generative AI) is a collection of AI technologies that can generate new information such as texts and images. With its strong capabilities, Generative AI has been actively studied in creative design processes. However, limited studies have explored the roles of humans and Generative AI in conceptual design processes, leaving a gap for human-AI collaboration investigation. To address this gap, this study uncovers the contributions of different Generative AI technologies in assisting humans in the conceptual design process. Novice designers completed two design tasks with or without the assistance of Generative AI. Results revealed that Generative AI primarily assists humans in problem definition and idea generation stages, while idea selection and evaluation remain predominantly human-led. Additionally, with Generative AI assistance, the idea selection and evaluation stages were further enhanced. Based on the findings, we discuss the role of Generative AI in human-AI collaboration and implications for enhancing future conceptual design support with Generative AI assistance.
Lot sizing is a prevalent issue within manufacturing companies, where determining the optimal procurement and production lot sizes is crucial for maximizing profits. This problem has become more complex, given that numerous suppliers can provide the same raw materials with different prices and quantity discount schemes. A company should also determine optimal carriers to deliver materials to the company’s warehouse. In a manufacturing process, the company should determine the optimal production lot size and its schedules. In this paper, a model was developed to simultaneously solve procurement and production lot sizing, as well as production scheduling problems. The model encompasses multiple suppliers offering quantity discounts, aiming to maximize company profit by accounting for various costs, including procurement, production, inventory, and quality costs. A case study is taken from a company producing noodles and its related derivative products to illustrate the application of the model. Based on the optimization results, the company obtained a total profit of IDR. 14,656,550,000 or $950,921.30 (the exchange rate of $1 at IDR. 15,413). The sensitivity analysis results show that the objective function is sensitive to changes in the purchase cost, sales revenue, and discount rate parameters. The decision variables for accepted product demand, product quantity, and the starting and completion time of product family are only sensitive to changes in certain parameters. Meanwhile, the decision variables for product inventory, product backlog, raw material inventory, and purchased raw material quantity are sensitive to the changes in all the analyzed parameters.
Machine design and drawing, Engineering machinery, tools, and implements
The paper deals with the concept of centralized demand forecasting and logistical coordination in distribution networks. The aim of the paper is to relate the results provided by the forecasting tools to the basic aspects of logistical coordination. The case of 29 distribution networks in which a logistics operator (3PL) operates and provides contract logistics services to a manufacturing company is analysed. The paper partially confirms the hypothesis that forecasts based on machine learning algorithms and artificial neural networks perform better for demand planning by the logistics operator for the manufacturer within the framework of logistics coordination in the distribution network. These algorithms perform better for networks with high specificity of flows and for food networks. Traditional algorithms, on the other hand, perform better at creating forecasts for more standard distribution networks. Additionally, the second hypothesis regarding the positive influence of modern technological solutions (such as the use of cloud technologies, EDI and flow tracking standards) was confirmed. Finally, a number of factors that did not have a direct impact on forecasting errors were detailed.
Machine design and drawing, Engineering machinery, tools, and implements
Shigeki Yumoto, Takumi Kitsukawa, Alessandro Moro
et al.
In recent years, the number of pipes that have exceeded their service life has increased. For this reason, earthworm-type robots equipped with cameras have been developed to perform regular inspections of sewer pipes. However, inspection methods have not yet been established. This paper proposes a method for anomaly detection from images in pipes using a Generative Adversarial Network (GAN). A model that combines f-AnoGAN and Lightweight GAN is used to detect anomalies by taking the difference between input images and generated images. Since the GANs are trained only with non-defective images, they are able to convert an image containing defects into one without them. Subtraction images are used to estimate the location of anomalies. Experiments were conducted using actual images of cast iron pipes to confirm the effectiveness of the proposed method. It was also validated using Sewer-ML, a public dataset.
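The detection step itself — subtract the GAN's defect-free reconstruction from the input image and threshold the residual — can be sketched on plain grayscale arrays. The threshold and toy images below are illustrative; the paper operates on real pipe imagery with trained generators:

```python
def anomaly_mask(observed, reconstructed, threshold=30):
    """Flag pixels where the input differs strongly from the GAN's defect-free
    reconstruction (images as nested lists of grayscale values)."""
    return [[abs(o - r) > threshold for o, r in zip(row_o, row_r)]
            for row_o, row_r in zip(observed, reconstructed)]

def anomaly_score(mask):
    """Fraction of flagged pixels; a simple image-level anomaly score."""
    flat = [p for row in mask for p in row]
    return sum(flat) / len(flat)
```

Because the generator has only ever seen non-defective pipes, defects survive in the residual while normal texture cancels out, which is what makes the subtraction image a usable localization map.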
Rahul Suryakant Sakhare, Yunchang Zhang, Howell Li
et al.
With the emergence of connected vehicle data and high-resolution weather data, there is an opportunity to develop models with high spatial-temporal fidelity to characterize the impact of weather on interstate traffic speeds. In this study, 275,422 trip records from 41,234 unique journeys on 42 rainy days in 2021 and 2022 were obtained. These trip records are categorized as no rain, slight rain, moderate rain, heavy rain, and very heavy rain periods using the precipitation rate from NOAA High-Resolution Rapid-Refresh (HRRR) data. It was observed that average speeds decreased by approximately 8.4% during conditions classified as very heavy rain compared to no rain. Similarly, the interquartile range of traffic speeds increased from 8.34 mph to 12.24 mph as the rain intensity increased. This study also developed a disaggregate approach using logit models to characterize the relationship between weather-related variables (precipitation rate, visibility, temperature, wind, and day or night) and interstate speed reductions. Estimation results reveal that the odds of reducing speed are 5.8% higher for drivers if the precipitation rate is increased by 1 mm/h. Headwind was found to have a significant positive impact, though only up to a 10% speed reduction, and speed reductions are greater during nighttime conditions than daytime conditions by a factor of 1.68. The additional explanatory variables shed light on drivers’ speed selection in adverse weather environments, providing more information than the single precipitation intensity measure. Results from this study will be particularly helpful for agencies and automobile manufacturers to provide advance warnings to drivers and establish thresholds for autonomous vehicle control.
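The reported precipitation effect translates directly into the underlying logit coefficient: a 5.8% increase in the odds of a speed reduction per 1 mm/h corresponds to an odds ratio of 1.058, i.e. a coefficient of ln(1.058), and the effect compounds multiplicatively with the precipitation increment:

```python
import math

# Reported effect: the odds of reducing speed are 5.8% higher per extra 1 mm/h
# of rain, i.e. an odds ratio of 1.058, implying a logit coefficient ln(1.058).
BETA_PRECIP = math.log(1.058)

def precip_odds_ratio(delta_mm_per_h, beta=BETA_PRECIP):
    """Multiplicative change in the odds of a speed reduction for a given
    increase in precipitation rate under a fitted logit model."""
    return math.exp(beta * delta_mm_per_h)
```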
Mechanical engineering and machinery, Machine design and drawing
Fabio Romagnuolo, Stefano Avolio, Gabriele Fichera
et al.
In the world of motorsports engineering, improving brake performance is a crucial goal. One significant factor that affects this performance is the increase in brake disc temperature due to reduced cooling airflow, a phenomenon called “blanking”. This temperature increase also impacts the rim and the air inside the tire, causing changes in tire temperature and pressure, which affect the vehicle’s performance. Properly adjusting the brake blanking can be essential to keep the tire running at the right temperature, maximizing performance on track. To address this complex problem, this study describes the brake disc cooling problem and uses it as an opportunity to introduce a new variable for optimizing vehicle performance. Adjusting the thermal evolution of the brake disc through blanking changes a large share of the heat that reaches the tire. Combining an existing brake model from the literature with a tire thermal model in a co-simulation platform showed that the two models can work together to predict the optimal blanking value to adopt before going on track, thus saving time and costs.
Mechanical engineering and machinery, Machine design and drawing
Selejdak Jacek, Bobalo Taras, Blikharskyy Yaroslav
et al.
Most modern computer software for the calculation of building structures is based on mathematical dependencies which make it possible to analyse the rather complex stress-strain state of structures subjected to loading. As a rule, the calculation is based on the finite element method and is reduced to the calculation of deformations arising in structures due to the action of external forces, using real strain diagrams of materials, i.e. σ-ε diagrams for concrete and reinforcement. Modern normative regulations for the calculation of reinforced concrete structures are also based on the deformation model using material deformation diagrams that are as close to the real ones as possible. Therefore, this study aimed to investigate in more detail the stress-strain state and the physical essence of the processes occurring in reinforced concrete structures with combined reinforcement according to the mathematical approaches and regulations of DBN B.2.6-98:2009 and DSTU B V.2.6-156:2010. Namely, the research analyses the combined reinforcement of S245 steel tapes and A1000 rebar, which is used in the production of reinforced concrete elements. The results of mathematical modelling were compared with the calculation results according to DBN B.2.6-98:2009 and DSTU B V.2.6-156:2010, as well as with field experimental data. Therefore, a conclusion could be drawn as to whether it is possible to use this technique with sufficient accuracy to calculate reinforced concrete structures with combined reinforcement.
Machine design and drawing, Engineering machinery, tools, and implements
[Objective] The rural revitalization strategy presents novel requisites for the extension of agricultural technology. However, the conventional method encounters a contradiction between supply and demand. Therefore, there is a need for further innovation in the supply form of agricultural knowledge. Recent advancements in artificial intelligence technologies, such as deep learning and large-scale neural networks, particularly the advent of large language models (LLMs), render anthropomorphic and intelligent agricultural technology extension feasible. Taking fruit and vegetable agricultural technology knowledge services as the demand orientation, an intelligent agricultural technology question-answering system was built in this research based on an LLM, providing agricultural technology extension services, including guidance on new agricultural knowledge and question-and-answer sessions. This facilitates farmers in accessing high-quality agricultural knowledge at their convenience. [Methods] Through an analysis of the demands of strawberry farmers, the agricultural technology knowledge related to strawberry cultivation was categorized into six themes: basic production knowledge, variety screening, interplanting knowledge, pest diagnosis and control, disease diagnosis and control, and drug damage diagnosis and control. Considering the current situation of agricultural technology, two primary tasks were formulated: named entity recognition and question answering related to agricultural knowledge. A training corpus comprising entity type annotations and question-answer pairs was constructed using a combination of automatic machine annotation and manual annotation, ensuring a small yet high-quality sample.
After comparing four existing Large Language Models (Baichuan2-13B-Chat, ChatGLM2-6B, Llama 2-13B-Chat, and ChatGPT), the model exhibiting the best performance was chosen as the base LLM to develop the intelligent question-answering system for agricultural technology knowledge. Utilizing a high-quality corpus, a pre-trained Large Language Model, and fine-tuning methods, a deep neural network with semantic analysis, context association, and content generation capabilities was trained. This model served as a Large Language Model for named entity recognition and question answering of agricultural knowledge, adaptable to various downstream tasks. For the task of named entity recognition, the LoRA fine-tuning method was employed, fine-tuning only essential parameters to expedite model training and enhance performance. Regarding the question-answering task, the Prompt-tuning method was used to fine-tune the Large Language Model, where adjustments were made based on the generated content of the model, achieving iterative optimization. Model performance optimization was conducted from two perspectives: data and model design. In terms of data, redundant or unclear data was manually removed from the labeled corpus. In terms of the model, a strategy based on retrieval-augmented generation technology was employed to deepen the understanding of agricultural knowledge in the Large Language Model and maintain real-time synchronization of knowledge, alleviating the problem of LLM hallucination. Drawing upon the constructed Large Language Model, an intelligent question-answering system was developed for agricultural technology knowledge.
This system demonstrates the capability to generate high-precision and unambiguous answers, while also supporting the functionalities of multi-round question answering and retrieval of information sources. [Results and Discussions] Accuracy rate and recall rate served as indicators to evaluate the named entity recognition task performance of the Large Language Models. The results indicated that the performance of Large Language Models was closely related to factors such as model structure, the scale of the labeled corpus, and the number of entity types. After fine-tuning, the ChatGLM Large Language Model demonstrated the highest accuracy and recall rate. With the same number of entity types, a higher number of annotated corpora resulted in a higher accuracy rate. Fine-tuning had different effects on different models, and overall, it improved the average accuracy of all models under different knowledge topics, with ChatGLM, Llama, and Baichuan values all surpassing 85%. The average recall rate saw a limited increase, and in some cases, it was even lower than the values before fine-tuning. Assessing the question-answering task of Large Language Models using hallucination rate and semantic similarity as indicators, data optimization and retrieval-augmented generation techniques effectively reduced the hallucination rate by 10% to 40% and improved semantic similarity by more than 15%. These optimizations significantly enhanced the generated content of the models in terms of correctness, logic, and comprehensiveness. [Conclusion] The pre-trained Large Language Model of ChatGLM exhibited superior performance in named entity recognition and question answering tasks in the agricultural field. Fine-tuning pre-trained Large Language Models for downstream tasks and optimizing based on retrieval-augmented generation technology mitigated the problem of language hallucination, markedly improving model performance.
Large Language Model technology has the potential to innovate agricultural technology knowledge service modes and optimize agricultural knowledge extension. This can effectively reduce the time cost for farmers to obtain high-quality and effective knowledge, guiding more farmers towards agricultural technology innovation and transformation. However, due to challenges such as unstable performance, further research is needed to explore optimization methods for Large Language Models and their application in specific scenarios.
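The retrieval-augmented strategy described above reduces to a simple pattern: retrieve the knowledge-base passages most relevant to the farmer's question and prepend them to the prompt so the LLM answers from synchronized, up-to-date knowledge. In this toy sketch, keyword overlap is only a stand-in for the system's actual retriever, and the prompt wording is hypothetical:

```python
def build_rag_prompt(question, passages, top_k=2):
    """Rank knowledge-base passages by keyword overlap with the question and
    prepend the best ones as grounding context (illustrative retriever only)."""
    q_terms = set(question.lower().replace("?", "").split())
    ranked = sorted(passages,
                    key=lambda p: -len(q_terms & set(p.lower().split())))
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Grounding the generation in retrieved passages is what lets the system keep knowledge current without retraining and is the mechanism credited above with reducing the hallucination rate.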
The initial version of the Highway Safety Manual (HSM) provides several safety performance functions (SPFs) that can be used to predict collisions on a roadway network. The calibration of the HSM SPFs for Fatal and Injury (FI), Property Damage Only (PDO), and Total crashes for Urban Four-lane Divided Roadway Segments (U4D) in Muscat, Sultanate of Oman, and the development of new SPFs were investigated in this paper. The HSM SPFs were calibrated first with the HSM methodology, and then new forms of specific SPFs were evaluated for Muscat urban roads to determine the best model using the Poisson-Gamma regression technique. The results of this study show that the HSM calibrated SPFs provide the best fit of the data used in this study and would be the best SPFs for predicting collisions in the City of Muscat. The developed collision model describes the mean crash frequency as a function of the natural logarithm of the annual average daily traffic, segment length, and speed limit. Overall, this study provides an important foundation for the implementation of HSM methods in Muscat city, and it may aid in making SPFs established in more developed countries adaptable for use in less developed countries.
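The HSM calibration procedure referenced here amounts to a ratio of observed to SPF-predicted crashes over the local segments; a factor above 1 means the local network experiences more crashes than the uncalibrated SPF predicts. A sketch with an illustrative SPF form (the coefficients are placeholders, not the published U4D values):

```python
import math

def spf_predicted(aadt, length_mi, a=-15.22, b=1.68):
    """Generic HSM-style segment SPF: N = exp(a) * AADT^b * L.
    Coefficients here are illustrative, not the published U4D values."""
    return math.exp(a) * aadt ** b * length_mi

def calibration_factor(total_observed, segments):
    """HSM local calibration: C = sum(observed crashes) / sum(predicted crashes)
    over the calibration segments, given as (AADT, length_mi) pairs."""
    total_predicted = sum(spf_predicted(aadt, length) for aadt, length in segments)
    return total_observed / total_predicted
```

Multiplying the base SPF prediction by C then adapts a function estimated elsewhere to local Muscat conditions, which is the transferability mechanism the study evaluates.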
Mechanical engineering and machinery, Machine design and drawing
This study explores the long-term energy use implications of electrification, automation and sharing of road vehicles in British Columbia, Canada. Energy use is first analyzed for the years 1990–2016 for forward forecasting, and hypothetical scenarios ranging from conservative to disruptive, incorporating various effects of road vehicle electrification, sharing and automation, as well as influences of other technology disruptions such as online shopping and e-learning, are presented and used to project road transportation energy use in B.C. to 2060. Transportation energy use projections are compared to those of the Canadian Energy Regulator (CER). When considering only the effect of vehicle electrification, the scenarios show higher energy savings compared to CER’s scenarios. The combined impact of vehicle electrification and automation leads to decreased energy use to 2060 for all scenarios considered. The energy savings for all scenarios, except for the conservative one, are higher than CER’s projections. When the effects of vehicle electrification, automation and sharing are merged, all scenarios yield energy savings beyond the CER projections. Inclusion of other technology disruptions and the effects of pandemics like COVID-19 reduce transportation demand and provide further energy savings. The business-as-usual (BAU) scenario given in this study shows energy use decreases compared to 2016 of 26.3%, 49%, 62.24%, and 72.1% for the years 2030, 2040, 2050, and 2060 respectively.
Mechanical engineering and machinery, Machine design and drawing