Kampala faces increasing congestion, air pollution, and dependence on fossil fuels, driven by widespread reliance on diesel minibuses and motorcycle taxis. Existing models (KAMPALA-TIMES, KLAP-TIMES, and GKMA-TIMES–CGE) show strong potential for electrified mass transit to reduce emissions, change commuter behavior, and boost macroeconomic welfare. However, these studies assume electric-bus reliability without examining the mechanical conditions needed to achieve their projected outcomes. This study combines system-level modeling insights with vehicle-level engineering analysis to identify key mechanical factors necessary for the successful deployment of electric Bus Rapid Transit (e-BRT) in Kampala. It considers drivetrain torque for steep gradients, battery thermal management in hot equatorial climates, and regenerative braking efficiency in traffic congestion, alongside policy, infrastructure, and grid readiness. Mechanical performance links modeling to implementation: adequate torque, thermal stability, and regenerative braking efficiency directly affect service reliability, headway adherence, fleet uptime, and lifecycle costs. These operational factors influence commuter mode choices, the realism of bottom-up pathways, and the broader economic benefits predicted in top-down scenarios. Engineering reliability must be a core policy consideration, guiding procurement standards, charging infrastructure design, and multisector coordination among KCCA, MoWT, MEMD, and Uganda's power utilities. Incorporating mechanical parameters into future bottom-up or hybrid models, combined with digital-twin testing and degradation-aware analytics, will enable Kampala to serve as a living laboratory for low-carbon mobility transitions across Sub-Saharan Africa.
The Rust programming language presents a steep learning curve and significant coding challenges, making the automation of issue resolution essential for its broader adoption. Recently, LLM-powered code agents have shown remarkable success in resolving complex software engineering tasks, yet their application to Rust has been limited by the absence of a large-scale, repository-level benchmark. To bridge this gap, we introduce Rust-SWE-bench, a benchmark comprising 500 real-world, repository-level software engineering tasks from 34 diverse and popular Rust repositories. We then perform a comprehensive study on Rust-SWE-bench with four representative agents and four state-of-the-art LLMs to establish a foundational understanding of their capabilities and limitations in the Rust ecosystem. Our extensive study reveals that while ReAct-style agents are promising, resolving up to 21.2% of issues, they are limited by two primary challenges: comprehending repository-wide code structure and complying with Rust's strict type and trait semantics. We also find that issue reproduction is critical for task resolution. Inspired by these findings, we propose RUSTFORGER, a novel agentic approach that integrates an automated test environment setup with a Rust metaprogramming-driven dynamic tracing strategy to facilitate reliable issue reproduction and dynamic analysis. The evaluation shows that RUSTFORGER using Claude 3.7 Sonnet significantly outperforms all baselines, resolving 28.6% of tasks on Rust-SWE-bench, a 34.9% improvement over the strongest baseline, and, in aggregate, uniquely solves 46 tasks that no other agent could solve across all adopted advanced LLMs.
José Peixoto, Alexis Gonzalez, Janki Bhimani
et al.
Programmable caching engines like CacheLib are widely used in production systems to support diverse workloads in multi-tenant environments. CacheLib's design focuses on performance, portability, and configurability, allowing applications to inherit caching improvements with minimal implementation effort. However, its behavior under dynamic and evolving workloads remains largely unexplored. This paper presents an empirical study of CacheLib in multi-tenant settings under dynamic and volatile environments. Our evaluation across multiple CacheLib configurations reveals several limitations that hinder its effectiveness in such environments, including rigid configurations, limited runtime adaptability, and a lack of quality-of-service support and coordination, which lead to suboptimal performance, inefficient memory usage, and tenant starvation. Based on these findings, we outline future research directions to improve the adaptability, fairness, and programmability of future caching engines.
[Objective] By establishing a numerical seepage analysis model that matches real drainage systems and introducing the concept of a 'virtual permeability coefficient' for the secondary lining, the objective is to examine the correlation between numerical methods and theoretical formulas, with the expectation of leveraging the efficiency and practicality of theoretical formulas in predicting external water pressure. [Method] Based on the principle of equivalent stable drainage volume in underwater tunnels, the concept of a 'virtual permeability coefficient' for the secondary lining is introduced. On this basis, key factors, including the spacing of circumferential drainage blind pipes, the thickness of geotextiles, and their permeability coefficients, are selected as the primary research factors. By adjusting these factors, multiple numerical seepage analysis models consistent with real drainage systems are established. [Result & Conclusion] The actual external water pressure acting on the secondary lining exhibits significant spatial distribution characteristics. Longitudinally, the external water pressure fluctuates periodically with the spacing of the circumferential drainage blind pipes. Circumferentially, the closer the position is to the longitudinal drainage blind pipe, the lower the external water pressure; the maximum circumferential water pressure occurs at the arch vault, followed by the inverted arch, with the smallest pressure on the sidewalls. The reduction coefficients of external water pressure calculated with theoretical formulas are generally smaller than those derived from numerical methods; the stronger the drainage capacity of the design parameters, the smaller the difference between the two results. The reduction coefficient consistently decreases from the vault to the invert to the sidewalls.
When applying theoretical formulas directly in quantitative engineering design, a comprehensive correction factor greater than 1.0 must be introduced to ensure engineering safety. The value of the comprehensive correction factor should be determined by the specific structural location, with zones divided at the sidewalls: a range of 1.48-1.97 is recommended for the upper structure, and 1.21-1.39 for the lower structure.
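As a hypothetical illustration of how such a correction factor enters a design calculation, the sketch below multiplies a theoretical reduction coefficient and a comprehensive correction factor onto the full hydrostatic pressure. The hydrostatic formula, variable names, and numeric values are assumptions for illustration; they are not equations or values taken from the study.

```python
# Hypothetical illustration of applying the comprehensive correction factor.
# The hydrostatic formula, variable names, and numbers are illustrative
# assumptions, not values or equations taken from the study.

RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def design_external_pressure(head_m, reduction_coeff, correction_factor):
    """Corrected design pressure (Pa) on the secondary lining.

    head_m            -- water head above the lining location, m
    reduction_coeff   -- reduction coefficient from the theoretical formula
    correction_factor -- comprehensive correction factor (> 1.0 for safety)
    """
    hydrostatic = RHO_W * G * head_m   # full hydrostatic pressure, Pa
    return hydrostatic * reduction_coeff * correction_factor

# An upper-structure location at 30 m head, with an assumed theoretical
# reduction coefficient of 0.4 and a mid-range correction factor of 1.7:
p = design_external_pressure(30.0, 0.4, 1.7)
```

Because the factor is strictly greater than 1.0, the corrected design pressure is always conservative relative to the raw theoretical estimate.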
In recent years, gaze estimation has attracted considerable interest in areas including human–computer interaction, virtual reality, and user engagement analysis. Despite significant advances in convolutional neural network (CNN) techniques, directly and effectively predicting the point of gaze (PoG) in unconstrained situations remains difficult. This study proposes a gaze point estimation network (L1fcs-Net) that combines facial features with positional features derived from a two-dimensional array obtained by projecting the face relative to the screen. Our approach incorporates a Face-grid branch to enhance the network's ability to extract features such as the relative position and distance of the face to the screen. Additionally, independent fully connected layers regress the x and y coordinates separately, enabling the model to better capture gaze movement characteristics in both the horizontal and vertical directions. Furthermore, we employ a multi-loss approach, balancing classification and regression losses to reduce gaze point prediction errors and improve overall performance. To evaluate our model, we conducted experiments on the MPIIFaceGaze dataset, which was collected under unconstrained settings. The proposed model achieves state-of-the-art performance on this dataset with a gaze point prediction error of 2.05 cm, demonstrating its superior capability in gaze estimation.
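The multi-loss idea above can be sketched as a weighted sum of a grid-cell classification loss and independent per-axis regression losses. The specific loss functions and weights below are assumptions for illustration, not the exact formulation used by L1fcs-Net.

```python
import numpy as np

# A minimal sketch of a multi-loss objective: a weighted sum of a coarse
# grid-cell classification loss and per-axis regression losses. The loss
# choices and weights are illustrative assumptions, not L1fcs-Net's exact
# formulation.

def cross_entropy(probs, target_idx):
    """Classification loss over coarse screen-grid cells."""
    return -np.log(probs[target_idx] + 1e-12)

def smooth_l1(pred, target):
    """Smooth-L1 regression loss, applied to one coordinate axis."""
    d = abs(pred - target)
    return 0.5 * d**2 if d < 1.0 else d - 0.5

def total_loss(cell_probs, cell_gt, xy_pred, xy_gt, w_cls=0.5, w_reg=1.0):
    l_cls = cross_entropy(cell_probs, cell_gt)
    # x and y are regressed by independent heads, so their losses are
    # computed separately and then summed.
    l_reg = smooth_l1(xy_pred[0], xy_gt[0]) + smooth_l1(xy_pred[1], xy_gt[1])
    return w_cls * l_cls + w_reg * l_reg

# Example: grid prediction favors the correct cell; xy prediction is close.
loss = total_loss(np.array([0.1, 0.7, 0.2]), 1, (1.5, 0.2), (1.0, 0.0))
```

Balancing `w_cls` against `w_reg` lets the coarse classification term guide early training while the regression terms refine the final coordinates.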
Rui Yang, Michael Fu, Chakkrit Tantithamthavorn
et al.
Retrieval-augmented generation (RAG)-based applications are gaining prominence due to their ability to leverage large language models (LLMs). These systems excel at combining retrieval mechanisms with generative capabilities, resulting in more accurate, contextually relevant responses that enhance user experience. In particular, Transurban, a road operation company, is replacing its rule-based virtual assistant (VA) with a RAG-based VA (RAGVA) to offer more flexible customer interactions and support a wider range of scenarios. In this paper, drawing from the experience at Transurban, we present comprehensive step-by-step guides for building a conversational application and engineering a RAGVA. These guides aim to serve as references for future researchers and practitioners. While the engineering processes for traditional software applications are well-established, the development and evaluation of RAG-based applications are still in their early stages, with numerous emerging challenges remaining uncharted. To address this gap, we conduct a focus group study with Transurban practitioners regarding developing and evaluating their RAGVA. We identified eight challenges encountered by the engineering team and proposed eight future directions that should be explored to advance the development of RAG-based applications. This study contributes to the foundational understanding of a RAG-based conversational application and the emerging AI software engineering challenges it presents.
Leonhard Applis, Yuntong Zhang, Shanchao Liang
et al.
The growth of Large Language Model (LLM) technology has raised expectations for automated coding. However, software engineering is more than coding and is concerned with activities including maintenance and evolution of a project. In this context, the concept of LLM agents has gained traction, which utilize LLMs as reasoning engines to invoke external tools autonomously. But is an LLM agent the same as an AI software engineer? In this paper, we seek to understand this question by developing a Unified Software Engineering agent or USEagent. Unlike existing work which builds specialized agents for specific software tasks such as testing, debugging, and repair, our goal is to build a unified agent which can orchestrate and handle multiple capabilities. This gives the agent the promise of handling complex scenarios in software development such as fixing an incomplete patch, adding new features, or taking over code written by others. We envision USEagent as the first draft of a future AI Software Engineer which can be a team member in future software development teams involving both AI and humans. To evaluate the efficacy of USEagent, we build a Unified Software Engineering bench (USEbench) comprising myriad tasks such as coding, testing, and patching. USEbench is a judicious mixture of tasks from existing benchmarks such as SWE-bench, SWT-bench, and REPOCOD. In an evaluation on USEbench consisting of 1,271 repository-level software engineering tasks, USEagent shows improved efficacy compared to existing general agents such as OpenHands CodeActAgent. There exist gaps in the capabilities of USEagent for certain coding tasks, which provides hints on further developing the AI Software Engineer of the future.
Successfully engineering interactive industrial DTs is a complex task, especially when implementing services beyond passive monitoring. We present here an experience report on engineering a safety-critical digital twin (DT) for beer fermentation monitoring, which provides continual sampling and reduces manual sampling time by 91%. We document our systematic methodology and practical solutions for implementing bidirectional DTs in industrial environments. This includes our three-phase engineering approach that transforms a passive monitoring system into an interactive Type 2 DT with real-time control capabilities for pressurized systems operating at seven bar. We contribute details of multi-layered safety protocols, hardware-software integration strategies across Arduino controllers and Unity visualization, and real-time synchronization solutions. We document specific engineering challenges and solutions spanning interdisciplinary integration, demonstrating how our use of the constellation reporting framework facilitates cross-domain collaboration. Key findings include the critical importance of safety-first design, simulation-driven development, and progressive implementation strategies. Our work thus provides actionable guidance for practitioners developing DTs requiring bidirectional control in safety-critical applications.
Model-driven engineering (MDE) is believed to have a significant impact on software quality. However, researchers and practitioners may have a hard time locating consolidated evidence on this impact, as the available information is scattered across several different publications. Our goal is to aggregate consolidated findings on quality in MDE, facilitating the work of researchers and practitioners in learning about the coverage and main findings of existing work as well as identifying relatively unexplored niches of research that need further attention. We performed a tertiary study on quality in MDE, in order to gain a better understanding of its most prominent findings and existing challenges, as reported in the literature. We identified 22 systematic literature reviews and mapping studies and the most relevant quality attributes addressed by each of those studies, in the context of MDE. Maintainability is clearly the most often studied and reported quality attribute impacted by MDE. Eighty out of 83 research questions in the selected secondary studies have a structure that is more often associated with mapping existing research than with answering more concrete research questions (e.g., comparing two alternative MDE approaches with respect to their impact on a specific quality attribute). We briefly outline the main contributions of each of the selected literature reviews. In the collected studies, we observed a broad coverage of software product quality, although frequently accompanied by notes on how much more empirical research is needed to further validate existing claims. Relatively little attention seems to be devoted to the impact of MDE on the quality in use of products developed using MDE.
Reliable aero-engine anomaly detection is crucial for ensuring aircraft safety and operational efficiency. This research explores the application of the Fisher autoencoder as an unsupervised deep learning method for detecting anomalies in aero-engine multivariate sensor data, using a Gaussian mixture as the prior distribution of the latent space. The proposed method aims to minimize the Fisher divergence between the true and the modeled data distribution in order to train an autoencoder that can capture the normal patterns of aero-engine behavior. The Fisher divergence is robust to model uncertainty, meaning it can handle noisy or incomplete data. The Fisher autoencoder also has well-defined latent space regions, which makes it more generalizable and regularized for various types of aero-engines as well as facilitates diagnostic purposes. The proposed approach improves the accuracy of anomaly detection and reduces false alarms. Simulations using the CMAPSS dataset demonstrate the model's efficacy in achieving timely anomaly detection, even in the case of an unbalanced dataset.
Miguel Herrera-Gavidia, Dalia Carbonel, Hugo Chirinos-Collantes
Chromium, a highly toxic heavy metal, poses significant risks to both human health and environmental quality. Its adsorption in wastewater using low-cost, easily implementable technologies has emerged as a crucial solution for mitigating its harmful impact. This study explores the effectiveness of a composite adsorbent made from bentonite and corn waste for chromium adsorption. Experiments were conducted in a laboratory-scale batch system. The research examined the adsorption kinetics and equilibrium, process optimization, and the mechanisms of chromium adsorption. For optimization, a response surface methodology was applied considering three variables: adsorption time (min), adsorbent dosage (g/L), and initial chromium concentration (mg/L). The findings suggest that the adsorption kinetics fit best with the pseudo-first-order model (R2 = 0.968), and the adsorption equilibrium fits with the Freundlich model (R2 = 0.997). During optimization, the adsorbent dosage emerged as the most critical factor for chromium removal. The optimal operating conditions were determined to be 103 minutes, 29.71 g/L of adsorbent, and an initial chromium concentration of 31.13 mg/L. The results indicate that chromium adsorption is a multifaceted process involving diffusion and subsequent interaction at the surface and edges of the bentonite layers. Chemical analysis, coupled with changes in the FTIR spectrum, suggests an interaction between chromium and the silicon and aluminum components of the bentonite. These findings underscore the potential of the composite adsorbent for effective chromium removal.
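The pseudo-first-order kinetic fit reported above can be illustrated through the model's linearized form, ln(qe − qt) = ln(qe) − k1·t, where the rate constant k1 is recovered from the slope of a linear regression. The synthetic data and parameter values below are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Sketch of fitting the pseudo-first-order kinetic model,
# q_t = q_e * (1 - exp(-k1 * t)), via its linearized form
# ln(q_e - q_t) = ln(q_e) - k1 * t. The data are synthetic; the study's
# actual measurements are not reproduced here.

q_e_true, k1_true = 5.0, 0.03            # equilibrium uptake (mg/g), rate (1/min)
t = np.array([10, 20, 40, 60, 90, 120], dtype=float)
q_t = q_e_true * (1 - np.exp(-k1_true * t))

# Linear regression on the transformed data recovers k1 from the slope.
y = np.log(q_e_true - q_t)
slope, intercept = np.polyfit(t, y, 1)
k1_fit = -slope   # ~0.03 on noise-free data
```

With real batch data, the goodness of fit (e.g., the R2 = 0.968 reported above) is judged on this same linearized regression or on a nonlinear fit of the original model.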
【Objective】Recent research on Routing and Wavelength Assignment (RWA) for all-optical networks has focused on Deep Reinforcement Learning (DRL)-based algorithms. Existing DRL-based RWA algorithms mostly rely on K Shortest Paths (KSP) routing to precompute candidate paths, so that the DRL agent chooses its actions from the precomputed paths. These KSP-based models lack flexibility and adaptability, since the KSP must be recalculated for all node pairs whenever the topology changes. To address this issue, this paper proposes an Adaptive and Efficient (ADE)-RWA algorithm based on DRL.【Methods】The key innovation of ADE-RWA is that, during training, the DRL agent takes actions step by step instead of selecting from K precomputed complete paths. Routing strategies therefore remain dynamically adjustable during training, even when the topology changes, because the agent's actions are not constrained to K fixed paths. Moreover, ADE-RWA records the successfully assigned routes during training in a LookUp Table (LUT). Once DRL training has converged, the algorithm switches to LUT lookup to find available routes, since by then the LUT has accumulated enough routing information from training. LUT-based routing effectively reduces computational cost and improves RWA efficiency. In addition, the DRL training phase and the LUT routing phase are switchable in real time: the algorithm returns to the DRL training phase when a link failure changes the topology, and switches back to LUT lookup once training has converged again.【Results】Experimental results show that, compared with KSP-First Fit (FF) and the Deep Reinforcement Learning Framework for Routing, Modulation and Spectrum Assignment (DeepRMSA), ADE-RWA reduces the blocking probability by 36% and 30%, respectively. When a link failure occurs, the algorithm quickly adapts to the changed network topology.【Conclusion】The proposed DRL-based RWA framework, ADE-RWA, achieves adaptive routing and wavelength assignment under dynamic network conditions at low computational cost.
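The LUT mechanism described above can be sketched as a small routing cache: successful routes are recorded per node pair during training, served by lookup after convergence, and invalidated when the topology changes. The data structures and names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the LookUp Table (LUT) routing idea: routes that succeed
# during DRL training are recorded per (src, dst) pair, served by lookup
# after convergence, and invalidated on topology change. Names and structures
# are illustrative assumptions, not the paper's implementation.

class LutRouter:
    def __init__(self):
        # (src, dst) -> list of previously successful (path, wavelength)
        self.lut = {}

    def record_success(self, src, dst, path, wavelength):
        """DRL training phase: record each successfully assigned route."""
        self.lut.setdefault((src, dst), []).append((path, wavelength))

    def lookup(self, src, dst, is_free):
        """LUT phase: return the first recorded route whose wavelength is
        still free on every link; None signals a fallback to the DRL agent."""
        for path, wl in self.lut.get((src, dst), []):
            links = zip(path, path[1:])
            if all(is_free(link, wl) for link in links):
                return path, wl
        return None

    def on_topology_change(self):
        """A link failure invalidates the LUT; the algorithm then returns
        to the DRL training phase until convergence."""
        self.lut.clear()
```

The `is_free` callback stands in for the network's current wavelength occupancy, keeping the cache itself independent of the simulator or control plane.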
Fetcia Jackulin, P. Senthil Kumar, Gayathri Rangasamy
Among azo dyes, tartrazine is widely used in applications such as pharmaceuticals, cosmetics, and food. As demand for dye applications has increased, dye disposal has also increased; however, tartrazine is very difficult to cleave because of its stability. Various treatment methods are available, but the Advanced Oxidation Process (AOP) is an emerging technique for treating a wide range of contaminants. In this study, a sulfate radical (SO4•−)-based AOP was performed to degrade tartrazine dye using iron oxide (Fe3O4) nanoparticles (NPs). The NPs were synthesized by the co-precipitation method; X-Ray Diffraction (XRD) revealed the crystalline structure of the material with an average particle size of 16.17 nm, and High-Resolution Scanning Electron Microscopy (HR-SEM) showed spherical and cubic particles with agglomeration. Response surface methodology (RSM) based on a central composite design was applied to determine the optimum conditions, found to be pH 5.34, time 113.58 min, NP dosage 0.89 g, and SPS 15.40 mM, with a predicted degradation efficiency of 97.22%, which correlated with the experimental value of 96.66% with minimal error. The sulfate radical achieved efficient degradation owing to the involvement of both SO4•− and hydroxyl (•OH) radicals, while excess formation of SO4•− radicals and Fe2+ was mainly responsible for suppressed degradation. The intermediate compounds identified by Gas Chromatography-Mass Spectrometry (GC-MS) confirmed the absence of the parent dye and the occurrence of degradation by the Fe3O4/PS system.
Semantic segmentation of rural roads presents unique challenges due to the unstructured nature of these environments, including irregular road boundaries, mixed surfaces, and diverse obstacles. In this study, we propose an enhanced PP-LiteSeg model specifically designed for rural road segmentation, incorporating a novel Strip Pooling Simple Pyramid Module (SP-SPPM) and a Bottleneck Unified Attention Fusion Module (B-UAFM). These modules improve the model’s ability to capture both global and local features, addressing the complexity of rural roads. To validate the effectiveness of our model, we constructed the Rural Roads Dataset (RRD), which includes a diverse set of rural scenes from different regions and environmental conditions. Experimental results demonstrate that our model significantly outperforms baseline models such as UNet, BiSeNetv1, and BiSeNetv2, achieving higher accuracy in terms of mean intersection over union (MIoU), Kappa coefficient, and Dice coefficient. Our approach enhances segmentation performance in complex rural road environments, providing practical applications for autonomous navigation, infrastructure maintenance, and smart agriculture.
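The segmentation metrics reported above (mean IoU and the Dice coefficient) can both be computed from a per-class confusion matrix. The generic formulation below is for illustration only and is not code from the study.

```python
import numpy as np

# Generic computation of mean IoU and mean Dice from a confusion matrix,
# shown for illustration; not code from the study.

def confusion_matrix(pred, gt, n_classes):
    """Rows are ground-truth classes, columns are predicted classes."""
    idx = gt * n_classes + pred
    return np.bincount(idx.ravel(), minlength=n_classes**2).reshape(n_classes, n_classes)

def mean_iou(cm):
    tp = np.diag(cm).astype(float)
    denom = cm.sum(0) + cm.sum(1) - tp   # union = pred + gt - intersection
    return np.nanmean(tp / denom)

def mean_dice(cm):
    tp = np.diag(cm).astype(float)
    return np.nanmean(2 * tp / (cm.sum(0) + cm.sum(1)))

# Tiny two-class example: one road pixel mislabeled as background.
pred = np.array([0, 0, 1, 1])
gt   = np.array([0, 1, 1, 1])
cm = confusion_matrix(pred, gt, 2)
```

Dice weights the intersection twice in the numerator, so it is always at least as large as IoU for the same prediction, which is why both are commonly reported together.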