Abstract Growing consumption of rare earth elements (REEs), due to their critical roles in various sectors (e.g., healthcare, energy, transportation, and electronics), has attracted attention and stimulated research efforts in both industry and academia. This study provides an overview of the existing REE production and recovery pathways, identifies critical challenges of the current techniques, and highlights opportunities for multidisciplinary research to achieve more effective solutions. A comprehensive classification of REE separation techniques is presented through narrative and systematic literature reviews, including qualitative analysis and classic bibliometric techniques, to assess the usefulness of identified methodologies and approaches. It is found that the top three most explored and mature separation techniques in various phases (solid and liquid) between 2015 and 2020 are leaching, solvent extraction, and plasma, and that the top three study fields are chemistry, engineering, and metallurgy. It is further found that the dominant REE separation technique across over 40 fields of research is the use of acids, bases, ionic liquids, and salts for leaching REEs. It is concluded that the agromining approach, using hyperaccumulator plants capable of absorbing REEs through their roots and leaves, can be a practical route to sustainable REE recovery from secondary sources and end-of-life products, such as electronic devices.
Among various commercially available energy storage devices, lithium-ion batteries (LIBs) stand out as the most compact and rapidly growing technology. This multicomponent system operates on coupled dynamics to reversibly store and release electricity. With the hierarchical electrode architectures inside LIBs, versatile functionality can be realized by design, while considerable difficulties remain to be solved to fully exploit the capability of each constituent. With the rapid electrification of the transportation sector and the urgent need to overhaul electric grids amid growing renewable energy penetration, demand for batteries that combine high energy and high power is continuously increasing. Although building an ideal battery requires effort across multiple scientific and engineering disciplines, it is imperative to gain insight into the multiscale transport behaviors arising in both spatial and temporal dimensions and to enable their harmonious integration within the whole battery system. In this progress report, recent research efforts on characterizing and understanding transport kinetics in LIBs are reviewed, covering a broad range of electrode materials and length scales. To demonstrate the crucial role of such information in revolutionary electrode design, examples of innovative high-energy/high-power electrodes are provided, with their unique hierarchical porous architectures highlighted. To conclude, perspectives on further approaches toward advanced thick electrode designs with fast kinetics and tailored properties are discussed.
Abstract In modern transportation, pavement is one of the most important civil infrastructures for the movement of vehicles and pedestrians. Pavement service quality and service life are of great importance to civil engineers, as they directly affect regular service for users. Therefore, monitoring the health status of pavement before irreversible damage occurs is essential for timely maintenance, which in turn ensures public transportation safety. Many types of pavement damage can be detected and analyzed by monitoring structural dynamic responses and evaluating road surface conditions. Advanced technologies can be employed for the collection and analysis of such data, including various intrusive sensing techniques, image processing techniques, and machine learning methods. This review summarizes the state of the art of these three technologies in pavement engineering in recent years and suggests possible developments for future pavement monitoring and analysis based on these approaches.
Abstract Modern electronic appliances and hybrid vehicles require high-energy-density supercapacitors that can deliver quick bursts of power. High energy density can be achieved by designing proper electrode materials along with appropriate electrolytes. This review begins with the different mechanisms of energy storage, giving a brief idea of how to design and develop materials for proper electrodes in pursuit of high-energy-density supercapacitors without compromising stability. The review then focuses on the engineering of different electrode materials, such as conducting polymers, metal oxides, chalcogenides, carbides, nitrides, and MXenes. Lastly, hybrid electrodes made from composites of graphene and other novel materials are examined. These hybrid electrodes offer high chemical stability, long cycle life, good electronic properties, and efficient ionic transport at the electrode-electrolyte interface, showing great potential for commercial use.
Abstract Hydrogen as an energy carrier allows the decarbonization of transport, industry, and space heating, as well as storage for intermittent renewable energy. The objective of this paper is to assess the future engineering potential for hydrogen and provide insight into areas of research that could help lower economic barriers to hydrogen adoption. This assessment was accomplished by creating top-level system models based on the energy requirements of end-use services. Those models were used to investigate four case studies that provide a global view augmented with specific national examples. The first case study assesses the potential penetration of hydrogen using a global energy system model. The second applies the dynamic integrated climate–ecosystem–economics model to estimate the impact of the diffusion of hydrogen as an energy carrier. The third determines the growth in renewable power and water usage required to power transportation in the United States (US) with hydrogen. The fourth assesses the use of hydrogen for heating in the United Kingdom (UK). In all cases, there appeared to be significant potential for hydrogen adoption and net energetic benefit. Globally, hydrogen has the potential to account for approximately 3% of energy consumption by 2050. In the US, using hydrogen for on-road transportation could enable a reduction in rejected energy of nearly 10%. Also, hydrogen might provide the least-cost alternative for decarbonizing space heating in the UK. The research highlights a challenge raised by the widespread abandonment of nuclear power: it is currently unclear what the removal of nuclear would do to the cost of energy as nations attempt to limit global greenhouse gas emissions. Nuclear power has also been proposed as a source for large-scale production of hydrogen. Finally, this analysis shows that, at today's level of technological maturity, making the transition to a hydrogen economy would incur significant costs.
Cellulosic ethanol has received global attention as a transportation fuel for blending with gasoline by virtue of its carbon benefits and contribution to decarbonization. However, owing to variable feedstock composition, the natural recalcitrance of biomass, and a lack of cost-effective pretreatment and downstream processing, contemporary cellulosic ethanol biorefineries face major sustainability issues. We therefore outline the global status of present cellulosic ethanol facilities, as well as the main roadblocks and technical challenges to sustainable, commercial cellulosic ethanol production. The article also highlights the technical and non-technical barriers and the R&D advances in biomass pretreatment, enzymatic hydrolysis, and fermentation strategies that have been pursued toward low-cost, sustainable fuel ethanol. Moreover, selection of a low-cost, efficient pretreatment method, process simulation, unit integration, state-of-the-art one-pot saccharification and fermentation, systems microbiology and genetic engineering for robust strain development, and comprehensive techno-economic analysis must all be addressed to overcome the major bottlenecks to long-term ethanol production for the transportation sector.
Baiyu Chen, Matthew J Eckelman, Michelle Laboy
et al.
The construction industry is increasingly focused on reducing embodied carbon emissions to address climate change. Steel and cross-laminated timber (CLT) composite hybrid structures, where CLT floors and steel framing work in composite action to resist gravity forces, offer benefits such as carbon storage, recyclability, reduced use of carbon-intensive materials, and improved project schedule and quality control. This composite hybrid system can accelerate progress toward net-zero embodied carbon by integrating carbon-storing materials within the existing AEC (Architecture, Engineering, and Construction) ecosystem for commercial and high-rise buildings, where timber use is limited. This study analyzes two structural patterns within the composite hybrid system and uses life cycle assessment (LCA) to identify trade-offs in embodied carbon. A 12-story office prototype is designed using two framing spans of 12.5 feet (3.8 m, Basic Hybrid) and 25 feet (7.6 m, Stretch), resulting in a change in the wood-to-steel ratio. In the Stretch design, the structure's mass increases by 20% due to the thicker CLT panels required for the longer span, despite reduced steel framing, resulting in a 5% heavier foundation. The LCA considers upfront emissions from the product and transportation stages (A1–A4). Excluding biogenic carbon, the Stretch design has 3% higher embodied carbon than Basic Hybrid; however, including biogenic carbon storage shows an 83% greater carbon benefit for Stretch. A dynamic assessment of biogenic carbon storage reveals that the building must remain in service for 23 years for forest regrowth to offset initial forestry emissions, while the 7-ply system achieves net-zero carbon for the whole building in 67 years, compared to 80 years for the 5-ply system.
Research Software Engineers (RSEs) have become indispensable to computational research and scholarship. The rapid rise of RSEs in higher education, combined with universities' tendency to be slow in creating or adopting models for new technology roles, has left a lack of structured career pathways that recognize technical mastery, scholarly impact, and leadership growth. In response to immense demand for RSEs at Princeton University, and with dedicated funding to at least double the size of the RSE group, Princeton had to devise job descriptions cohesive enough to support rapid hiring of RSE positions yet flexible enough to recognize the unique nature of each individual position. This case study describes our design and implementation of a comprehensive RSE career ladder spanning Associate through Principal levels, with parallel team-lead and managerial tracks. We outline the guiding principles, competency framework, Human Resources (HR) alignment, and implementation process, including engagement with external consultants and mapping to a standard job leveling framework using market benchmarks. We share early lessons learned and outcomes, including improved hiring efficiency, clearer promotion pathways, and positive reception among staff.
Kirill Minchenkov, A. Vedernikov, A. Safonov
et al.
Pultrusion is one of the most efficient methods of producing polymer composite structures with a constant cross-section. Pultruded profiles are widely used in bridge construction, the transportation industry, the energy sector, and civil and architectural engineering. However, despite the many advantages thermoplastic composites have over thermoset ones, the thermoplastic pultrusion market shows significantly lower production volumes than its thermoset counterpart. Examining thermoplastic pultrusion processes, raw materials, mechanical properties of thermoplastic composites, process simulation techniques, patents, and applications of thermoplastic pultrusion, this overview aims to analyze the existing gap between thermoset and thermoplastic pultrusion in order to promote the development of the latter. By observing thermoplastic pultrusion from a new perspective, we intend to identify current shortcomings and issues, and to propose future research and application directions.
The advent of foundation models (FMs), large-scale pre-trained models with strong generalization capabilities, has opened new frontiers for financial engineering. While general-purpose FMs such as GPT-4 and Gemini have demonstrated promising performance in tasks ranging from financial report summarization to sentiment-aware forecasting, many financial applications remain constrained by unique domain requirements such as multimodal reasoning, regulatory compliance, and data privacy. These challenges have spurred the emergence of financial foundation models (FFMs): a new class of models explicitly designed for finance. This survey presents a comprehensive overview of FFMs, with a taxonomy spanning three key modalities: financial language foundation models (FinLFMs), financial time-series foundation models (FinTSFMs), and financial visual-language foundation models (FinVLFMs). We review their architectures, training methodologies, datasets, and real-world applications. Furthermore, we identify critical challenges related to data availability, algorithmic scalability, and infrastructure constraints, and offer insights into future research opportunities. We hope this survey can serve as both a comprehensive reference for understanding FFMs and a practical roadmap for future innovation.
Davide Venturelli, Erik Gustafson, Doga Kurkcuoglu
et al.
We review the prospects to build quantum processors based on superconducting transmons and radiofrequency cavities for testing applications in the NISQ era. We identify engineering opportunities and challenges for implementation of algorithms in simulation, combinatorial optimization, and quantum machine learning in qudit-based quantum computers.
Nazanin Ahmadi, Qianying Cao, Jay D. Humphrey
et al.
Physics-informed machine learning (PIML) is emerging as a potentially transformative paradigm for modeling complex biomedical systems by integrating parameterized physical laws with data-driven methods. Here, we review three main classes of PIML frameworks: physics-informed neural networks (PINNs), neural ordinary differential equations (NODEs), and neural operators (NOs), highlighting their growing role in biomedical science and engineering. We begin with PINNs, which embed governing equations into deep learning models and have been successfully applied to biosolid and biofluid mechanics, mechanobiology, and medical imaging among other areas. We then review NODEs, which offer continuous-time modeling, especially suited to dynamic physiological systems, pharmacokinetics, and cell signaling. Finally, we discuss deep NOs as powerful tools for learning mappings between function spaces, enabling efficient simulations across multiscale and spatially heterogeneous biological domains. Throughout, we emphasize applications where physical interpretability, data scarcity, or system complexity make conventional black-box learning insufficient. We conclude by identifying open challenges and future directions for advancing PIML in biomedical science and engineering, including issues of uncertainty quantification, generalization, and integration of PIML and large language models.
Large Language Models (LLMs) are revolutionizing software engineering (SE), most notably in code generation and analysis. However, their application to broader SE practices, including conceptualization, design, and other non-code tasks, remains comparatively underexplored. This research aims to augment the generality and performance of LLMs for SE by (1) advancing the understanding of how LLMs with different characteristics perform on various non-code tasks, (2) evaluating them as sources of foundational knowledge in SE, and (3) effectively detecting hallucinations in SE statements. The expected contributions include a variety of LLMs trained and evaluated on domain-specific datasets, new benchmarks on foundational knowledge in SE, and methods for detecting hallucinations. Initial results in terms of performance improvements on various non-code tasks are promising.
Proto-personas are commonly used during early-stage Product Discovery, such as Lean Inception, to guide product definition and stakeholder alignment. However, the manual creation of proto-personas is often time-consuming, cognitively demanding, and prone to bias. In this paper, we propose and empirically investigate a prompt engineering-based approach to generating proto-personas with the support of Generative AI (GenAI). Our goal is to evaluate the approach in terms of efficiency, effectiveness, user acceptance, and the empathy elicited by the generated personas. We conducted a case study with 19 participants embedded in a real Lean Inception, employing a mixed qualitative and quantitative design. The results reveal the approach's efficiency in reducing time and effort and improving the quality and reusability of personas in later discovery phases, such as Minimum Viable Product (MVP) scoping and feature refinement. While acceptance was generally high, especially regarding perceived usefulness and ease of use, participants noted limitations related to generalization and domain specificity. Furthermore, although cognitive empathy was strongly supported, affective and behavioral empathy varied significantly across participants. These results contribute novel empirical evidence on how GenAI can be effectively integrated into software Product Discovery practices, while also identifying key challenges to be addressed in future iterations of such hybrid design processes.