Research Software Engineers (RSEs) have become indispensable to computational research and scholarship. The rapid rise of RSEs in higher education, combined with universities' characteristic slowness in creating or adopting models for new technology roles, has left a gap: structured career pathways that recognize technical mastery, scholarly impact, and leadership growth. In response to immense demand for RSEs at Princeton University, along with dedicated funding to at least double the size of the RSE group, Princeton had to devise job descriptions cohesive enough to support rapid hiring of RSE positions yet flexible enough to recognize the unique nature of each individual position. This case study describes our design and implementation of a comprehensive RSE career ladder spanning Associate through Principal levels, with parallel team-lead and managerial tracks. We outline the guiding principles, competency framework, Human Resources (HR) alignment, and implementation process, including engagement with external consultants and mapping to a standard job-leveling framework that utilizes market benchmarks. We share early lessons learned and outcomes, including improved hiring efficiency, clearer promotion pathways, and positive reception among staff.
The advent of foundation models (FMs), large-scale pre-trained models with strong generalization capabilities, has opened new frontiers for financial engineering. While general-purpose FMs such as GPT-4 and Gemini have demonstrated promising performance in tasks ranging from financial report summarization to sentiment-aware forecasting, many financial applications remain constrained by unique domain requirements such as multimodal reasoning, regulatory compliance, and data privacy. These challenges have spurred the emergence of financial foundation models (FFMs): a new class of models explicitly designed for finance. This survey presents a comprehensive overview of FFMs, with a taxonomy spanning three key modalities: financial language foundation models (FinLFMs), financial time-series foundation models (FinTSFMs), and financial visual-language foundation models (FinVLFMs). We review their architectures, training methodologies, datasets, and real-world applications. Furthermore, we identify critical challenges in data availability, algorithmic scalability, and infrastructure constraints and offer insights into future research opportunities. We hope this survey can serve as both a comprehensive reference for understanding FFMs and a practical roadmap for future innovation.
Davide Venturelli, Erik Gustafson, Doga Kurkcuoglu, et al.
We review the prospects for building quantum processors based on superconducting transmons and radiofrequency cavities for testing applications in the NISQ era. We identify engineering opportunities and challenges for implementing algorithms for simulation, combinatorial optimization, and quantum machine learning on qudit-based quantum computers.
Nazanin Ahmadi, Qianying Cao, Jay D. Humphrey, et al.
Physics-informed machine learning (PIML) is emerging as a potentially transformative paradigm for modeling complex biomedical systems by integrating parameterized physical laws with data-driven methods. Here, we review three main classes of PIML frameworks: physics-informed neural networks (PINNs), neural ordinary differential equations (NODEs), and neural operators (NOs), highlighting their growing role in biomedical science and engineering. We begin with PINNs, which embed governing equations into deep learning models and have been successfully applied to biosolid and biofluid mechanics, mechanobiology, and medical imaging among other areas. We then review NODEs, which offer continuous-time modeling, especially suited to dynamic physiological systems, pharmacokinetics, and cell signaling. Finally, we discuss deep NOs as powerful tools for learning mappings between function spaces, enabling efficient simulations across multiscale and spatially heterogeneous biological domains. Throughout, we emphasize applications where physical interpretability, data scarcity, or system complexity make conventional black-box learning insufficient. We conclude by identifying open challenges and future directions for advancing PIML in biomedical science and engineering, including issues of uncertainty quantification, generalization, and integration of PIML and large language models.
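To make the PINN idea above concrete, here is a minimal, self-contained sketch: a one-parameter surrogate for the decay ODE dy/dt = -k·y is fitted by minimizing a composite loss that adds the residual of the governing equation (at collocation points) to the data misfit, exactly the structure a PINN uses. The toy problem, noise level, and grid search (standing in for gradient descent on a neural network) are illustrative assumptions, not taken from the review.

```python
import numpy as np

# Toy PINN-style loss for dy/dt = -k*y, y(0) = 1 (true k = 1.5).
# Instead of a neural network we fit a one-parameter surrogate
# y_hat(t) = exp(-a*t); the point is the composite loss:
#   loss = data misfit + physics residual, as in a PINN.

rng = np.random.default_rng(0)
k_true = 1.5
t_data = np.linspace(0.0, 2.0, 8)            # sparse, noisy "measurements"
y_data = np.exp(-k_true * t_data) + 0.01 * rng.standard_normal(8)
t_col = np.linspace(0.0, 2.0, 50)            # collocation points for physics

def pinn_loss(a):
    data_loss = np.mean((np.exp(-a * t_data) - y_data) ** 2)
    # Physics residual r(t) = dy/dt + k*y, evaluated analytically here;
    # automatic differentiation plays this role for a neural surrogate.
    y_col = np.exp(-a * t_col)
    dydt = -a * y_col
    physics_loss = np.mean((dydt + k_true * y_col) ** 2)
    return data_loss + physics_loss

# Grid search stands in for gradient descent in this sketch.
grid = np.linspace(0.5, 3.0, 251)
a_best = grid[np.argmin([pinn_loss(a) for a in grid])]
print(f"recovered rate: {a_best:.2f}")  # close to k_true = 1.5
```

The physics term rewards surrogates that satisfy the known governing equation everywhere, which is what lets PINNs cope with the sparse, noisy data typical of biomedical settings.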
Large Language Models (LLMs) are revolutionizing software engineering (SE), with particular emphasis on code generation and analysis. However, their applications to broader SE practices, including conceptualization, design, and other non-code tasks, remain underexplored. This research aims to augment the generality and performance of LLMs for SE by (1) advancing the understanding of how LLMs with different characteristics perform on various non-code tasks, (2) evaluating them as sources of foundational knowledge in SE, and (3) effectively detecting hallucinations in SE statements. The expected contributions include a variety of LLMs trained and evaluated on domain-specific datasets, new benchmarks on foundational knowledge in SE, and methods for detecting hallucinations. Initial results on performance improvements across various non-code tasks are promising.
Proto-personas are commonly used during early-stage Product Discovery, such as Lean Inception, to guide product definition and stakeholder alignment. However, the manual creation of proto-personas is often time-consuming, cognitively demanding, and prone to bias. In this paper, we propose and empirically investigate a prompt-engineering-based approach to generating proto-personas with the support of Generative AI (GenAI). Our goal is to evaluate the approach in terms of efficiency, effectiveness, user acceptance, and the empathy elicited by the generated personas. We conducted a case study with 19 participants embedded in a real Lean Inception, employing a mixed qualitative and quantitative design. The results reveal the approach's efficiency: it reduces time and effort and improves the quality and reusability of personas in later discovery phases, such as Minimum Viable Product (MVP) scoping and feature refinement. While acceptance was generally high, especially regarding perceived usefulness and ease of use, participants noted limitations related to generalization and domain specificity. Furthermore, although cognitive empathy was strongly supported, affective and behavioral empathy varied significantly across participants. These results contribute novel empirical evidence on how GenAI can be effectively integrated into software Product Discovery practices, while also identifying key challenges to be addressed in future iterations of such hybrid design processes.
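As an illustration of what a prompt-engineering-based approach to proto-persona generation might look like, the sketch below fills a structured template that could be sent to a GenAI model. The section names, fields, and wording are invented for illustration; the paper's actual prompt design is not reproduced here.

```python
from string import Template

# Hypothetical prompt template for GenAI-assisted proto-persona
# generation; all field names and instructions are illustrative.
PERSONA_PROMPT = Template("""\
You are supporting a Lean Inception workshop.
Generate one proto-persona for the product described below.

Product: $product
Target segment: $segment

Return the sections: Name, Demographics, Behaviors, Needs and Goals,
Pains and Frustrations. Keep each section to 2-3 bullet points.
""")

prompt = PERSONA_PROMPT.substitute(
    product="a campus ride-sharing app",
    segment="undergraduate commuters",
)
print(prompt)
```

Constraining the output to fixed sections is one way such approaches keep generated personas comparable and reusable across later discovery activities.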
Alexandra Mazak-Huemer, Christian Huemer, Michael Vierhauser, et al.
With the increasing significance of Research, Technology, and Innovation (RTI) policies in recent years, the demand for detailed information about the performance of these sectors has surged. Many current tools, however, are limited in scope and purpose. To address this, we introduce a requirements engineering process to identify stakeholders and elicit requirements, from which we derive a system architecture for a web-based, interactive, open-access RTI monitoring system. Based on several core modules, we introduce a multi-tier software architecture showing how such a tool can be implemented from the perspective of software engineers. A cornerstone of this architecture is the user-facing dashboard module. We describe its requirements in detail and illustrate them with the real example of the Austrian RTI Monitor.
Jannatul Bushra, Md Habibor Rahman, Mohammed Shafae, et al.
Reverse engineering can be used to derive a 3D model of an existing physical part when such a model is not readily available. For parts fabricated with subtractive and formative manufacturing processes, existing reverse engineering techniques can be readily applied, but parts produced with additive manufacturing present new challenges due to the high level of process-induced distortion and unique part attributes. This paper introduces an integrated 3D scanning and process-simulation data-driven framework to minimize distortions in reverse-engineered additively manufactured components. The framework employs iterative finite element simulations to predict geometric distortions and minimize the error between the predicted and measured geometric deviations of the part's key dimensional characteristics. The effectiveness of this approach is demonstrated by reverse engineering two Inconel-718 components manufactured using laser powder bed fusion. Overall, this paper presents a remanufacturing process that combines reverse engineering and additive manufacturing, leveraging geometric feature-based part compensation through process simulation. Our approach can generate both compensated STL and parametric CAD models, eliminating laborious experimentation during reverse engineering. We evaluate the merits of the STL-based and CAD-based approaches by quantifying the errors induced at each step of the proposed approach and analyzing the influence of varying part geometries. Using the proposed CAD-based method, the average absolute percent error between simulation-predicted distorted dimensions and actual measured dimensions of the manufactured parts was 0.087%, better than the accuracy of the STL-based method.
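The simulation-driven part compensation described above can be sketched in a few lines: a stand-in "process simulation" predicts how the geometry will distort, and the build geometry is iteratively offset opposite to the predicted error so the fabricated part lands on the target shape. The distortion model here (uniform shrink plus quadratic bowing) is invented for illustration and is not the paper's finite element model.

```python
import numpy as np

def simulate_distortion(points):
    """Stand-in for an FE process simulation (illustrative only)."""
    distorted = points * 0.995                    # uniform thermal shrinkage
    distorted[:, 2] += 0.002 * points[:, 0] ** 2  # bowing along the length
    return distorted

def compensate(target, n_iter=5):
    """Repeatedly offset the build geometry by the predicted error."""
    build = target.copy()
    for _ in range(n_iter):
        err = simulate_distortion(build) - target
        build -= err                              # push opposite to the error
    return build

# A simple beam-like point set: x along the length, y = z = 0.
target = np.column_stack([np.linspace(-1.0, 1.0, 50),
                          np.zeros(50), np.zeros(50)])
build = compensate(target)
final_err = np.abs(simulate_distortion(build) - target).max()
print(f"max deviation after compensation: {final_err:.2e}")
```

Because each iteration subtracts the current predicted error, the loop is a fixed-point iteration that converges quickly whenever the distortion is a mild perturbation of the geometry.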
The creation of a Software Requirements Specification (SRS) document is important for any software development project. Given the recent prowess of Large Language Models (LLMs) in answering natural language queries and generating sophisticated textual output, our study explores their capability to produce accurate, coherent, and structured drafts of these documents to accelerate the software development lifecycle. We assess the performance of GPT-4 and CodeLlama in drafting an SRS for a university club management system and compare it against human benchmarks using eight distinct criteria. Our results suggest that LLMs can match the output quality of an entry-level software engineer in generating an SRS, delivering complete and consistent drafts. We also evaluate the capability of LLMs to identify and rectify problems in a given requirements document. Our experiments indicate that GPT-4 can identify issues and give constructive feedback for rectifying them, while CodeLlama's validation results were not as encouraging. We repeated the generation exercise for four distinct use cases to study the time saved by employing LLMs for SRS generation. The experiments demonstrate that LLMs may facilitate a significant reduction in development time for entry-level software engineers. Hence, we conclude that LLMs can be gainfully used by software engineers to increase productivity by saving time and effort in generating, validating, and rectifying software requirements.
Large Language Models (LLMs) have shown prominent performance on various downstream tasks, and prompt engineering plays a pivotal role in optimizing that performance. This paper not only surveys current prompt engineering methods but also highlights the limitations of designing prompts under the anthropomorphic assumption that LLMs think like humans. From our review of 50 representative studies, we demonstrate that goal-oriented prompt formulation, which guides LLMs to follow established human logical thinking, significantly improves their performance. Furthermore, we introduce a novel taxonomy that categorizes goal-oriented prompting methods into five interconnected stages, and we demonstrate the broad applicability of our framework. With four proposed future directions, we hope to further emphasize the power and potential of goal-oriented prompt engineering across all fields.
R32 is widely used in room air conditioners, and its heat-transfer characteristics are affected by its intermiscibility with the lubricating oil. It is therefore necessary to determine the optimal intermiscibility of the oil to improve heat transfer. This study investigated the influence of stratification, caused by the intermiscibility of the R32-oil mixture, on heat transfer characteristics. The flow boiling heat transfer coefficient and pressure drop of a completely miscible, a partially miscible, and a completely immiscible R32-oil mixture were measured experimentally. To cover the working conditions of air conditioners and reflect the different intermiscibilities of the R32-oil mixtures, the test conditions included evaporating temperatures from -5 ℃ to 15 ℃, vapor quality from 0.2 to 0.7, and averaged oil concentrations of 1% and 5%. The results showed that the partially miscible oil had the highest heat transfer coefficient and the lowest pressure drop. At an evaporating temperature of 5 ℃, vapor quality of 0.7, and averaged oil concentration of 5%, the advantage of the partially miscible R32-oil mixture over the completely miscible and completely immiscible mixtures was most pronounced: the maximum increases in heat transfer coefficient were 36.8% and 357.8%, and the maximum decreases in pressure drop were 9.0% and 58.2%, respectively. Among the three oils, the partially miscible oil exhibited the best heat transfer and pressure drop characteristics and thus has the best application prospects.
Heating and ventilation. Air conditioning; Low temperature engineering. Cryogenic engineering. Refrigeration
The magnetic refrigerator (MR) has gained popularity due to its potential to improve the energy efficiency of refrigeration without the unsafe gases used in traditional gas-compression techniques. Investigations of magnetocaloric lanthanum manganites, particularly at room and cryogenic temperatures, show favorable results for the development of MRs. Previous thermodynamic models require significant time and effort to estimate the magnetocaloric effect (MCE). Consequently, we employ the phenomenological model (PM), which is simple and straightforward, requiring fewer parameters than many other modeling methods. We studied the MCE of silica-coated La0.54Sr0.27Gd0.19MnO3 (LSGMO) nanoparticles via the PM. The MCE parameters were obtained from the simulated magnetization of the silica-coated LSGMO nanoparticles versus temperature under a 0.1 T magnetic field. The results reveal that the MCE of silica-coated LSGMO nanoparticles covers a broad temperature range between 200 and 330 K. Comparison with published work shows that the MCE parameters of silica-coated LSGMO nanoparticles are considerably larger than many of those previously reported. Silica-coated LSGMO nanoparticles are therefore suitable functional materials for MRs, especially at room and cryogenic temperatures, contributing to efficient magnetic refrigeration.
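For context, a widely used phenomenological form for magnetization around the transition is Hamad's model; whether the study uses exactly this variant is an assumption here, and no parameter values from the paper are reproduced:

$$ M(T) = \left(\frac{M_i - M_f}{2}\right)\tanh\!\big(A(T_C - T)\big) + BT + C, $$

where $M_i$ and $M_f$ are the magnetization values below and above the transition, $B$ is the magnetization slope in the high-temperature region, and $A$ is set by the slope $S_c$ at $T_C$ via $A = 2(B - S_c)/(M_i - M_f)$. Differentiating with respect to temperature and applying the Maxwell relation gives the magnetic entropy change for a field change $\Delta H$:

$$ \Delta S_M(T) = \left(-A\,\frac{M_i - M_f}{2}\,\operatorname{sech}^2\!\big(A(T_C - T)\big) + B\right)\Delta H, $$

from which quantities such as the maximum entropy change, the full width at half maximum, and the relative cooling power follow directly, which is why the PM needs only a handful of fitted parameters.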
Gold is a valuable metal and an important asset class for investors. People in India are emotionally attached to gold, and thousands of tonnes of idle gold lie with Indian temples, trusts, and individuals. Investors consider capital appreciation, interest income, and safety to be major factors that influence the buying of gold [1][4][7]. India is one of the biggest importers of gold every year, and the government of India has introduced several gold-related schemes to reduce gold imports: the Sovereign Gold Bond scheme [2] and the Revamped Gold Deposit scheme were introduced in 2015 under the Swarna Bharath initiative [9]. The present study attempts to find the association between investors' awareness of gold ETFs, gold bonds, and gold deposits and demographic factors. The results reveal a significant association at the 5% significance level between investor awareness and all demographic factors used in the study except the gender of the respondent.
Background: Classifications in meta-research enable researchers to cope with an increasing body of scientific knowledge. They provide a framework for, e.g., distinguishing methods, reports, reproducibility, and evaluation in a knowledge field, as well as a common terminology. Both ease the sharing, understanding, and evolution of knowledge. In software engineering (SE), several classifications describe the nature of SE research. To consolidate the large body of classified knowledge in SE research, a generally applicable classification scheme is crucial, yet the commonalities and differences among existing classification schemes have rarely been studied. Because classifications are documented textually, they are hard to catalog, reuse, and compare. To the best of our knowledge, no research work so far addresses the documentation and systematic investigation of classifications in SE meta-research. Objective: We aim to construct a unified, generally applicable classification scheme for SE meta-research by collecting and documenting existing classification schemes and unifying their classes and categories. Method: Our execution plan is divided into three phases: construction, validation, and evaluation. In the construction phase, we perform a literature review to identify, collect, and analyze a set of established SE research classifications. In the validation phase, we analyze the individual categories and classes of the included papers and use quantitative metrics from the literature to conduct and assess the unification process, building a generally applicable classification scheme for SE research. Lastly, we investigate the applicability of the unified scheme through a workshop session followed by user studies on reliability, correctness, and ease of use.
To strengthen the heat transfer of a phase change cold storage panel and match the variable cooling demand of refrigerated transportation, this study proposes a phase change cold storage panel with embedded heat pipes to quickly balance heat-load fluctuations in the logistics process. The discharging performance of the heat pipe evaporation section under a high heat load is studied experimentally, and a dynamic analytical model of the cooling process is developed based on thermal resistance analysis. The results show that under a high heat load, the heat transfer process on the heat pipe side exhibits dynamic characteristics. Under a 50 ℃ working condition, the highest average heat transfer rate reaches 42.50 W. The temperature difference and heat transfer rate on the air side calculated by the model are consistent with the measured data, with a calculation error of the overall cooling capacity of -3.21% to 6.16%. The model is then used to simulate and analyze the cold storage panel. In the simulation cases, the average heat transfer rate reaches 88.72 W with four heat pipe rows and a 16 mm tube diameter, and 112.54 W with an evaporation section length of 80 mm.
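A minimal lumped-parameter illustration of the thermal-resistance analysis underlying such models: heat flows from the warm air to the phase change material through resistances in series, so Q = ΔT / ΣR. All temperatures and resistance values below are invented for illustration; they are not the paper's measured parameters.

```python
# Series thermal-resistance sketch: air -> heat pipe -> PCM.
T_air, T_pcm = 50.0, 5.0   # °C: warm-air side and phase-change side (assumed)
R_air = 0.45               # K/W, air-side convection (assumed)
R_pipe = 0.05              # K/W, heat-pipe transport (assumed)
R_pcm = 0.55               # K/W, PCM-side resistance (assumed)

R_total = R_air + R_pipe + R_pcm
Q = (T_air - T_pcm) / R_total   # heat transfer rate through the chain
print(f"heat transfer rate: {Q:.1f} W")
```

In a dynamic model of this kind, the individual resistances (and the PCM-side temperature) evolve during discharge, which is what produces the time-varying heat transfer rate the paper reports.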
Heating and ventilation. Air conditioning; Low temperature engineering. Cryogenic engineering. Refrigeration
Background: Research software is software developed by and/or used by researchers, across a wide variety of domains, to perform their research. Because of the complexity of research software, developers cannot conduct exhaustive testing. As a result, researchers have lower confidence in the correctness of the software's output. Peer code review, a standard software engineering practice, has helped address this problem in other types of software. Aims: Peer code review is less prevalent in research software than in other types of software, and the literature does not contain any studies about its use in research software. Therefore, by analyzing developers' perceptions, the goal of this work is to understand the current practice of peer code review in the development of research software, identify the challenges and barriers associated with it, and present approaches to improve peer code review in research software. Method: We conducted interviews and a community survey of research software developers to collect information about their current peer code review practices, the difficulties they face, and how they address those difficulties. Results: We received 84 unique responses from the interviews and surveys. The results show that while research software teams review a large amount of their code, they lack a formal process, proper organization, and adequate people to perform the reviews. Conclusions: Peer code review is promising for improving the quality of research software and thereby the trustworthiness of the underlying research results. In addition, by using peer code review, research software developers produce more readable and understandable code, which will be easier to maintain.
Hardi M. Mohammed, Zrar Kh. Abdul, Tarik A. Rashid, et al.
Purpose: Researchers increasingly develop metaheuristic algorithms and use them extensively in business, science, and engineering. One common metaheuristic optimization algorithm is Grey Wolf Optimization (GWO), which imitates grey wolves' searching and attacking behavior. The main purpose of this paper is to overcome GWO's tendency to become trapped in local optima. Design/Methodology/Approach: We use the K-means clustering algorithm to enhance the performance of the original GWO by dividing the population into different parts. The proposed algorithm is called K-means clustering Grey Wolf Optimization (KMGWO). Findings: The results illustrate that KMGWO is superior to GWO. To evaluate its performance, KMGWO was applied to 10 CEC2019 benchmark test functions. KMGWO was also compared to Cat Swarm Optimization (CSO), the Whale Optimization Algorithm-Bat Algorithm (WOA-BAT), and WOA, achieving the first rank in terms of performance; statistical tests confirmed that KMGWO achieved significantly better values than the compared algorithms. In addition, KMGWO was used to solve a pressure vessel design problem, where it outperformed the alternatives. Originality/Value: KMGWO combines K-means clustering with GWO, outperforms GWO, CSO, WOA-BAT, and WOA on the CEC2019 benchmarks, and achieves superior results on a classical engineering design problem.
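The core idea of dividing the GWO population with K-means can be sketched as follows: wolves are clustered into packs, and each pack performs the standard GWO position update toward its own alpha, beta, and delta wolves. The number of clusters, the re-clustering schedule, and the per-cluster leader update are illustrative assumptions; the paper's exact scheme may differ.

```python
import numpy as np

# K-means-clustered Grey Wolf Optimization sketch on the sphere
# function f(x) = sum(x^2); minimum is 0 at the origin.
rng = np.random.default_rng(1)
DIM, POP, ITERS, K = 5, 30, 100, 3

def sphere(x):
    return np.sum(x * x, axis=-1)

def kmeans(points, k, steps=10):
    """Plain Lloyd's algorithm; returns a cluster label per point."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(steps):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

wolves = rng.uniform(-10, 10, (POP, DIM))
init_best = sphere(wolves).min()
for it in range(ITERS):
    a = 2.0 * (1 - it / ITERS)            # exploration factor, decays to 0
    labels = kmeans(wolves, K)            # divide the population into packs
    for j in range(K):
        idx = np.where(labels == j)[0]
        if len(idx) < 3:                  # a pack needs alpha, beta, delta
            continue
        pack = wolves[idx]
        leaders = pack[np.argsort(sphere(pack))[:3]]
        for w in idx:                     # standard GWO position update,
            est = np.zeros(DIM)           # but with pack-local leaders
            for lead in leaders:
                r1, r2 = rng.random(DIM), rng.random(DIM)
                A, C = 2 * a * r1 - a, 2 * r2
                est += lead - A * np.abs(C * lead - wolves[w])
            wolves[w] = est / 3.0

best_fit = sphere(wolves).min()
print(f"initial best: {init_best:.3f}, final best: {best_fit:.6f}")
```

Keeping separate leader triplets per cluster lets different packs explore different basins, which is one plausible way clustering can mitigate premature convergence to a single local optimum.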
Temperature monitoring over the lifetime of heat-source components in engineering systems is essential to guarantee their normal operation and working life. However, prior methods, which mainly use interpolation to reconstruct the temperature field from limited monitoring points, require large numbers of temperature sensors for an accurate estimation. This can decrease the availability and reliability of the system and sharply increase the monitoring cost. To solve this problem, this work develops a novel physics-informed deep reversible regression model for temperature field reconstruction of heat-source systems (TFR-HSS), which can better reconstruct the temperature field from limited monitoring points without supervision. First, we define the TFR-HSS task mathematically and model it numerically, transforming it into an image-to-image regression problem. Then we develop a deep reversible regression model that can better learn the physical information, especially over the boundary. Finally, considering the physical characteristics of heat conduction as well as the boundary conditions, we propose a physics-informed reconstruction loss comprising four training losses and jointly learn the deep surrogate model with these losses in an unsupervised manner. Experimental studies conducted on typical two-dimensional heat-source systems demonstrate the effectiveness of the proposed method.
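The flavor of a physics-informed reconstruction loss can be shown without any network: score a candidate temperature field by a data term at sparse monitoring points plus the finite-difference residual of the steady heat equation ∇²T + q/k = 0. The grid size, heat-source layout, and loss weight below are illustrative assumptions, not the paper's four-term configuration.

```python
import numpy as np

n, h, k = 32, 1.0 / 31, 1.0                   # grid, spacing, conductivity
q = np.zeros((n, n))
q[12:20, 12:20] = 50.0                        # one square heat source (assumed)

def laplacian(T):
    """5-point finite-difference Laplacian on the interior nodes."""
    return (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
            - 4.0 * T[1:-1, 1:-1]) / h**2

def physics_informed_loss(T_pred, obs_idx, obs_val, w_phys=1e-6):
    data = np.mean((T_pred[obs_idx] - obs_val) ** 2)     # sensor misfit
    resid = laplacian(T_pred) + q[1:-1, 1:-1] / k        # PDE residual
    return data + w_phys * np.mean(resid ** 2)

# Build a reference field by Jacobi iteration (zero-temperature edges),
# then check it scores far below a physically wrong candidate.
T = np.zeros((n, n))
for _ in range(5000):
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2]
                            + T[1:-1, 2:] + h**2 * q[1:-1, 1:-1] / k)

rng = np.random.default_rng(0)
obs = (rng.integers(1, n - 1, 12), rng.integers(1, n - 1, 12))
loss_true = physics_informed_loss(T, obs, T[obs])
loss_bad = physics_informed_loss(np.zeros((n, n)), obs, T[obs])
print(f"consistent field: {loss_true:.2e}, zero field: {loss_bad:.2e}")
```

Because the PDE residual is computable anywhere on the grid, a surrogate trained against such a loss can be supervised by the physics at every pixel even though temperature labels exist only at the few monitoring points.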