Disturbances in centrifugal cascade systems manifest as flow or pressure perturbations, typically exhibiting abnormal pressure fluctuations. Taking staged centrifugal cascades as an example, dynamic hydraulic analysis reveals that disturbances propagating toward the depleted-end units cause pressure oscillations that amplify progressively downstream. During operational transitions, dedicated personnel must be assigned to real-time monitoring and necessary intervention to maintain the feed main pressure of Unit 01 within its safe operating range. Accurately determining pressure trends in Unit 01's feed main requires both prolonged disturbance propagation durations and extensive operator experience. Conventional monitoring approaches predominantly rely on threshold-triggered alarms in control systems and experience-driven operator judgment. These methods cannot promptly capture subtle pressure variations during the incipient stages of disturbances, while the nonlinear, multivariable-coupled nature of cascade systems creates intricate parameter interdependencies. Consequently, operators face difficulties in detecting disturbances in time, predicting propagation patterns, and assessing impact scopes. Because no means exist for predicting pressures in a centrifugal cascade system, operators cannot anticipate pressure changes promptly and accurately, which hinders timely response to potential disturbances affecting system stability and safety. To address this challenge, an improved Transformer model specifically designed for effective pressure prediction in centrifugal cascade systems was proposed. The characteristic relations of the input sequence were captured by a two-dimensional convolution layer, and information at adjacent time steps was fused by a one-dimensional convolution layer.
Crucially, to overcome the computational inefficiency of the standard Transformer, especially for the long sequences common in pressure monitoring, a multi-head sparse self-attention (MHSSA) mechanism was employed. MHSSA sparsifies the attention weight matrix to significantly reduce computational complexity and improve efficiency. Furthermore, to enhance the temporal coherence and accuracy of the generated pressure sequences, autoregressive decoding was adopted. By simulating the pressure change caused by mis-operation of an electric regulating valve on the depleted-material main pipe, the model's pressure prediction performance within 60 s was verified. The mean absolute error (MAE), mean absolute percentage error (MAPE) and root-mean-square error (RMSE) of the improved Transformer model for pressure prediction are 0.42, 1.5% and 0.48, respectively. Compared with the traditional Transformer model, these metrics are reduced by 59.6%, 57.1% and 61.9%, demonstrating the effectiveness of the proposed improvements. Furthermore, the proposed model (F4) exhibits superior prediction accuracy compared to the other baseline models, including a standard LSTM (F1), an encoder-decoder LSTM (F2), and the traditional Transformer (F3), across all evaluation metrics. These significant performance gains highlight the model's capability to capture complex spatio-temporal dependencies in cascade system pressure dynamics, providing a valuable tool for real-time monitoring and early anomaly detection.
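The sparsification idea behind MHSSA can be illustrated with a minimal single-head sketch in NumPy. This is an illustrative toy, not the paper's actual implementation: it keeps only the top-k scores in each query row before the softmax, so each output attends to at most k time steps.

```python
import numpy as np

def sparse_attention(Q, K, V, k=4):
    """Toy single-head sparse self-attention: mask all but the k largest
    scores in each query row, then softmax over the survivors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (T, T) score matrix
    kth = np.sort(scores, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
T, d = 16, 8
Q, K, V = rng.normal(size=(3, T, d))
out = sparse_attention(Q, K, V, k=4)                # shape (16, 8)
```

In this toy the full score matrix is still formed; practical sparse-attention implementations avoid computing the masked entries altogether, which is where the claimed complexity reduction comes from.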
Ongoing research into new nuclear mechanisms holds the potential for beneficial developments in nuclear power cycle designs. Recent reports have investigated the possibility of lattice dynamics influencing nuclear processes in metals. Results from Steinetz et al. at the NASA Glenn Research Center indicated that it may be feasible to initiate deuterium-deuterium fusion reactions enhanced by electron screening, which reduces the deuterium-deuterium fusion barrier. This article presents tritium production results from both simulations and experiments targeting specific nuclear processes, in an effort to identify the source of higher-energy neutrons observed in those results. We explore two pathways of tritium generation in TiD2 through this fusion cycle. Tritium production from TiD2 in the University of Missouri Research Reactor (MURR), where the neutron spectrum was approximately 90 percent thermal, was within 25 percent of the amount predicted from simulations, and was well explained by known nuclear reactions without invoking screening-enhanced recoil-induced fusion. Tritium production from TiD2 in the cyclotron vault at MURR, where the neutron spectrum was entirely energetic with almost no thermal neutrons, was a factor of 2.9 to 5.1 higher than predicted from simulations using known nuclear reactions. This indicates the likelihood of an additional mechanism, such as collision-induced fusion in the solid state, increasing the credibility of the results from Steinetz et al.
The last decade has seen data-driven methods take off in nuclear engineering research, with the aim of improving the safety and reliability of nuclear power. This work focuses on developing a reinforcement learning-based control sequence optimization framework for advanced nuclear systems, which aims not only to enhance flexible operation, promoting the economics of advanced nuclear technology, but also to prioritize safety during normal operation. At its core, the framework allows the sequence of operational actions to be learned and optimized by an agent to facilitate smooth transitions between modes of operation (i.e., load-following), while ensuring that all safety-significant system parameters remain within their respective limits. To generate dynamic system responses, facilitate control strategy development, and demonstrate the effectiveness of the framework, a simulation environment of a pebble-bed high-temperature gas-cooled reactor was utilized. The soft actor-critic algorithm was adopted to train a reinforcement learning agent which, after sufficient training, can generate control sequences to maneuver plant power output between 100% and 50% of the nameplate power. Performance validation showed that the agent successfully generated control actions that maintained electrical output within a tight tolerance of 0.5% of the demand while satisfying all safety constraints. During the mode transition, the agent can maintain the reactor outlet temperature within ±1.5 °C and the steam pressure within 0.1 MPa of their respective setpoints by dynamically adjusting control rod positions, control valve openings, and pump speeds. The results demonstrate the effectiveness of the optimization framework and the feasibility of reinforcement learning for designing control strategies for advanced reactor systems.
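The safety-constrained tracking objective described above can be sketched as a shaped reward in plain Python. All setpoints, deadbands, and penalty weights below are illustrative stand-ins, not the paper's values; they mirror the reported ±1.5 °C and 0.1 MPa tolerances only as an example.

```python
def load_following_reward(power, demand, t_out, p_steam,
                          t_set=750.0, p_set=13.5):
    """Illustrative shaped reward for load-following control.

    Rewards demand tracking and penalizes excursions of the reactor
    outlet temperature and steam pressure beyond hypothetical deadbands
    (+/-1.5 degC and 0.1 MPa) around their setpoints.
    """
    tracking = -abs(power - demand) / demand        # relative tracking error
    temp_pen = max(0.0, abs(t_out - t_set) - 1.5)   # excess beyond deadband
    pres_pen = max(0.0, abs(p_steam - p_set) - 0.1)
    return tracking - 10.0 * temp_pen - 10.0 * pres_pen

# Inside the deadbands, only the tracking term contributes
r = load_following_reward(power=98.0, demand=100.0, t_out=750.8, p_steam=13.52)
```

An actual soft actor-critic agent would maximize the discounted sum of such rewards over a mode-transition episode, with hard constraint violations typically handled by larger penalties or episode termination.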
A new benchmark solution has been developed to aid in the development of neutron kinetics solvers for hexagonal geometries, such as those in water-water energetic reactors. This benchmark problem is based on the two-dimensional, two-group, International Atomic Energy Agency–Hex steady-state benchmark problem. Two transient problems are presented: a ramp and a step transient. To create a benchmark-quality solution to this transient problem, a basic neutron kinetics model was added to the computer program LUPINE (Liquid metal–cooled fast reactor Utility for Physics Informed Nuclear Engineering), which solves the neutron kinetics equations on general unstructured meshes. First, the LUPINE kinetics solvers are verified using the TWIGL benchmark problems. Then the methods in LUPINE are used to perform a spatiotemporal convergence analysis to ensure that the solutions are sufficiently converged. Finally, Richardson extrapolation is performed to obtain the reference solutions for these new kinetics benchmark problems.
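Richardson extrapolation combines two solutions computed at grid spacings h and h/2 for a method of known order p to cancel the leading error term. A minimal sketch, using a second-order central difference as a stand-in for the kinetics solutions:

```python
import math

def richardson(f_h, f_h2, p):
    """Richardson extrapolation for a p-th order method:
    f_extrap = f_{h/2} + (f_{h/2} - f_h) / (2**p - 1)."""
    return f_h2 + (f_h2 - f_h) / (2**p - 1)

# Stand-in problem: d/dx sin(x) at x = 1 via second-order central differences
def dsin(x, h):
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

coarse = dsin(1.0, 0.2)
fine = dsin(1.0, 0.1)
extrapolated = richardson(coarse, fine, p=2)   # much closer to cos(1)
```

The extrapolated value's error is O(h^4) rather than O(h^2), which is why the technique can turn a pair of sufficiently converged numerical solutions into a benchmark-quality reference.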
Tomasz Kwiatkowski, Michał Jędrzejczyk, Afaque Shams
The reactor cavity cooling system (RCCS) is a passive reactor safety system commonly present in the designs of High-Temperature Gas-cooled Reactors (HTGRs) that removes heat from the reactor pressure vessel by means of natural convection and radiation. It is one of the factors responsible for ensuring that the reactor does not melt down under any plausible accident scenario. For the simulation of accident scenarios, which are transient phenomena unfolding over a span of up to several days, intermediate-fidelity methods and system codes must be employed to limit the models' execution time. These models can quantify radiation heat transfer well, but heat transfer caused by natural convection must be quantified with correlations for the heat transfer coefficient. It is difficult to obtain reliable correlations for HTGR RCCS heat transfer coefficients experimentally due to such a system's size. They could, however, be obtained from high-fidelity steady-state simulations of RCCSs. The Rayleigh number in RCCSs is too high for a Direct Numerical Simulation (DNS) technique; thus, a Reynolds-Averaged Navier–Stokes (RANS) approach must be employed. There are many RANS models, each performing best under different geometry and fluid flow conditions. To find the most suitable one for simulating an RCCS, the RANS models need to be validated. This work benchmarks various RANS models against three experiments performed on the HTTR RCCS Mockup by the Japan Atomic Energy Agency (JAEA) in 1993. This facility is a 1/6-scale model of the vessel cooling system (VCS) of the High Temperature Engineering Test Reactor (HTTR), which is operated by JAEA. Multiple RANS models were evaluated on a simplified 2D axisymmetric geometry. They were found to reproduce the experimental temperature profiles with errors of up to 22% for the lowest-temperature benchmark and 15% for the higher-temperature benchmarks.
The results highlight that pragmatic turbulence models need to be validated for high-Rayleigh-number natural-convection-driven flows and improved accordingly, that more publicly available experimental data from RCCS-resembling experiments are needed, and that a 2D axisymmetric geometry approximation is likely insufficient to capture all the relevant phenomena in RCCS simulations.
Disused Sealed Radioactive Sources (DSRS) containing neutron sources such as 241Am-Be require careful management due to neutron radiation. However, finding readily available and effective combination layer shielding materials for practical use to safely contain 241Am-Be can be challenging. The main objective of this study is to investigate the configuration of shielding materials and determine the maximum activity of 241Am-Be sources that can be safely stored in a 200-L drum. A three-layer shielding approach using a 200-L drum as a storage container, with sequential layers of lead (Pb), polyethylene (PE), and ordinary Portland concrete (OPC), achieves the lowest dose rates compared to other combination sequences, as shown by Monte Carlo simulations. With a fixed lead thickness and varying polyethylene and ordinary Portland concrete thicknesses, Monte Carlo simulations using the Particle and Heavy Ion Transport code System (PHITS) demonstrate that this drum design can safely accommodate activities ranging from 22.01 Ci to 72.92 Ci of 241Am-Be. The fitted model equation determines the required polyethylene thickness for any activity within this range. Additionally, case-based simulation results indicate that Indonesia's total inventory of 241Am-Be DSRS can be stored in three 200-L drums with a polyethylene thickness of 15 cm. This configuration meets international standards, ensuring the dose rate does not exceed 2 mSv/h at the surface and 0.1 mSv/h at 1 m from the drum's surface.
Medical physics. Medical radiology. Nuclear medicine, Nuclear engineering. Atomic power
O.S. Medvedev, A.G. Razdobarin, E.V. Shubina (Smirnova)
et al.
A technique based on quadrupole mass spectrometry of the material ejected by laser-induced ablation (LIA-QMS) is proposed as a tool to measure H/D/T content in carbon co-deposits in the first wall and divertor regions of tokamak reactors. Tungsten tiles exposed in the Globus-M2 tokamak were taken for validation experiments. Measurement accuracy was determined by comparison with results obtained by laser-induced desorption (LID-QMS) and conventional thermal desorption spectroscopy (TDS). The total amount of deuterium in the hydrocarbon deposits was found to be 2.8 × 10^17 D/cm^2 when measured by LIA-QMS, 3.0 × 10^17 D/cm^2 by LID-QMS, and 2.9 × 10^17 D/cm^2 by TDS. The main uncertainties in the D surface concentration are caused by inhomogeneity of the deposit thickness over the sample area and by the presence of hydrocarbon species in the released gas.
Dominic Power, Stefan Mijin, Kevin Verhaegh
et al.
Plasma-impurity reaction rates are a crucial part of modelling tokamak scrape-off layer (SOL) plasmas. To avoid calculating the full set of rates for the large number of important processes involved, a set of effective rates are typically derived which assume Maxwellian electrons. However, non-local parallel electron transport may result in non-Maxwellian electrons, particularly close to divertor targets. Here, the validity of using Maxwellian-averaged rates in this context is investigated by computing the full set of rate equations for a fixed plasma background from kinetic and fluid SOL simulations. We consider the effect of the electron distribution as well as the impact of the electron transport model on plasma profiles. Results are presented for lithium, beryllium, carbon, nitrogen, neon and argon. It is found that electron distributions with enhanced high-energy tails can result in significant modifications to the ionisation balance and radiative power loss rates from excitation, on the order of 50-75% for the latter. Fluid electron models with Spitzer-Härm or flux-limited Spitzer-Härm thermal conductivity, combined with Maxwellian electrons for rate calculations, can increase or decrease this error, depending on the impurity species and plasma conditions. Based on these results, we also discuss some approaches to experimentally observing non-local electron transport in SOL plasmas.
The paper presents a detailed analysis of helium (He) bubble development in ODS-EUROFER steel caused by helium ion implantation in different regimes, with particular attention to the role of the oxide nanoparticles in promoting He bubble growth, helium accumulation and gas-driven swelling. Transmission Electron Microscopy (TEM) characterization of steel samples implanted under systematic variation of the experimental parameters has clarified the trends of bubble microstructure evolution as functions of implantation dose, flux, and sample temperature. It was found that in all investigated implantation regimes He bubbles formed both in the grain bulk and on various structural defects (dislocations, grain boundaries, oxide particles and carbide precipitates), but the sizes and densities of bubbles in the different bubble populations were sensitive to the particular irradiation conditions. In the majority of cases the main traps for implanted helium, and the main contributors to the estimated swelling, were bubbles associated with grain boundaries, though in some cases (high implantation dose or lower temperature) the bubbles in the grain bulk were competitive with the grain boundary bubble population. Oxide particles in ODS-EUROFER were found to be excellent nucleation sites for He bubbles, and practically every observed particle hosted a single, relatively large bubble, sometimes as large as the particle itself. However, the contribution of oxide-associated bubbles to the estimated swelling and He inventory was found to be minor compared to other bubble populations because of the relatively low number density of nano-oxides. Comparison of ODS-EUROFER and EUROFER 97 samples implanted with He ions in identical regimes demonstrated a lower efficiency of ODS-EUROFER in accumulating implanted helium in bubbles and a noticeably higher share of helium atoms trapped in vacancy defects invisible to TEM.
Objective: Before cochlear implantation, the cochlea's morphology must be accurately identified. This study proposes an improved network model based on U-Net that can automatically segment human cochlear anatomy in computed tomography (CT) images. Methods: CT scan data of 100 patients requiring cochlear implantation diagnosed in our hospital were randomly collected and divided into a training set (n = 75) and a test set (n = 25). All data were manually segmented by two clinicians, and U-Net was trained on the same data. For the test set, manual and automatic segmentations of the cochlea were compared using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95%). Results: The DSC and HD95% of manual cochlear image segmentation were 0.761 and 4.343, respectively. The DSC and HD95% for automatic segmentation of the cochlear structure using the U-Net network were 0.742 and 4.217, respectively. The differences in DSC and HD95% between the two segmentation methods were not statistically significant (P > 0.05). Conclusions: The cochlea can be thoroughly segmented automatically with the U-Net neural network, with precision close to that of manual segmentation.
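The Dice similarity coefficient used here measures the overlap of two binary masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the masks are synthetic, not cochlear data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Two 4x4 squares offset by one pixel: overlap 3x3 = 9 pixels
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
dsc = dice_coefficient(a, b)   # 2*9/(16+16) = 0.5625
```

HD95% complements DSC by reporting the 95th percentile of boundary-to-boundary distances, so it is sensitive to outlier surface errors that overlap measures miss.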
The increasing pressure within VUCA (volatility, uncertainty, complexity and ambiguity) environments means that traditional, plan-driven Systems Engineering approaches no longer suffice. Agility is thus changing from a "nice-to-have" to a "must-have" capability for successful system-developing organisations. The current state of the art, however, does not provide clear answers on how to meet this need in terms of processes, methods, tools and competencies (PMTC), or how to successfully manage the transition within established industries. In this paper, we propose an agile Systems Engineering (SE) Framework for the automotive industry to meet the new demand for agility. In addition to the methodological background, we present results from a pilot project in the chassis development department of a German automotive manufacturer and demonstrate the effectiveness of the newly proposed framework. By adopting the described agile SE Framework, companies can foster innovation and collaboration on a learning-oriented, continuously improving and self-reinforcing basis.
This review attempts to systematically and analytically consider selected results of scientific research and applied development over the past 15–20 years on the urgent contemporary problem of hydrogen, hydrogen energy and atomic-hydrogen energy. In the context of a reasoned statement of the problem, its main categorical-conceptual apparatus is defined. The main directions and questions of research toward a phased solution of the problem are indicated. It is argued that the foundation of the problem is an understanding of the physicochemical properties of hydrogen as a chemical element and its characteristics as a simple substance, based on a number of its specific properties. The phenomenon of hydrogen corrosion is considered and analyzed from the point of view of the level of danger, the risk of its use, and safety precautions. Attention is focused on the features of the processes of storage, transportation and use of hydrogen as an energy carrier and as a raw material for technologies. The advantages of obtaining and using solid-phase hydrogen compounds with metals and intermetallic compounds as convenient and safe means of delivering hydrogen to consumers are noted. An example is given, illustrated by a diagram, of the use of the most effective hydrides as H2 carriers in motor vehicles, with H2 added to the main fuel in the engine power system. Special conditions for the use of H2 in heat supply processes (and in thermal power engineering in general) are indicated, taking into account the differences in the thermophysical characteristics of H2, CH4, air and oxygen. The features of the development and use of means of transporting and storing H2 are noted. Considerable attention is paid to the physicochemical foundations of the production and use of metal hydrides and intermetallides, evaluated as means of solid-phase storage and transfer of H2 in technological processes.
A classification of hydrides is presented, along with the functional characteristics, preparation and areas of use of the most effective and promising ones, the metal-like and especially the intermetallic hydrides. The innovative concept of atomic-hydrogen energy, which will determine the most promising areas of practical development on this problem and their implementation, is described in detail. The concept is based on using the heat of a gas-cooled nuclear reactor for two types of tasks: the efficient use of hydrogen as an energy carrier, for example in heat supply; and making hydrogen production methods efficient and profitable, since the numerous methods and technologies already proposed are currently inefficient and unprofitable without nuclear technologies. A project is proposed for using the heat of a gas-cooled nuclear reactor in a fundamentally new complex for long-distance heat supply (method, technology, schemes) using a two-stage, reversible chemo-thermal process. It is shown that the heat of a nuclear reactor can be used for the effective implementation of a number of traditional and innovative chemical, electrochemical, biochemical and other reactions for obtaining H2. A feasibility study has demonstrated the effectiveness of such nuclear-hydrogen energy. A complete list and analysis of innovative, reversible (cyclic) chemical reactions for the production of H2 is presented. The review is based on recent foreign publications on the subject (2018–2022) obtained from the international source Elsevier ScienceDirect. Bibl. 26, Fig. 4, Tab. 5.
Heat-pipe-cooled nuclear reactors are a very attractive technical solution for providing power for deep-space applications. In this paper, a 200 kWe space nuclear reactor power design is proposed based on the combination of an integrated UN ceramic fuel, a heat pipe cooling system and Stirling power generators. Neutronics and thermal analyses have been performed on the space nuclear reactor. It was found that the entire reactor core remains at least 3.9 $ subcritical even under the worst-case submersion accident superimposed with a single safety drum failure, and results from fuel temperature coefficient, neutron spectrum and power distribution analyses also showed that this reactor design satisfies the neutronics requirements. Thermal analysis showed that the power in the core can be successfully removed both in normal operation and under one or more heat pipe failure scenarios.
This work presents the n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems (n<sup>th</sup>-CASAM-N), which enables the most efficient computation of exactly determined expressions of arbitrarily high-order sensitivities of generic nonlinear system responses with respect to model parameters, uncertain boundaries, and internal interfaces in the model’s phase space. The mathematical framework underlying the n<sup>th</sup>-CASAM-N is proven to be correct by using mathematical induction. The n<sup>th</sup>-CASAM-N is formulated in linearly increasing higher-dimensional Hilbert spaces—as opposed to exponentially increasing parameter-dimensional spaces—thus overcoming the curse of dimensionality in sensitivity analysis of nonlinear systems.
Background Digital measurement systems based on ADCs (analog-to-digital converters) place high requirements on the signal-to-noise ratio (SNR) of the sampled data. Among all contributing factors, the jitter of the sampling clock has the most prominent effect on SNR. Purpose This study aims to design a clock circuit based on a dual-loop phase-locked loop (PLL) to reduce the jitter of the digital measurement system's input clock. Methods First, the influence of clock jitter on a digital measurement system was analyzed. Then, the Texas Instruments LMK04610 chip, with a dual-loop PLL architecture, was employed to design and implement a dual-loop PLL jitter cleaner circuit. The core elements of this design were the power supply design and the loop filter design. Finally, the performance of the circuit was tested using a Rohde & Schwarz phase noise analyzer. Results In testing, the dual-loop PLL jitter cleaner circuit reduced the jitter of the 62.475 MHz source clock from more than 7 ps to less than 2 ps at an output frequency of 499.8 MHz. The SNR of the sampled data is close to the theoretical value. Conclusions The dual-loop PLL jitter cleaner circuit performs well and can serve as a reference for designers of digital measurement systems.
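The link between sampling-clock jitter and achievable SNR can be made concrete with the standard aperture-jitter bound for a full-scale sine input, SNR = -20·log10(2π·f_in·t_j). Applying it with the abstract's numbers (treating 499.8 MHz as the sampled signal frequency, purely for illustration):

```python
import math

def jitter_limited_snr(f_in_hz, jitter_s):
    """Aperture-jitter-limited SNR (dB) for a full-scale sine input:
    SNR = -20 * log10(2 * pi * f_in * t_jitter)."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * jitter_s)

# Cleaning the clock from >7 ps down to <2 ps at 499.8 MHz
snr_before = jitter_limited_snr(499.8e6, 7e-12)   # ~33 dB ceiling
snr_after = jitter_limited_snr(499.8e6, 2e-12)    # ~44 dB ceiling
```

Each halving of jitter buys about 6 dB of SNR headroom, which is why the jitter cleaner moves the sampled-data SNR close to the ADC's theoretical value.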
Yochan Kim, Yung Hsien James Chang, Jinkyun Park
et al.
As a part of probabilistic risk (or safety) assessment (PRA or PSA) of nuclear power plants (NPPs), the primary role of human reliability analysis (HRA) is to provide credible estimations of the human error probabilities (HEPs) of safety-critical tasks. In this regard, it is vital to provide credible HEPs based on firm technical underpinnings including (but not limited to): (1) how to collect HRA data from available sources of information, and (2) how to inform HRA practitioners with the collected HRA data. Because of these necessities, the U.S. Nuclear Regulatory Commission and the Korea Atomic Energy Research Institute independently developed two dedicated HRA data collection systems, SACADA (Scenario Authoring, Characterization, And Debriefing Application) and HuREX (Human Reliability data EXtraction), respectively. These systems provide unique frameworks that can be used to secure HRA data from full-scope training simulators of NPPs (i.e., simulator data). In order to investigate the applicability of these two systems, two papers have been prepared with distinct purposes. The first paper, entitled “SACADA and HuREX: Part 1. The Use of SACADA and HuREX Systems to Collect Human Reliability Data”, deals with technical issues pertaining to the collection of HRA data. This second paper explains how the two systems are able to inform HRA practitioners. To this end, the process of estimating HEPs is demonstrated based on feed-and-bleed operations using HRA data from the two systems.
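As a heavily hedged illustration of the final step, a human error probability can be point-estimated from simulator counts with a Beta-prior update, a common treatment for sparse reliability data; the prior choice and the counts below are hypothetical, and neither SACADA nor HuREX is claimed to use exactly this estimator:

```python
def hep_estimate(errors, opportunities, a_prior=0.5, b_prior=0.5):
    """Posterior-mean HEP under a Beta(a, b) prior (Jeffreys by default)
    with binomially distributed error counts."""
    return (errors + a_prior) / (opportunities + a_prior + b_prior)

# Hypothetical simulator data: 2 errors in 120 task opportunities
hep = hep_estimate(2, 120)   # (2 + 0.5) / 121, roughly 0.0207
```

The prior keeps the estimate nonzero even when no errors are observed, which matters because many safety-critical tasks have very few recorded failures.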
In this work, large-scale shell-model calculations for $β$-decay properties have been performed. $β$-delayed $γ$-ray spectroscopy was recently performed at ILL, Grenoble [M. Si \textit{et al.}, Phys. Rev. C {\bf 106}, 014302 (2022)] to study $β$-decay properties corresponding to $^{137}$Te $(7/2^-)$ $\rightarrow$ $^{137}$I ($J_f$) transitions. We have carried out a systematic shell-model study of the nuclear structure properties and compared the obtained results with the experimental data. Finally, $β$-decay properties such as the $\log ft$ values and average shape factors are reported. This is the first theoretical calculation of the $\log ft$ values corresponding to these new experimental data. In addition, we also report calculated $\log ft$ results for $^{135}$Te $(7/2^-)$ $\rightarrow$ $^{135}$I ($J_f$) transitions.
Employing the concept of three-body radial distribution function and using the two-body correlation functions, calculated based on the lowest order constrained variational method, we investigated the effect of the three-body force (TBF) on the nuclear matter properties, for Argonne and Urbana $\it{v_{14}}$ potentials. As such, the results for nuclear matter density, incompressibility, energy per nucleon, and symmetry energy are presented at the saturation point. The inclusion of a phenomenological TBF resulted in closer values of the saturation density, incompressibility, and symmetry energy to the empirical ones for the symmetric nuclear matter. This is especially the case for the Urbana $\it{v_{14}}$ potential. In addition, an empirically-verified parabolic approximation of the interaction energy was utilized to perform an approximate study of the nuclear matter with neutron excess. Hence, at densities higher than about 0.3~fm$^{-3}$ and for proton-to-neutron density ratios close to the symmetric nuclear matter, the inclusion of TBF resulted in an extra attraction for the Argonne as compared to the Urbana $\it{v_{14}}$ potential.