Although the probability of a severe accident at a nuclear power plant is low, such an accident can have catastrophic consequences. This study employed deep-learning-based time-series models to simultaneously predict the core exit temperature, containment pressure, and hydrogen concentration, which are critical monitoring variables during severe accidents. Four models (recurrent neural network, long short-term memory, convolutional neural network (CNN), and temporal convolutional network) were implemented in a multi-input multi-output structure and trained on simulation data from cold-leg loss-of-coolant accident (LOCA), hot-leg LOCA, and steam generator tube rupture scenarios. To address predictive uncertainty, Monte Carlo dropout was applied to estimate confidence intervals. Among the models, the CNN demonstrated the best balance between predictive accuracy and computational efficiency, achieving highly competitive performance despite having significantly fewer trainable parameters and a much shorter training time. This approach combines multivariate prediction and uncertainty quantification, demonstrating its practical potential for integration into AI-based operator support systems. The methodology is expected to enhance operators' situational assessments and support proactive mitigation strategies. Future work will expand the scope of validation by incorporating a wider range of accident scenarios and operational conditions, while also accounting for external and environmental variables that may influence prediction accuracy.
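As a minimal illustration of the uncertainty-quantification step described above, the sketch below shows Monte Carlo dropout applied to a small one-dimensional CNN: dropout is kept active at inference time, and repeated stochastic forward passes yield a mean prediction and a percentile interval. The architecture, hyperparameters, and function names (e.g. `mc_predict`) are illustrative assumptions, not the models used in the study.

```python
import numpy as np
import tensorflow as tf

def build_cnn(window: int, n_features: int, n_outputs: int) -> tf.keras.Model:
    """Small 1-D CNN with dropout as a stand-in for a multi-input multi-output predictor."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(window, n_features)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(n_outputs),  # e.g. core exit temperature, containment pressure, H2 concentration
    ])

def mc_predict(model: tf.keras.Model, x: np.ndarray, n_samples: int = 100, alpha: float = 0.05):
    """Keep dropout active at inference; return the mean prediction and a (1 - alpha) percentile interval."""
    draws = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    mean = draws.mean(axis=0)
    lower, upper = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return mean, lower, upper
```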
Kevin Hermann, Sven Peldszus, Jan-Philipp Steghöfer
et al.
Software security is of utmost importance for most software systems. Developers must systematically select, plan, design, implement, and, especially, maintain and evolve security features -- functionalities such as cryptography or access control that mitigate attacks or protect personal data -- to ensure the security of their software. Although security features are usually available in libraries, integrating them requires writing and maintaining additional security-critical code. While there have been studies on the use of such libraries, surprisingly little is known about how developers engineer security features, how they select which security features to implement and which ones may require custom implementation, and what the implications for maintenance are. As a result, we currently rely on assumptions that are largely based on common sense or individual examples. However, to provide practitioners with effective solutions, researchers need hard empirical data to understand what practitioners need and how they view security -- data that we currently lack. To fill this gap, we contribute an exploratory study with 26 knowledgeable industrial participants. We study how security features of software systems are selected and engineered in practice, what their code-level characteristics are, and what challenges practitioners face. Based on the empirical data gathered, we provide insights into engineering practices and validate four common assumptions.
Nucleon short-range correlations (SRCs) and their high-momentum tails (HMTs) encode key short-range dynamics in nuclei and dense matter. This review provides a concise overview of SRC features relevant to the Equation of State (EOS) of isospin-asymmetric nuclear matter. We summarize empirical and theoretical properties of the single-nucleon momentum distribution $n(k)$, emphasizing the role of the neutron--proton tensor force, the dominance of correlated np pairs, and the enhancement of minority-species HMTs. Links to nucleon effective E-masses, quasi-deuteron components, and orbital entanglement are briefly noted. We examine how SRC-induced HMTs modify kinetic and potential contributions to the EOS in both non-relativistic and relativistic frameworks, including the softening of the kinetic symmetry energy and departures from the isospin parabolic approximation of asymmetric nuclear EOS. Sensitivity to high-momentum components and generalizations to arbitrary dimensions are also highlighted. Implications for heavy-ion reactions are summarized, including effects on particle yields, collective flows, deeply sub-threshold particle production and hard photon emission, driven by modified initial nucleon momentum distributions and abundant high relative-momentum np pairs during the reaction. Finally, we outline SRC-HMT consequences for neutron-star matter, covering proton fractions, tidal deformabilities, $Z$-factors, cooling, and the core--crust transition, as well as possible connections to dark-matter interactions in dense environments.
Some core designs combine highly enriched fuel with moderator materials to enhance neutron utilization. This combination produces a broad neutron spectrum within the system, posing challenges for resonance calculation. This paper introduces a general framework for resonance self-shielding treatment in broad-spectrum fuel lattice problems. The framework consists of three components. First, a new energy group structure is devised to support resonance calculation over the entire energy range and to capture spectral transition and thermalization effects during the eigenvalue calculation. Second, the subgroup method based on the narrow resonance approximation is adopted as a universal method for the resonance calculation. Finally, transport equations for each fissionable region are solved for the neutron flux used to collapse the fission spectrum. The proposed method is verified against fast, intermediate, and thermal spectrum pin cell problems and an assembly problem featuring a fast-thermal coupled spectrum. Numerical results confirm the accuracy of the proposed method in handling these scenarios, with eigenvalue errors below 154 pcm for the pin cell problems and 106 pcm for the assembly problem, demonstrating that it enables accurate resonance self-shielding treatment for broad-spectrum problems.
The perplexing nature of Iran’s nuclear program is evident in its simultaneous growth of enrichment capacity and denial of any aspirations toward nuclear weapons development. This conundrum calls for a rigorous examination of Iran’s deterrence policy and the identification of obstacles hindering the adoption of a nuclear deterrence strategy. The present study’s contribution is thus twofold. Firstly, employing a Systematic Literature Review (SLR) and the Delphi technique, it delves into the intricacies of Iran’s nuclear policy and identifies thirteen driving factors inhibiting the development of nuclear weapons. Secondly, by employing the DEMATEL technique, the article determines the influence of individual factors on Iran’s nuclear strategy as well as the causal relations among them. This study, which draws on the analysis of original data, concludes by highlighting the pivotal role of the Shia religion and the ideology of leaders in Iran’s way of war and deterrence strategy.
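For readers unfamiliar with DEMATEL, the following sketch shows the standard computation (normalized direct-influence matrix, total-relation matrix, prominence and relation scores). The toy matrix is a placeholder; the study's thirteen-factor expert data are not reproduced here.

```python
import numpy as np

def dematel(direct: np.ndarray):
    """Return prominence (D+R) and relation (D-R) scores from a direct-influence matrix."""
    n = direct / direct.sum(axis=1).max()            # normalize by the largest row sum
    t = n @ np.linalg.inv(np.eye(len(direct)) - n)   # total-relation matrix T = N (I - N)^-1
    d, r = t.sum(axis=1), t.sum(axis=0)
    return d + r, d - r                              # prominence (importance), relation (net cause/effect)

# Toy 3-factor example (placeholder values, not the study's expert judgments)
prominence, relation = dematel(np.array([[0, 3, 2], [1, 0, 3], [2, 1, 0]], dtype=float))
```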
Nuclear engineering. Atomic power, International relations
The purpose of this study was to explore the risk factors for anastomotic leakage (AL) after laparoscopic radical resection of rectal cancer and to construct a nomogram model for predicting AL after this procedure. We selected 366 patients who underwent laparoscopic radical resection for rectal cancer in our hospital between January 2021 and December 2023 as the research subjects. Logistic regression analysis was used to screen the risk factors for AL after laparoscopic radical resection for rectal cancer, and a nomogram model for AL after radical resection for rectal cancer was constructed and validated. Among the 366 patients with rectal cancer, 42 developed AL after surgery, an incidence rate of 11.48%. Logistic regression analysis showed that gender, preoperative intestinal obstruction, a distance between the tumor and the anal verge of ≤7 cm, and diabetes were risk factors for AL after laparoscopic radical resection of rectal cancer (P < 0.05). The calibration curve of the nomogram model showed that the model's predicted values fit the actual values well, and the area under the curve of the model was 0.859 (95% CI: 0.807–0.912). Overall, gender, preoperative intestinal obstruction, a tumor-to-anal-verge distance of ≤7 cm, and diabetes are risk factors for AL after laparoscopic radical resection of rectal cancer. The nomogram model has high accuracy and offers guidance for formulating prevention and treatment strategies in advance.
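The modelling steps described above (multivariable logistic regression on the identified risk factors, followed by discrimination assessment via the area under the ROC curve) can be sketched as follows; the data frame and column names are hypothetical placeholders, not the study's dataset.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical column names mirroring the reported risk factors
PREDICTORS = ["male_sex", "preop_obstruction", "tumor_to_anal_verge_le_7cm", "diabetes"]

def fit_and_assess(df: pd.DataFrame):
    """Fit a multivariable logistic regression for AL and report the apparent AUC."""
    X, y = df[PREDICTORS], df["anastomotic_leakage"]
    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])   # the study reports an AUC of 0.859
    return model, auc
```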
Medical physics. Medical radiology. Nuclear medicine, Nuclear engineering. Atomic power
A.A.M. Mahmoud, Rana H. Khashab, Zakiah I. Kalantan
et al.
A bivariate distribution is a probability distribution that describes the joint behavior of two random variables. It provides information about the simultaneous variation of the two variables, allowing us to analyze their relationship and dependencies. This article discusses the bivariate compound exponentiated survival function of the beta distribution. The joint cumulative distribution function and the joint probability density function are obtained in closed form, and several characteristics of the distribution are discussed. The maximum likelihood (ML) estimators of the parameters and two-sample ML predictions of future observations are derived. The Bayes estimators (BEs) of the parameters based on the squared error loss function and two-sample Bayesian predictions of future observations are also presented. The performance of the proposed bivariate distribution is examined using a simulation study. Finally, two data sets are analyzed within the framework of the proposed distribution to demonstrate its flexibility for real-life applications.
Medical physics. Medical radiology. Nuclear medicine, Nuclear engineering. Atomic power
This work addresses the problem of neutron field formation near the elliptical orbits of space objects equipped with nuclear power plants. The high-energy part of the fission spectrum is not affected by the gravitational field, and radiation safety there is ensured by the triad applicable to a point source: activity, distance, and time. A connection between the orbit parameters of a space object and the neutron flux density arises for the thermal (near-thermal) part of the spectrum. The possibility of forming a stable neutron trace within a «torus»-shaped volume around the orbit of a space object is considered, and the paper presents theoretical and numerical evidence supporting this hypothesis. The introduction considers the separation effect of Galileo's relativity principle for the rectilinear uniform motion of a point isotropic neutron source on a plane: an angular asymmetry of the neutron distribution appears in a stationary coordinate system when the neutron velocity relative to the source is close to the source's transport velocity, and a significant velocity dispersion of initially monochromatic neutrons is also observed. This fundamental kinematic effect determines the characteristic distribution of neutrons in the gravitational field when the source moves along a Kepler orbit. The problem is solved in velocity space: if the neutron velocities in velocity space are collinear to the orbital velocity of the source, this indicates the existence of a neutron flux near the orbit. The problem is analysed by simulation modelling of one revolution of a hypothetical space station. For this purpose, thermal neutron packets with an isotropic angular distribution were generated at eight points of an elliptical orbit, and the neutron and source velocities at the selected orbit points were compared in an Earth-fixed coordinate system. The obtained data made it possible to calculate the neutron velocity flux densities toward the front and rear hemispheres relative to the source's orbital motion as a function of the polar angle, while the determinant of the correlation matrix, an indicator of the collinearity of the neutron velocity vectors in the flux, was recorded. The results confirm the hypothesis that a thermal-neutron «trace» can form along the orbit of the source, and this trace should be taken into account as a significant component of radiation risk.
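The kinematic effect described in the introduction can be illustrated with a short simulation sketch: monochromatic, isotropically emitted neutrons acquire an angular asymmetry and a spread of speeds in the Earth-fixed frame after Galilean addition of the source velocity. The speeds below are placeholders, not the paper's orbit parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def emit_isotropic(n: int, speed: float) -> np.ndarray:
    """Isotropic, monochromatic neutron velocities in the source frame (m/s)."""
    d = rng.normal(size=(n, 3))
    return speed * d / np.linalg.norm(d, axis=1, keepdims=True)

v_src_frame = emit_isotropic(100_000, speed=2.2e3)   # ~2200 m/s thermal neutrons
v_source = np.array([2.0e3, 0.0, 0.0])               # source speed comparable to the neutron speed
v_lab = v_src_frame + v_source                       # Galilean velocity addition

speeds = np.linalg.norm(v_lab, axis=1)               # no longer monochromatic in the Earth-fixed frame
forward = (v_lab @ v_source) > 0                     # fraction emitted into the forward hemisphere
print(f"speed spread: {speeds.min():.0f}-{speeds.max():.0f} m/s, forward fraction: {forward.mean():.2f}")
```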
Level 3 probabilistic safety assessment (PSA) is performed to calculate the radionuclide concentrations and exposure doses resulting from nuclear power plant accidents. To calculate the external exposure dose from the released radioactive materials, the radionuclide concentrations are multiplied by two factors, a dose coefficient and a finite cloud dose correction factor (FCDCF), and the resulting values are summed. This means that a standard set of FCDCFs is required for external exposure dose calculations. To calculate a standard set of FCDCFs, the effective distance from the release point to the receptor along the wind direction must be predetermined. The TID-24190 document published in 1968 provides equations for calculating FCDCFs and the resulting standard set of FCDCFs, but it does not explain the effective distance required to calculate that standard set. In 2021, Sandia National Laboratories (SNL) proposed a method to predetermine finite effective distances depending on the atmospheric stability classes A to F, which results in six standard sets of FCDCFs. Meanwhile, independently of SNL, the authors of this paper found that an infinite effective distance assumption is a very reasonable approach for calculating a single standard set of FCDCFs, and they implemented it in the multi-unit radiological consequence calculator (MURCC) code, a post-processor for level 3 PSA codes. This paper calculates and compares short- and long-range FCDCFs obtained with the TID-24190, SNL, and MURCC methods, and explains the strengths of the MURCC method over the SNL method. Whereas the SNL method requires six standard sets of FCDCFs, a single standard set is sufficient with the MURCC method. The use of the MURCC method and its resulting FCDCFs for level 3 PSA is therefore strongly recommended.
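A minimal sketch of the external-dose summation described above (concentration times dose coefficient times FCDCF, summed over nuclides) is given below; the nuclides and numerical values are placeholders, not data from TID-24190, SNL, or MURCC.

```python
def external_cloudshine_dose(concentration, dose_coefficient, fcdcf):
    """Sum over nuclides of (time-integrated concentration) x (dose coefficient) x (FCDCF)."""
    return sum(concentration[n] * dose_coefficient[n] * fcdcf[n] for n in concentration)

# Placeholder example (arbitrary nuclides, units, and values)
dose = external_cloudshine_dose(
    concentration={"Cs-137": 1.0e3, "I-131": 5.0e2},          # Bq*s/m^3, time-integrated
    dose_coefficient={"Cs-137": 2.5e-14, "I-131": 1.6e-14},   # Sv per Bq*s/m^3 (illustrative)
    fcdcf={"Cs-137": 0.6, "I-131": 0.5},                      # dimensionless correction factors
)
```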
Robotic path planning plays a pivotal role in computer-aided thermal ablation surgery for liver tumors. However, traditional methods suffer from low reconstruction efficiency and limited planning safety. To address these issues, we propose a robotic path planning method for liver tumor thermal ablation surgery. First, an interlayer interpolation algorithm based on optical flow estimation is used to compensate for the large interlayer spacing in computed tomography (CT) images by inserting predicted images between sequential slices. Second, the voxel traversal strategy and the patch intersection calculation strategy of the standard marching cubes (MC) algorithm are optimized to improve the efficiency of abdominal tissue reconstruction. Finally, comprehensive clinical constraints are formulated to ensure surgical safety, and the strength Pareto evolutionary algorithm II (SPEA-II), which yields an even distribution of solutions in high-dimensional optimization problems, is used to optimize the surgical path. Extensive experiments on the 3Dircadb and SLIVER07 datasets show that the proposed method reduces reconstruction time by 21.5% compared to the standard MC algorithm, while achieving an average overlap rate of 88.25% and an average Hausdorff distance of 15.25 mm between Pareto front points and the surgeon's recommended puncture points.
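As an illustration of the first step (interlayer interpolation driven by optical flow), the sketch below estimates dense Farneback flow between two adjacent CT slices and warps half-way along it to synthesize an intermediate slice; it is a simplified stand-in under assumed defaults, not the authors' exact algorithm.

```python
import cv2
import numpy as np

def interpolate_slice(slice_a: np.ndarray, slice_b: np.ndarray) -> np.ndarray:
    """Synthesize a mid-slice between two adjacent uint8 grayscale CT slices."""
    # Dense Farneback flow from slice_a to slice_b
    # (arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
    flow = cv2.calcOpticalFlowFarneback(slice_a, slice_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = slice_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample slice_b half-way along the flow, approximating an intermediate slice.
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(slice_b, map_x, map_y, cv2.INTER_LINEAR)
```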
Medical physics. Medical radiology. Nuclear medicine, Nuclear engineering. Atomic power
In this paper we address the following question: how can we use Large Language Models (LLMs) to improve code independently of a human, while ensuring that the improved code (1) does not regress the properties of the original code and (2) improves the original in a verifiable and measurable way? To address this question, we advocate Assured LLM-Based Software Engineering (Assured LLMSE), a generate-and-test approach inspired by Genetic Improvement. Assured LLMSE applies a series of semantic filters that discard code failing to meet these twin guarantees. This overcomes the potential problem of LLMs' propensity to hallucinate and allows us to generate code using LLMs independently of any human. The human plays the role only of final code reviewer, as they would do with code generated by other human engineers. This paper is an outline of the content of the keynote by Mark Harman at the International Workshop on Interpretability, Robustness, and Benchmarking in Neural Software Engineering, Monday 15th April 2024, Lisbon, Portugal.
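The generate-and-test loop with semantic filters can be sketched as follows; the callables `llm_propose`, `behaviour_preserved`, and `measurable_improvement` are hypothetical stand-ins for the filters discussed in the keynote, not APIs of any specific tool.

```python
from typing import Callable, Optional

def assured_improvement(
    original: str,
    llm_propose: Callable[[str], str],                  # hypothetical: LLM generates a candidate rewrite
    behaviour_preserved: Callable[[str, str], bool],    # filter 1: no regression (e.g. tests still pass)
    measurable_improvement: Callable[[str, str], bool], # filter 2: verifiable gain (e.g. lower latency)
    n_candidates: int = 20,
) -> Optional[str]:
    """Return the first candidate that passes both semantic filters, or None if none does."""
    for _ in range(n_candidates):
        candidate = llm_propose(original)
        if behaviour_preserved(original, candidate) and measurable_improvement(original, candidate):
            return candidate                            # survives both guarantees; a human reviews last
    return None
```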
Motivated by learning from experience and exploiting existing knowledge in civil nuclear operations, we have developed in-house generic Probabilistic Safety Assessment (PSA) models for pressurized and boiling water reactors. The models are computationally light, handy, transparent, user-friendly, and easily adaptable to account for major plant-specific differences. They cover the common internal initiating events, frontline and support system reliability and dependencies, human factors, and common-cause failures, and account for new factors typically overlooked in many PSAs. For quantification, the models use generic US reliability data, precursor analysis reports, the ETHZ Curated Nuclear Events Database, and experts’ opinions. Moreover, uncertainties in the most influential basic events are addressed. The generated results show good agreement with assessments from detailed PSAs available in the literature. We envision the models as an unbiased framework to measure nuclear operational risk with the same “ruler”, and hence to support inter-plant risk comparisons that are usually not possible due to differences in plant-specific PSA assumptions and scopes. The models can be used for initial risk screening, order-of-magnitude precursor analysis, and other research and pedagogic applications, especially when no plant-specific PSAs are available. Finally, we are using the generic models for large-scale precursor analysis that will generate big-picture trends, lessons, and insights.
Dawid Brzeminski, Zackaria Chacko, Abhish Dev
et al.
We consider the general class of theories in which there is a new ultralight scalar field that mediates an equivalence principle violating, long-range force. In such a framework, the sun and the earth act as sources of the scalar field, leading to potentially observable location dependent effects on atomic and nuclear spectra. We determine the sensitivity of current and next-generation atomic and nuclear clocks to these effects and compare the results against the existing laboratory and astrophysical constraints on equivalence principle violating fifth forces. We show that in the future, the annual modulation in the frequencies of atomic and nuclear clocks in the laboratory caused by the eccentricity of the earth's orbit around the sun may offer the most sensitive probe of this general class of equivalence principle violating theories. Even greater sensitivity can be obtained by placing a precision clock in an eccentric orbit around the earth and searching for time variation in the frequency, as is done in anomalous redshift experiments. In particular, an anomalous redshift experiment based on current clock technology would already have a sensitivity to fifth forces that couple primarily to electrons at about the same level as the existing limits. Our study provides well-defined sensitivity targets to aim for when designing future versions of these experiments.
Automatic differentiation (AD) is a set of techniques that allows the numerical evaluation of derivatives of functions calculated by a computer program. In recent years, interest in AD has grown significantly in many disciplines, especially in the context of gradient-based optimization algorithms. Sensitivity analysis is another natural application area for AD methods. However, despite the large body of sensitivity and uncertainty (S/U) analysis publications produced in the field of nuclear reactor science and engineering in the last decade, the use of AD by the community has been very limited. The purpose of the present paper is to fill this gap and to demonstrate how AD can be employed in conjunction with some traditionally used sensitivity analysis and uncertainty propagation techniques. Specifically, the forward mode of AD based on dual number arithmetic was considered in the study. We provide a short overview of dual number algebra and dual number automatic differentiation (DNAD) methods, as well as of the tools available for the practical implementation of DNAD, followed by a discussion of its application to S/U analysis. As an illustration, we solve a simplistic example of an infinite, homogeneous diffusion problem using parameters that correspond to a plate-type, Material Testing Reactor fuel assembly. Homogenized cross-sections and uncertainty (covariance) data for the test problem are generated with the SCALE code in six energy groups. The diffusion problem is solved through the power iteration algorithm with the algebra of dual matrices, which yields sensitivity information for use in the sandwich formula. DNAD is also used to calculate partial derivatives of the production and loss operators in the perturbation formula in the context of the adjoint-weighted technique. Both of these methods yield uncertainty values for the multiplication factor that are within three pcm of the reference value. Automatic differentiation can, therefore, be useful for uncertainty propagation in the framework of local sensitivity analysis, in addition to traditionally employed sampling methods or in conjunction with the perturbation method.
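As a minimal illustration of the dual-number mechanism discussed above, the sketch below overloads arithmetic on (value, derivative) pairs so that derivatives propagate automatically; it is a toy class with arbitrary cross-section values, not the DNAD implementation or data used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # real part: the function value
    der: float   # dual part: the derivative with respect to the chosen input

    def __add__(self, o): return Dual(self.val + o.val, self.der + o.der)
    def __sub__(self, o): return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o): return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    def __truediv__(self, o):
        return Dual(self.val / o.val, (self.der * o.val - self.val * o.der) / (o.val * o.val))

# Example: k_inf = nu_sigma_f / sigma_a, differentiated w.r.t. nu_sigma_f by seeding
# its dual part with 1 (values are arbitrary, not the paper's data).
nu_sigma_f = Dual(0.05, 1.0)   # seed: d(nu_sigma_f)/d(nu_sigma_f) = 1
sigma_a = Dual(0.06, 0.0)
k_inf = nu_sigma_f / sigma_a
# k_inf.val is the value; k_inf.der is the sensitivity dk_inf/d(nu_sigma_f) = 1/sigma_a
```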
Maxim Yu. Tuchkov, Petr V. Povarov, Aleksandr I. Tikhonov
et al.
This article focuses on the current issue of developing an operator information support system (OISS) for the Novovoronezh NPP II project. One of the main reasons to raise this topic is the MCR operator’s overload with data caused by the greatly increased information flows of the VVER-1200 Process I&C compared with the serially produced VVER-1000 power units. The other important reason, in the authors’ opinion, is the increased volume of existing hard-copy procedures, driven by stricter requirements for their preparation and by attempts to describe all possible failures and deviations in the programs and plant evolution sheets, which complicates working with them. In the era of ubiquitous digitalization, paper procedures can only further distract an operator who is already overloaded with information. The obvious solution is to create a system that automatically collects and analyses information. In addition, the functionality of the operator information support system allows operating experience to be used, thus minimizing the impact of the human factor. A lack of knowledge or experience can be especially challenging for procedures that are applied infrequently, for example, for starting up and shutting down the unit. The authors consider the development and functionality of interactive procedures and the applicable requirements for them. Particular attention is paid to the ergonomics of the workplace and the convenience of operating personnel working with an interactive procedure. Since the transition from the paper versions of the programs can cause problems with reading the procedures and, ultimately, lead to missing the unit start-up deadline, the personnel of the operating station were directly involved in the development of the interactive programs. Based on the review results, conclusions were drawn about the correctness of the approaches used in developing the interactive procedures, and validated solutions are to be disseminated to all routine operations.
A. Khouass, Christian Attiogbé, Mohamed Messabihi
Critical and cyber-physical systems (CPS) in large industries, such as the nuclear power, railway, automotive, and aeronautical industries, are complex heterogeneous systems. They are complex because they are open, perimeter-less, and often built by assembling various heterogeneous, interacting components that are frequently reconfigured as requirements change. Consequently, the modeling and analysis of such systems is a challenge in software engineering. We introduce a new method for modeling and verifying heterogeneous systems. The method consists in equipping individual components with generalized contracts, ordering these contracts according to given facets, composing the components, and verifying the resulting system with respect to the facets. We illustrate the use of the method with a case study. The proposed method may be extended to cover more facets and to strengthen tool assistance through proactive support for modelling and property verification.
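A possible, purely illustrative reading of contract-equipped components is sketched below: each component carries generalized contracts ordered by facet, and a naive check verifies that a provider's guarantees cover a consumer's assumptions for a given facet. The data structures and the check are assumptions made for illustration, not the paper's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    facet: str                                   # e.g. "safety", "security", "timing"
    assumptions: set[str] = field(default_factory=set)
    guarantees: set[str] = field(default_factory=set)

@dataclass
class Component:
    name: str
    contracts: list[Contract] = field(default_factory=list)

def compatible(provider: Component, consumer: Component, facet: str) -> bool:
    """For one facet, check that the provider's guarantees cover the consumer's assumptions."""
    guarantees = set().union(*[c.guarantees for c in provider.contracts if c.facet == facet])
    assumptions = set().union(*[c.assumptions for c in consumer.contracts if c.facet == facet])
    return assumptions <= guarantees
```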