Data processing and uncertainty evaluation method for five-hole probe considering compressibility
Yufeng Du, Lulu Wang, Qiuting Guo
et al.
The data processing and uncertainty evaluation method of the five-hole probe, considering compressibility, is studied to meet the application requirements of five-hole probes in high-speed wind tunnels. The data processing method that accounts for compressibility is derived theoretically in detail, and a data analysis method is established based on interpolation fitting and an iterative solution. On this basis, a Monte Carlo simulation method is further established to evaluate the uncertainty of the flow-field parameters to be measured. This work provides a theoretical basis and technical means for the calibration and application of five-hole probes under compressible conditions in high-speed wind tunnels.
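The Monte Carlo uncertainty evaluation described above can be sketched as follows. The pressure-coefficient relation `flow_angle_deg` and the 5 Pa transducer uncertainty are illustrative assumptions standing in for the probe's interpolated calibration map, not the fit derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_angle_deg(p_left, p_right, p_dyn):
    # Hypothetical pressure-coefficient relation standing in for the probe's
    # interpolated calibration map (an assumption, not the authors' fit).
    c_alpha = (p_left - p_right) / p_dyn
    return np.degrees(np.arctan(0.5 * c_alpha))

# Nominal transducer readings (Pa) and an assumed 5 Pa standard uncertainty.
p_nom = {"p_left": 950.0, "p_right": 900.0, "p_dyn": 1000.0}
sigma = 5.0
n = 100_000

# Monte Carlo propagation: perturb each input, push every draw through the map,
# and read the output uncertainty off the resulting distribution.
samples = {k: rng.normal(v, sigma, n) for k, v in p_nom.items()}
angles = flow_angle_deg(**samples)

mean, std = angles.mean(), angles.std(ddof=1)
print(f"flow angle = {mean:.3f} deg, standard uncertainty = {std:.3f} deg")
```

The same loop applies unchanged once the toy relation is replaced by the actual interpolated calibration surfaces and the iterative solver.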
Associations Between Serum 25(OH)D Concentrations and Clinical Characteristics in Pediatric Patients
Maria Nicolae, Sorin Deacu, Cristina Maria Mihai
et al.
<b>Background/Objectives:</b> Vitamin D has an essential role in immune modulation and inflammatory control, particularly in respiratory infections. Despite widespread supplementation policies, hypovitaminosis D remains common in children and data linking vitamin D status to hospitalization outcomes in pediatric upper respiratory tract infections are limited, especially in Eastern Europe. <b>Methods:</b> We included 400 pediatric patients hospitalized between October 2020 and December 2024 for acute respiratory tract infections (ARTI), and we stratified them into a Normal Vitamin D group (NVD) with sufficient serum 25(OH)D concentrations and a Low Vitamin D group (LVD) with insufficient or deficient levels. Between-group comparisons for continuous variables were performed using non-parametric methods. <b>Results:</b> Children with insufficient or deficient 25(OH)D had a significantly longer duration of hospitalization compared with those with sufficient levels (mean 4.68 ± 2.59 days vs. 2.89 ± 1.81 days). The LVD group showed markedly lower serum vitamin D concentrations (mean 21.63 ± 5.56 ng/mL; median 22.29 ng/mL) compared with the NVD group (mean 47.60 ± 19.59 ng/mL; median 43.70 ng/mL). Markers of disease severity were consistently higher in vitamin D-deficient patients, including higher clinical scores (mean 3.77 ± 2.29 vs. 1.62 ± 1.89), elevated CRP levels (mean 3.50 ± 3.02 mg/L vs. 1.64 ± 1.59 mg/L), and increased O<sub>2</sub> therapy requirement (69.5% vs. 21.0%). Fever was more frequent in the LVD group (61.0% vs. 32.0%). An inverse correlation was observed between serum 25(OH)D concentrations and hospitalization duration, clinical score, and disease severity, with deficiency present across all age strata in the LVD group, while no cases of deficiency were observed in the NVD group. <b>Conclusions:</b> Low serum 25(OH)D concentrations are associated with increased disease severity and prolonged hospitalization.
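The non-parametric between-group comparison mentioned in the Methods can be sketched as a Mann-Whitney U test. The rank-sum implementation below (normal approximation, no tie correction) and the synthetic duration samples are illustrative; the study's raw data are not reproduced here.

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (adequate for samples of this size; ties are ignored in this sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = combined.argsort().argsort() + 1.0   # ranks 1..n1+n2
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sd
    p = 1 - erf(abs(z) / sqrt(2))                # two-sided p-value
    return u1, p

# Illustrative hospitalization durations (days) matching the reported
# group means/SDs; these are simulated, not the study's measurements.
rng = np.random.default_rng(1)
lvd = rng.normal(4.68, 2.59, 200).clip(1)
nvd = rng.normal(2.89, 1.81, 200).clip(1)
u, p = mann_whitney_u(lvd, nvd)
print(f"U = {u:.0f}, p = {p:.2g}")
```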
Medicine (General), Medical physics. Medical radiology. Nuclear medicine
Author Correction: Correlation between positron annihilation lifetime and photoluminescence measurements for calcined hydroxyapatite
Hoda Atta, Kamal R. Mahmoud, Elsayed I. Salim
et al.
Resonant Young’s Slit Interferometer for Sensitive Detection of Low-Molecular-Weight Biomarkers
Stefanus Renaldi Wijaya, Augusto Martins, Katie Morris
et al.
The detection of low-molecular-weight biomarkers is essential for diagnosing and managing various diseases, including neurodegenerative conditions such as Alzheimer’s disease. A biomarker’s low molecular weight is a challenge for label-free optical modalities, as the phase change they detect is directly proportional to the mass bound on the sensor’s surface. To address this challenge, we used a resonant Young’s slit interferometer geometry and implemented several innovations, such as phase noise matching and optimisation of the fringe spacing, to maximise the signal-to-noise ratio. As a result, we achieved a limit of detection of 2.9 × 10<sup>−6</sup> refractive index units (RIU). We validated our sensor’s low molecular weight capability by demonstrating the detection of Aβ-42, a 4.5 kDa peptide indicative of Alzheimer’s disease, and reached the clinically relevant pg/mL regime. This system builds on the guided mode resonance modality we previously showed to be compatible with handheld operation using low-cost components. We expect this development will have far-reaching applications beyond Aβ-42 and become a workhorse tool for the label-free detection of low-molecular-weight biomarkers across a range of disease types.
A Magic Act in Causal Reasoning: Making Markov Violations Disappear
Bob Rehder
A desirable property of any theory of causal reasoning is to explain not only why people make causal reasoning errors but also <i>when</i> they make them. The <i>mutation sampler</i> is a rational process model of human causal reasoning that yields normatively correct inferences when sufficient cognitive resources are available but introduces systematic errors when they are not. The mutation sampler has been shown to account for a number of causal reasoning errors, including <i>Markov violations</i>, the phenomenon in which human reasoners treat causally related variables as statistically dependent when they are normatively independent. A Markov violation arises, for example, when an individual reasoning about a causal chain <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>X</mi><mo>→</mo><mi>Y</mi><mo>→</mo><mi>Z</mi></mrow></semantics></math></inline-formula> treats <i>X</i> as informative about the state of <i>Z</i> even when the state of <i>Y</i> is known. Recently, the mutation sampler was used to predict the existence of previously untested experimental conditions in which the <i>sign</i> of Markov violations would switch from positive to negative. Here, it was used to predict the existence of conditions in which Markov violations should <i>disappear</i> entirely. In fact, asking subjects to reason about a novel causal structure with nothing but <i>generative</i> causal relations (a cause makes its effect more likely) resulted in Markov violations in the usual positive direction. But simply describing one of four causal relations as <i>inhibitory</i> (the cause makes its effect less likely) resulted in the elimination of those violations. Theoretical model fitting confirmed how this novel result is predicted by the mutation sampler.
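The Markov condition at the heart of these violations can be checked numerically for a chain X → Y → Z: once Y is known, X carries no further information about Z. The conditional probability tables below are illustrative placeholders, not the parameters used in the experiments.

```python
# Normative chain X -> Y -> Z with generative relations: the joint
# factorizes as P(x) P(y|x) P(z|y). Parameter values are illustrative.
p_x = {1: 0.5, 0: 0.5}
p_y1_given_x = {1: 0.8, 0: 0.3}   # P(y=1 | x)
p_z1_given_y = {1: 0.8, 0: 0.3}   # P(z=1 | y)

def joint(x, y, z):
    py1 = p_y1_given_x[x]
    pz1 = p_z1_given_y[y]
    return p_x[x] * (py1 if y else 1 - py1) * (pz1 if z else 1 - pz1)

def p_z1_given(y, x=None):
    """P(z=1 | y) or, if x is supplied, P(z=1 | y, x)."""
    xs = [x] if x is not None else [0, 1]
    num = sum(joint(xv, y, 1) for xv in xs)
    den = sum(joint(xv, y, z) for xv in xs for z in (0, 1))
    return num / den

# Markov condition: conditioning on X changes nothing once Y is fixed.
print(p_z1_given(y=1, x=1), p_z1_given(y=1, x=0), p_z1_given(y=1))
```

A reasoner exhibiting a positive Markov violation would instead report a higher value for `p_z1_given(y=1, x=1)` than for `p_z1_given(y=1, x=0)`.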
Search for new physics effects in $ν\barνγ$ production at a Tera-Z factory
H. Denizli, A. Senol, M. Köksal
Rare decays of the Z boson provide a sensitive probe for physics beyond the Standard Model (SM). This study investigates the $e^{+}e^{-} \to Z \to ν\barνγ$ process within the context of the Tera-Z programmes at future colliders such as the FCC-ee and CEPC. The SM predicts a one-loop branching ratio of $7.16 \times 10^{-10}$ for $Z \to ν\barνγ$, a value four times smaller than the current experimental limit from LEP. To explore this window for new physics, we parameterize anomalous $Zν\barνγ$ interactions using an Effective Field Theory framework, considering both dimension-6 and dimension-8 operators. A detailed simulation is performed by generating signal and background events with MadGraph, modeling particle showers with Pythia, and simulating detector effects with Delphes. The analysis employs key kinematic variables, including the photon energy ($E_γ$), the missing transverse energy ($\not{E}_T$), and the missing transverse energy significance ($S_{\not{E}_T}$), to isolate the signal. The results yield upper limits on the anomalous couplings, from which we infer branching ratios for $Z \to ν\barνγ$ on the order of $10^{-9}$. This represents a significant improvement of several orders of magnitude over the LEP sensitivity. Consequently, this study demonstrates the unique potential of the Tera-Z runs not only to test the SM loop-level predictions with unprecedented precision but also to tightly constrain or reveal new anomalous interactions.
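A selection based on the kinematic variables listed above can be sketched as follows. The toy event distributions and the cut thresholds are illustrative assumptions, not the optimised values from the Delphes-level analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy event sample: photon energy E_gamma (GeV), missing transverse energy
# MET (GeV), and MET significance. The exponential shapes and the thresholds
# below are placeholders, not quantities taken from the paper.
n = 10_000
events = {
    "e_gamma": rng.exponential(15.0, n),
    "met": rng.exponential(10.0, n),
    "met_sig": rng.exponential(2.0, n),
}

def select(ev, e_gamma_min=20.0, met_min=15.0, met_sig_min=3.0):
    """Boolean mask of events passing all three kinematic cuts."""
    return (
        (ev["e_gamma"] > e_gamma_min)
        & (ev["met"] > met_min)
        & (ev["met_sig"] > met_sig_min)
    )

mask = select(events)
print(f"selection efficiency: {mask.mean():.3%}")
```

In the actual analysis the same masking pattern would be applied to the reconstructed Delphes objects, with thresholds chosen to maximise the signal significance.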
Role of Mentorship, Career Conceptualization, and Leadership in Developing Women's Physics Identity and Belonging
Jessica L. Rosenberg, Nancy Holincheck, Kathryn Fernández
et al.
The percentage of women receiving bachelor's degrees in physics in the U.S. lags well behind that of men, and women leave the major at higher rates. Achieving equity in physics will mean that women stay in physics at the same rates as men, but this will require changes in the culture and support structures. A strong sense of belonging can lead to higher retention rates, so interventions meant to increase dimensions of physics identity (interest, recognition, performance, and competence) may increase persistence overall and increase women's retention differentially. We describe our model in which mentorship, an understanding of career options (career conceptualization), and leadership are inputs into the development of these dimensions of physics identity. This paper includes preliminary results from a qualitative study that aims to better understand how career conceptualization, leadership, and mentorship contribute to the development of physics identity and belonging. We report results from a survey of 15 undergraduate physics students, followed by interviews with 5 of those students. The students were from a small private liberal arts college in the Midwest region of the U.S. and a large public university in the southeast region of the U.S. classified as a Hispanic-serving institution (HSI). With respect to mentorship, we found that it could provide critical support for students' engagement in the physics community. Leadership experiences have not previously been positioned as an important input into identity, yet we found that they helped women in physics feel more confident, contributing to their recognition of themselves as physics people. While the data on how career conceptualization contributed to the building of identity are limited, there are some connections to recognition and competence, and it will be an interesting avenue of future exploration.
Security analysis of measurement-device-independent quantum conference key agreement with weak randomness
Xiao-Lei Jiang, Yang Wang
et al.
Quantum conference key agreement (QCKA) allows multiple users to distribute secret conference keys over long distances. Measurement-device-independent QCKA (MDI-QCKA) is an effective QCKA scheme, which closes all detection loopholes and greatly enhances QCKA’s security in practical applications. However, an eavesdropper (Eve) may compromise the security of practical systems and acquire conference key information by taking advantage of the weak randomness of imperfect quantum devices. In this article, we analyze the performance of the MDI-QCKA scheme based on the weak randomness model. Our simulation results show that even a small proportion of weak randomness may lead to a noticeable fluctuation in the conference key rate. For the case with finite key size, we find that the weak randomness damages the performance of MDI-QCKA to different degrees depending on the total number of pulses transmitted. Furthermore, we infer that QCKA based on single-photon interference technology may perform better in resisting weak randomness vulnerabilities. Our work contributes to the practical security analysis of multiparty quantum communication and takes a further step in the development of quantum networks.
CFD Study of Thermal Stratification in a Scaled-Down, Toroidal Suppression Pool of Fukushima Daiichi Type BWR
Sampath Bharadwaj Kota, Seik Mansoor Ali, Sreenivas Jayanti
During the 2011 nuclear catastrophe at Fukushima Daiichi, Unit 3 had a sharper increase in containment pressure than Unit 2, with thermal stratification of the suppression pool cited as one of the contributing factors. In the present work, the buoyancy-induced circulation consequent to steam condensation in a large, toroidal pool of water is studied using computational fluid dynamics (CFD) simulations with a view to understanding the role of important design parameters of the suppression pool system. The tunnelling phenomenon observed in the development of the thermal stratification process is delineated in terms of the establishment of a thermocline. The effects of the number of steam injection points and the cross-section of the pool on thermal stratification characteristics have been investigated through a number of case studies. In all the cases, the surface temperature, which is responsible for over-pressurization of the containment, is found to be significantly higher than the bulk pool temperature. Multiple injection points with the same overall steam flow rate are found to lead to higher surface temperatures due to a shortened circulation path. For the same volume of pool water, the simulations show that a deeper and narrower pool gives rise to significantly higher temperatures than a wider and shallower pool. This is attributed to the relatively deeper penetration of the buoyancy-induced circulation into the pool.
Thermodynamics, Descriptive and experimental mechanics
Effect of microstructure evolution induced by LP on hydrogen permeation behavior of 316L stainless steel
Yunfeng Jiang, Shu Huang, Jie Sheng
et al.
In order to investigate the hydrogen permeation behavior of 316L stainless steel during the microstructural evolution induced by laser peening (LP), an electrochemical hydrogen charging system for the initial hydrogen charging of LPed and non-LPed specimens was developed. Afterward, the microhardness, residual stress, and microstructures of the samples were determined and analyzed. Finally, electrochemical hydrogen permeation experiments were undertaken to verify LP's influence on the hydrogen permeation parameters of 316L. The results showed that LP reduced the hydrogen-induced hardening rate of the alloy and additionally induced high-magnitude compressive residual stress on its surface. In the layer close to the surface of the specimen, the grain refinement rate was as high as 56.18%, accompanied by the appearance of high-density dislocations. Compared with the non-LPed sample, the hydrogen permeation time increased significantly, and the saturation current density in steady-state hydrogen permeation also decreased gradually.
Fault interaction and earthquake triggering mechanisms: Progress and prospects
Ke Jia, Shiyong Zhou
Research on fault interaction and earthquake triggering, a central issue in earthquake source physics, can facilitate understanding of the underlying mechanisms of strong earthquakes and also has promising applications in earthquake risk analysis and prediction. Previous review articles provided detailed explanations of stress triggering from the perspectives of basic principles, methods, and applicability, together with multiple earthquake case studies. However, the introduction to earthquake triggering from the perspective of seismicity analysis has not been exhaustive, and the combination and complementarity of these two perspectives have not been treated in detail. This paper summarizes the achievements and progress of research on fault interaction and earthquake triggering mechanisms over the past few decades from the perspectives of physical and statistical models, and reviews the current challenges and possible future directions. From the perspective of the physical model, three important mechanisms of fault interaction are analyzed: static stress triggering, dynamic stress triggering, and viscoelastic stress triggering, together with the basic principles and methods of their calculation. From the perspective of the statistical model, the basic principles and methods of seismicity analysis are introduced, and applications of the epidemic-type aftershock sequence (ETAS) model and the b-value to fault interaction and earthquake triggering are analyzed. Combining the two models, the complementary roles of mutual verification and the basic principle of the rate-and-state friction law are introduced. The analysis points out that the stress interaction between multiple faults or earthquakes can be studied comprehensively through the two different approaches of Coulomb stress calculation and the ETAS model, and that cross-validation can increase the reliability of the results. Retrospective application of the rate-and-state friction law can provide a new perspective for understanding earthquake triggering relationships and fault interaction.
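The ETAS model discussed above assigns each past event a triggering contribution to the current seismicity rate on top of a background rate. The sketch below uses illustrative parameter values and a synthetic three-event catalogue, not values fitted to any real catalogue.

```python
import numpy as np

def etas_intensity(t, history_t, history_m, mu=0.05, k=0.02,
                   alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity lambda(t) of the ETAS model:
    lambda(t) = mu + sum_i k * exp(alpha*(m_i - m0)) / (t - t_i + c)**p.
    Parameter values here are illustrative, not fitted to a catalogue."""
    past = history_t < t
    dt = t - history_t[past]
    triggering = k * np.exp(alpha * (history_m[past] - m0)) / (dt + c) ** p
    return mu + triggering.sum()

# A small synthetic catalogue: occurrence times (days) and magnitudes.
times = np.array([0.0, 1.2, 3.5])
mags = np.array([5.0, 4.2, 4.8])

for t in (1.0, 4.0, 30.0):
    print(f"lambda({t:5.1f}) = {etas_intensity(t, times, mags):.4f}")
```

The Omori-type power-law decay in the triggering kernel is what allows fitted ETAS rates to be cross-validated against Coulomb stress calculations for the same sequence.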
Geophysics. Cosmic physics, Astrophysics
Axion-Neutrino Couplings, Late-time Phase Transitions and the Far Infrared Physics
V. K. Oikonomou
Far-infrared physics is a fascinating topic for theoretical physics, since both the foundations of quantum field theory and neutrino physics appear strongly related to the far-infrared physics of our Universe. In this work we explore the possibility of a late-time thermal phase transition caused by axion-neutrino interactions. The axion is assumed to be the misalignment axion, coupled primordially to a chirally symmetric neutrino. The chiral symmetry is supposed to be broken either spontaneously or explicitly, and two distinct phenomenological models of axion-neutrino interactions are constructed. The axion behaves as cold dark matter throughout all its evolution eras; however, if we assume that the axion and the neutrino fields interact coherently in a classical way, as fields or as ensembles, then we consider thermal effects in the axion sector, due to the values of the operators $φ$ for the axion and $\barνν$ for the neutrinos. The thermal equilibrium between the two has no effect on the axion effective potential over a wide temperature range. As we show, contrary to the existing literature, the axion never becomes destabilized by the finite-temperature effects; however, if axion-Higgs higher-order non-renormalizable operators are present in the Lagrangian, the axion potential is destabilized in the temperature range from $T\sim 0.1\,$MeV down to $T\sim 0.01\,$eV, and a first-order phase transition takes place. The initial axion vacuum decays to the energetically more favorable axion vacuum, and the latter decays to the Higgs vacuum, which is preferable still. This late-time phase transition might take place in the redshift range $z\sim 385-37$ and may thus cause density fluctuations in the post-recombination era.
Anomalies in Particle Physics
Andreas Crivellin
I provide a (personal) review of the current hints for physics beyond the Standard Model, called ``anomalies'', obtained both at the intensity frontier (flavour and electroweak precision observables) and in direct LHC searches. This includes the deviations from the Standard Model predictions in semi-leptonic $B$ decays, the anomalous magnetic moment of the muon, the Cabibbo Angle Anomaly, the $W$ mass as well as non-resonant di-lepton searches, the hints for new scalar particles around $\approx\! 95\,$GeV, $\approx\! 151\,$GeV, $\approx\! 670\,$GeV and the (di-)di-jet excess at $\approx \!1\,$TeV ($\approx 3.6\,$TeV). Possible explanations in terms of new particles are briefly summarized and discussed.
Survey of physics reasoning on uncertainty concepts in experiments: an assessment of measurement uncertainty for introductory physics labs
Michael Vignal, Gayle Geschwind, Benjamin Pollard
et al.
Measurement uncertainty is a critical feature of experimental research in the physical sciences, and the concepts and practices surrounding measurement uncertainty are important components of physics lab courses. However, there has not been a broadly applicable, research-based assessment tool that allows physics instructors to easily measure students' knowledge of measurement uncertainty concepts and practices. To address this need, we employed Evidence-Centered Design to create the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE). SPRUCE is a pre-post assessment instrument intended for use in introductory (first- and second-year) physics lab courses to help instructors and researchers identify student strengths and challenges with measurement uncertainty. In this paper, we discuss the development of SPRUCE's assessment items guided by Evidence-Centered Design, focusing on how instructors' and researchers' assessment priorities were incorporated into the assessment items and how students' reasoning from pilot testing informed decisions around item answer options. We also present an example of some of the feedback an instructor would receive after implementing SPRUCE in a pre-post fashion, along with a brief discussion of how that feedback could be interpreted and acted upon.
Error Analysis of a PFEM Based on the Euler Semi-Implicit Scheme for the Unsteady MHD Equations
Kaiwen Shi, Haiyan Su, Xinlong Feng
In this article, we mainly consider a first order penalty finite element method (PFEM) for the 2D/3D unsteady incompressible magnetohydrodynamic (MHD) equations. The penalty method applies a penalty term to relax the constraint “<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mo>∇</mo><mo>·</mo><mi mathvariant="italic">u</mi><mo>=</mo><mn>0</mn></mrow></semantics></math></inline-formula>”, which allows us to transform the saddle point problem into two smaller problems to solve. The Euler semi-implicit scheme is based on a first order backward difference formula for time discretization and semi-implicit treatments for nonlinear terms. It is worth mentioning that the error estimates of the fully discrete PFEM are rigorously derived, which depend on the penalty parameter <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi>ϵ</mi></semantics></math></inline-formula>, the time-step size <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi>τ</mi></semantics></math></inline-formula>, and the mesh size <i>h</i>. Finally, two numerical tests show that our scheme is effective.
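The penalty relaxation mentioned above replaces the incompressibility constraint by a perturbed equation. Schematically (a sketch of the generic penalty method; the $\epsilon$-scaling shown is the standard one and not necessarily the authors' exact formulation):

```latex
% Generic penalty relaxation of the divergence-free constraint:
\nabla \cdot u_\epsilon + \epsilon\, p_\epsilon = 0, \qquad \epsilon > 0,
% which can be solved locally for the pressure,
p_\epsilon = -\frac{1}{\epsilon}\, \nabla \cdot u_\epsilon ,
```

so the pressure can be eliminated from the momentum and induction equations, splitting the saddle point problem into the two smaller problems referred to in the abstract.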
Dynamical Mean Field Studies of Infinite Layer Nickelates: Physics Results and Methodological Implications
Hanghui Chen, Alexander Hampel, Jonathan Karp
et al.
This article summarizes recent work on the many-body (beyond density functional theory) electronic structure of layered rare-earth nickelates, both in the context of the materials themselves and in comparison to the high-temperature superconducting (high-$T_c$) layered copper-oxide compounds. It aims to outline the current state of our understanding of layered nickelates and to show how the analysis of these fascinating materials can shed light on fundamental questions in modern electronic structure theory. A prime focus is determining how the interacting physics defined over a wide energy range can be estimated and "downfolded" into a low energy theory that would describe the relevant degrees of freedom on the $\sim 0.5$ eV scale and that could be solved to determine superconducting and spin and charge density wave phase boundaries, temperature-dependent resistivities, and dynamical susceptibilities.
cond-mat.str-el, cond-mat.mtrl-sci
Students' perspectives on computational challenges in physics class
Patti Hamerski, Daryl McPadden, Marcos D. Caballero
et al.
High school science classrooms across the United States are answering calls to make computation a part of science learning. The problem is that there is little known about the barriers to learning that computation might bring to a science classroom or about how to help students overcome these challenges. This case study explores these challenges from the perspectives of students in a high school physics classroom with a newly revamped, computation-integrated curriculum. Focusing mainly on interviews to center the perspectives of students, we found that computation is a double-edged sword: It can make science learning more authentic for students who are familiar with it, but it can also generate frustration and an aversion towards physics for students who are not.
Minimally Invasive Lateral Approach through Circular Window with a Diameter of 5 to 6 mm for Maxillary Sinus Floor Elevation with Simultaneous Implant Placement: Retrospective Study
Sang-Woon Lee, Young-Wook Park
The aims of this study were to propose a minimally invasive lateral approach technique for maxillary sinus floor elevation (MSFE) with simultaneous implant placement and to evaluate the surgical outcomes and complications of this technique. This study reviewed 49 surgeries of MSFE with simultaneous implant placement (<i>n</i> = 83) using a minimally invasive lateral approach. A circular window with a diameter of 5 to 6 mm and an area of 20–30 mm<sup>2</sup> was made on the lateral wall of the maxillary sinus. After elevation of the Schneiderian membrane, a xenograft was used for bone grafting. MSFE was possible with a minimum-sized window in 47 of the 49 cases. In the remaining 2 cases, MSFE with a minimum-sized window failed. In one case, the window was expanded beyond 30 mm<sup>2</sup> to repair a membrane perforation. In the other case, MSFE was performed by forming two minimum-sized windows. Post-operative bleeding after MSFE occurred in one anticoagulant-treated patient. There were no failed implants during the follow-up period (mean 22 months). A minimally invasive lateral approach through a small circular window with a diameter of 5 to 6 mm is a feasible and safe technique for MSFE with simultaneous implant placement.
Technology, Engineering (General). Civil engineering (General)
Sequential generation of linear cluster states from a single photon emitter
D. Istrati, Y. Pilnyak, J. C. Loredo
et al.
Generating photonic cluster states using a single non-heralded source and a single entangling gate would optimise scalability and reduce resource overhead. Here, the authors generate up to 4-photon cluster states using a quantum dot coupled to a fibre loop, with a fourfold generation rate of 10 Hz.
Physics Computational Literacy: An Exploratory Case Study Using Computational Essays
Tor Ole B. Odden, Elise Lockwood, Marcos D. Caballero
Computation is becoming an increasingly important part of physics education. However, there are currently few theories of learning that can be used to help explain and predict the unique challenges and affordances associated with computation in physics. In this study, we adapt the existing theory of computational literacy, which posits that computational learning can be divided into material, cognitive, and social aspects, to the context of undergraduate physics. Based on an exploratory study of undergraduate physics computational literacy, using a newly developed teaching tool known as a computational essay, we have identified a variety of student practices, knowledge, and beliefs across these three aspects of computational literacy. We illustrate these categories with data collected from students who engaged in an initial implementation of computational essays in an introductory electricity and magnetism class. We conclude by arguing that this framework can be used to theoretically diagnose student difficulties with computation, distinguish educational approaches that focus on material vs. cognitive aspects of computational literacy, and highlight the benefits and limitations of open-ended projects like computational essays to student learning.