I consider the longstanding issue of the hermiticity of the Dirac equation in curved spacetime. Instead of imposing hermiticity by adding ad hoc terms, I renormalize the field by a scaling function, which is related to the determinant of the metric, and then regularize the renormalized field on a discrete lattice. I find that, for time-independent and diagonal (or conformally flat) coordinates, the Dirac equation returns a pseudo-Hermitian (i.e., PT-symmetric) Hamiltonian when properly regularized on the lattice. Notably, the PT-symmetry is unbroken, ensuring a real energy spectrum and unitary time evolution. This establishes stringent conditions for the existence of complex spectra in 1D non-Hermitian (NH) models. Conversely, time-dependent spacetime coordinates break pseudo-Hermiticity, yielding NH Hamiltonians with nonunitary time evolution. Similarly, space-dependent coordinates lead to the NH skin effect (NHSE), i.e., the accumulation of localized states on the boundaries. Arguably, these NH effects are physical: time dependence leads to local gain and loss processes and nonunitary growth or decay. Conversely, space dependence leads to the NHSE with spatial decay of the fields in a preferential direction. In other words, the curvature gradients induce an imaginary gauge field, corresponding to a drift force acting in space and time, pushing the eigenmodes to the boundaries or forcing their probability density to increase or decrease over time. Hence, temporal curvature gradients produce nonunitary gain or loss, while spatial curvature gradients correspond to the NHSE, allowing for the description of these two phenomena in a unified framework. This also suggests a duality between NH physics and spacetime deformations, framing NH physics in purely geometric terms. This metric-induced non-Hermiticity unveils an unexpected connection between the spacetime metric and NH phases of matter.
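As a schematic illustration (standard definitions, not equations quoted from the abstract), pseudo-Hermiticity of a lattice Hamiltonian $H$ means that there exists a Hermitian, invertible metric operator $\eta$ such that

```latex
\eta\, H\, \eta^{-1} = H^{\dagger}, \qquad \eta = \eta^{\dagger}.
```

Under this condition the eigenvalues of $H$ are either real or come in complex-conjugate pairs; when the PT symmetry is unbroken, all eigenvalues collapse onto the real axis, giving the real spectrum and unitary time evolution referred to above.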
This chapter narrates the journey of developing and integrating computing into the physics curriculum through three consecutive courses, each tailored to the learners' level. It starts with the entry-level "Physics Playground in Python" for high school and freshman students with no programming experience, designed in the spirit of the "Hello World" approach. At the sophomore and junior level, students from all sciences and engineering disciplines learn "Scientific Computing with Python" in an environment based on the "Two Bites at Every Apple" approach. Ultimately, upper undergraduate and entry-level graduate students take "Computational Physics," to develop their skills in solving advanced problems using complex numerical algorithms and computational tools. This journey showcases the increasing complexity and sophistication of computational tools and techniques that can be incorporated into the physical science curriculum, serving as a guide for educators looking to integrate computing into their teaching. It also aims to inspire students by showcasing the impact and potential of computational methods in science education and research.
Lawal Abubakar, Nor Azah Yusof, Abdul Halim Abdullah
et al.
Due to the release of hazardous heavy metals from various industries, water pollution has become one of the biggest challenges for environmental scientists today. Mercury, Hg(II), is regarded as one of the most toxic heavy metals due to its ability to cause cancer and other health issues. In this study, a tailor-made, modern, eco-friendly molecularly imprinted polymer (MIP)/nanoporous carbon (NC) nanocomposite was synthesized and examined for the uptake of Hg(II) from aqueous solution. The MIP/NC nanocomposite was fabricated via bulk polymerization, involving complexation of the template, followed by polymerization and, finally, template removal. The formed nanocomposite was then characterized by morphological, thermal degradation, functional, and surface area analyses. The MIP/NC nanocomposite, with a high specific surface area of 884.9 m<sup>2</sup>/g, was evaluated for its efficacy towards the adsorptive elimination of Hg(II) as a function of solution pH, adsorbent dosage, initial concentration, and interaction time. The analysis showed that a maximum Hg(II) adsorption capacity of 116 mg/g was attained at pH 4, while the Freundlich model fitted the equilibrium sorption results, which were aligned with pseudo-second-order kinetics. Likewise, thermodynamic parameters such as enthalpy, entropy, and Gibbs free energy indicated that the adsorption was spontaneous, favorable, and endothermic. Furthermore, the adsorption efficiency of MIP/NC was also evaluated against a real condensate sample from the oil and gas industry, showing an 87.4% recovery of Hg(II). Finally, the synthesized MIP/NC showed promise as a selective adsorbent of Hg(II) in polluted environments; evaluating combined adsorbents from a variety of different precursors is recommended for heavy metal and pharmaceutical removal.
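For reference, the models named above take their standard textbook forms (symbols assumed rather than taken from the paper: $q_e$ the equilibrium uptake, $q_t$ the uptake at time $t$, $C_e$ the equilibrium concentration, $K_F$ and $n$ Freundlich constants, $k_2$ the pseudo-second-order rate constant):

```latex
q_e = K_F\, C_e^{1/n} \quad \text{(Freundlich isotherm)}, \qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \quad \text{(pseudo-second-order kinetics)},
```

with spontaneity of an endothermic adsorption following from $\Delta G = \Delta H - T\Delta S < 0$ when $\Delta H > 0$ and $T\Delta S$ is sufficiently large.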
Alla G. Polyakova, Anna G. Soloveva, Petr V. Peretyagin
et al.
The development of anti-pain technologies in the complex treatment of pain syndromes is one of the most urgent tasks of modern medicine. We undertook a placebo-controlled experimental study of the therapeutic potential of low-intensity laser radiation when applied to acupuncture points that are directly related to the autonomic nervous system. The adaptation effect of puncture photobiomodulation on the induction of stress-mediated autonomic reactions, oxidative metabolism and microcirculation in animals during the acute phase of pain stress was revealed. The data obtained are of interest for use in the complex rehabilitation of patients with pain syndromes.
Fairooz Kareem, Asrar Abdulmunem Saeed, Mahasin F. Hadi Al-Kadhemy
et al.
Energy transfer in a hybrid mixture of Rhodamine 6G (Rh6G) laser dye as a donor and nanoparticles (NPs) as an acceptor was studied. The absorption spectra of 1×10<sup>-5</sup> M Rh6G in distilled water showed an increase in peak intensity upon addition of NPs. Notably, the spectra improved most upon addition of Aluminum Oxide (Al2O3) NPs. The addition of NPs quenches the fluorescence spectra of Rh6G due to Förster resonance energy transfer (FRET). The efficiency of this energy transfer increases with increasing concentration of NPs, and the best energy-transfer efficiency was found for the Rh6G/Magnesium Oxide (MgO) NP system. A similarly strong relationship was also found for the Rh6G/Al2O3 NP system.
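For context, the FRET efficiency in such quenching experiments is commonly obtained from the donor fluorescence intensity with ($F_{DA}$) and without ($F_D$) the acceptor — a standard relation, not one quoted from the paper:

```latex
E = 1 - \frac{F_{DA}}{F_{D}} = \frac{1}{1 + (r/R_0)^{6}},
```

where $r$ is the donor–acceptor distance and $R_0$ is the Förster radius at which $E = 1/2$.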
The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI in highly sensitive areas such as medicine, cybersecurity or autonomous driving. Great interest exists in understanding which features of the input data prompt model decision making. In this contribution, we propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field, developed in the physical sciences. By identifying conserved weights within groups of minima of the loss landscapes, we can identify the drivers of model decision making. Analogues to this idea exist in the molecular sciences, where coordinate invariants or order parameters are employed to identify critical features of a molecule. However, no such approach exists for machine learning loss landscapes. We will demonstrate the applicability of energy landscape methods to machine learning models and give examples, both synthetic and from the real world, for how these methods can help to make models more interpretable.
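The weight-conservation idea can be sketched numerically. The following is a minimal illustration, not the authors' implementation: given several flattened weight vectors, one per minimum of the loss landscape, weights whose spread across minima is small relative to the overall weight scale are flagged as conserved. The function name and tolerance are hypothetical.

```python
import numpy as np

def conserved_weights(minima, rel_tol=0.05):
    """Flag weights that are nearly invariant across a group of loss minima.

    minima : array of shape (n_minima, n_weights), one flattened weight
             vector per minimum.
    Returns a boolean mask marking weights whose standard deviation across
    minima is small compared with the mean absolute weight magnitude.
    """
    W = np.asarray(minima, dtype=float)
    spread = W.std(axis=0)            # per-weight variation across minima
    scale = np.abs(W).mean() + 1e-12  # global scale to normalise against
    return spread < rel_tol * scale

# Toy example: three "minima" that agree on the first two weights only.
minima = np.array([
    [1.00, -0.50,  0.3],
    [1.01, -0.49, -0.8],
    [0.99, -0.51,  0.9],
])
mask = conserved_weights(minima)  # [True, True, False]
```

In a real setting the conserved mask would be intersected with the input features those weights act on, linking invariant parameters back to the drivers of the model's decisions.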
Frédéric Bouquet, Julien Bobroff, Lou-Andreas Etienne
et al.
We developed a two-day physics class that uses a nearby forest as a teaching location. Using low-cost material, students design and carry out physics projects outside the usual controlled environment of the classroom. In this way they come to realize that physics can be used to understand the real world. They organize and present their results in an original format, an exhibit they collectively build. This project is an introduction to the role physics can play in exploring environmental issues, incorporating a sensitive and positive aspect which is important in this time of environmental crisis.
External reactor vessel cooling (ERVC) is one of the important severe accident mitigation strategies to achieve in-vessel retention (IVR) of molten core debris under severe accident conditions. Referring to the IVR-ERVC conditions for the prototypical elliptic-shaped pressure vessel lower head wall, a critical heat flux (CHF) test campaign was carried out in this paper on a full-sized thick test block section installed in a one-dimensional full-height natural circulation test loop. Eighteen groups of heating rods with independent power control were inserted into the test block. Eight experimental measuring points were evenly distributed on the heating wall of the test block along the inclination angle, and the heating power shapes of the experimental measuring points were determined according to Theofanous' power shaping principle. Thermocouples were arranged near the heating wall and on all sides of the test block to obtain temperature information during heating and at CHF occurrence. CHF data, as well as their distribution along the elliptical-shaped outer wall of the test block, were obtained. Meanwhile, preliminary evidence of the typical CHF triggering mechanism on a downward-facing curved heating wall was deduced from visual observations during the test. The visual observations show that when the evaporative drying area of the liquid film under the vapor block is large enough, it is difficult to cool the heating wall of the test block; the wall temperature rises rapidly, and CHF occurs. Furthermore, the effects of inlet subcooling, flooding water level, flow resistance and natural circulation flow rate, as well as the gap size of the ERVC channels, on the CHF limits are experimentally studied. Test results show that CHF increases with the inclination angle of the heating wall, and that increasing the inlet subcooling can significantly increase CHF.
Increasing the inlet subcooling reduces the liquid temperature in the two-phase boundary layer and effectively delays the evaporation of the liquid film, thereby improving the CHF. In both the base cases and the inlet subcooling cases, a relative decrease of CHF occurs in the uppermost section of the heating wall, which is called the "exit phenomenon". The CHF of the heating wall increases slightly with the liquid level, while changes of the natural circulation flow resistance and flow rate within a certain range have a rather limited impact on CHF. According to the CHF triggering mechanism, the flow rate change is not large enough to cause instability and fracture of the vapor block, and the near-wall flow structure does not change significantly, so the impact is limited. The influence of the gap size of the ERVC channel on CHF is quite complicated: it seems that the relative relationship between the gap size and the thickness of the two-phase boundary layer, as well as the streamline constraints of the flow channel wall on the vapor phase, both influence the CHF magnitude and distribution.
Composite slab systems have become increasingly popular over the last few decades because of the advantages of merging the two building materials, profiled steel sheets and concrete. The profiled composite slab's performance depends on the composite interaction in the longitudinal direction of the concrete–steel interface. Geopolymer concrete has emerged over the last few years as a potential sustainable construction material, with 80% less carbon dioxide emissions than cementitious concrete. Recently, self-compacted geopolymer concrete (SCGC) has been developed, synthesised from a fly ash/slag ratio equal to 60/40, micro fly ash (5%), anhydrous sodium metasilicate solid powder as the alkali-activator and a water/solid content ratio equal to 0.45. The production of SCGC eliminates the need for an elevated temperature during curing and highly corrosive alkali-activator solutions, as in traditional geopolymer concrete. The bond characteristics of the profiled composite slab system incorporated with the SCGC mix have not yet been thoroughly investigated. The cost-effectiveness of small-scale tests has popularised their usage among researchers as an alternative technique to large-scale testing for assessing composite slab load shear capacity. In this paper, small-scale push tests were conducted to investigate the load-slip behaviour of the SCGC composite slab compared to the normal concrete (NC) composite slab, with targeted compressive strengths of 40 and 60 MPa. The results indicate that SCGC has better chemical adhesion with profiled steel sheets than NC. Additionally, the profiled composite slab incorporated with SCGC possesses higher ultimate strength and toughness than the normal concrete composite slab.
Turbulence remains a problem that is yet to be fully understood, with experimental and numerical studies aiming to fully characterise the statistical properties of turbulent flows. Such studies require huge amounts of resources to capture, simulate, store and analyse the data. In this work, we present physics-informed neural network (PINN) based methods to predict flow quantities and features of two-dimensional turbulence with the help of sparse data in a rectangular domain with periodic boundaries. While the PINN model can reproduce all the statistics at large scales, the small-scale properties are not captured properly. We introduce a new PINN model that effectively captures the energy distribution at small scales, performing better than the standard PINN-based approach. It relies on training the low- and high-wavenumber behaviour separately, leading to a better estimate of the full turbulent flow. With 0.1% training data, we observe that the new PINN model captures the turbulent field at inertial scales, leading to a general agreement of the kinetic energy spectra over up to eight to nine decades compared with solutions from direct numerical simulation (DNS). We further apply these techniques to successfully capture the statistical behaviour of large-scale modes in the turbulent flow. We believe such methods have significant applications in enhancing the retrieval of existing turbulent data sets at even shorter time intervals.
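The low/high-wavenumber separation underlying the new PINN model can be illustrated with a sharp spectral filter on a periodic 2D field. This is a minimal sketch of the decomposition only — the cutoff value and function names are assumptions, and no neural network training is shown:

```python
import numpy as np

def split_wavenumbers(field, k_cut):
    """Split a 2D periodic field into low- and high-wavenumber parts.

    A sharp spectral filter at |k| = k_cut partitions the Fourier modes;
    the two parts sum back exactly to the original field.
    """
    fhat = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]  # integer wavenumbers
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    low = np.fft.ifft2(np.where(kmag <= k_cut, fhat, 0)).real
    high = np.fft.ifft2(np.where(kmag > k_cut, fhat, 0)).real
    return low, high

# Toy field: one low-k mode plus one weak high-k mode on a 64x64 grid.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.sin(2 * X) + 0.1 * np.sin(20 * Y)
low, high = split_wavenumbers(field, k_cut=10)
```

In the spirit of the approach described above, one network would then be fitted to the low-wavenumber part and another to the high-wavenumber residual, each with its own loss weighting.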
The high-luminosity Super $\tau$-Charm Factory (STCF) will be a crucial facility for charm-physics research, particularly for the precise measurement of electroweak parameters, measuring $D^0$-$\bar{D}^0$ mixing parameters, investigating Charge-Parity (CP) violation within the charm sector, searching for the rare and forbidden decays of charmed hadrons, and addressing other foundational questions related to charmed hadrons. With the world's largest charm-threshold data sample, the STCF aims to achieve high sensitivity in studying the strong phase of neutral $D$ mesons using quantum correlation, complementing studies at LHCb and Belle II, and contributing to the understanding of CP violations globally. The STCF will also enable world-leading precision in measuring the leptonic decays of charmed mesons and baryons, providing constraints on the Cabibbo-Kobayashi-Maskawa matrix and strong-force dynamics. Additionally, the STCF will explore charmed hadron spectroscopy. The advanced detector and clean experimental environment of the STCF will enable unprecedented precision, help address key challenges in the Standard Model, and facilitate the search for potential new physics.
Mauro Falconieri, Serena Gagliardi, Flaminia Rondino
et al.
Impulsive stimulated Raman scattering (ISRS) is a nonlinear pump–probe spectroscopy technique particularly suitable to study vibrational intermolecular and intramolecular modes in complex systems. For the latter, recent studies of ISRS microscopy with low-energy laser sources have attracted attention for investigation of photosensitive or biological samples. Following this stream of interest, in this paper, we report an investigation on the relationship between femtosecond ISRS data and pump–probe Z-scan measurements, showing that the latter technique is capable of capturing the Kerr nonlinearities induced by the molecular vibrational modes. To this aim, firstly, spectrally filtered and Raman-induced Kerr ISRS signals were simultaneously acquired to determine the sample nonlinear response and to establish the reference data for the Z-scan analysis. Then, by adopting a suitable experimental arrangement to avoid thermo-optical effects, we were able to unambiguously observe the Raman-induced effects in Z-scan measurements, thus obtaining a consistent picture between ISRS and Z-scan for the first time, to the best of our knowledge. Practical applications of the proposed method include calibrated measurements of the contribution of the internal (Raman) and external molecular modes to the nonlinear refractive index.
Energy, exergy, and exergoeconomic evaluations of various geothermal configurations are reported. The main operational and economic parameters of the cycles are evaluated and compared. Multi-objective optimization of the cycles is conducted using the artificial bee colony algorithm. A sensitivity assessment is carried out on the effect of production-well temperature variation on system performance from energy and economic perspectives. The results show that the flash-binary cycle has the highest thermal and exergy efficiencies, at 15.6% and 64.3%, respectively. The highest generated power cost and pay-back period are attributable to the simple organic Rankine cycle (ORC). Raising the well temperature increases the exergy destruction rate in all configurations; however, the electricity cost and pay-back period decrease. Based on the results, in all cases the exergoenvironmental impact improvement factor decreases as the temperature rises. The exergy destruction ratio and efficiency of all components for each configuration are calculated and compared. It is found that, at the optimum state, the exergy efficiencies of the simple ORC, single flash, double flash, and flash-binary cycles are respectively 14.7%, 14.4%, 12.6%, and 14.1% higher than their relevant base cases, while the pay-back periods are 10.6%, 1.5%, 1.4%, and 0.6% lower than the base cases.
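For reference, the performance indicators compared above follow the standard definitions (symbols assumed, not taken from the paper: $\dot W_{\mathrm{net}}$ net power output, $\dot Q_{\mathrm{in}}$ geothermal heat input, $\dot{Ex}_{\mathrm{in}}$ inlet exergy rate, $\dot{Ex}_{D,k}$ exergy destruction rate of component $k$):

```latex
\eta_{\mathrm{th}} = \frac{\dot W_{\mathrm{net}}}{\dot Q_{\mathrm{in}}}, \qquad
\eta_{\mathrm{ex}} = \frac{\dot W_{\mathrm{net}}}{\dot{Ex}_{\mathrm{in}}}, \qquad
y_{D,k} = \frac{\dot{Ex}_{D,k}}{\dot{Ex}_{\mathrm{in}}},
```

so that a rising well temperature can simultaneously raise the exergy destruction rate while lowering the cost-based indicators, as observed.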
Juliana Aparecida Anochi, Vinícius Albuquerque de Almeida, Haroldo Fraga de Campos Velho
Many natural disasters in South America are linked to meteorological phenomena. Therefore, forecasting and monitoring climatic events are fundamental issues for society and various sectors of the economy. In recent decades, machine learning models have been developed to tackle different issues in society, but there is still a gap in applications to applied physics. Here, different machine learning models are evaluated for precipitation prediction over South America. Currently, numerical weather prediction models are unable to precisely reproduce the precipitation patterns in South America due to many factors such as the lack of region-specific parametrizations and data availability. The results are compared to the general circulation atmospheric model currently used operationally in the National Institute for Space Research (INPE: Instituto Nacional de Pesquisas Espaciais), Brazil. Machine learning models are able to produce predictions with errors under 2 mm in most of the continent in comparison to satellite-observed precipitation patterns for different climate seasons, and also outperform INPE's model for some regions (e.g., reduction of errors from 8 to 2 mm in central South America in winter). Another advantage is the computational performance of machine learning models, which run faster with much lower computer resources than the models based on differential equations currently used in operational centers. Therefore, it is important to consider machine learning models for precipitation forecasts in operational centers as a way to improve forecast quality and to reduce computation costs.
Spaceborne-airborne multistatic synthetic aperture radar (SA-MuSAR) has the ability to provide high-resolution forward-looking imagery for receivers, but it relies on careful design of the geometric configuration (GC). In this article, a forward-looking GC optimization design method is proposed to obtain a high-quality fused image with limited observation time. First, the relationship between the spatial resolution and GC is illustrated by the wavenumber spectrum distribution of SA-MuSAR. Second, GC evaluators depending on the distribution of multiple wavenumber spectrum data are proposed. The GC design problem of coherent SA-MuSAR is transformed into a constrained multiobjective optimization problem. An intelligent evolutionary algorithm is adopted to optimize the wavenumber spectrum distribution. With the proposed method, high-quality forward-looking imagery can be obtained with a short observation time. Numerical simulations are carried out to verify the effectiveness of the proposed method.
Oliver Fischer, Bruce Mellado, Stefan Antusch
et al.
The field of particle physics is at a crossroads. The discovery of a Higgs-like boson completed the Standard Model (SM), but the lack of convincing resonances Beyond the SM (BSM) offers no guidance for the future of particle physics. On the other hand, the motivation for New Physics has not diminished and is, in fact, reinforced by several striking anomalous results in many experiments. Here we summarise the status of the most significant anomalies, including the most recent results for the flavour anomalies, the multi-lepton anomalies at the LHC, the Higgs-like excess at around 96 GeV, and anomalies in neutrino physics, astrophysics, cosmology, and cosmic rays. While the LHC promises up to 4/ab of integrated luminosity and far-reaching physics programmes to unveil BSM physics, we consider the possibility that the latter could be tested with present data, but that systemic shortcomings of the experiments and their search strategies may preclude their discovery for several reasons, including: final states consisting of soft particles only, associated production processes, QCD-like final states, close-by SM resonances, and SUSY scenarios where no missing energy is produced. New search strategies, devised by making use of the CERN open data as a new testing ground, could help to unveil the hidden BSM signatures. We discuss the CERN open data with its policies, challenges, and potential usefulness for the community. We showcase the example of the CMS collaboration, which is the only collaboration regularly releasing some of its data. We find it important to stress that the use of public data by individuals for their own research does not imply competition with experimental efforts, but rather provides unique opportunities to give guidance for further BSM searches by the collaborations. Wide access to open data is paramount to fully exploiting the LHC's potential.
The Compact Linear Collider (CLIC) is a proposed high-luminosity collider that would collide electrons with their antiparticles, positrons, at energies ranging from a few hundred Giga-electronvolts (GeV) to a few Tera-electronvolts (TeV). By covering a large energy range and by ultimately reaching multi-TeV $e^+e^-$ collisions, scientists at CLIC aim to improve the understanding of nature's fundamental building blocks and to discover new particles or other physics phenomena. CLIC is an international project with institutes world-wide participating in the accelerator, detector and physics studies. First $e^+e^-$ collisions at CLIC are expected around 2035, following the High-Luminosity phase of the Large Hadron Collider at CERN.