We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved—cosmology, astrophysics, nuclear, and particle physics—in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
Mauch-Soriano Alex, Schreiber Matthias R., Correa Diego
et al.
Context. Gas-giant planets and brown dwarfs have been discovered in large numbers around main-sequence stars and even evolved stars. In contrast, and despite ongoing imaging surveys using state-of-the-art facilities, only a handful of substellar companions to white dwarfs are known. It remains unclear whether this paucity reflects observational challenges or the consequences of stellar evolution.
Aims. We aim to carry out population synthesis of substellar objects around white dwarfs to predict the fraction and properties of white dwarfs hosting substellar companions.
Methods. We generated a representative population of white-dwarf progenitors (up to 4 M⊙) with substellar companions, adopting companion distributions derived from radial-velocity surveys of giant stars and a global age-metallicity relation. We then combined the stellar-evolution codes Modules for Experiments in Stellar Astrophysics (MESA) and Single Star Evolution (SSE) with standard prescriptions for mass loss and stellar tides to predict the resulting population of white dwarfs and their substellar companions.
Results. We find that the predicted fraction of white dwarfs hosting substellar companions in the Milky Way is, independent of uncertainties related to initial distributions, stellar tides, or stellar mass loss during the asymptotic giant branch, below ~3 ± 1.5%. The occurrence rate peaks at relatively low-mass (~0.53 M⊙ to ~0.66 M⊙) white dwarfs and relatively young (~1-6 Gyr) systems, where it can reach ≳3%. The semimajor axes of the surviving companions range from 3 to 24 au with a median of 11 au. We estimate that ~95% of the predicted companions are gas-giant planets, which translates to a predicted general Jupiter-like planet occurrence rate around white dwarfs below ~2.9 ± 1.4%. These occurrence rates might slightly increase if multi-planetary systems are considered. Furthermore, owing to the strong dependence of companion occurrence on the metallicity of the white dwarf progenitor, the assumed age-metallicity relation strongly affects the predictions. Based on recent estimates of the local age-metallicity relation, we estimate that the fraction of white dwarfs with companions close to the Sun might reach ≲8%.
Conclusions. If the planetary and brown dwarf companion distributions derived from intermediate-mass giant stars through radial velocity surveys reflect the characteristics of the true population, less than 3 ± 1.5% of white dwarfs host substellar companions. Depending somewhat on the age-metallicity relation, this most likely represents an upper limit on possible detections because a significant number of companions might not be detectable with current facilities.
Pollock Clara L., Gottumukkala Rashmi, Heintz Kasper E.
et al.
The mass assembly and chemical enrichment of the first galaxies provide key insights into their star formation histories and the earliest stellar populations at cosmic dawn. Here we compile and utilise new, high-quality spectroscopic JWST/NIRSpec Prism observations from the JWST archive. In particular, we extend the wavelength coverage beyond the standard pipeline cut-off (5.3 μm) up to 5.5 μm, which enables for the first time a detailed examination of the rest-frame optical emission-line properties for galaxies at z ≈ 10. Crucially, the improved calibration allows us to detect Hβ and the [O III] λλ4959, 5007 doublet and resolve the auroral [O III] λ4363 line for the 11 galaxies in our sample (z = 9.3 − 10.0) to obtain direct Te-based metallicity measurements. We find that the interstellar medium (ISM) of all galaxies shows high ionisation fields and electron temperatures, with derived metallicities in the range 12 + log(O/H) = 7.1 − 8.3 (3–50% solar), consistent with previous strong-line diagnostics based on JWST data at high redshifts. We derive an empirical relation for MUV and 12 + log(O/H) at z ≈ 10, useful for future higher-redshift studies, and show that the sample galaxies are ‘typical’ star-forming galaxies though with relatively high specific star formation rates (median sSFR = SFRHβ/M★ = 38 Gyr−1) and with evidence of bursty star formation on 10 Myr versus 100 Myr timescales (log10(SFR10/SFR100)≈0.7). Combining the rest-frame optical line analysis and detailed UV to optical spectro-photometric modelling, we determine the mass-metallicity relation (MZR) and the fundamental metallicity relation (FMR) of the sample, pushing the previous redshift frontier of these measurements to z = 10. 
These results, together with literature measurements, point to a gradually decreasing MZR at higher redshifts, with a break in the FMR at z ≈ 3, decreasing to metallicities ≈3× lower at z = 10 than observed in galaxies during the majority of cosmic time at z = 0 − 3, likely caused by massive pristine gas inflows diluting the observed metal abundances during early galaxy assembly at cosmic dawn.
Data-driven astrophysics currently relies on the detection and characterisation of correlations between objects’ properties, which are then used to test physical theories that make predictions for them. This process fails to utilise information in the data that forms a crucial part of the theories’ predictions, namely which variables are directly correlated (as opposed to accidentally correlated through others), the directions of these determinations, and the presence or absence of confounders that correlate variables in the dataset but are themselves absent from it. We propose to recover this information through causal discovery, a well-developed methodology for inferring the causal structure of datasets that is, however, almost entirely unknown to astrophysics. We develop a causal discovery algorithm suitable for large astrophysical datasets and illustrate it on ∼4.5 × 10⁵ nearby galaxies from the NASA-Sloan Atlas, demonstrating its ability to distinguish physical mechanisms that are degenerate on the basis of correlations alone.
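The constraint-based machinery behind causal discovery can be illustrated with a minimal partial-correlation conditional-independence test, the building block of PC-type algorithms. This is a generic sketch on a toy causal chain (the function names and the chain x → y → z are illustrative), not the algorithm developed in the paper:

```python
import numpy as np
from scipy import stats

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i and j given the columns in `cond`,
    read off the inverse of the correlation matrix of the involved variables."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_test(data, i, j, cond, alpha=0.01):
    """Fisher z-test: True if independence of columns i, j given `cond`
    is NOT rejected at level alpha."""
    n, r = data.shape[0], partial_corr(data, i, j, cond)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha

# Toy causal chain x -> y -> z: x and z are correlated,
# but conditionally independent given the mediator y.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
data = np.column_stack([x, y, z])
print(partial_corr(data, 0, 2, []))   # strongly positive: marginal correlation
print(partial_corr(data, 0, 2, [1]))  # near zero: screened off by y
```

A PC-type algorithm applies such tests over growing conditioning sets to delete edges from a fully connected graph, then orients the surviving edges; recovering confounders and the directions of determinations requires the additional machinery the abstract alludes to.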
Ionograms are radar echo graphs that depict vertical ionospheric density profiles, structures, fluctuations, and irregularities, with the F region represented by F-trace and Spread-F features in the graphs. In this paper, IonoGAN, an enhanced neural network based on the Generative Adversarial Network architecture, is proposed for direct prediction of ionograms and the variation of these ionospheric conditions. This estimation is based on the trends of density profiles and the waves/structures present in the ionogram sequence. IonoGAN extends the spatiotemporal information-preserving and perception-augmented (STIP) ability by incorporating a Local-Global discriminator that focuses on the F region in ionograms. In addition, two characteristics of natural ionospheric phenomena are extracted and used as constraints in the modeling: Spread-F Classification Accuracy (SFCA) and the Absolute Value of the Correlation Coefficient for the F trace (AVCC-F). For training, ionograms from Hainan Fuke station (19.5°N, 109.1°E, magnetic 11°N) during 2002–2015 were processed into 36,435 sequences with Spread-F phenomena and 147,147 sequences without. To strengthen their features, Spread-F phenomena were further classified into frequency, range, mixed, and strong-range types. After training, SFCA and AVCC-F converged to their optimal values on the 2016 test set: SFCA = 90.92% and AVCC-F = 0.6917. This modification enables the network to effectively capture the distinct features of the ionospheric F trace and the Spread-F phenomenon during both quiet and disturbed periods.
Sunny Rhoades, Tucker Jones, Keerthi Vasan G. C.
et al.
The kinematics of star-forming galaxy populations at high redshifts are integral to our understanding of disk properties, merger rates, and other defining characteristics. Nebular gas emission is a common tracer of galaxies’ gravitational potential and angular momenta, but is sensitive to nongravitational forces as well as galactic outflows, and thus might not accurately trace the host galaxy dynamics. We present kinematic maps of young stars from rest-ultraviolet photospheric absorption in the star-forming galaxy CASSOWARY 13 (a.k.a. SDSS J1237+5533) at z = 1.87 using the Keck Cosmic Web Imager, alongside nebular emission measurements from the same observations. Gravitational lensing magnification of the galaxy enables good spatial sampling of multiple independent lensed images. We find close agreement between the stellar and nebular velocity fields. We measure a mean local velocity dispersion of σ = 64 ± 12 km s⁻¹ for the young stars, consistent with that of the H II regions traced by nebular C III] emission (52 ± 9 km s⁻¹). The ∼20 km s⁻¹ average difference in line-of-sight velocity is much smaller than the local velocity width and the velocity gradient (≳100 km s⁻¹). We find no evidence of asymmetric drift nor evidence that outflows bias the nebular kinematics, and thus we conclude that nebular emission appears to be a reasonable dynamical tracer of young stars in the galaxy. These results support the picture of star formation in thick disks with high velocity dispersion at z ∼ 2, and they represent an important step toward establishing robust kinematics of early galaxies using collisionless tracers.
Gregory Sallaberry, Benjamin W. Priest, Robert Armstrong
et al.
Analysis of cosmic shear is an integral part of understanding structure growth across cosmic time, which in turn provides us with information about the nature of dark energy. Conventional methods generate shear maps from which we can infer the matter distribution in the universe. Current methods (e.g., Kaiser–Squires inversion) for generating these maps, however, are tricky to implement and can introduce bias. Recent alternatives construct a spatial process prior for the lensing potential, which allows for inference of the convergence and shear parameters given lensing shear measurements. Realizing these spatial processes, however, scales cubically in the number of observations—an unacceptable expense as near-term surveys expect billions of correlated measurements. Therefore, we present a linearly scaling shear map construction alternative using a scalable Gaussian process prior called MuyGPs. MuyGPs avoids cubic scaling by conditioning interpolation on only nearest neighbors and fits hyperparameters using batched leave-one-out cross-validation. This work is the first step toward a full, scalable mass mapping method. We work in a simplified regime where we validate our method by interpolating and analyzing maps given noisy point-estimate data from all three shear fields, taken from a suite of N-body ray-tracing simulations. We also show that we can perform these operations at the scale of billions of galaxies on high-performance computing platforms.
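The nearest-neighbor conditioning that gives MuyGPs its linear scaling can be sketched in a few lines: each prediction solves a small k × k kernel system over the k nearest training points instead of one n × n system over all of them. This is a generic local-GP sketch under an assumed RBF kernel and a synthetic toy field, not the MuyGPs library itself (which additionally fits hyperparameters by batched leave-one-out cross-validation):

```python
import numpy as np

def rbf(X1, X2, ell=0.2):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def nn_gp_predict(X_train, y_train, X_test, k=30, noise=1e-4):
    """GP posterior mean at each test point, conditioned only on its k
    nearest training points: cost O(m * k^3) rather than O(n^3)."""
    preds = np.empty(len(X_test))
    for m, xs in enumerate(X_test):
        idx = np.argsort(((X_train - xs) ** 2).sum(-1))[:k]  # k nearest neighbors
        Knn = rbf(X_train[idx], X_train[idx]) + noise * np.eye(k)
        kxn = rbf(xs[None, :], X_train[idx])[0]
        preds[m] = kxn @ np.linalg.solve(Knn, y_train[idx])
    return preds

# Smooth toy field on the unit square, standing in for a shear component.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 2))
y = np.sin(4 * X[:, 0]) * np.cos(4 * X[:, 1])
Xs = rng.uniform(size=(5, 2))
print(nn_gp_predict(X, y, Xs))  # close to sin(4x)cos(4y) at the test points
```

The brute-force `argsort` neighbor search here is O(n) per prediction; a k-d tree or similar index makes the lookup sublinear, which is what keeps the overall method linearly scaling.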
Danaisy Prado-Alvarez, Daniel Calabuig, Saúl Inca
et al.
Integrated Sensing and Communication (ISAC) is envisioned as a foundational technology for future wireless networks, enabling simultaneous wireless communication and environmental sensing using shared resources. A key challenge in ISAC systems lies in managing the trade-off between communication data rate and sensing accuracy, especially in multi-user scenarios. In this work, we investigate the joint design of transmit signal covariance matrices to optimize the sum data rate while ensuring certain sensing performance. Specifically, we formulate a constrained optimization problem where the transmit covariance matrix is allocated to maximize the communication sum-rate under sensing-related constraints. These constraints condition the design of the transmit signal’s covariance matrix, impacting both the sensing channel estimation error and the sum data rate. Our proposed method leverages convex optimization tools to achieve a principled balance between communication and sensing. Numerical results demonstrate that the proposed approach effectively manages the ISAC trade-off, achieving near-optimal communication performance while satisfying sensing requirements.
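The flavor of convex allocation involved can be illustrated with the classic water-filling solution, which maximizes the sum rate over parallel channels under a total power budget. The paper's design (full transmit covariance matrices with sensing constraints) generalizes this, so treat the snippet as a simplified stand-in rather than the proposed method:

```python
import numpy as np

def water_filling(gains, P_total):
    """Maximize sum_i log2(1 + g_i * p_i) s.t. sum_i p_i <= P_total, p_i >= 0.
    The KKT conditions give p_i = max(mu - 1/g_i, 0) for a common water level mu."""
    g = np.asarray(gains, dtype=float)
    order = np.argsort(g)[::-1]         # strongest channels first
    gs = g[order]
    for k in range(len(gs), 0, -1):     # find how many channels receive power
        mu = (P_total + np.sum(1.0 / gs[:k])) / k
        p = mu - 1.0 / gs[:k]
        if p[-1] >= 0.0:                # weakest active channel still nonnegative
            powers = np.zeros(len(g))
            powers[order[:k]] = p
            return powers

powers = water_filling([2.0, 1.0, 0.5, 0.1], P_total=2.0)
print(powers)  # strongest channels get the most power; weak ones may get none
```

Sensing constraints in an ISAC design would enter as additional convex constraints on the covariance (e.g., bounds tied to the estimation error), shifting the optimum away from the pure communication solution above.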
Thanks to the successful performance of the James Webb Space Telescope, our understanding of the epoch of reionization of the Universe has advanced. The ultraviolet luminosity functions (UV LFs) of galaxies span a wide range of redshifts, not only revealing the connection between galaxies and dark matter (DM) halos but also providing information about reionization. In this work, we develop a model connecting galaxy counts and apparent magnitude based on UV LFs, which incorporates redshift-dependent star formation efficiency (SFE) and corrections for dust attenuation. By synthesizing observations across the redshift range 4 ≤ z ≤ 10 from various galaxy surveys, we discern the evolving SFE with increasing redshift and DM halo mass through model fitting. Subsequent analyses indicate that the Thomson scattering optical depth is $\tau_{\rm e} = 0.054^{+0.001}_{-0.003}$ and that the epoch of reionization started (ended) at $z = 18.8^{+7.2}_{-6.0}$ ($z = 5.3^{+0.8}_{-1.0}$), which is insensitive to the choice of the truncated magnitude of the UV LFs. Incorporating additional data sets and some reasonable constraints, the amplitude of matter perturbations is found to be σ_8 = 0.80 ± 0.05, consistent with the standard ΛCDM model. Future galaxy surveys and dynamical simulations of galaxy evolution will break the degeneracy between the SFE and cosmological parameters, further improving the accuracy and precision of the UV LF model.
Jonatan Jacquemin-Ide, Ore Gottlieb, Beverly Lowell
et al.
The spin of a newly formed black hole (BH) at the center of a massive star evolves from its natal value due to two competing processes: accretion of gas angular momentum, which increases the spin, and extraction of BH angular momentum by outflows, which decreases it. Ultimately, the equilibrium spin is set by the balance between the two processes. In order for the BH to launch relativistic jets and power a γ-ray burst (GRB), the BH magnetic field needs to be dynamically important. Thus, we consider the case of a magnetically arrested disk (MAD) driving the spin evolution of the BH. By applying the semianalytic MAD BH spin evolution model of Lowell et al. to collapsars, we show that if the BH accretes ∼20% of its initial mass, its dimensionless spin inevitably reaches small values, a ≲ 0.2. For such spins, and for mass accretion rates inferred from collapsar simulations, our semianalytic model reproduces both the energetics, L_jet ∼ 10⁵⁰ erg s⁻¹, and the nearly constant power of typical GRB jets. If the MAD onset is delayed, powerful jets at the high end of the GRB luminosity distribution, L_jet ∼ 10⁵² erg s⁻¹, become possible, but the final spin remains low, a ≲ 0.3. These results are consistent with the low spins inferred from gravitational wave detections of binary BH mergers. In a companion paper by Gottlieb et al., we use GRB observations to constrain the natal BH spin to be a ≃ 0.2.
Kevin K. Hardegree-Ullman, Dániel Apai, Galen J. Bergsten
et al.
Molecular oxygen is a strong indicator of life on Earth and may indicate biological processes on exoplanets too. Recent studies proposed that Earth-like O₂ levels might be detectable on nearby exoplanets using high-resolution spectrographs on future extremely large telescopes (ELTs). However, these studies did not consider constraints like relative velocities, planet occurrence rates, and target observability. We expanded on past studies by creating a homogeneous catalog of 286,391 main-sequence stars within 120 pc using Gaia DR3 and used the Bioverse framework to simulate the likelihood of finding nearby transiting Earth analogs. We also simulated a survey of M dwarfs within 20 pc accounting for η⊕ estimates, transit probabilities, relative velocities, and target observability to determine how long ELTs and theoretical 50–100 m ground-based telescopes need to observe to probe for Earth-like O₂ levels with an R = 100,000 spectrograph. This would only be possible within 50 yr for up to ∼21% of nearby M-dwarf systems if a suitable transiting habitable-zone Earth analog were discovered, assuming signals from every observable partial transit from each ELT can be combined. If so, Earth-like O₂ levels could be detectable on TRAPPIST-1 d–g within 16–55 yr, respectively, and in about half that time with an R = 500,000 spectrograph. These results have important implications for whether ELTs can survey nearby habitable-zone Earth analogs for O₂ via transmission spectroscopy. Our work provides the most comprehensive assessment to date of ground-based capabilities to search for life beyond the solar system.
Internal gravity waves can cause mixing in the radiative interiors of stars. We study this mixing by introducing tracer particles into 2D hydrodynamic simulations. Following the work of Rogers & McElwaine, we extend our study to different masses (3, 7, and 20 M⊙) and ages (ZAMS, midMS, and TAMS). The diffusion profiles of these models are influenced by various parameters such as the Brunt–Väisälä frequency, density, thermal damping, the geometric effect, and the frequencies of waves contributing to these mixing profiles. We find that the mixing profile changes dramatically with age. In younger stars, the diffusion coefficient increases toward the surface, whereas in older stars the initial increase in the diffusion profile is followed by a decreasing trend. We also find that mixing is stronger in more massive stars. Hence, future stellar evolution models should include this variation. To aid the inclusion of this mixing in 1D stellar evolution models, we determine the dominant waves contributing to these mixing profiles and present a prescription that can be included in 1D models.
Marko Ristić, Erika M. Holmbeck, Ryan T. Wollaeger
et al.
Kilonovae, one source of electromagnetic emission associated with neutron star mergers, are powered by the decay of radioactive isotopes in the neutron-rich merger ejecta. Models for kilonova emission consistent with the electromagnetic counterpart to GW170817 predict characteristic abundance patterns, determined by the relative balance of different types of material in the outflow. Assuming that the observed source is prototypical, this inferred abundance pattern must in turn match r-process abundances deduced by other means, such as those observed in the solar system. We report on an analysis comparing the input mass-weighted elemental compositions adopted in our radiative transfer simulations to the mass fractions of elements in the Sun, as a practical prototype for the potentially universal abundance signature from neutron star mergers. We characterize the extent to which our parameter inference results depend on our assumed composition for the dynamical and wind ejecta and examine how the new results compare to previous work. We find that a dynamical ejecta composition calculated using the FRDM2012 nuclear mass and FRLDM fission models with extremely neutron-rich ejecta (Y_e = 0.035), along with a moderately neutron-rich (Y_e = 0.27) wind ejecta composition, yields a wind-to-dynamical mass ratio of M_w/M_d = 0.47, which best matches the observed AT2017gfo kilonova light curves while also producing the best-matching abundance of neutron-capture elements in the solar system; allowing for systematics, the ratio may be as high as order unity.
The standard Bayesian technique for searching pulsar timing data for gravitational-wave bursts with memory (BWMs) using Markov Chain Monte Carlo (MCMC) sampling is computationally expensive. In this paper, we explain the implementation of an efficient Bayesian technique for searching for BWMs. This technique exploits the fact that the signal model for Earth-term BWMs (BWMs passing over the Earth) is fully factorizable. We estimate that this implementation reduces the computational complexity by a factor of 100. We also demonstrate that this technique gives upper limits consistent with published results using the standard Bayesian technique, and may be used to perform all of the same analyses of BWMs that standard MCMC techniques can perform.
The PC and FCI algorithms are popular constraint-based methods for learning the structure of directed acyclic graphs (DAGs) in the absence and presence, respectively, of latent and selection variables. These algorithms (and their order-independent variants, PC-stable and FCI-stable) have been shown to be consistent for learning sparse high-dimensional DAGs based on partial correlations. However, inferring conditional independences from partial correlations is valid only if the data are jointly Gaussian or generated from a linear structural equation model—an assumption that may be violated in many applications. To broaden the scope of high-dimensional causal structure learning, we propose nonparametric variants of the PC-stable and FCI-stable algorithms that employ the conditional distance covariance (CdCov) to test for conditional independence relationships. As the key theoretical contribution, we prove that the high-dimensional consistency of the PC-stable and FCI-stable algorithms carries over to general distributions over DAGs when we implement CdCov-based nonparametric tests for conditional independence. Numerical studies demonstrate that our proposed algorithms perform nearly as well as PC-stable and FCI-stable for Gaussian distributions, and offer advantages in non-Gaussian graphical models.
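The unconditional version of the distance-covariance statistic is easy to state and already shows why it suits non-Gaussian data: unlike (partial) correlation, it detects nonlinear dependence. A minimal sketch for 1-D samples follows; the paper's tests use the conditional variant, CdCov, which is more involved:

```python
import numpy as np

def _centered_dists(v):
    """Double-centered pairwise distance matrix of a 1-D sample."""
    D = np.abs(v[:, None] - v[None, :])
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dcov(x, y):
    """Sample distance covariance (Szekely et al.): zero in the population
    limit iff x and y are independent, even under nonlinear dependence."""
    return np.sqrt(np.mean(_centered_dists(x) * _centered_dists(y)))

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
z = rng.normal(size=2000)       # independent of x
y = x ** 2                      # dependent on x, yet uncorrelated with it
print(np.corrcoef(x, y)[0, 1])  # near zero: Pearson misses the dependence
print(dcov(x, y), dcov(x, z))   # dcov(x, y) clearly exceeds dcov(x, z)
```

A nonparametric PC-stable variant replaces the Gaussian partial-correlation test with such a statistic (calibrated, e.g., by permutation), trading computational cost for validity beyond linear models.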
With the development of quantitative finance, machine learning methods used in financial fields have received significant attention among researchers, investors, and traders. However, in the field of stock index spot–futures arbitrage, relevant work is still rare. Furthermore, existing work is mostly retrospective, rather than anticipatory of arbitrage opportunities. To close the gap, this study uses machine learning approaches based on historical high-frequency data to forecast spot–futures arbitrage opportunities for the China Security Index (CSI) 300. Firstly, the possibility of spot–futures arbitrage opportunities is identified through econometric models. Then, Exchange-Traded-Fund (ETF)-based portfolios are built to fit the movements of the CSI 300 with the least tracking error. A strategy consisting of non-arbitrage intervals and unwinding timing indicators is derived and proven profitable in a back-test. In forecasting, four machine learning methods are adopted to predict the indicator we acquired: Least Absolute Shrinkage and Selection Operator (LASSO), Extreme Gradient Boosting (XGBoost), Back Propagation Neural Network (BPNN), and Long Short-Term Memory neural network (LSTM). The performance of each algorithm is compared from two perspectives. One is an error perspective based on the Root-Mean-Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and goodness of fit (R²). The other is a return perspective based on the trade yield and the number of arbitrage opportunities captured. Finally, a performance heterogeneity analysis is conducted based on the separation of bull and bear markets. The results show that LSTM outperforms all other algorithms over the entire time period, with an RMSE of 0.00813, a MAPE of 0.70%, an R² of 92.09%, and an arbitrage return of 58.18%. Meanwhile, over shorter periods within the bull and bear markets considered separately, LASSO can outperform LSTM.
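The error metrics used in the comparison are standard; a small sketch of how they are computed (illustrative, not the authors' code):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """RMSE, MAPE (in percent), and R^2 for a forecast against observations."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))  # needs y_true != 0
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mape, 1.0 - ss_res / ss_tot

print(forecast_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

Note that MAPE is undefined when an observation is zero, one reason the return-based perspective in the study is a useful complement to the pure error metrics.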
Ignacio Algredo-Badillo, Miguel Morales-Sandoval, Alejandro Medina-Santiago
et al.
In emergent technologies, data integrity is critical for message-passing communications, where security measures and validations must be considered to prevent the entrance of invalid data, detect errors in transmissions, and prevent data loss. The SHA-256 algorithm is used to tackle these requirements. Existing hardware architectures struggle to balance processing, efficiency, and cost in real time, because some of them introduce significant critical paths. Besides, the SHA-256 algorithm itself provides no verification mechanisms for internal calculations or failure prevention. Hardware implementations can be affected by diverse problems, ranging from physical phenomena to interference or faults inherent to the data. Previous works have mainly addressed this problem through three kinds of redundancy: information, hardware, or time. To the best of our knowledge, pipelining has not previously been used to perform different hash calculations for redundancy. Therefore, in this work, we present a novel hybrid architecture implemented on a 3-stage pipeline structure. Pipelining is traditionally used to improve performance by processing several blocks simultaneously; instead, we propose using the pipeline to implement hardware and time redundancy, analyzing hardware resources and performance to balance the critical path. We improved performance at a given clock speed by defining a data-flow transformation in several sequential phases. Our architecture achieves a throughput of 441.72 Mbps using 2255 LUTs, for an efficiency of 195.8 Kbps/LUT.
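The time-redundancy idea, computing the same hash more than once and comparing the results to catch transient faults, can be shown in software with a few lines. This is a conceptual analogy only; the paper implements the redundant computations in pipelined hardware:

```python
import hashlib

def sha256_time_redundant(data: bytes) -> bytes:
    """Compute SHA-256 twice and compare (time redundancy): a transient
    fault corrupting one pass produces a mismatch and is detected."""
    d1 = hashlib.sha256(data).digest()
    d2 = hashlib.sha256(data).digest()
    if d1 != d2:
        raise RuntimeError("transient fault detected: digests disagree")
    return d1

print(sha256_time_redundant(b"hello").hex())
```

In the hardware setting, the two computations occupy different pipeline stages (or duplicated datapaths) so the comparison costs little extra latency, which is the trade-off the architecture analyzes.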
Multi-modal fusion can exploit complementary information from various modalities and improve the accuracy of prediction or classification tasks. In this paper, we propose a parallel, multi-modal, factorized, bilinear pooling method based on a semi-tensor product (STP) for information fusion in emotion recognition. Initially, we apply the STP to factorize a high-dimensional weight matrix into two low-rank factor matrices without dimension matching constraints. Next, we project the multi-modal features to the low-dimensional matrices and perform multiplication based on the STP to capture the rich interactions between the features. Finally, we utilize an STP-pooling method to reduce the dimensionality to get the final features. This method can achieve the information fusion between modalities of different scales and dimensions and avoids data redundancy due to dimension matching. Experimental verification of the proposed method on the emotion-recognition task using the IEMOCAP and CMU-MOSI datasets showed a significant reduction in storage space and recognition time. The results also validate that the proposed method improves the performance and reduces both the training time and the number of parameters.
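The key property of the semi-tensor product, multiplying matrices whose inner dimensions need not match, follows directly from its definition A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p). A minimal sketch of that definition (illustrative of the operation itself, not of the authors' factorized bilinear pooling pipeline):

```python
import numpy as np
from math import lcm  # Python 3.9+

def stp(A, B):
    """Left semi-tensor product A ⋉ B for A (m x n) and B (p x q):
    with t = lcm(n, p), A ⋉ B = (A ⊗ I_{t/n}) @ (B ⊗ I_{t/p}).
    Reduces to the ordinary product A @ B when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.arange(6.0).reshape(2, 3)  # 2 x 3
B = np.arange(8.0).reshape(2, 4)  # 2 x 4: ordinary A @ B is undefined
print(stp(A, B).shape)            # (4, 12): t = lcm(3, 2) = 6
```

Because the product is defined for any pair of shapes, features projected to low-rank factors of different dimensions can be combined without the zero-padding or repetition that dimension matching would otherwise require, which is the redundancy the abstract says the method avoids.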