M. Longair
Results for "Astrophysics"
Showing 20 of ~240,949 results · from CrossRef, DOAJ, Semantic Scholar
G. Raffelt
M. Arnould, S. Goriely, Kohji Takahashi
Abstract The r-process, or rapid neutron-capture process, of stellar nucleosynthesis is called for to explain the production of the stable (and some long-lived radioactive) neutron-rich nuclides heavier than iron that are observed in stars of various metallicities, as well as in the solar system. A very large amount of nuclear information is necessary to model the r-process. This concerns the static characteristics of a large variety of light to heavy nuclei between the valley of stability and the vicinity of the neutron-drip line, as well as their beta-decay branches and their reactivity. Fission probabilities of very neutron-rich actinides also have to be known in order to determine the most massive nuclei that have a chance to be involved in the r-process. Even the properties of asymmetric nuclear matter may enter the problem. The enormously challenging experimental and theoretical task imposed by all these requirements is reviewed, and the state-of-the-art development in the field is presented. Nuclear-physics-based, astrophysics-free r-process models of different levels of sophistication have been constructed over the years. We review their merits and their shortcomings. The ultimate goal of r-process studies is clearly to identify realistic sites for the development of the r-process. Here too, the challenge is enormous, and the solution still eludes us. For a long time, the core-collapse supernovae of massive stars have been envisioned as the privileged r-process location. We present a brief summary of the one-dimensional and multidimensional, spherical and non-spherical, explosion simulations available to date. Their predictions are confronted with the requirements imposed for obtaining an r-process. The possibility of r-nuclide synthesis during the decompression of neutron star matter following a merger is also discussed.
Given the remaining uncertainties in the astrophysical r-process site and in the nuclear physics involved, any confrontation between predicted r-process yields and observed abundances is clearly risky. A comparison with observed r-nuclide abundances in very metal-poor stars and in the solar system is nevertheless attempted on the grounds of r-process models based on parametrised astrophysical conditions. The virtues of the actinides produced by the r-process for dating old stars or the solar system are also critically reviewed.
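The actinide-dating idea mentioned in this abstract reduces, in its simplest form, to radioactive-decay arithmetic: the age follows from comparing an observed actinide-to-stable-element abundance ratio with an assumed production ratio. A minimal sketch (the ratios below are illustrative stand-ins, not values from the review; the 232Th half-life is the standard laboratory value):

```python
import math

TH232_HALF_LIFE_GYR = 14.05  # well-established half-life of 232Th, in Gyr

def actinide_age(ratio_initial, ratio_observed, half_life=TH232_HALF_LIFE_GYR):
    """Age in Gyr inferred from the decay of an r-process actinide relative
    to a stable r-process element (e.g. Th/Eu chronometry)."""
    decay_const = math.log(2) / half_life          # per Gyr
    return math.log(ratio_initial / ratio_observed) / decay_const

# Illustrative numbers only: if the Th/Eu ratio has dropped to half its
# (model-dependent) production value, the elapsed time is one half-life.
print(round(actinide_age(0.50, 0.25), 2))  # 14.05
```

The dominant uncertainty in practice is the production ratio itself, which is exactly the model dependence the abstract flags as "risky".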
A. Boyarsky, O. Ruchayskiy, M. Shaposhnikov
We present a comprehensive overview of an extension of the Standard Model that contains three right-handed (sterile) neutrinos with masses below the electroweak scale [the Neutrino Minimal Standard Model (νMSM)]. We consider the history of the Universe from the inflationary era through today and demonstrate that most of the observed phenomena beyond the Standard Model can be explained within the framework of this model. We review the mechanism of baryon asymmetry of the Universe in the νMSM and discuss a dark matter candidate that can be warm or cold and that satisfies all existing constraints. From the viewpoint of particle physics, the model provides an explanation for neutrino flavor oscillations. Verification of the νMSM is possible with existing experimental techniques.
Erica Behrens, Jeffrey G. Mangum, Mathilde Bouvier et al.
We quantify the utility of HCN and HNC to characterize gas conditions in the nearby starburst galaxy NGC 253. We use measurements from the Atacama Large Millimeter/submillimeter Array (ALMA) Large Program ALCHEMI: the ALMA Comprehensive High-resolution Molecular Inventory. Using different subsets of the eight total HCN and HNC transitions measured by ALCHEMI, we test the number and combinations of transitions necessary for constraining the temperature, H2 volume and column densities, cosmic-ray ionization rate, and beam-filling factor in three representative regions within NGC 253. We use these combinations of HCN and HNC transitions to constrain chemical and radiative transfer models, and infer the gas conditions using a Bayesian nested sampling algorithm combined with neural network models for increased efficiency. By comparing the shapes of the resulting posterior distributions, as well as the medians and uncertainties for each gas parameter, from each test case to what we obtain with the full set of eight transitions (the control), we quantify how well each test reproduces the control. We find that multiple transitions each of both molecules are required to obtain a median parameter value within a factor of 2 of the control with an uncertainty less than 2–3 times that of the control. We also find that transition combinations which feature a range of upper-state energies are most effective. We show that single transitions, such as HCN J = 1–0 or 3–2, are among the worst-performing combinations and result in parameter values up to an order of magnitude different from the control.
Zhejian Zhang, Nan Li, Shude Mao et al.
Introduction: Galaxy cluster-scale strong gravitational lensing systems are rare yet valuable tools for investigating dark matter and dark energy, as well as providing the opportunity to study the distant universe at flux levels and spatial resolutions that would otherwise be unavailable. Large-scale imaging surveys present unprecedented opportunities to expand the sample of cluster lenses. Methods: In this study, we adopt a deep learning-based approach to identify cluster lenses in the DESI Legacy Imaging Surveys, utilizing the catalog of galaxy cluster candidates identified by Zou et al. (2021). Our lens finder employs a ResNet-18 architecture, trained with mock images of cluster lenses as positives and observational images of cluster-scale non-lenses as negatives. To increase the completeness of the search, we iterate: confirmed true-positive samples are added back to the training set and the network is retrained, for several rounds. Human inspection further refines the candidates, categorizing them into grades (A, B, C) according to the significance of the strongly lensed arcs. Results: Reviewing all 540,432 objects in Zou's catalog, we discover 485 high-confidence cluster lens candidates with a cluster M500 range of 10^13.67–10^14.97 M⊙ and a brightest central galaxy (BCG) redshift range of 0.04–0.89. After excluding the lens candidates listed in previous studies, we identify 247 newly discovered cluster lens candidates: 16 grade A, 90 grade B, and 141 grade C. Discussion: This catalog of cluster lens candidates is publicly available online, and follow-up observations are encouraged to confirm and thoroughly investigate these systems.
F. Löffler, J. Faber, E. Bentivegna et al.
We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this paper. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.
S. Klein
Merlin Kole, Kasun Wimalasena, Richard Gorby et al.
Richard G. Arendt, F. Yusef-Zadeh, I. Heywood
We present a catalog of 1.28 GHz radio filaments observed by MeerKAT over the innermost 200 pc of the Galaxy (roughly ±1.5°), which includes the central molecular zone. The catalog is generated by repurposing software developed for the automated detection of filaments in solar coronal loops. There are two parts to the catalog. The first part, the main catalog, provides a point-by-point listing of locations and basic observational properties along each detected filament. The second part is a summary catalog that provides a listing of mean, median, or total values of various properties for each filament. Tabulated quantities include position, length, curvature, brightness, and spectral index. The catalogs contain a heterogeneous mix of filamentary structures, including nonthermal radio filaments, and parts of supernova remnants and thermally emitting regions (e.g., H ii regions). We discuss criteria for selecting useful subsamples of filaments from the catalogs, and some of the details encountered in examining filaments or selections of filaments from the catalogs.
Juseon Bak, Xiong Liu, Gonzalo González Abad et al.
We investigate the retrieval of ozone (O3) profiles, with a particular focus on tropospheric O3, from backscattered ultraviolet radiances measured by the TROPOspheric Monitoring Instrument (TROPOMI), using the UV2 (300–332 nm) and UV3 (305–400 nm) channels independently. An optimal estimation retrieval algorithm, originally developed for the Ozone Monitoring Instrument (OMI), was extended as a preliminary step toward integrating multiple satellite ozone profile datasets. The UV2 and UV3 channels exhibit distinct radiometric and wavelength calibration uncertainties, leading to inconsistencies in retrieval accuracy and convergence stability. A yearly "soft" calibration mitigates overestimation and cross-track-dependent biases ("stripes") in tropospheric ozone retrievals, enhancing retrieval consistency between UV2 and UV3. Convergence stability is ensured by optimizing the measurement error constraints for each channel. It is shown that our research product outperforms the standard product (UV1 and UV2 combined) in capturing the seasonal and long-term variabilities of tropospheric ozone. An agreement between the retrieved tropospheric ozone and ozonesonde measurements is observed within 0–3 DU ± 5.5 DU (R = 0.75), which is better than that of the standard product by a factor of two. Despite lacking Hartley ozone information in UV2 and UV3, the retrieved stratospheric ozone columns have good agreement with ozonesondes (R = 0.96).
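The optimal estimation framework this retrieval builds on combines a prior profile with measurements through the standard linear maximum a posteriori update. A minimal sketch with made-up matrices (a toy two-level problem, not TROPOMI quantities):

```python
import numpy as np

def optimal_estimation(y, K, xa, Sa, Se):
    """Linear optimal-estimation (MAP) retrieval for y = K x + noise,
    with prior x ~ N(xa, Sa) and measurement noise ~ N(0, Se)."""
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    x_hat = xa + S_hat @ K.T @ Se_inv @ (y - K @ xa)   # retrieved state
    A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel
    return x_hat, S_hat, A

# Toy two-level "profile" with an identity forward model and equal prior and
# measurement weight: the retrieval falls halfway between prior and data.
K = np.eye(2)
xa = np.array([10.0, 20.0])   # a priori state
y = np.array([14.0, 24.0])    # measurement
Sa = Se = np.eye(2)
x_hat, S_hat, A = optimal_estimation(y, K, xa, Sa, Se)
print(x_hat)  # [12. 22.]
```

The averaging kernel A (here 0.5 on the diagonal) quantifies how much of the retrieved state comes from the measurement versus the prior, which is the standard diagnostic when comparing retrievals such as the UV2 and UV3 products against ozonesondes.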
Chang Zhou, Yang Guo, Guoyin Chen et al.
Giulio Ruffini, Francesca Castaldo, Jakub Vohryzek
In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether's theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent's constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
Kumail Zaidi, Danilo Marchesini, Casey Papovich et al.
We present the construction of a deep multiwavelength point-spread-function-matched photometric catalog in the Ultra-Deep Survey (UDS) field following the final UKIDSS UDS release. The catalog includes photometry in 24 filters, from the MegaCam-uS 0.38 μm band to the Spitzer-IRAC 8 μm band, over ∼0.9 deg² and with a 5σ depth of 25.3 AB in the K-band detection image. The catalog, containing ≈188,564 (136,235) galaxies at 0.2 < z < 8.0 with stellar mass log(M*/M⊙) > 8 and K-band total magnitude K < 25.2 (24.3) AB, enables a range of extragalactic studies. We also provide photometric redshifts, corresponding redshift probability distributions, and rest-frame absolute magnitudes and colors derived using the template-fitting code eazy-py. Photometric redshift errors are less than 3%–4% at z < 4 across the full brightness range in the K band and stellar mass range 8 < log(M*/M⊙) < 12. Stellar population properties (e.g., stellar mass, star formation rate, dust extinction) are derived from modeling of the spectral energy distributions using the codes FAST and Dense Basis.
Z. Xin, C. C. Espaillat, A. M. Rilinger et al.
Linchang Han, Liming Yang, Zhihui Li et al.
How to improve the computational efficiency of flow field simulations around irregular objects in near-continuum and continuum flow regimes has always been a challenge in the aerospace re-entry process. The discrete velocity method (DVM) is a commonly used algorithm for the discretized solutions of the Boltzmann-BGK model equation. However, the discretization of both physical and molecular velocity spaces in DVM can result in significant computational costs. This paper focuses on unlocking the key to accelerate the convergence in DVM calculations, thereby reducing the computational burden. Three versions of DVM are investigated: the semi-implicit DVM (DVM-I), fully implicit DVM (DVM-II), and fully implicit DVM with an inner iteration of the macroscopic governing equation (DVM-III). In order to achieve full implicit discretization of the collision term in the Boltzmann-BGK equation, it is necessary to solve the corresponding macroscopic governing equation in DVM-II and DVM-III. In DVM-III, an inner iterative process of the macroscopic governing equation is employed between two adjacent DVM steps, enabling a more accurate prediction of the equilibrium state for the full implicit discretization of the collision term. Fortunately, the computational cost of solving the macroscopic governing equation is significantly lower than that of the Boltzmann-BGK equation. This is primarily due to the smaller number of conservative variables in the macroscopic governing equation compared to the discrete velocity distribution functions in the Boltzmann-BGK equation. Our findings demonstrate that the fully implicit discretization of the collision term in the Boltzmann-BGK equation can accelerate DVM calculations by one order of magnitude in continuum and near-continuum flow regimes. Furthermore, the introduction of the inner iteration of the macroscopic governing equation provides an additional 1–2 orders of magnitude acceleration. 
Such advancements hold promise in providing a computational approach for simulating flows around irregular objects in near-space environments.
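The benefit of treating the BGK collision term implicitly can be seen already in a space-homogeneous relaxation toy model: the implicit update remains stable and accurate for time steps far exceeding the relaxation time, which is the mechanism behind the speedups reported above. This sketch is our own illustration under that simplification, not the authors' solver:

```python
def relax(f0, feq, tau, dt, steps, implicit=True):
    """Relax a distribution-function value toward equilibrium under the BGK
    model df/dt = (feq - f)/tau, using explicit or implicit Euler steps."""
    f = f0
    for _ in range(steps):
        if implicit:
            # (f_new - f)/dt = (feq - f_new)/tau, solved for f_new:
            f = (f + dt / tau * feq) / (1.0 + dt / tau)
        else:
            f = f + dt / tau * (feq - f)
    return f

f0, feq, tau = 1.0, 2.0, 1e-3
dt = 0.1  # time step 100x the relaxation time
f_imp = relax(f0, feq, tau, dt, 50, implicit=True)
f_exp = relax(f0, feq, tau, dt, 50, implicit=False)
print(abs(f_imp - feq) < 1e-6)  # True: implicit step converges to equilibrium
print(abs(f_exp) > 1e10)        # True: explicit step blows up when dt >> tau
```

The implicit step requires knowing the equilibrium state at the new time level, which in the full Boltzmann-BGK setting is exactly why DVM-II and DVM-III solve the cheaper macroscopic governing equation alongside the kinetic one.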
E. Fernández-Martínez, J. López-Pavón, J. M. No et al.
Abstract We perform a comprehensive scan of the parameter space of a general singlet scalar extension of the Standard Model to identify the regions which can lead to a strong first-order phase transition, as required by the electroweak baryogenesis mechanism. We find that bubble nucleation, once taken into account, imposes a fundamental constraint on the parameter space, and we present a conservative and fast estimate for it so as to enable efficient parameter space scanning. The allowed regions turn out to be already significantly probed by constraints on the scalar mixing from Higgs signal strength measurements. We also consider the addition of new neutrino singlet fields with Yukawa couplings to both scalars and forming heavy (pseudo-)Dirac pairs, as in the linear or inverse seesaw mechanisms for neutrino mass generation. We find that their inclusion does not alter the allowed parameter space from early universe phenomenology in a significant way. Conversely, there are allowed regions of the parameter space where the presence of the neutrino singlets would remarkably modify the collider phenomenology, yielding interesting new signatures in Higgs and singlet scalar decays.
K. Urbanowski
Abstract We try to find conditions whose fulfillment allows a universe born in a metastable false vacuum state to survive and not collapse. The conditions found take the form of inequalities linking the time-dependent instantaneous decay rate Γ(t) of the false vacuum state and the Hubble parameter H(t). Properties of the decay rate of quantum metastable states are discussed, and the possible solutions of the conditions found are then analyzed and discussed. Within the model considered, it is shown that a universe born in the metastable vacuum state has a very high chance of surviving until very late times if the lifetime τ0^F of the metastable false vacuum state is much shorter than the duration of the inflation process. Our analysis shows that the instability of the electroweak vacuum does not have to result in the tragic fate of our Universe leading to its death.
Nizar Bouhlel, David Rousseau
This paper introduces a closed-form expression for the Kullback–Leibler divergence (KLD) between two central multivariate Cauchy distributions (MCDs), which have recently been used in various signal and image processing applications where non-Gaussian models are needed. The MCDs are surveyed, and some new results and properties of the KLD are derived and discussed. In addition, the KLD for MCDs is shown to be expressible as a function of the Lauricella D-hypergeometric series F_D^(p). Finally, the Monte Carlo sampling approximation of the KLD is compared with the numerical value of its closed-form expression; the Monte Carlo estimate is shown to converge to the theoretical value as the number of samples goes to infinity.
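The Monte-Carlo-versus-closed-form comparison described here is easy to reproduce in the univariate special case, where a simple closed form for the Cauchy KLD is known. The sketch below is our illustration of that check only; it does not reproduce the paper's multivariate Lauricella expression:

```python
import numpy as np

def cauchy_logpdf(x, loc, scale):
    """Log-density of a univariate Cauchy distribution."""
    return np.log(scale / (np.pi * ((x - loc) ** 2 + scale ** 2)))

def kld_cauchy_closed_form(loc_p, scale_p, loc_q, scale_q):
    """Known closed-form KLD between two univariate Cauchy distributions."""
    return np.log(((scale_p + scale_q) ** 2 + (loc_p - loc_q) ** 2)
                  / (4.0 * scale_p * scale_q))

rng = np.random.default_rng(0)
loc_p, scale_p, loc_q, scale_q = 0.0, 1.0, 1.0, 2.0

# Monte Carlo estimate: draw from p, average log(p/q).
x = loc_p + scale_p * rng.standard_cauchy(500_000)
kld_mc = np.mean(cauchy_logpdf(x, loc_p, scale_p)
                 - cauchy_logpdf(x, loc_q, scale_q))
kld_cf = kld_cauchy_closed_form(loc_p, scale_p, loc_q, scale_q)
print(abs(kld_mc - kld_cf) < 0.01)  # True: MC estimate near the closed form
```

The log-density ratio of two Cauchy distributions is bounded, so the Monte Carlo average is well-behaved despite the distributions' heavy tails; the estimate tightens as the sample count grows, mirroring the convergence result in the abstract.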
Merab Gogberashvili
Abstract Standard cosmological equations are written for the Hubble volume, while the real boundary of space-time is the event horizon. Within the unimodular and thermodynamic approaches to gravity, the dark energy term in cosmological equations appears as an integration constant, which we fix at the event horizon and obtain the observed value for the cosmological constant.
Page 7 of 12,048