Context. A large proportion of observed white dwarfs show evidence of debris disks, remnants of former planetary systems, and/or signatures of heavy elements in their atmospheres, produced by the accretion of planetary matter onto their surfaces. The observed abundances result from the balance between the accretion flux and the dilution of this planetary material by internal transport processes. A recent study showed that more massive DA white dwarfs are less polluted than lower-mass ones. It was suggested that this could be related to the formation of planetary systems while these stars were on the main sequence.
Aims. The aim of this work is to test how internal dilution processes, including thermohaline convection, change with white dwarf masses, and whether such an effect could account for variations in the observed pollution.
Methods. We computed the efficiency of atomic diffusion and thermohaline convection after the accretion of heavy elements onto white dwarfs using static DA models with various masses, effective temperatures, and hydrogen contents.
Results. We confirm that thermohaline convection is always more efficient than atomic diffusion in diluting accreted elements, as previously shown in the literature. However, we find that element dilution by thermohaline convection is less efficient in massive white dwarfs than in lower-mass ones, owing to their higher internal densities.
Conclusions. We show that the mass-dependent differences in observed heavy element pollution of white dwarfs cannot be explained by the dilution induced by atomic diffusion and thermohaline mixing alone. Indeed, pollution by planetary system accretion should be more easily detectable in massive white dwarfs than in low-mass ones. We discuss other processes that should be taken into account before drawing any conclusion about the occurrence of planetary systems as a function of the mass of the star on the main sequence.
Abstract The Dirac equation with mass and axial chemical potential is solved analytically, obtaining the mode spinors and the corresponding projection operators, giving the spectral representations of the principal conserved operators. In this framework, the odd partner of the Pryce spin operator is defined for the first time, showing how these operators may be combined for defining the particle and antiparticle spin and polarization operators of Dirac’s theory of massive fermions, either in the free case or in the presence of the axial chemical potential. The quantization procedure is applied in both these cases, obtaining two distinct operator algebras in which the particle and antiparticle spin and polarization operators take canonical forms. In this approach, statistical operators with independent particle and antiparticle vortical chemical potentials may be constructed.
Astrophysics, Nuclear and particle physics. Atomic energy. Radioactivity
In this Roadmap, we present a vision for the future of submillimetre and millimetre astronomy in the United Kingdom over the next decade and beyond. This Roadmap has been developed in response to the recommendation of the Astronomy Advisory Panel (AAP) of the STFC in the AAP Astronomy Roadmap 2022. In order to develop our strategic priorities and recommendations, we surveyed the UK submillimetre and millimetre community to determine their key priorities for both the near-term and long-term future of the field. We further performed detailed reviews of UK leadership in submillimetre/millimetre science and instrumentation. Our key strategic priorities are as follows: 1. The UK must be a key partner in the forthcoming AtLAST telescope, for which it is essential that the UK remains a key partner in the JCMT in the intermediate term. 2. The UK must maintain, and if possible enhance, access to ALMA and aim to lead parts of instrument development for ALMA2040. Our strategic priorities complement one another: AtLAST (a 50m single-dish telescope) and an upgraded ALMA (a large configurable interferometric array) would be in synergy, not competition, with one another. Both have identified and are working towards the same overarching science goals, and both are required in order to fully address these goals.
To meet the needs of manned landing, station construction, and material transfer in future lunar exploration missions, this paper proposes a landing–moving integrated gear (LMIG) for a mobile lunar lander (MLL), establishes and optimizes models of cushioning energy absorption and movement planning, respectively, and conducts prototype tests. First, the design requirements of the LMIG are given, and the system composition of the LMIG and the configuration design of each subsystem are introduced. Second, the effective energy-absorbing model of the aluminum honeycomb is established and experimentally verified, a three-stage aluminum honeycomb buffer is designed and experimentally verified, and the buffer mechanism of the LMIG is verified by simulations under various landing conditions. Furthermore, the kinematic and dynamic models of the LMIG are established, the moving gait is designed by a center-of-gravity trajectory planning method, and the driving trajectory during the stepping process is optimized with the goal of minimal jerk of motion. Finally, a cushioning test prototype and a scaled walking test prototype of the LMIG are developed, and a single-leg drop test and a ground walking test are carried out. The results show that the established model of the LMIG is reasonable, the designed buffer and gait are effective, and the developed prototypes have good cushioning and movement performance: the LMIG's maximum overload acceleration is 6.5g and its moving speed is 108 m/h, which meets the design requirements.
Motor vehicles. Aeronautics. Astronautics, Astronomy
Ni Putu Audita Placida Emas, Chris Blake, Rossana Ruggeri
et al.
The ratio of the average tangential shear signal of different weak lensing source populations around the same lens galaxies, also known as a shear ratio, provides an important test of lensing systematics and a potential source of cosmological information. In this paper we measure shear ratios of three current weak lensing surveys (KiDS, DES, and HSC) using overlapping data from the Baryon Oscillation Spectroscopic Survey. We apply a Bayesian method to reduce bias in the shear ratio measurement, and assess the degree to which shear ratio information improves the determination of important astrophysical parameters describing the source redshift distributions and intrinsic galaxy alignments, as well as cosmological parameters, in comparison with cosmic shear and full 3x2-pt correlations (cosmic shear, galaxy-galaxy lensing, and galaxy clustering). We consider both Fisher matrix forecasts and full likelihood analyses of the data. We find that the addition of shear ratio information to cosmic shear allows the mean redshifts of the source samples and intrinsic alignment parameters to be determined significantly more accurately. Although the additional constraining power enabled by the shear ratio is less than that obtained by introducing an accurate prior on the mean source redshift using photometric redshift calibration, the shear ratio allows for a useful cross-check. The inclusion of shear ratio data consistently benefits the determination of cosmological parameters such as S_8, for which we obtain improvements of up to 34%. However, these improvements are less significant when the shear ratio is combined with the full 3x2-pt correlations. We conclude that shear ratio tests will remain a useful source of cosmological information and cross-checks for lensing systematics, whose application will be further enhanced by upcoming datasets such as the Dark Energy Spectroscopic Instrument.
A number of stellar astrophysical phenomena, such as tidal novae and planetary engulfment, involve the sudden injection of sub-binding energy in a thin layer within the star, leading to mass ejection of the stellar envelope. We use a 1D hydrodynamical model to survey the stellar response and mass loss for various amounts (E_dep) and locations of the energy deposition. We find that the total mass ejection has a nontrivial dependence on E_dep due to the varying strengths of mass ejection events, which are associated with density/pressure waves breaking out from the stellar surface. The rapid occurrence of multiple breakouts may present a unique observational signature for sudden envelope heating events in stars.
The healthy function of the vestibular system (VS) is of vital importance for individuals to carry out their daily activities independently and safely. This study carries out Tsallis entropy (TE)-based analysis of insole force sensor data in order to extract features that differentiate between healthy and VS-diseased individuals. Using a specifically developed algorithm, we detrend the acquired data to examine the fluctuation around the trend curve, thereby accounting for the individual's walking habits and increasing diagnostic accuracy. It is observed that the TE value increases for diseased people, an indicator of their difficulty in maintaining balance. As one of the main contributions of this study, in contrast to studies in the literature that focus on gait dynamics requiring extensive walking time, we directly process the instantaneous pressure values, enabling a significant reduction in the data acquisition period. The extracted feature set is then fed into fundamental classification algorithms, with the support vector machine (SVM) demonstrating the highest performance, achieving an average accuracy of 95%. This study constitutes a significant step in a larger project aiming to identify the specific VS disease together with its stage. The performance achieved in this study provides a strong motivation to further explore this topic.
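The Tsallis entropy underlying the feature extraction has a standard closed form, S_q = (1 - Σ_i p_i^q)/(q - 1). As a minimal illustrative sketch (the paper's exact detrending pipeline and feature set are not specified here, and the histogram binning below is an assumption), the TE of a stream of instantaneous pressure values could be estimated like this:

```python
import numpy as np

def tsallis_entropy(samples, q=2.0, bins=32):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a 1-D signal,
    estimated from a normalized histogram of its values."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):               # q -> 1 limit recovers Shannon entropy
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))
```

A broader, less predictable pressure distribution yields a larger S_q, which is consistent with the study's observation that TE increases for individuals with impaired balance.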
Marion Pillas, Tito Dal Canton, Cosmin Stachie
et al.
GW170817–GRB 170817A provided the first observation of gravitational waves from a neutron star merger with associated transient counterparts across the entire electromagnetic spectrum. This discovery demonstrated the long-hypothesized association between short gamma-ray bursts and neutron star mergers. More joint detections are needed to explore the relation between the parameters inferred from the gravitational wave and the properties of the gamma-ray burst signal. We developed a joint multimessenger analysis of LIGO, Virgo, and Fermi/GBM data designed for detecting weak gravitational-wave transients associated with weak gamma-ray bursts. As such, it does not start from confident (GWTC-1) events only. Instead, we take the full list of existing compact binary coalescence triggers generated with the PyCBC pipeline from the second Gravitational-Wave Observing Run (O2), and reanalyze the entire set of public Fermi/GBM data covering this observing run to generate a corresponding set of gamma-ray burst candidate triggers. We then search for coincidences between the gravitational-wave and gamma-ray burst triggers without requiring a confident detection in any channel. The candidate coincidences are ranked according to a statistic combining each candidate’s strength in gravitational-wave and gamma-ray data, their time proximity, and the overlap of their sky localization. The ranking is then converted to a false alarm rate using time shifts between the gravitational-wave and gamma-ray burst triggers. We present the results using O2 triggers, which allowed us to check the validity of our method against GW170817–GRB 170817A. We also discuss the different configurations tested to maximize the significance of the joint detection.
Abstract The standard polar cap (PC) indices, PCN (North) based on magnetic data from Qaanaaq in Greenland and PCS (South) based on data from Vostok in Antarctica, have been provided in different versions by the Arctic and Antarctic Research Institute in St. Petersburg, Russia, the Danish Meteorological Institute, and the Danish Space Research Institute. In order to consolidate PCS indices based on Vostok data, or to replace poor or missing index data, derivation procedures have been developed to generate alternative PCS index values based on data from Dome Concordia (Dome‐C) magnetic observations covering the epoch 2009–2020 of solar cycle 24. The reference levels and calibration parameters needed for calculations of Dome‐C‐based PCS values in post‐event and real‐time versions are defined and explained in the present work. Assessments of the new PCS index have demonstrated its high relevance. Some of the methods used here, such as the quiet reference level construction and the correlation and regression procedures used for calculating scaling parameters, deviate from corresponding features of the International Association of Geomagnetism and Aeronomy‐endorsed PC index derivation methods that are considered inadequate.
Trends showing an increase in the number of mobile device users, as well as in the number of tourists, imply that more people rely on their smartphones when navigating in a new environment. These facts motivated this experimental research: applying machine learning, more precisely a neural network, to investigate the possibility of improving the accuracy of smartphone navigation. The achieved results indicate that machine learning algorithms (neural networks) are a powerful tool that can be applied to GNSS data collected by a smartphone in order to improve accuracy. Based on the data collected in the field, the preprocessing, and the machine learning process, we conclude that it is possible to improve the accuracy of mobile device navigation by up to 50%.
DNA glycosylase, as one member of DNA repair machineries, plays an essential role in correcting mismatched/damaged DNA nucleotides by cleaving the N-glycosidic bond between the sugar and target nucleobase through the base excision repair (BER) pathways. Efficient corrections of these DNA lesions are critical for maintaining genome integrity and preventing premature aging and cancers. The target-site searching/recognition mechanisms and the subsequent conformational dynamics of DNA glycosylase, however, remain challenging to be characterized using experimental techniques. In this review, we summarize our recent studies of sequential structural changes of thymine DNA glycosylase (TDG) during the DNA repair process, achieved mostly by molecular dynamics (MD) simulations. Computational simulations allow us to reveal atomic-level structural dynamics of TDG as it approaches the target-site, and pinpoint the key structural elements responsible for regulating the translocation of TDG along DNA. Subsequently, upon locating the lesions, TDG adopts a base-flipping mechanism to extrude the mispaired nucleobase into the enzyme active-site. The constructed kinetic network model elucidates six metastable states during the base-extrusion process and suggests an active role of TDG in flipping the intrahelical nucleobase. Finally, the molecular mechanism of product release dynamics after catalysis is also summarized. Taken together, we highlight to what extent the computational simulations advance our knowledge and understanding of the molecular mechanism underlying the conformational dynamics of TDG, as well as the limitations of current theoretical work.
We explore how astronomers take observational data from telescopes, process them into usable scientific data products, curate them for later use, and reuse data for further inquiry. Astronomers have invested heavily in knowledge infrastructures - robust networks of people, artifacts, and institutions that generate, share, and maintain specific knowledge about the human and natural worlds. Drawing upon a decade of interviews and ethnography, this article compares how three astronomy groups capture, process, and archive data, and for whom. The Sloan Digital Sky Survey is a mission with a dedicated telescope and instruments, while the Black Hole Group and Integrative Astronomy Group (both pseudonyms) are university-based, investigator-led collaborations. Findings are organized into four themes: how these projects develop and maintain their workflows; how they capture and archive their data; how they maintain and repair knowledge infrastructures; and how they use and reuse data products over time. We found that astronomers encode their research methods in software known as pipelines. Algorithms help to point telescopes at targets, remove artifacts, calibrate instruments, and accomplish myriad validation tasks. Observations may be reprocessed many times to become new data products that serve new scientific purposes. Knowledge production in the form of scientific publications is the primary goal of these projects. They vary in incentives and resources to sustain access to their data products. We conclude that software pipelines are essential components of astronomical knowledge infrastructures, but are fragile, difficult to maintain and repair, and often invisible. Reusing data products is fundamental to the science of astronomy, whether or not those resources are made publicly available. We make recommendations for sustaining access to data products in scientific fields such as astronomy.
Joshua Lukemire, Qian Xiao, Abhyuday Mandal
et al.
We introduce statistical techniques required to handle complex computer models, with potential applications to astronomy. Computer experiments play a critical role in almost all fields of scientific research and engineering. These computer experiments, or simulators, are often computationally expensive, leading to the use of emulators for rapidly approximating the outcome of the experiment. Gaussian process models, also known as Kriging, are the most common choice of emulator. While emulators offer significant improvements in computation over computer simulators, they require a judicious selection of inputs, along with the corresponding outputs of the computer experiment, to function well. Space-filling designs are efficient when the general response surface of the outcome is unknown, and thus they are a popular choice when selecting simulator inputs for building an emulator. In this tutorial we discuss how to construct these space-filling designs, perform the subsequent fitting of the Gaussian process surrogates, and briefly indicate their potential applications to astronomy research.
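The two ingredients this tutorial pairs together, a space-filling design for the simulator inputs and a Kriging surrogate fitted to the outputs, can be sketched in a few lines. The functions below are illustrative numpy-only stand-ins under simplifying assumptions (a Latin hypercube design, a squared-exponential kernel with a fixed length-scale, and posterior mean only); a real analysis would use dedicated design and Gaussian process packages and estimate the kernel hyperparameters:

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """n points in [0,1]^d with one sample per equal-width stratum in each dimension."""
    rng = np.random.default_rng(rng)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # jittered strata
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]                # decouple the dimensions
    return u

def gp_posterior_mean(X, y, Xstar, length=0.2, noise=1e-8):
    """Kriging predictor: posterior mean of a zero-mean GP with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))                  # jitter for stability
    return k(Xstar, X) @ np.linalg.solve(K, y)
```

With a near-zero noise term the surrogate interpolates the simulator runs, which is the usual assumption for deterministic computer experiments.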
Recent studies have shown that complex systems are often best represented by generalized networks such as hypergraphs, multilayer networks, and temporal networks. Here, the authors propose a unified framework to investigate cluster synchronization patterns in generalized networks and demonstrate the existence of chimera states that emerge exclusively in the presence of higher-order interactions.
Bohdan Slavko, Mikhail Prokopenko, Kirill S. Glavatskiy
We propose a non-equilibrium framework for modelling the evolution of cities, which describes intra-urban migration as an irreversible diffusive process. We validate this framework using the actual migration data for the Australian capital cities. With respect to the residential relocation, the population is shown to be composed of two distinct groups, exhibiting different relocation frequencies. In the context of the developed framework, these groups can be interpreted as two components of a binary fluid mixture, each with its own diffusive relaxation time. Using this approach, we obtain long-term predictions of the cities’ spatial structures, which define their equilibrium population distribution.
L. A. Harland-Lang, M. Tasevsky, V. A. Khoze
et al.
Abstract We present the results of the new SuperChic 4 Monte Carlo implementation of photon-initiated production in proton–proton collisions, considering as a first example the case of lepton pair production. This is based on the structure function calculation of the underlying process, and focusses on a complete account of the various contributing channels, including the case where a rapidity gap veto is imposed. We provide a careful treatment of the contributions where either proton (single dissociation), both (double dissociation) or neither (elastic) interacts inelastically and dissociates, and interface our results to Pythia for showering and hadronization. The particle decay distribution from the dissociation system, as well as the survival probability for no additional proton–proton interactions, are both fully accounted for; these are essential for comparing to data where a rapidity gap veto is applied. We present detailed results for the impact of the veto requirement on the differential cross section, compare to and find good agreement with ATLAS 7 TeV data on semi-exclusive production, and provide a new precise evaluation of the background from semi-exclusive lepton pair production to SUSY particle production in compressed mass scenarios, which is found to be low.
Astrophysics, Nuclear and particle physics. Atomic energy. Radioactivity
Dulcilena de Matos Castro e Silva, Rosa Maria Nascimento Marcusso, Cybelli Gonçalves Gregório Barbosa
et al.
In the context of megacities, air quality is an important issue due to its direct correlation with population health. The biomonitoring of pollutants can reveal subtle environmental alterations; to that end, anemophilous fungi can be monitored for changes in atmospheric conditions related to pollution. In the present study, the concentration of fungi and bacteria in the atmosphere was measured during a specific vehicle fleet reduction in the city of São Paulo, Brazil, from May 24 to 30, 2018, using impactor air samplers. The number of isolated developed colonies was related to atmospheric conditions and to the concentration of other routinely monitored air pollutants. Aspergillus, Curvularia, Penicillium, Neurospora, Rhizopus and Trichoderma were identified. The number of colony-forming units increased by approximately 80% during the sampling period in response to environmental changes favored by the fleet reduction. This result suggests a relation between fuel emissions, the concentration of atmospheric pollutants, and the presence of viable fungal spores in the urban environment, which highlights the importance of combined public policies for air quality in large cities.
A system’s heterogeneity (diversity) is the effective size of its event space, and can be quantified using the Rényi family of indices (also known as Hill numbers in ecology or Hannah–Kay indices in economics), which are indexed by an elasticity parameter q ≥ 0. Under these indices, the heterogeneity of a composite system (the γ-heterogeneity) is decomposable into heterogeneity arising from variation within and between component subsystems (the α- and β-heterogeneity, respectively). Since the average heterogeneity of a component subsystem should not be greater than that of the pooled system, we require that γ ≥ α. There exists a multiplicative decomposition for Rényi heterogeneity of composite systems with discrete event spaces, but less attention has been paid to decomposition in the continuous setting. We therefore describe the multiplicative decomposition of Rényi heterogeneity for continuous mixture distributions under parametric and non-parametric pooling assumptions.
Under non-parametric pooling, the γ-heterogeneity must often be estimated numerically, but the multiplicative decomposition holds such that γ ≥ α for q > 0. Conversely, under parametric pooling, the γ-heterogeneity can be computed efficiently in closed form, but the γ ≥ α condition holds reliably only at q = 1. Our findings will further contribute to heterogeneity measurement in continuous systems.
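For the discrete case that the abstract contrasts with, the Rényi heterogeneity (Hill number) is D_q = (Σ_i p_i^q)^(1/(1-q)), with the q → 1 limit given by exp of the Shannon entropy. A minimal sketch of this index and of the multiplicative γ = α × β decomposition at q = 1 follows; equal subsystem weights are an assumption here, and the paper's continuous-mixture decomposition is not reproduced:

```python
import numpy as np

def hill(p, q):
    """Rényi heterogeneity (Hill number) D_q = (sum_i p_i^q)^(1/(1-q))."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                 # q -> 1 limit: exp(Shannon entropy)
        return float(np.exp(-(p * np.log(p)).sum()))
    return float((p ** q).sum() ** (1.0 / (1.0 - q)))

def decompose_q1(subsystems):
    """Multiplicative decomposition gamma = alpha * beta at q = 1,
    for equally weighted discrete subsystems. At q = 1 the alpha
    heterogeneity is the geometric mean of the subsystem Hill numbers."""
    subsystems = [np.asarray(p, dtype=float) for p in subsystems]
    pooled = np.mean(subsystems, axis=0)   # equal-weight pooling
    gamma = hill(pooled, 1.0)
    alpha = float(np.exp(np.mean([np.log(hill(p, 1.0)) for p in subsystems])))
    return gamma, alpha, gamma / alpha
```

Two identical subsystems give β = 1 (no between-subsystem heterogeneity), while two disjoint subsystems give β equal to the number of subsystems, and γ ≥ α holds in both cases, as the abstract requires.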
The new field of multi-messenger astronomy aims at the study of astronomical sources using different types of "messenger" particles: photons, neutrinos, cosmic rays and gravitational waves. These lectures provide an introductory overview of the observational techniques used for each type of astronomical messenger, of different types of astronomical sources observed through different messenger channels and of the main physical processes involved in production of the messenger particles and their propagation through the Universe.