Rushil Kukreja, Edward J. Oughton, Richard Linares
The proliferation of satellite megaconstellations in low Earth orbit (LEO) represents a significant advancement in global broadband connectivity. However, we urgently need to understand the potential environmental impacts, particularly greenhouse gas (GHG) emissions associated with these constellations. This study addresses a critical gap in modeling current and future GHG emissions by developing a comprehensive open-source life cycle assessment (LCA) methodology, applied to 10 launch vehicles and 15 megaconstellations. Our analysis reveals that the production of launch vehicles and propellant combustion during launch events contribute most significantly to overall GHG emissions, accounting for 72.6% of life cycle emissions. Among the rockets analyzed, reusable vehicles like Falcon-9 and Starship demonstrate 95.4% lower production emissions compared to non-reusable alternatives, highlighting the environmental benefits of reusability in space technology. The findings underscore the importance of launch vehicle and satellite design choices to minimize potential environmental impacts. The Open-source Rocket and Constellation Lifecycle Emissions (ORACLE) repository is freely available and aims to facilitate further research in this field. This study provides a critical baseline for policymakers and industry stakeholders to develop strategies for reducing the carbon footprint of the space industry, especially satellite megaconstellations.
Abstract Electrical resistivity tomography (ERT) is a popular geophysical tool used for a variety of prospecting applications. To derive a true resistivity model from the observed apparent resistivity (ERT) data, the smoothness-constrained least-squares inversion method is still frequently employed. However, smooth inversion usually yields unclear interfaces between zones of differing resistivity, which complicates the final interpretation of the inverted model. To overcome this drawback of the smoothness-constrained inversion, I propose using the Euler deconvolution (ED) method as a layer-interface detector for interpreting ERT data. By employing the ED approach, the boundaries of various resistivity zones can be identified automatically rather than by manual detection. To this end, the efficiency of the ED method in interpreting ERT data was evaluated using both synthetic models and actual field cases. In this paper, five models were used to simulate different scenarios of horizontally stratified and undulating layers using the RES2DMOD software. The response of these models was calculated using the Wenner and dipole–dipole arrays. The synthetic apparent resistivity data were then inverted using the RES2DINV software. The results obtained from the inversion process were interpreted using the ED method. The overall findings demonstrate that, for both the simulated and field data, the calculated Euler depth solutions closely match the layer interfaces of the inverted resistivity sections. A structural index of 0 produced the tightest clusters of solutions. This study highlights that the ED approach can be utilized as an additional processing tool to improve the interpretation of ERT inversion results.
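The abstract does not state the underlying equation; Euler deconvolution solves the homogeneity relation (x − x₀)∂f/∂x + (z − z₀)∂f/∂z = −N(f − B) for the source coordinates (x₀, z₀). A minimal NumPy sketch, using a hypothetical 2D point-source field with structural index N = 1 rather than real ERT data, illustrates the least-squares recovery of source position:

```python
import numpy as np

def field(x, z, x0=50.0, z0=10.0, k=100.0):
    # Hypothetical point-source field, homogeneous of degree -1 (N = 1)
    return k / np.hypot(x - x0, z - z0)

x = np.linspace(0.0, 100.0, 201)
dx = x[1] - x[0]
z = 0.0                                   # observation level (surface)
f = field(x, z)

fx = np.gradient(f, dx)                   # horizontal derivative
dz = 0.5
fz = (field(x, z + dz) - field(x, z - dz)) / (2 * dz)   # vertical derivative

# Euler's equation: (x - x0) fx + (z - z0) fz = -N (f - B)
# Rearranged per data point:  x0*fx + z0*fz + N*B = x*fx + z*fz + N*f
N = 1.0
A = np.column_stack([fx, fz, N * np.ones_like(f)])
b = x * fx + z * fz + N * f
(x0_est, z0_est, B_est), *_ = np.linalg.lstsq(A, b, rcond=None)
```

In practice the system is solved in moving windows along the profile, producing the cluster of depth solutions the abstract refers to; this single-window version simply verifies the mechanics on a clean synthetic.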
Mohammad Hossein Khosravi, Mohammad Emami Niri, Mohammad Reza Saberi
Abstract Carbonate rocks are geologically complex due to the diagenetic processes they experience before and after lithification. Diagenetic processes alter their matrix and pore structure, leading to modification of their sonic velocities. Understanding the effect of these diagenetic features on seismic velocities is crucial for obtaining a reliable image of the subsurface. The dataset used in this study comprises well logs and core data. Core data were analyzed using different methods (i.e., thin-section analysis, X-ray diffraction (XRD) analysis, and scanning electron microscopy (SEM) imaging) to investigate the presence or absence of different diagenetic processes in each depth interval of the Sarvak Formation. To minimize porosity effects on velocity variations, we divided all porosity data into five equal porosity classes and performed bar-chart analysis within each class. The results indicate that bioturbation (through stiff-pore creation and infilling with stiff minerals) and compaction (through pore-space volume reduction) increase velocities, whereas dissolution increases velocities only in the low-porosity samples (through the creation of moldic and vuggy pores) while reducing them in the high-porosity samples (through the interconnection of isolated pores). Furthermore, porosity enhancement (through increasing pore-space volume), micritization (through inhibiting porosity reduction during compaction), open fractures (through the creation of soft pores and cracks), and neomorphism (through the creation of microporosity during compaction) reduce sonic velocities.
Recently, Physics-Informed Neural Networks (PINNs) have gained significant attention for their versatile interpolation capabilities in solving partial differential equations (PDEs). Despite their potential, the training can be computationally demanding, especially for intricate functions like wavefields. This is primarily because the neural (learned) basis functions, dominated by polynomial calculations, are biased toward low frequencies and thus not inherently wavefield-friendly. In response, we propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions by modeling them as linear combinations of Gabor basis functions that satisfy the wave equation. Specifically, for the Helmholtz equation, we augment the fully connected neural network model with an adaptable Gabor layer constituting the final hidden layer, employing a weighted summation of these Gabor neurons to compute the predictions (output). The weights/coefficients of the Gabor functions are learned from the previous hidden layers, which include nonlinear activation functions. To ensure the Gabor layer's utilization across the model space, we incorporate a smaller auxiliary network to forecast the center of each Gabor function based on the input coordinates. Realistic assessments showcase the efficacy of this novel implementation compared to the vanilla PINN, particularly in scenarios involving high frequencies and realistic models that are often challenging for PINNs.
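As a rough illustration of the output layer described above (not the authors' implementation; all shapes, parameter values, and names here are invented for the sketch), the prediction is a weighted sum of Gabor atoms, each a Gaussian envelope times an oscillatory phase term:

```python
import numpy as np

rng = np.random.default_rng(0)

n_gabor = 16
centers = rng.uniform(0.0, 1.0, (n_gabor, 2))        # Gabor centers in (x, z) model space
wavevec = 30.0 * rng.standard_normal((n_gabor, 2))   # oscillation wavevectors
sigma = 0.2                                          # Gaussian envelope width

def gabor_layer(xz, coeffs):
    """Weighted sum of Gabor atoms: Gaussian envelope times a cosine phase."""
    d2 = ((xz[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n_pts, n_gabor)
    envelope = np.exp(-d2 / (2.0 * sigma**2))
    phase = np.cos(xz @ wavevec.T)
    return (envelope * phase) @ coeffs               # (n_pts,) wavefield prediction

xz = rng.uniform(0.0, 1.0, (100, 2))                 # input coordinates
coeffs = rng.standard_normal(n_gabor)                # stand-in for weights the MLP would learn
u = gabor_layer(xz, coeffs)
```

In the method described, `coeffs` would be produced by the preceding hidden layers as a function of the input, and `centers` by the auxiliary network; here they are fixed random values purely to show the forward evaluation.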
Marko Toroš, Marion Cromb, Mauro Paternostro
et al.
Many phenomena and fundamental predictions, ranging from Hawking radiation to the early evolution of the Universe, rely on the interplay between quantum mechanics and gravity or, more generally, on quantum mechanics in curved spacetimes. However, our understanding is hindered by the lack of experiments that actually allow us to probe quantum mechanics in curved spacetime in a repeatable and accessible way. Here we propose an experimental scheme for a photon that is prepared in a path superposition state across two rotating Sagnac interferometers that have different diameters and thus represent a superposition of two different spacetimes. We predict the generation of genuine entanglement even at low rotation frequencies and show how these effects could arise even from the Earth's rotation. These predictions provide an accessible platform in which to study the role of the underlying spacetime in the generation of entanglement.
The link between the surface temperature of Mercury and the sodium content of its exosphere has been investigated. Observations show that, along the orbit of Mercury, two maxima of total Na content are present: one at aphelion and one at perihelion. Previous models, based on a simple thermal map, were not able to reproduce the aphelion peak. Here we introduce a new thermophysical model that provides soil temperatures as input to the IAPS exospheric model, which was previously driven by a simple thermal map. By comparing the reference model output with the new one, we show that such an improved surface-temperature map is crucial to explaining the temporal variability of sodium along the orbit.
The rate of aftershocks in the sequence initiated by the DPRK underground tests has been increasing since January 2021. In total, 22 reliable aftershocks were detected between January 13 and October 1, 2021. Their characteristics are similar to those of the aftershocks in one of two clusters: 1) the fifth DPRK test (DPRK5; mb(IDC)=5.09), conducted on September 9, 2016, which induced the first DPRK aftershock in the sequence, detected at 1:50:48 UTC on September 11, 2016; 2) the sixth DPRK explosion (DPRK6; mb(IDC)=6.07), which generated its own aftershock sequence with characteristics significantly different from those of the DPRK5 sequence. The length, intensity, and alternating character of these sequences suggest specific mechanisms of energy release, likely associated with the interaction of the damaged zones of DPRK5 and DPRK6 and the collapse of their cavities, with progressive propagation of the collapsing chimneys toward the free surface. According to depth estimates based on moment tensor modelling, DPRK5 and DPRK6 were conducted at practically the same depth. The difference in magnitudes suggests that their damaged zones differ in size by a factor of 2 or more. The first aftershock of DPRK6 (mb(IDC)=4.12), 8.5 minutes after the test, is evidence of the cavity collapse and the creation of a chimney that did not reach the surface. The activity in 2021 indicates that the chimney collapse is not finished yet. One can expect more aftershocks in the near future, likely ending with the chimney reaching the free surface.
In the quest to determine fault weakening processes that govern earthquake mechanics, it is common to infer the earthquake breakdown energy from seismological measurements. Breakdown energy is observed to scale with slip, which is often attributed to enhanced fault weakening with continued slip or at high slip rates, possibly caused by flash heating and thermal pressurization. However, breakdown energy varies by more than six orders of magnitude, which is physically irreconcilable with prevailing material properties. We present a dynamic model that demonstrates that breakdown energy scaling can occur despite constant fracture energy and does not require thermal pressurization or other enhanced weakening. Instead, earthquake breakdown energy scaling occurs simply due to scale-invariant stress drop overshoot, which is affected more directly by the overall rupture mode -- crack-like or pulse-like -- rather than from a specific slip-weakening relationship. Our findings suggest that breakdown energy may be used to discern crack-like earthquakes from self-healing pulses with negative breakdown energy.
It is well known that rapid changes in tropical cyclone motion occur during interaction with extratropical waves. While translation speed has received much attention in the published literature, acceleration has not. Using a large data sample of Atlantic tropical cyclones, we formally examine the composite synoptic-scale patterns associated with the tangential and curvature components of their acceleration. During periods of rapid tangential acceleration, the composite tropical cyclone moves poleward between an upstream trough and a downstream ridge of a developing extratropical wave packet. The two systems subsequently merge in a manner that is consistent with extratropical transition. During rapid curvature acceleration, a prominent downstream ridge promotes recurvature of the tropical cyclone. In contrast, during rapid tangential or curvature deceleration, a ridge is located directly poleward of the tropical cyclone. Locally, this arrangement takes the form of a cyclone-anticyclone vortex pair somewhat akin to a dipole block. On average, the tangential acceleration peaks 18 hours prior to extratropical transition, while the curvature acceleration peaks at recurvature. These findings confirm that rapid acceleration of tropical cyclones is mediated by interaction with extratropical baroclinic waves. Furthermore, the tails of the distributions of acceleration and translation speed show a robust reduction over the past five decades. We speculate that these trends may reflect the poleward shift and weakening of extratropical Rossby waves.
Daniel E. Lalich, Alexander G. Hayes, Valerio Poggiali
Recent discoveries of anomalously bright radar reflections below the Mars South Polar Layered Deposit (SPLD) have sparked new speculation that liquid water may be present below the ice cap. The reflections, discovered in data acquired by the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) on board the Mars Express orbiter, were interpreted as reflections from damp materials or even subsurface ponds and lakes similar to those found beneath Earth's ice sheets. Recent studies, however, have questioned the feasibility of melting and maintaining liquid water below the SPLD. Herein, we compare radar simulations to MARSIS observations in order to present an alternate hypothesis: that the bright reflections are the result of interference between multiple layer boundaries, with no liquid water present. This new interpretation is more consistent with known conditions on modern Mars.
AuScope's Downward Looking Telescope (DLT) is the Australian geoscience research community's vision for a future-proof research infrastructure system to support Australia's sustainable future. Image: AuScope. This article was originally published by AuScope, Australia's provider of research infrastructure to the Earth and Geospatial Science community.
Mohammadamin Torabi, Amirmasoud Hamedi, Ebrahim Alamatian
et al.
Scouring, sedimentation, and the morphology of the river bed are among the most critical problems in river engineering. In this paper, a finite-volume FORTRAN code capable of modeling sedimentation is developed and applied. Flow and sediment were modeled at the junction of two channels, and an experimental model was used to evaluate the results. With the numerical model, the effects of geometric parameters, such as the ratio of secondary-channel to main-channel width and the intersection angle, and of hydraulic conditions, such as the secondary-to-main-channel discharge ratio and the inlet Froude number, were studied on bed topography and flow pattern. The numerical results show that the maximum bed height increases by 32 percent, on average, as the discharge ratio reaches 51 percent. The maximum sedimentation height decreases as the main-channel to secondary-channel Froude number ratio declines. In the channel-width assessment, velocity and final bed-height variations followed the same trend for all width ratios. Increasing the intersection angle is accompanied by decreasing flow-velocity variations along the channel, while the patterns of velocity and bed-topography variations remain consistent across all studied angles.
In order to estimate the seismic vulnerability of a densely populated urban area, it would in principle be necessary to evaluate the dynamic behaviour of individual and aggregate buildings. Such detailed seismic analyses, however, are extremely cost-intensive and require substantial processing time and expert judgment. The aim of the present study is to propose a new methodology that combines information and tools from different scientific fields in order to reproduce the effects of a seismic input in urban areas with known geological features and to estimate the extent of the damage caused to existing buildings. In particular, we present new software called ABES (Agent-Based Earthquake Simulator), based on a Self-Organized Criticality framework, which evaluates the effects of a sequence of seismic events on a large urban area during a given interval of time. The integration of Geographic Information System (GIS) data sets containing both geological and urban information on the territory of Avola (Italy) allows a parametric study of these effects in a real context as a case study. The proposed approach could be very useful for estimating seismic vulnerability and defining planning strategies for seismic risk reduction in large urban areas.
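ABES itself is not publicly detailed in this abstract; as a generic illustration of the Self-Organized Criticality framework it builds on, here is a minimal Bak–Tang–Wiesenfeld sandpile relaxation in Python (grid size, threshold, and the perturbation are all hypothetical, unrelated to ABES internals):

```python
import numpy as np

def relax(grid, threshold=4):
    """Topple every cell holding >= threshold grains until the grid is stable.

    Each toppling cell sheds 4 grains, one to each von Neumann neighbour;
    grains pushed past the boundary are lost (open boundary conditions).
    """
    while True:
        unstable = grid >= threshold
        if not unstable.any():
            return grid
        grid[unstable] -= 4
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            moved = np.roll(unstable.astype(int), shift, axis=axis)
            # cancel the wrap-around introduced by np.roll (open boundaries)
            if axis == 0:
                moved[0 if shift == 1 else -1, :] = 0
            else:
                moved[:, 0 if shift == 1 else -1] = 0
            grid += moved

rng = np.random.default_rng(0)
grid = rng.integers(0, 4, size=(20, 20))   # stable initial state
grid[10, 10] += 100                        # a large local perturbation ("event")
grid = relax(grid)                         # cascade of topplings, i.e. an avalanche
```

In SOC models of this kind, slow driving plus threshold relaxation produces avalanches with power-law size statistics, which is the qualitative behaviour an earthquake-sequence simulator exploits.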
Seismic tomography is a methodology to image the interior of solid or fluid media, and is often used to map properties in the subsurface of the Earth. In order to better interpret the resulting images it is important to assess imaging uncertainties. Since tomography is significantly nonlinear, Monte Carlo sampling methods are often used for this purpose, but they are generally computationally intractable for large datasets and high-dimensional parameter spaces. To extend uncertainty analysis to larger systems, we use variational inference methods to conduct seismic tomography. In contrast to Monte Carlo sampling, variational methods solve the Bayesian inference problem as an optimization problem, yet still provide probabilistic results. In this study, we apply two variational methods, automatic differentiation variational inference (ADVI) and Stein variational gradient descent (SVGD), to 2D seismic tomography problems using both synthetic and real data, and we compare the results to those from two different Monte Carlo sampling methods. The results show that variational inference methods can produce accurate approximations to the results of Monte Carlo sampling methods at significantly lower computational cost, provided that gradients of parameters with respect to data can be calculated efficiently. We expect that these methods can be applied fruitfully to many other types of geophysical inverse problems.
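SVGD, one of the two variational methods named above, updates an ensemble of particles with a kernelized gradient flow toward the posterior. A self-contained 1-D sketch (a toy standard-normal target, not a tomography posterior; kernel bandwidth and step size are illustrative choices) looks like:

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    """One Stein variational gradient descent update with an RBF kernel.

    phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    The first term pulls particles toward high posterior density; the second
    repels them from each other, preserving ensemble spread.
    """
    diff = x[:, None] - x[None, :]
    K = np.exp(-diff**2 / (2.0 * h**2))    # k(x_a, x_b), symmetric
    grad_K = -diff / h**2 * K              # d k(x_a, x_b) / d x_a
    phi = (K @ grad_logp(x) + grad_K.sum(axis=0)) / len(x)
    return x + eps * phi

grad_logp = lambda x: -x                   # standard-normal target, grad log p(x) = -x
x = np.linspace(-4.0, 4.0, 50)             # initial particle ensemble
for _ in range(1000):
    x = svgd_step(x, grad_logp)
```

After the updates the particles approximate the target distribution, so ensemble statistics (mean, standard deviation, quantiles) serve as the probabilistic output that the study compares against Monte Carlo sampling.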
Nozomi Sugiura, Shinya Kouketsu, Shuhei Masuda
et al.
Energy dissipation rates are an important characteristic of turbulence; however, their magnitude in observational profiles can be incorrectly determined owing to their irregular variation with depth. By analysing data obtained from oceanic turbulence measurements, we demonstrate that the vertical sequences of energy dissipation rates exhibit a scaling property. Utilising this property, we propose a method to estimate the population mean for a profile. Regarding the scaling in the observed profiles, we demonstrate that our data exhibit a statistical property consistent with the universal multifractal model. The population mean and its uncertainty can then be estimated by inverting the probability distribution obtained from Monte Carlo simulations of a cascade model; to this end, observational constraints from several moments are imposed over each vertical sequence. This approach enables us to determine, to some extent, whether a profile merely shows an occasionally large sample mean or whether the population mean itself is large. It will thus contribute to refining regional estimates of the ocean energy budget, where only a small amount of turbulence observation data is available.
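The paper's inversion procedure is specific to its model; as a generic sketch of the underlying idea, a discrete multiplicative cascade with mean-one lognormal weights produces intermittent profiles whose sample means scatter widely around a population mean of one (the number of levels, weight variance, and ensemble size here are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade(n_levels=10, sigma_ln=0.4):
    """Discrete multiplicative cascade: each level splits every cell in two,
    multiplying by i.i.d. lognormal weights normalized to unit mean."""
    eps = np.ones(1)
    for _ in range(n_levels):
        w = rng.lognormal(mean=-sigma_ln**2 / 2, sigma=sigma_ln, size=2 * eps.size)
        eps = np.repeat(eps, 2) * w        # refine and modulate
    return eps                             # intermittent "dissipation profile"

# Monte Carlo ensemble of profile sample means; population mean is exactly 1
means = np.array([cascade().mean() for _ in range(500)])
```

Because the coarse-level multipliers affect the whole profile, individual sample means fluctuate strongly even though the population mean is fixed, which is precisely the ambiguity the proposed moment-constrained inversion is designed to resolve.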
Céline Guervilly, Philippe Cardin, Nathanaël Schaeffer
Convection is a fundamental physical process in the fluid cores of planets because it is the primary transport mechanism for heat and chemical species and the primary energy source for planetary magnetic fields. Key properties of convection, such as the characteristic flow velocity and lengthscale, are poorly quantified in planetary cores due to their strong dependence on planetary rotation, buoyancy driving and magnetic fields, which are all difficult to model under realistic conditions. In the absence of strong magnetic fields, the core convective flows are expected to be in a regime of rapidly-rotating turbulence, which remains largely unexplored to date. Here we use a combination of numerical models designed to explore this low-viscosity regime to show that the convective lengthscale becomes independent of the viscosity and is entirely determined by the flow velocity and the planetary rotation. For the Earth's core, we find that the characteristic convective lengthscale is approximately 30 km, and below this scale motions are very weak. The 30-km cutoff scale rules out small-scale dynamo action and supports large-eddy simulations of core dynamics. Furthermore, it implies that our understanding of magnetic reversals from numerical geodynamo models does not carry over to the Earth, because these models require flows that are too intense. Our results also indicate that the liquid core of the Moon might still be in an active convective state despite the absence of a present-day dynamo.
Scott K. Hansen, Velimir V. Vesselinov, Paul W. Reimus
et al.
We consider the late-time tailing in a tracer test performed with a push-drift methodology (i.e., quasi-radial injection followed by drift under a natural gradient). Numerical simulations of such tests are performed on 1000 multi-Gaussian 2D log-hydraulic-conductivity field realizations of varying heterogeneity, each under eight distinct mean flow directions. The ensemble PDFs of solute return times are found to exhibit power-law tails for each considered variance of the log-hydraulic-conductivity field, $\sigma^2_{\ln K}$. The tail exponent is found to relate straightforwardly to $\sigma^2_{\ln K}$ and, within the parameter space we explored, to be independent of push-phase pumping rate and pumping duration. We conjecture that individual push-drift tracer tests in wells with screened intervals much greater than the vertical correlation length of the aquifer will exhibit quasi-ergodicity and that their tail exponent may be used to infer $\sigma^2_{\ln K}$. We calibrate a predictive relationship of this sort from our Monte Carlo study and apply it to data from a push-drift test performed at a site of approximately known heterogeneity, closely matching the existing best estimate of heterogeneity.
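The calibrated relationship between tail exponent and $\sigma^2_{\ln K}$ is specific to this study; as a generic sketch of the tail-fitting step alone, the Hill estimator recovers a power-law tail exponent from a sample of return times. Here synthetic Pareto-distributed times stand in for tracer-test data (the true exponent and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def hill_tail_exponent(samples, k):
    """Hill estimator of the tail index alpha for P(X > x) ~ x^{-alpha},
    using the k largest order statistics."""
    s = np.sort(samples)[::-1]                     # descending order
    excess_logs = np.log(s[:k]) - np.log(s[k])     # log-exceedances over the k-th largest
    return 1.0 / excess_logs.mean()

alpha_true = 1.5
u = rng.uniform(size=20000)
t = (1.0 - u) ** (-1.0 / alpha_true)   # synthetic Pareto-tailed "return times"
alpha_hat = hill_tail_exponent(t, k=2000)
```

In a field application the estimated exponent, rather than being a target in itself, would be fed into a calibrated exponent-versus-$\sigma^2_{\ln K}$ relationship of the kind the study builds from its Monte Carlo ensemble.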