Abstract Equilibrium climate sensitivity (ECS) quantifies surface warming in response to doubled pre‐industrial CO2 (2xCO2). Uncertainty in estimates arises from diverse model representations and climate‐chemistry feedbacks. We quantify ECS with five atmospheric chemistry‐composition model representations (sea salt, dust, organics, ozone, sea ice masking) with slab ocean configurations of the GFDL‐CM4.0 physical climate (4.1 ± 0.1 K) and GFDL‐ESM4.1 coupled climate‐carbon‐chemistry Earth system (3.4 ± 0.1 K) models. Estimated ECS is 0.7 K lower with GFDL‐ESM4.1, attributed to additive 0.1–0.3 K decreases as each GFDL‐ESM4.1 driver is included and a 0.4–0.6 K decrease with the addition of all five GFDL‐ESM4.1 model drivers in combination, relative to estimated ECS with GFDL‐CM4.0 neglecting these negative climate‐chemistry feedbacks. Interactive stratospheric ozone included in GFDL‐ESM4.1 contributes to the largest ECS decrease in response to 2xCO2 of all five drivers, characterized by reduced polar amplification. We demonstrate strong climate sensitivity to complex but often ignored climate‐chemistry feedbacks.
While most existing advanced large-scale point cloud semantic segmentation methods can accurately identify most large-scale objects, there is still room for improvement in the recognition accuracy of small-scale, low-proportion objects. Compared to point clouds, digital orthophoto maps (DOMs) have a more structured data format, allowing for better recognition of small-scale surface features. However, in existing projection-based methods, directly mapping images onto point clouds leads to occlusion issues, and simply concatenating image and point cloud features results in feature blurring. Based on these observations, this article proposes the DAPSS network for point cloud semantic segmentation, assisted by prior knowledge constructed from DOMs. The pretrained DOM features provide a broader receptive field as guidance for learning the local context features of point clouds. Because vertical occlusion makes ray-based mapping methods unsuitable, we propose a method that searches for the nearest mapped points in spherical space and uses them to fill in occluded points. Since directly concatenating point cloud features with image features often leads to feature blurring, we propose a plug-and-play multimodal feature adaptive fusion module, which adaptively selects and aggregates features from different modalities to further reduce redundant information. In addition, we design a cascaded multimodal feature deep fusion module to promote deep fusion between different modal features. Experiments on two large datasets demonstrate that DAPSS outperforms current mainstream methods, achieving mean Intersection-over-Union scores of 65.9% and 82.9% on the SensatUrban and SUM-Helsinki datasets, respectively. DAPSS not only effectively addresses the recognition of small-scale surface features, but also resolves the occlusion problems associated with projection-based methods.
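As a rough illustration of the occlusion-filling idea (not the DAPSS implementation), the sketch below assigns each point left without a projected DOM feature the feature of its nearest mapped neighbor; it uses a Euclidean 3D k-d tree search as a simplification of the spherical-space search described above, and all names are hypothetical.

```python
# Illustrative sketch (not the DAPSS implementation): fill points left without DOM
# features after projection by borrowing the feature of the nearest mapped neighbor.
# A Euclidean 3D k-d tree is used here as a simplification of the spherical-space
# search described in the abstract; all names are hypothetical.
import numpy as np
from scipy.spatial import cKDTree

def fill_occluded_features(xyz, dom_feat, mapped_mask):
    """xyz: (N, 3) point coordinates; dom_feat: (N, C) projected DOM features,
    undefined where mapped_mask is False; mapped_mask: (N,) boolean array."""
    tree = cKDTree(xyz[mapped_mask])                  # search only over mapped points
    _, nn = tree.query(xyz[~mapped_mask], k=1)        # index of nearest mapped neighbor
    filled = dom_feat.copy()
    filled[~mapped_mask] = dom_feat[mapped_mask][nn]  # borrow the neighbor's feature
    return filled
```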
The latest generation of cosmic-ray direct detection experiments is providing a wealth of high-precision data, stimulating a very rich and active debate in the community on their strong discovery and constraining potential for many topics, notably the nature of dark matter and the sources, acceleration, and transport of Galactic cosmic rays. However, the interpretation of these data is strongly limited by the uncertainties on nuclear and hadronic cross-sections. This contribution is one of the outcomes of the "Cross-Section for Cosmic Rays at CERN" workshop series, which built synergies between experimentalists and theoreticians from the astroparticle, particle physics, and nuclear physics communities. A few successful and illustrative examples of CERN experiments' efforts to provide missing cross-section measurements are presented. In the context of the growing cross-section needs of ongoing, as well as planned, cosmic-ray experiments, a road map for the future is highlighted, including overlapping or complementary cross-section needs from applied topics (e.g., space radiation protection and hadrontherapy).
Gabrielle M. Hobson, Dave A. May, Alice-Agnes Gabriel
Subsurface geometries are often poorly constrained, yet they exert first-order control on key geophysical processes, including subduction zone thermal structure and earthquake rupture dynamics. Quantifying model sensitivity to geometric variability remains challenging due to the manual effort of mesh generation and the computational cost of exploring high-dimensional parameter spaces in high-fidelity simulations. We present a mesh morphing approach that deforms a reference mesh into geometrically varying configurations while preserving mesh connectivity. This enables the automated generation of large ensembles of geometrically variable meshes with minimal user input. Importantly, the preserved connectivity allows for the application of data-driven, non-intrusive reduced-order models (ROMs) to perform robust sensitivity analysis and uncertainty quantification. We demonstrate mesh morphing in two geophysical applications: (i) 3D dynamic rupture simulations with fault dip angles varying across a 40° range, and (ii) 2D thermal models of subduction zones incorporating realistic slab interface curvature and depth uncertainties informed by the Slab2 geometry dataset. In both cases, morphed meshes retain high quality and lead to accurate simulation results that closely match those obtained using exactly generated meshes. For the dynamic rupture case, we further construct ROMs that efficiently predict surface displacement and velocity time series as functions of fault geometry, achieving speedups of up to $10^9 \times$ relative to full simulations. Our results show that mesh morphing can be a powerful and generalizable tool for incorporating geometric uncertainty into physics-based modeling. The method supports efficient ensemble modeling for rigorous sensitivity studies applicable across a range of problems in computational geophysics.
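A minimal sketch of the mesh-morphing idea under stated assumptions (not necessarily the authors' implementation): displacements prescribed at control points, for example along a fault or slab interface, are interpolated to all mesh nodes with radial basis functions, while the element connectivity array is reused unchanged for every morphed mesh. Function and variable names are illustrative.

```python
# Minimal sketch of mesh morphing (not the authors' code): control-point displacements
# are interpolated to all nodes with radial basis functions; the connectivity array is
# reused unchanged, so every morphed mesh shares one topology.
import numpy as np
from scipy.interpolate import RBFInterpolator

def morph_mesh(nodes, ctrl_ref, ctrl_target):
    """nodes: (N, d) reference node coordinates; ctrl_ref, ctrl_target: (M, d)
    control points on the reference and target geometries."""
    rbf = RBFInterpolator(ctrl_ref, ctrl_target - ctrl_ref, kernel="thin_plate_spline")
    return nodes + rbf(nodes)  # morphed coordinates; element connectivity is not touched
```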
The High-Altitude Water Cherenkov (HAWC) Observatory comprises 300 water Cherenkov detectors, each equipped with four photomultipliers, located on the Volcán Sierra Negra in Mexico at 4,100 m a.s.l. The observatory can detect gamma rays in an energy range from 300 GeV to 100 TeV and cosmic rays from 100 GeV to 1 PeV. One of HAWC's primary challenges is characterizing air showers and estimating their physical parameters, a highly complex task due to the nature of the data and the processes involved. Currently, HAWC employs two energy estimators for gamma rays: the ground parameter method and a neural network-based approach. However, for cosmic rays, only a likelihood-based estimator is available. In this work, we leverage machine learning techniques to achieve more accurate estimation of the physical parameters of cosmic rays. These techniques are explored as an alternative for reconstructing the physical properties of extensive air showers using simulated data aligned with the observatory's configuration. Various models were trained and evaluated through an optimized pipeline, and the most effective one was selected as the final implementation after a comprehensive comparison. This approach improves the accuracy of physical parameter estimation, contributing significantly to the detailed characterization of cosmic-ray events.
Deep learning (DL) plays an increasingly important role in Earth observation by multisource remote sensing. However, current DL-based methods do not make full use of the complementary information among multisource remote sensing data, such as hyperspectral images and light detection and ranging data, and lack consideration of multiscale, directional, and fine-grained features. To address these issues, a multiscale and multidirection feature extraction network is proposed in this article. Specifically, the multiscale spatial feature (MSSpaF) module is designed to extract multiscale spatial features, which are then fused by a feature concatenation operation. In addition, the multidirection spatial feature module is designed to further extract multidirection and frequency information, employing cross-layer connections and a multiscale feature fusion strategy to improve the fineness of the proposed network. Moreover, the spectral feature module is employed to provide detailed spectral information that enhances the expressive ability of the multiscale features. Experimental results on three different datasets demonstrate the superior classification performance of the proposed framework.
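As a hedged illustration of the kind of block described (not the authors' MSSpaF module), the sketch below applies parallel convolutions at several kernel sizes and fuses them by channel concatenation, as the abstract indicates; channel counts and kernel sizes are assumptions.

```python
# Hedged illustration of a multiscale spatial feature block (not the authors' module):
# parallel convolutions at several kernel sizes, fused by channel concatenation.
import torch
import torch.nn as nn

class MultiScaleSpatialBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)  # concat scales
        return self.fuse(multi)
```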
Abstract Tree rings are the most widely‐used proxy records for reconstructing Common Era temperatures. Tree‐ring records correlate strongly with temperature on an interannual basis, but studies have found discrepancies between tree rings and climate models on longer timescales, indicating that low‐frequency noise could be prevalent in these archives. Using a large network of temperature‐sensitive tree‐ring records, we partition timeseries variance into a common (i.e., “signal”) and non‐climatic (i.e., “noise”) component using a frequency‐resolved signal‐to‐noise ratio (SNR) analysis. We find that the availability of stored resources from prior years (i.e., biological “memory”) dampens the climate signal at high‐frequencies, and that independent noise reduces the SNR on long timescales. We also find that well‐replicated, millennial‐length records had the strongest common signal across centuries. Our work suggests that low‐frequency noise models are appropriate for use in pseudoproxy experiments, and speaks to the continued value of high‐quality data development as a top priority in dendroclimatology.
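A minimal sketch of one way such a frequency-resolved SNR can be formed for a proxy network (not necessarily the authors' estimator): the common "signal" spectrum is taken from the network mean and the "noise" spectrum from residuals about that mean, using Welch periodograms. Names and the choice of spectral estimator are assumptions.

```python
# Illustrative sketch (not the authors' code): frequency-resolved signal-to-noise ratio
# for a network of standardized, temperature-sensitive proxy series on a common grid.
import numpy as np
from scipy.signal import welch

def frequency_resolved_snr(series, fs=1.0):
    """series: (n_records, n_years) standardized tree-ring series."""
    mean = series.mean(axis=0)                      # common (signal) component
    resid = series - mean                           # record-specific (noise) component
    f, p_signal = welch(mean, fs=fs)
    _, p_noise = welch(resid, fs=fs, axis=-1)
    return f, p_signal / p_noise.mean(axis=0)       # SNR as a function of frequency
```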
Zero-shot remote sensing scene classification refers to enabling a model to identify unseen class scenes based on seen class scenes, and it has become a research hotspot in the field of remote sensing. Contemporary approaches to zero-shot remote sensing scene classification primarily focus on extracting global information from scenes, neglecting nuanced local landscape features. This oversight diminishes the discriminative capability of recognition models. Furthermore, these methods overlook the semantic relevance between seen and unseen class scenes during training, leading to reduced emphasis on learning from varied scenes and subsequent declines in classification performance. To address these challenges, this article proposes a zero-shot remote sensing scene classification method based on local-global feature fusion and weight mapping loss (LGFFWM). The design incorporates a local-global feature fusion (LGFF) module that enables adaptive labeling and feature modeling of internal local landscapes, effectively merging them with global features for a more discriminative representation of remote sensing scenes. Furthermore, a weight mapping loss (WM Loss) function is introduced, leveraging a semantic correlation matrix to compel the model to prioritize learning from seen class scenes that exhibit strong correlations with unseen class scenes by assigning them higher training weights. Extensive experiments conducted on classical remote sensing scene datasets, including UCM, AID, and NWPU, demonstrate the superiority of the proposed LGFFWM method over ten advanced comparative methods, yielding overall accuracy improvements of over 2.25%, 3.47%, and 0.44%, respectively. Additional experiments on the SIRI-WHU and RSSCN7 datasets underscore the transferability of LGFFWM, achieving overall accuracies of 53.50% and 47.37%, respectively.
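For concreteness, a hedged sketch of a weighted classification loss in the spirit of the WM Loss described above (not the authors' exact formulation): each seen class is weighted by its semantic correlation with the unseen classes, approximated here by the cosine similarity of class embeddings. All names are illustrative.

```python
# Hedged sketch of a weighted loss in the spirit of the WM Loss (not the authors'
# formulation): seen classes strongly correlated with unseen classes get larger weights.
import torch
import torch.nn.functional as F

def weight_mapping_loss(logits, targets, seen_emb, unseen_emb):
    """logits: (B, n_seen); targets: (B,) seen-class indices;
    seen_emb: (n_seen, D), unseen_emb: (n_unseen, D) class semantic embeddings."""
    sim = F.cosine_similarity(seen_emb[:, None, :], unseen_emb[None, :, :], dim=-1)
    class_w = sim.max(dim=1).values.clamp(min=1e-3)       # strongest link to any unseen class
    class_w = class_w / class_w.sum() * class_w.numel()   # normalize to mean weight of 1
    return F.cross_entropy(logits, targets, weight=class_w)
```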
Owing to its long range and strong penetration, a synthetic aperture radar (SAR) imaging system can provide high-resolution ground information under poor weather conditions. Nevertheless, speckle is a common interference that deteriorates the content of SAR images and further affects the recognition of real objects. In this article, a new speckle suppression method is proposed from the perspective of exploring nonlocal and local SAR image features. Considering the statistical distribution of SAR images, a novel local filter termed the SAR-oriented guided bilateral filter is proposed to characterize the range and spatial similarity of SAR images. Meanwhile, an optimized nonlocal filter based on the weighted Schatten-$p$ norm is introduced to characterize the nonlocal self-similarity of SAR images through a low-rank model. As a preprocessing step, it yields nonlocal filtering features that serve as the guidance image for the proposed SAR-oriented guided bilateral filter. By incorporating the nonlocal filtering features into the local filter, the structured method achieves desirable despeckling results. Extensive experiments on real SAR images demonstrate that the proposed method outperforms several state-of-the-art methods in terms of both visual quality and quantitative metrics.
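As an illustration of the guided bilateral filtering idea (not the SAR-oriented filter itself), the sketch below computes the range kernel on a guidance image, e.g., a nonlocally pre-filtered image, rather than on the noisy input; the Gaussian kernels and parameter values are assumptions.

```python
# Illustrative guided bilateral filter (not the authors' SAR-oriented filter):
# the spatial kernel is Gaussian in pixel distance, and the range kernel is
# computed on a guidance image (e.g., a nonlocal pre-filtered result).
import numpy as np

def guided_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # spatial similarity
    pad_n = np.pad(noisy, radius, mode="reflect")
    pad_g = np.pad(guide, radius, mode="reflect")
    for i in range(H):
        for j in range(W):
            patch_n = pad_n[i:i + 2*radius + 1, j:j + 2*radius + 1]
            patch_g = pad_g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(patch_g - guide[i, j])**2 / (2 * sigma_r**2))  # range term on guide
            w = spatial * rng
            out[i, j] = (w * patch_n).sum() / w.sum()
    return out
```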
Abstract The fundamental features of a rarely documented kind of stratocumulus, termed the “Millipede Cloud,” that occurred over the Eastern Pacific Ocean in 2017 were documented for the first time using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery. These clouds had long and meandering “central axes” extending from several hundred to thousands of kilometers, with a number of “radial cloud arms” extending several tens of kilometers on both sides. A total of 59 “Millipede Clouds” were identified, 4 over the Northern Hemisphere and 55 over the Southern Hemisphere. Their environmental backgrounds were analyzed using ERA5 reanalysis data and MODIS Level-2 data. The cloud top pressures of these “Millipede Clouds” were between 850 and 800 hPa, and their top heights were about 1–2 km. An air temperature inversion layer existed near the cloud tops at 800 hPa, strongly suggesting that these clouds were essentially low stratocumulus.
This paper introduces variational design methods that are novel to Geophysics, and discusses their benefits and limitations in the context of geophysical applications and more established design methods. Variational methods rely on functional approximations to probability distributions and model-data relationships. They can be used to design experiments that best resolve either all model parameters or the answer to specific questions about the system to be interrogated. The methods are tested in three schematic geophysical applications: (i) estimating a source location given arrival times at sensor locations, (ii) estimating the contrast in seismic wavefield velocity across a stratal interface given measurements of the amplitudes of seismic wavefield reflections from that interface, and (iii) designing a survey to best constrain CO2 saturation in a subsurface storage scenario. Variational methods allow the value of an experiment to be calculated and optimised simultaneously, which results in substantial savings in computational cost. In the context of designing a survey to best constrain CO2 in a subsurface storage scenario, we show that optimal designs may change substantially depending on the questions of interest. Overall, this work indicates that optimal design methods should be used more widely in Geophysics, as they are in other scientifically advanced fields.
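For orientation, a commonly used objective in variational Bayesian experimental design is the expected information gain of a design $d$ over model parameters $m$ and data $y$, together with its variational (Barber–Agakov) lower bound; the paper's specific formulation may differ:

$$\mathrm{EIG}(d) \;=\; \mathbb{E}_{p(m)\,p(y\mid m,d)}\!\big[\log p(m\mid y,d) - \log p(m)\big] \;\ge\; \mathbb{E}_{p(m)\,p(y\mid m,d)}\!\big[\log q(m\mid y,d) - \log p(m)\big],$$

where $q(m\mid y,d)$ is a variational approximation to the posterior. Maximising the bound jointly over the design $d$ and the parameters of $q$ evaluates and optimises the value of the experiment at the same time, which is the source of the computational saving noted above.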
Abstract The Colorado Plateau and its surroundings serve as an archetypal case to investigate the interaction of mantle melting processes and lithospheric structure. It has been hypothesized that widespread Cenozoic volcanism indicates the encroachment of the convective upwelling of asthenosphere toward the Plateau center. In this study, we generate a Common Conversion Point (CCP) stack of S‐to‐p (Sp) receiver functions to image the locations of lithospheric discontinuities in the southwestern United States. Our results are broadly similar to prior work, showing a strong and continuous Negative Velocity Gradient (NVG) consistent with the Lithosphere‐Asthenosphere Boundary (LAB) over much of the study area. However, with several methodological improvements, we are able to obtain more reliable NVG depth picks below the Colorado Plateau where the LAB becomes weaker, deeper, and broader. We compare the inferred topography of NVGs with the locations of volcanoes, and find that the majority of recent volcanoes are co‐located with lithosphere that is ∼80 km thick. This appears to be the critical depth at which partial melt from upwelling asthenosphere pooling at the base of (or within) the lithosphere may percolate to the surface. We compare our CCP profiles with magma equilibration conditions determined from petrologic analysis and find good agreement between the depth of NVGs and depth of magma equilibration. This analysis provides insight into the progression of magmatism and lithospheric loss toward the center of the Colorado Plateau, and demonstrates how small‐scale processes like melting influence lithosphere‐asthenosphere interactions that persist over large temporal and spatial scales.
The dynamic response of rocks to thermal, hydrodynamical, mechanical, and geochemical solicitations is of fundamental interest in several disciplines of geosciences, including geo-engineering, geophysics, rock physics, hydrology, mineralogy, and environmental and soil sciences. From crystal shape to rock microstructure or pore space and fluid distribution, parameters characterizing the rock physico-chemical properties evolve at different time and spatial scales. X-ray micro-tomography (XMT), as a non-invasive and non-destructive imaging technique, offers an unprecedented opportunity to add the fourth dimension, i.e. time, to the three-dimensional spatial visualization of rock and mineral microstructures. The technique is increasingly used to explore dynamic processes in porous and fractured rocks, thanks to synchrotron sources and laboratory XMT scanners, new generations of detectors, and increasing computational power. Image processing allows for tracking the evolution of the fluid–fluid or fluid–mineral interfaces as well as measuring incremental deformations, as rocks deform and react through time under in situ conditions of the sub-surface. Here, we review recent advances in 4D X-ray micro-tomography applied to thermo-hydro-mechano-chemical (THMC) sub-surface processes where fluids, porosity, minerals, and rock microstructures evolve together.
Abstract A new method to determine fluid flux at high pressures and temperatures has been developed and used to study serpentinites at subduction zone conditions. Drill cores of a natural antigorite‐serpentinite with a strong foliation were used in multi‐anvil experiments in the range of 2–5 GPa and 450–800°C. Fluids released upon dehydration are fixed by the formation of brucite in an adjacent fluid sink. The amount and distribution of brucite serve as a proxy for fluid flow. In our specific setup the sample reacted with the surrounding fluid sink to form an additional layer of olivine, which has the potential to limit fluid flux within our experiments. For conditions prior to serpentine dehydration we used Al(OH)3 as a fluid source. Fluid in this experiment did not migrate through the serpentinite, indicating that serpentine has a low diffusivity. The experiments also show that small deviatoric stresses influence the fluid flux and can cause anisotropic fluid flux. Comparison between the time scales of the determined fluid flux and fluid production rates indicates fluid pressure buildup during dehydration reactions. Adjacent less permeable layers can inhibit fluid flux and cause fluid pressure buildup even under conditions where an interconnected pore space has formed.
Ramprasad Yaddanapudi, Ashok Mishra, Whitney Huang
et al.
Abstract Compound wind and precipitation (CWP) extreme events can cause a significant increase in socio‐economic losses in coastal regions. This study investigated the potential impact of climate change on CWP events using Coupled Model Intercomparison Project model outputs for coastal areas impacted by tropical cyclones on a global scale. We identified global hotspots of higher dependence between extreme wind and precipitation events. Under climate change, the results show a substantial increase in precipitation extremes compared to individual wind extremes. The likelihood of CWP events under climate change increases (by about 40%–50%) in most coastal regions of the North Atlantic, East Asia, and South Asia. The results of this study can help to identify hotspot regions under climate change and further assist in minimizing the impact of future disasters in vulnerable coastal areas.
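A minimal sketch of how a change in compound wind-precipitation likelihood can be quantified (not necessarily the study's method): the empirical probability that wind and precipitation jointly exceed fixed historical percentiles is compared between a historical and a future period. The threshold choice and all names are assumptions.

```python
# Illustrative sketch (not the study's method): percent change in the empirical
# probability of joint wind/precipitation exceedances between two periods.
import numpy as np

def joint_exceedance(wind, precip, wind_thr, precip_thr):
    return np.mean((wind > wind_thr) & (precip > precip_thr))

def cwp_change(wind_hist, pr_hist, wind_fut, pr_fut, q=0.95):
    w_thr, p_thr = np.quantile(wind_hist, q), np.quantile(pr_hist, q)  # historical thresholds
    p_hist = joint_exceedance(wind_hist, pr_hist, w_thr, p_thr)
    p_fut = joint_exceedance(wind_fut, pr_fut, w_thr, p_thr)
    return (p_fut - p_hist) / p_hist * 100.0   # percent change in CWP likelihood
```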
When ground-based radar and the range-Doppler algorithm are used to image the nearside of the Moon, the problem of "north-south ambiguity" is inevitably encountered. This is because, when the range-Doppler imaging algorithm is applied to a rotating celestial body, the echoes of two points conjugate about the apparent equator are superposed and cannot be resolved in the range-Doppler image. We propose a solution to this problem based on the Sanya Incoherent Scatter Radar, namely a mosaic imaging technique for the northern and southern hemispheres of the lunar nearside. In this technique, two independent experiments were carried out to separately illuminate the northern and southern hemispheres of the nearside of the Moon by adjusting the beam direction to a specific position. A complete map of the lunar nearside was then obtained by combining the images of the northern and southern hemispheres. The experimental results show that this technique can successfully produce images of the Moon, although some defects remain to be improved.
Current data are an important input for the postprocessing of electrical prospecting data. Existing current recorders are inadequate in terms of continuous recording, precision, bandwidth, dynamic range, and input range. A new full-waveform current recorder that is well suited to measuring current signals for electrical prospecting applications is presented. The new measurement principle enables the fabrication of a high-precision current sensor with an autonomous data logger and continuous full-waveform measurement capabilities comparable to recent developments for electrical prospecting applications. The full-waveform current recorder is capable of measuring current over a bandwidth from direct current (DC) to 10 kHz, with a power spectral density noise floor of 10 µA/rt(Hz) at 10 Hz. The current recorder has a dynamic range higher than 97 dB over a range of 100 A at peak, with a time synchronisation error as low as ±0.1 µs. These features make the new current recorder a promising technology for high-precision, long-duration, autonomous data logging in field electrical prospecting applications.
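As a rough consistency check (an illustration, not a specification of the instrument), a white-noise approximation relates the quoted noise floor, bandwidth, and peak range to a dynamic-range figure in dB; the recorder's actual noise spectrum need not be flat.

```python
# Illustrative sketch: relating full-scale range and integrated RMS noise to a
# dynamic-range figure in dB, assuming a flat (white) noise floor over the band.
import math

def rms_noise(density_a_per_rthz, bandwidth_hz):
    return density_a_per_rthz * math.sqrt(bandwidth_hz)   # white-noise approximation

def dynamic_range_db(full_scale_a, rms_noise_a):
    return 20 * math.log10(full_scale_a / rms_noise_a)

# A 10 uA/rt(Hz) floor integrated over the DC-10 kHz band gives ~1 mA RMS noise,
# so a 100 A peak range corresponds to roughly 100 dB, consistent with ">97 dB".
print(dynamic_range_db(100.0, rms_noise(10e-6, 10e3)))    # ~100.0
```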
The description of the transport of cosmic rays in magnetized media is central to both the acceleration and the propagation of these particles in our Galaxy and beyond. The investigation of the process of particle acceleration, especially at shock waves, has already emphasized that non-linear effects, such as the self-generation of waves and the dynamical reaction of cosmic rays on the background plasma, are crucial to achieving a physical understanding of the origin of cosmic rays. Here we discuss how similar non-linear effects on Galactic scales may affect the propagation of cosmic rays, not only through the excitation of plasma waves important for particle scattering, but also by inducing motion of the interstellar medium in the direction opposite to the gravitational pull exerted by matter in the Galaxy, thereby launching a wind. The recent discovery of several unexpected features in cosmic ray spectra (discrepant hardening, spectral breaks in the H and He spectra, a rising positron fraction, and an unexpectedly hard antiproton spectrum) raises the question of whether at least some of these effects may be attributed to poorly understood aspects of cosmic ray transport.