Revealing the spatiotemporal evolution and interactions of ecosystem services (ESs) in mining areas is critical for sustainable environmental management. The temporal and spatial characteristics and changing trends of six ESs in the Yuzhong mining area from 2000 to 2020 were analyzed, and Pearson correlation analysis was used to elucidate the intricate tradeoffs and synergies among them. On this basis, the integrated ecosystem service landscape index (IESLI) was constructed, and eight factors (both natural and human) were selected to identify the driving forces. The findings indicated that: 1) over the past two decades, five categories of ESs exhibited a declining trend, with water yield experiencing the most significant reduction, reaching 38.7%; 2) among the 15 ES pairings, negative correlations (tradeoffs) predominated; 3) the interaction between land use/land cover and precipitation (54.5%) emerged as the primary driving force behind the spatial heterogeneity of ESs; 4) the IESLI showed a general downward trend, decreasing from 0.51 in 2005 to 0.44 in 2020. This study provides quantitative evidence of ecosystem degradation and of the intricate interrelationships among ESs in mining landscapes, highlighting the critical role of coupled spatial models in uncovering underlying patterns and mechanisms. The findings offer a scientific foundation for ecological restoration and policy-making in mining regions.
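The tradeoff/synergy classification described above can be sketched minimally: each ES pairing is labeled by the sign of its Pearson correlation over a common set of observations. The service names and toy values below are illustrative only, and significance testing (which the study presumably applies) is omitted.

```python
import numpy as np

def classify_es_interactions(es_values, names):
    """Label each pair of ecosystem services as a tradeoff (negative
    Pearson r) or synergy (positive r) over shared observations."""
    r = np.corrcoef(es_values)  # rows = services, columns = observations
    pairs = {}
    n = len(names)
    for i in range(n):
        for j in range(i + 1, n):
            kind = "tradeoff" if r[i, j] < 0 else "synergy"
            pairs[(names[i], names[j])] = (round(float(r[i, j]), 3), kind)
    return pairs

# Toy series: water yield declines while soil retention rises -> tradeoff;
# water yield and carbon storage decline together -> synergy.
water = np.array([5.0, 4.6, 4.1, 3.5, 3.1])
soil = np.array([2.0, 2.3, 2.6, 2.8, 3.2])
carbon = np.array([7.0, 6.8, 6.5, 6.1, 5.9])
result = classify_es_interactions(np.vstack([water, soil, carbon]),
                                  ["water", "soil", "carbon"])
```

With six services, the same loop yields the study's 15 pairings.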
Abstract The formation, storage, and evolution of granitic magmas are fundamental processes driving the growth of continental crust. While traditionally attributed to crystal fractionation in high‐melt fraction magma chambers, the model invoking low‐melt fraction crystal mushes has gained wide acceptance. However, the chemical and textural impacts of crystal mush rejuvenation remain elusive and the precise petrological record is relatively poorly studied. The rapakivi K‐feldspar identified in the early Eocene monzogranitic porphyry of the Caina intrusive complex, Gangdese batholith, is an ideal candidate for investigating these issues, as feldspar can record clues to magmatic processes. Field survey, optical and mineral flake scanning observations, X‐ray fluorescence analysis, in situ Sr and mineral Sm‐Nd isotopic analyses, TESCAN integrated mineral analysis, electron probe microanalysis, and three‐dimensional crystal shape modeling were performed on the collected samples. K‐feldspars can be divided into three types based on chemical zonation: normal, reverse, and oscillatory zoning crystals. Varying isotopic signatures between the K‐feldspar and associated mantle suggest that the rapakivi texture originated in heterogeneous magmatic pulse recharge. Crystal shape modeling of the plagioclase chadacryst, mantle, and matrix plagioclase, combined with compositions, indicates that mantle plagioclase originated from the quenching of recharge magmas. We propose a model for the formation of rapakivi K‐feldspar and the rejuvenation of crystal mush. Repeated hot magma pulses recharged the mush, triggering magma convection and thermal perturbations. This process enabled the prolonged growth of K‐feldspar megacrysts, which were subsequently capped by plagioclase, resulting in the formation of the rapakivi texture.
Foundation models, as a mainstream technology in artificial intelligence, have demonstrated immense potential across various domains in recent years, particularly in handling complex tasks and multimodal data. In the field of geophysics, although the application of foundation models is gradually expanding, there is currently a lack of comprehensive reviews discussing the full workflow of integrating foundation models with geophysical data. To address this gap, this paper presents a complete framework that systematically explores the entire process of developing foundation models in conjunction with geophysical data. From data collection and preprocessing to model architecture selection, pre-training strategies, and model deployment, we provide a detailed analysis of the key techniques and methodologies at each stage. In particular, considering the diversity, complexity, and physical consistency constraints of geophysical data, we discuss targeted solutions to address these challenges. Furthermore, we discuss how to leverage the transfer learning capabilities of foundation models to reduce reliance on labeled data, enhance computational efficiency, and incorporate physical constraints into model training, thereby improving physical consistency and interpretability. Through a comprehensive summary and analysis of the current technological landscape, this paper not only fills the gap in the geophysics domain regarding a full-process review of foundation models but also offers valuable practical guidance for their application in geophysical data analysis, driving innovation and advancement in the field.
Snow depth monitoring is crucial for hydrology, climate research, and avalanche prediction. While traditional global navigation satellite system (GNSS) reflectometry methods offer cost-effective snow depth retrieval, they suffer from poor accuracy and robustness, especially in complex terrain and extreme weather. This study proposes an innovative snow depth retrieval technique employing a time-series recurrent neural network with bidirectional gated recurrent units (Bi-GRUs). Unlike traditional methods using signal-to-noise ratio (SNR) features, our algorithm uses the detrended SNR as the Bi-GRU input, aiming to enhance accuracy, particularly at low snow depths and in complex terrain. SNR observations from GPS L1 carriers at stations P351 and AB33 were analyzed. The Bi-GRU algorithm demonstrated high consistency with true snow depths at station P351 (coefficient of determination: 0.9766), with a root-mean-square error (RMSE) of 9.1559 cm and a mean absolute error (MAE) of 6.4185 cm. Compared to traditional methods, the Bi-GRU model improved the RMSE by 30.9% and the MAE by 44.5%. At station AB33, where snow depth variations were significant, accuracy improvements of 65.6% (RMSE: 7.4905 cm) and 63.2% (MAE: 5.6074 cm) were observed. In addition, the Bi-GRU model exhibited greater robustness than a long short-term memory (LSTM) model. These findings highlight the efficacy of the Bi-GRU-based approach, suggesting its superiority and broader applicability.
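The detrending step this abstract refers to is, in standard GNSS interferometric reflectometry practice, a low-order polynomial fit that removes the slowly varying direct-signal power and leaves the multipath oscillation carrying reflector-height information. A minimal sketch on a synthetic SNR arc (polynomial order and the toy signal are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def detrend_snr(sin_elev, snr_db, order=2):
    """Remove the slow direct-signal trend from an SNR arc with a
    low-order polynomial fit, keeping the multipath oscillation."""
    coeffs = np.polyfit(sin_elev, snr_db, order)
    trend = np.polyval(coeffs, sin_elev)
    return snr_db - trend

# Synthetic arc: linear trend plus a multipath-like oscillation (toy numbers)
x = np.linspace(0.1, 0.5, 200)                       # sine of elevation angle
snr = 30.0 + 20.0 * x + 2.0 * np.sin(2 * np.pi * 40 * x)  # dB-Hz
residual = detrend_snr(x, snr)
```

The residual series (near zero mean, oscillation preserved) is the kind of input the Bi-GRU consumes.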
Song Yang, Melinda Surratt, Timothy R. Whitcomb
et al.
Abstract This study compares rainfall from NASA Integrated Multi‐satellitE Retrievals for Global Precipitation Measurement (IMERG V06) and JAXA Global Satellite Mapping of Precipitation (GSMaP V4) for tropical cyclone (TC) applications against satellite microwave‐derived Goddard Profiling Algorithm (GPROF V05) precipitation data retrieved from 2000 to 2012. From a global data set of storms, all three products show consistent patterns in 1‐dimensional azimuthal averages and in 2‐dimensional rainfall distributions (where spatial correlation values are near 1.0). However, both IMERG and GSMaP overestimate precipitation amounts against GPROF, and IMERG overestimations are much higher than GSMaP within 125 km of the storm center. Based on this analysis, IMERG and GSMaP rainfall could be used to analyze TC precipitation patterns at high spatiotemporal resolutions. However, caution is required if high accuracy TC precipitation amplitude is required, particularly for IMERG. This study highlights opportunities to improve future versions of IMERG and GSMaP retrieval processing to reduce the discrepancies with GPROF.
Abstract Over‐estimation of summer precipitation over the Tibetan Plateau (TP) is a well‐known and persistent problem in most climate models. This study demonstrates the impact of a Gaussian Probability Density Function cloud fraction scheme on rainfall simulations using the Weather Research and Forecasting model. It is found that this scheme in both 0.1° and 0.05° resolutions significantly reduces the wet bias through both local feedbacks and large‐scale dynamic process. Specifically, increased cloud water/ice content with this scheme reduces surface shortwave radiation, and consequently surface heat fluxes and evapotranspiration. This, in turn, dampens the large‐scale thermal effect of the TP and weakens the exaggerated monsoon circulation and low‐level moisture convergence. It is this large‐scale dynamic process that contributes the most (∼70%) to the wet bias reduction. Although this paper presents a modeling study, it highlights the cloud radiative feedback to the large‐scale dynamics and precipitation over the TP.
Recently, large models, or foundation models, have exhibited remarkable performance, profoundly impacting research paradigms in diverse domains. Foundation models, trained on extensive and diverse datasets, provide exceptional generalization abilities, allowing for their straightforward application across various use cases and domains. Exploration geophysics is the study of the Earth's subsurface to find natural resources and help with environmental and engineering projects. It uses methods like analyzing seismic, magnetic, and electromagnetic data, which presents unique challenges and opportunities for the development of geophysical foundation models (GeoFMs). This perspective explores the potential applications and future research directions of GeoFMs in exploration geophysics. We also review the development of foundation models, including large language models, large vision models, and large multimodal models, as well as their advancement in the field of geophysics. Furthermore, we discuss the hierarchy of GeoFMs for exploration geophysics and the critical techniques employed, providing a foundational research workflow for their development. Lastly, we summarize the challenges faced in developing GeoFMs, along with future trends and their potential impact on the field. In conclusion, this perspective provides a comprehensive overview of the development, hierarchy, applications, development workflow, and challenges of foundation models, highlighting their transformative potential in advancing exploration geophysics.
The second academic forum of the Committee on the Earthquake Hazard Chain, Seismological Society of China, was held on 12 November 2022 in Beijing, China. The theme of this forum was theoretical research, technical application, and popularization of science related to the earthquake hazard chain. It included an opening ceremony and online lecture presentations. Work related to disaster prevention, mitigation, and relief for the earthquake hazard chain has attracted wide attention. There were 49 speeches or lectures at the conference, covering multiple stages and aspects of the earthquake hazard chain, such as formation mechanisms, database establishment, identification methods, risk assessment, monitoring and early warning, post-disaster rescue and reconstruction, and hazard prevention measures. This activity especially promoted communication on the first few aspects. In the future, more attention should be paid to research on monitoring and early warning, emergency response and rescue, post-disaster reconstruction, and other related science and technology of the earthquake hazard chain.
Li-Xin Guo, Meng-Long Hsieh, Olga Gorodetskaya
et al.
Abstract The Yellow River Plain (YRP), regarded as the cradle of Chinese civilization, is traditionally thought to be the locale of the Great Flood, a hazardous flood (or floods) tamed by Yu, who started China’s first “dynasty”, Xia, in ~2000 BC. However, by integrating published archaeological data, we propose that the Great Flood in fact impacted the Jianghan Plain (JHP) along the middle course of the Yangtze River. The arguments include: (1) around the era of the Great Flood, the most civilized and populated society in East Asia, named the Jianghan society, was located around the JHP (at that time, habitation on the YRP remained limited); (2) the Jianghan society lived on river resources (shipping and rice growing) and was thus subject to flood risks, unlike the people inhabiting the YRP; (3) the people of the Jianghan society were experienced in dredging moats/ditches for shipping and irrigation; (4) unlike the floods on the YRP, which were characterized by dynamic sedimentation and channel avulsion, those on the JHP typically occurred with slow-moving water manageable by ancient people; (5) the JHP has been associated with lake/wetland systems serving as detention basins during floods. Here, the recorded method for controlling the Great Flood, dredging channels to divert flood water to a “sea”, was feasible. Known speleothem paleo-rainfall data from multiple sites show that the climate of the JHP had been wet since the middle Holocene (earlier than the era of the Great Flood) and turned significantly dry after ~1850 BC (~150 years after the Great Flood). Thus, the uniqueness of the Great Flood likely reflects an increase in land use on the JHP with the expansion of the Jianghan society, and the success in taming this flood was mainly due to the efforts of that society, not to luck.
Research on fault interaction and earthquake triggering, a hot issue in the field of source physics, can facilitate understanding of the underlying mechanisms of strong earthquakes and also has good application prospects in earthquake risk analysis and prediction research. Previous review articles provided detailed explanations from the perspectives of basic principles, methods, and applicability, as well as multiple earthquake case studies of stress triggering. However, their introduction to earthquake triggering from the perspective of seismicity analysis is not exhaustive, and the combination and complementarity of these two perspectives are not treated in detail. This paper summarizes the achievements and progress of research on fault interaction and earthquake triggering mechanisms over the past few decades from the perspectives of physical and statistical models, and reviews the current challenges and possible future directions. From the perspective of the physical model, three important mechanisms of fault interaction are analyzed: static stress triggering, dynamic stress triggering, and viscoelastic stress triggering, along with the basic principles and methods of their calculation. On the statistical-model side, the basic principles and methods of seismicity analysis are introduced, and applications of the epidemic-type aftershock sequence (ETAS) model and the b-value to fault interaction and earthquake triggering are analyzed. From the perspective of combining the two models, their mutual verification and the basic principle of the rate-and-state friction law are introduced. The analysis points out that the stress interaction between multiple faults or earthquakes can be comprehensively studied through the two different approaches of Coulomb stress calculation and the ETAS model, and that cross-validation can increase the reliability of the results. Retrospective application of the rate-and-state friction law can provide a new perspective for understanding earthquake triggering relationships and fault interaction.
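The static stress triggering mechanism reviewed above is conventionally quantified through the Coulomb failure stress change on a receiver fault (a standard formulation in the stress-triggering literature, not specific to this review):

```latex
\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu' \,\Delta\sigma_{n}
```

where $\Delta\tau$ is the shear stress change resolved in the slip direction, $\Delta\sigma_{n}$ is the normal stress change (positive for unclamping), and $\mu'$ is the effective friction coefficient (commonly taken near 0.4). A positive $\Delta\mathrm{CFS}$ brings the receiver fault closer to failure, which is the basis for the Coulomb-stress side of the cross-validation described above.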
Vasilis Belis, Patrick Odagiu, Thea Klæboe Årrestad
The detection of out-of-distribution data points is a common task in particle physics. It is used for monitoring complex particle detectors or for identifying rare and unexpected events that may be indicative of new phenomena or physics beyond the Standard Model. Recent advances in Machine Learning for anomaly detection have encouraged the utilization of such techniques on particle physics problems. This review article provides an overview of the state-of-the-art techniques for anomaly detection in particle physics using machine learning. We discuss the challenges associated with anomaly detection in large and complex data sets, such as those produced by high-energy particle colliders, and highlight some of the successful applications of anomaly detection in particle physics experiments.
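A common baseline among the families of techniques such a review covers is to learn a compressed representation of background ("normal") events and flag events that reconstruct poorly. A minimal sketch with a linear (PCA) autoencoder on toy data; the feature dimensions and data here are purely illustrative and not tied to any specific experiment:

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-component PCA on background events (mean + top-k directions)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def anomaly_score(X, mu, components):
    """Reconstruction error: events far from the learned subspace score high."""
    Z = (X - mu) @ components.T
    X_hat = Z @ components + mu
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
# Background lies near a 2-D plane embedded in a 5-D feature space
background = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 5))
background += 0.05 * rng.normal(size=(1000, 5))
mu, comps = fit_pca(background, k=2)

# "Signal" events scatter far off that plane
signal = 3.0 * rng.normal(size=(10, 5))
bg_scores = anomaly_score(background, mu, comps)
sig_scores = anomaly_score(signal, mu, comps)
```

Nonlinear autoencoders used in practice follow the same train-on-background, score-by-reconstruction-error logic.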
This work presents an analysis of the ionospheric responses to the solar eclipse that occurred on 14 December 2020 over the Brazilian sector. This event partially covered the south of Brazil, providing an excellent opportunity to study modifications in the peculiarities of this sector, such as the equatorial ionization anomaly (EIA). Therefore, we used the Digisonde data available in this period for two sites, Campo Grande (CG; 20.47° S, 54.60° W; dip ∼23° S) and Cachoeira Paulista (CXP; 22.70° S, 45.01° W; dip ∼35° S), assessing the behaviors of the E and F regions and the Es layer. Additionally, a numerical model (MIRE, the Portuguese acronym for E Region Ionospheric Model) is used to analyze the modification of the E layer dynamics around these times. The results show the disappearance of the F1 region and an apparent electron density reduction in the E region during the solar eclipse. We also analyzed total electron content (TEC) maps from the Global Navigation Satellite System (GNSS), which indicate a weakening of the EIA. On the other hand, we observe a rise in the Es layer electron density, which is related to gravity waves strengthened during solar eclipse events. Finally, our results lead to a better understanding of the restructuring mechanisms in the low-latitude ionosphere during solar eclipse events, even though the eclipse only partially reached the studied regions.
Studying the structures, properties, and origins of the Earth's internal discontinuities is an important part of the effort to understand the physical and chemical properties of the layered Earth, as well as to explore the dynamic processes and driving mechanisms of plate tectonics and the whole Earth system. Receiver function imaging is a well-known and widely adopted seismological method for extracting structural information on the Earth's internal discontinuities, and it has become an indispensable tool for investigating the layering in structure and composition, as well as the thermal states and deformation behaviors, of the crust and upper mantle, the lithosphere-asthenosphere system, the mantle transition zone, and even the shallow part of the lower mantle. Since the receiver function method was proposed about half a century ago, great progress has been made in both methodology and application, targeting subsurface structures of various spatial scales in one to three dimensions. In particular, with more and more seismic arrays deployed at global and regional scales, and with the continuous advancement of computing power and imaging theory during the last two decades, receiver function imaging has become ever more powerful in constraining subsurface structures. In this paper, we first briefly review the development history of the receiver function method.
After introducing the basic principles involved, we then outline the major progress made during the last two decades in both the methodology and application of this method, including but not limited to receiver function construction and forward modeling, receiver function analysis for complex media or detailed discontinuity structures (e.g., anisotropy, dipping structures, irregular topography, sharpness of discontinuities), ray- and wave-equation-based receiver function migration for imaging crustal and upper mantle discontinuities, and velocity inversion of receiver functions as well as its combination with other types of data. We focus mainly on the following three aspects: deconvolution techniques to construct receiver functions, imaging of discontinuity structures, and inversion of velocity structures using receiver functions, with specific emphasis on recent advances, challenges, and possible solutions. In light of the emerging and future trends in seismology, we finally discuss the directions of receiver function studies from the viewpoints of both methodology and application.
Mathias Louboutin, Philipp A. Witte, Ali Siahkoohi
et al.
We present the SLIM (https://github.com/slimgroup) open-source software framework for computational geophysics and, more generally, for inverse problems based on the wave equation (e.g., medical ultrasound). We developed a software environment aimed at scalable research and development by designing multiple layers of abstraction. This environment allows researchers to formulate their problem in an abstract fashion while still exploiting the latest developments in high-performance computing. We illustrate and demonstrate the benefits of our software design on many geophysical applications, including seismic inversion and physics-informed machine learning for geophysics (e.g., loop-unrolled imaging, uncertainty quantification), all while facilitating the integration of external software.
This paper presents a coherent exposition of the modern statistical theory of the transport of fast charged particles (cosmic rays) in the solar wind. Observations are discussed only as they illustrate the phenomena under discussion. A brief introductory section surveys the historical development of the theory. The dominant effect on the motion of cosmic rays in the solar wind is the interplanetary magnetic field, which is irregular and which is therefore best treated statistically, using random functions. The magnetic irregularities scatter the cosmic rays in pitch angle, so that to a good approximation the cosmic rays diffuse through the irregular magnetic field. Using a statistical analysis of the equations of motion, one may relate the diffusion tensor to the power spectrum of the magnetic field, which is in principle measurable. The resulting general transport theory relates the motion of cosmic rays, statistically, to the solar‐wind velocity and magnetic field. Application of the theory both to the modulation of galactic cosmic rays by the solar wind and to the propagation of solar cosmic rays is discussed in detail. It is concluded that the present theory explains the principal phenomena quite well. Future theoretical work will probably be devoted to obtaining better solutions of the equations, to obtaining better values of the parameters, and to studying higher‐order or more subtle effects.
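The central quantitative link described above, relating the diffusion tensor to the measurable power spectrum of the interplanetary magnetic field, takes the following form in standard quasilinear theory (notation assumed here: $\Omega$ the particle gyrofrequency, $v$ its speed, $\mu$ the pitch-angle cosine, $B_{0}$ the mean field, and $P(k)$ the power spectrum of the magnetic irregularities):

```latex
D_{\mu\mu} \simeq \frac{\pi}{4}\,\Omega\,(1-\mu^{2})\,
\frac{k_{\mathrm{res}}\,P(k_{\mathrm{res}})}{B_{0}^{2}},
\qquad k_{\mathrm{res}} = \frac{\Omega}{v\,|\mu|},
\qquad
\kappa_{\parallel} = \frac{v^{2}}{8}\int_{-1}^{1}
\frac{(1-\mu^{2})^{2}}{D_{\mu\mu}}\,d\mu .
```

Particles resonantly scatter off irregularities whose wavenumber matches $k_{\mathrm{res}}$, and integrating the pitch-angle diffusion coefficient $D_{\mu\mu}$ yields the parallel spatial diffusion coefficient $\kappa_{\parallel}$ that enters the transport equation; this is the sense in which the diffusion tensor is "in principle measurable" from the field's power spectrum.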
Abstract The Arase satellite observed clear dipolarization signatures at r~4.3–4.6 RE, GMLAT~16°–18°, and MLT~5.5–5.7 hr around 15:00 UT on 27 March 2017 when Dst~−70 nT. Strong magnetic field fluctuations were embedded and their characteristic frequency was close to the local gyrofrequency of O+ ions. After the dipolarization, O+ flux was enhanced at ≤15 keV, while H+ flux showed no clear variations. These observations provide evidence for the direct supply of O+ ions from the ionosphere. There were no clear signatures for the nonadiabatic local acceleration of O+ ions. We consider that a bump‐on‐tail structure in the energy spectrum around 30–50 keV due to a combination of charge exchange loss and drift motion of ions masks the nonadiabatic acceleration. Occurrence of the magnetic field dipolarization at dawn, which is far from the well‐known premidnight occurrence peak, may be due to an eastward skewing of partial ring current during the storm main phase.
The purpose of this study is to estimate maximum ground motions in southern Taiwan and to assess potential human fatalities from scenario earthquakes on the Chishan active fault in this area. The resultant ShakeMap patterns of maximum ground motion for an Mw 7.2 scenario show that the areas with PGA above 400 gal are located in the northeastern, central, and northern parts of southwestern Kaohsiung, as well as the southern part of central Tainan, as shown inside the yellow lines in the corresponding figure. Comparing cities in Tainan, Kaohsiung, and Pingtung located at similar distances from the Chishan fault, the cities in the Tainan area have relatively greater PGA and PGV, due to large site response factors there. Furthermore, seismic hazards in terms of PGA and PGV in the vicinity of the Chishan fault are not completely dominated by the fault itself, mainly because some areas near the fault have low site response amplification values of 0.55–1.1 for PGA and 0.67–1.22 for PGV. Finally, the estimation of potential human fatalities from scenario earthquakes on the Chishan active fault shows that potential fatalities increase rapidly among people above age 45, with total fatalities peaking in the 55–64 age group. Another point deserving special attention is that Kaohsiung City has more than 540 thousand households whose residences are over 50 years old. In light of the results of this study, I urge both the municipal and central governments to take effective seismic hazard mitigation measures in the highly urbanized areas of southern Taiwan with large numbers of old buildings.
Existing single-phase induction motors exhibit low starting torque. Moreover, during acceleration and at steady state, they produce a significant level of torque pulsations, which give rise to noise and vibration in the machine. As part of efforts to mitigate these problems, a performance improvement strategy using a PWM inverter to drive the existing motor is implemented in the MATLAB/Simulink environment in this work. The drive supplies variable voltage and phase to the auxiliary winding with the aid of a pulse width modulation (PWM) technique and a PID controller. Simulation results show the starting torque of the motor increased by 75% under the developed drive scheme. In addition, torque pulsations were reduced from 1.4 Nm peak-to-peak to 0.14 Nm peak-to-peak at steady state. The accelerating time was also reduced by 30% compared to that under line operation. The strategy eliminates the need for series-connected capacitors, thereby potentially enhancing the reliability of the motor.
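The PWM technique named above can be illustrated with the textbook sinusoidal PWM scheme: the gate signal is high whenever a sinusoidal reference (whose amplitude and phase the drive varies for the auxiliary winding) exceeds a triangular carrier. This is a generic sketch in Python rather than the paper's Simulink model, with hypothetical frequencies and modulation index, and with the PID loop omitted:

```python
import numpy as np

def spwm_gate(t, f_ref=50.0, f_carrier=2000.0, m=0.8, phase=0.0):
    """Sinusoidal PWM: gate high where the sinusoidal reference
    (modulation index m, phase-shiftable for the auxiliary winding)
    exceeds a unit triangular carrier."""
    ref = m * np.sin(2 * np.pi * f_ref * t + phase)
    # Triangular carrier sweeping [-1, 1] at f_carrier
    carrier = 2.0 * np.abs(2.0 * ((t * f_carrier) % 1.0) - 1.0) - 1.0
    return (ref > carrier).astype(float)

t = np.linspace(0.0, 0.02, 20000, endpoint=False)  # one 50 Hz cycle
gate = spwm_gate(t, m=0.8)
duty = gate.mean()  # averages to ~0.5 over a full reference cycle
```

Varying `m` scales the fundamental voltage applied to the winding, while `phase` shifts it relative to the main winding, which is the degree of freedom the drive exploits.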