Deep learning and remote sensing are critical to flood mapping, with synthetic aperture radar (SAR) offering a weather-independent data source. To improve segmentation accuracy, we designed a GSConv Block and introduced LSK Attention for low-level feature extraction, forming a lightweight network, GLNet. Shadow-induced misclassification was mitigated through feature-level fusion of SAR and terrain slope, leading to the proposed SSFNet. Supplementary samples were generated from land cover products to enlarge the training dataset. Results showed that GLNet achieved 88.57% IoU on the S1-Water dataset, outperforming SegFormer by 1.8%. SSFNet achieved 93.28% IoU, outperforming pixel-level fusion by 3.16%. After expanding the training set, SSFNet achieved R2 > 0.95 and reduced RMSE by 1.4 km2 across 256 sites, demonstrating strong generalization for Chaohu Lake. Applied to the August 2024 flood in Liaoning, it revealed a strong correlation between rainfall and inundation. This study provides support for rapid flood mapping using SAR imagery.
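To make the fusion idea concrete, here is a minimal, hypothetical sketch of feature-level fusion of SAR and terrain-slope rasters in PyTorch; the layer sizes and names are illustrative stand-ins, not the actual GSConv/LSK-based SSFNet architecture.

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Illustrative feature-level fusion of SAR and terrain-slope inputs.

    Each modality is encoded separately; the feature maps are concatenated
    and reduced before a per-pixel segmentation head (names hypothetical).
    """
    def __init__(self, feat_ch: int = 32, n_classes: int = 2):
        super().__init__()
        self.sar_enc = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.slope_enc = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)  # fuse at feature level
        self.head = nn.Conv2d(feat_ch, n_classes, 1)    # per-pixel class logits

    def forward(self, sar: torch.Tensor, slope: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.sar_enc(sar), self.slope_enc(slope)], dim=1)
        return self.head(torch.relu(self.fuse(f)))

logits = FeatureLevelFusion()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

Fusing in feature space, rather than stacking the slope raster as an extra input band, lets each modality keep its own encoder, which is the distinction the abstract draws against pixel-level fusion.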
Formulation of the problem. Snow cover strongly influences climate, relief, hydrological and soil-forming processes, and plant and animal life. It protects the soil from deep freezing and preserves winter crops, absorbs nitrogenous compounds (thus fertilizing the soil), adsorbs atmospheric dust, and cools the surface layers of the air. Snow cover and its duration are of social and economic importance and affect the environment. Snow-related hazards are a dangerous factor in the human environment, and more attention should be paid both to the impact of snow on the economy and to its great value as a natural resource. Investigating the spatial distribution and temporal variability of snow cover is an urgent and important task, especially under modern climate change.
The purpose of the article is to establish the characteristics of the spatio-temporal distribution of snow cover over the territory of Vinnytsia region for the period 1996-2018 and the features of its behaviour at the beginning of the 21st century.
Methods. Daily meteorological observations of snow-cover height at the stations of Vinnytsia region were used as the initial information. To characterize snow-cover height, its average values were calculated not for whole months but for ten-day periods (dekads) of the winter months. Dekadal heights and their frequency of occurrence by dekad were calculated for each station for the period 1996-2018 from the distribution of snow-cover heights.
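As an illustration of this kind of processing, the sketch below computes dekadal mean heights and the frequency of the 0-5 cm gradation from a daily series; the data and column names are hypothetical stand-ins, not the station records used in the article.

```python
import numpy as np
import pandas as pd

# Hypothetical daily snow-depth series for one station (cm); names assumed.
obs = pd.DataFrame({
    "date": pd.date_range("1996-11-01", "1997-03-31", freq="D"),
})
rng = np.random.default_rng(0)
obs["depth_cm"] = rng.gamma(2.0, 3.0, len(obs)).round(1)

# Assign each day to a dekad: days 1-10 -> 1, 11-20 -> 2, remainder -> 3.
obs["dekad"] = np.minimum((obs["date"].dt.day - 1) // 10 + 1, 3)
grp = obs.groupby([obs["date"].dt.to_period("M"), "dekad"])["depth_cm"]

dekadal_mean = grp.mean()  # mean dekadal snow-cover height
# Frequency of occurrence of the 0-5 cm gradation within each dekad.
freq_0_5 = grp.apply(lambda s: ((s >= 0) & (s <= 5)).mean())
print(dekadal_mean.head(), freq_0_5.head(), sep="\n")
```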
Results. The analysis of the spatio-temporal distribution of snow cover for 1996-2018 makes it possible to describe certain features of snow-cover formation over the territory of Vinnytsia region: snow cover in Vinnytsia region forms in the third dekad of October, except at the Vinnytsia and Khmilnyk stations, where it first appears in the second dekad of October; it disappears in the third dekad of April throughout the region; dekadal snow-cover heights occur most frequently in the 0-5 cm gradation; the largest average dekadal heights, 10-13 cm, were recorded in January and February at the Bilopillia, Vinnytsia and Khmilnyk stations; stable snow cover over the territory of Vinnytsia region forms in the second dekad of November, and its breakup was recorded from the second dekad of March to the first dekad of April.
Scientific novelty and practical significance. The article analyses the distribution of snow cover and the features of its formation at the end of the 20th and the beginning of the 21st centuries over the territory of Vinnytsia region. Snow cover has a significant impact on many branches of Ukraine's economy. Agriculture is the most sensitive to the features of snow-cover formation, especially with regard to the overwintering of winter crops. Among the main agrometeorological factors that determine the overwintering of winter crops are the height of the snow, its spatial and temporal variability, and the dates of appearance and disappearance of the snow cover. We believe that our investigation can be used to refine forecasts of winter-crop yields in Vinnytsia region.
Marcia Silva dos Santos, Silvana Nunes de Queiroz, Ricardo Monteiro de Carvalho
The need to create a metropolitan region arises from the conurbation of cities, as municipalities grow together with the core and become interconnected through flows of goods, services, capital, and people. The Cariri Metropolitan Region (RM Cariri), established in 2009, is the study area of this work, whose objective is to analyze the migration flows to and from the RM Cariri and their spatial distribution over the 2005/2010 interval. The sample microdata of the 2010 Demographic Census (IBGE) are the main source of information. The main results show low economic dynamism and a lack of infrastructure, work, and study opportunities in most municipalities of the RM Cariri; the CRAJUBAR cluster (Crato, Juazeiro do Norte, and Barbalha) keeps growing and attracting migrants, while the other municipalities lose population.
Human ecology. Anthropogeography, Physical geography
Liquid Argon Time Projection Chamber (LArTPC) detector technology offers a wealth of high-resolution information on particle interactions, and leveraging that information to its full potential requires sophisticated automated reconstruction techniques. This article describes NuGraph2, a Graph Neural Network (GNN) for low-level reconstruction of simulated neutrino interactions in a LArTPC detector. Simulated neutrino interactions in the MicroBooNE detector geometry are described as heterogeneous graphs, with energy depositions on each detector plane forming nodes on planar subgraphs. The network utilizes a multi-head attention message-passing mechanism to perform background filtering and semantic labelling on these graph nodes, identifying those associated with the primary physics interaction with 98.0% efficiency and labelling them according to particle type with 94.9% efficiency. The network operates directly on detector observables across multiple 2D representations, but utilizes a 3D-context-aware mechanism to encourage consistency between these representations. Model inference takes 0.12 s/event on a CPU, and 0.005 s/event batched on a GPU. This architecture is designed to be a general-purpose solution for particle reconstruction in neutrino physics, with the potential for deployment across a broad range of detector technologies, and offers a core convolution engine that can be leveraged for a variety of tasks beyond the two described in this article.
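As a toy illustration of the core mechanism, the sketch below implements a single head of attention-weighted message passing over graph nodes in NumPy; NuGraph2 itself uses multiple heads on heterogeneous planar subgraphs, so this is only the generic idea, not the network.

```python
import numpy as np

def attention_message_passing(x, edges, w_q, w_k, w_v):
    """One round of attention-weighted message passing on graph nodes.

    x: (N, d) node features; edges: list of (src, dst) pairs.
    A toy single-head version of the mechanism described above.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    out = x.copy()
    for dst in range(len(x)):
        nbrs = [s for s, d in edges if d == dst]
        if not nbrs:
            continue
        scores = np.array([q[dst] @ k[s] for s in nbrs]) / np.sqrt(q.shape[1])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                    # softmax over neighbours
        out[dst] = sum(a * v[s] for a, s in zip(alpha, nbrs))
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
w = [rng.normal(size=(8, 8)) for _ in range(3)]
print(attention_message_passing(x, [(0, 1), (2, 1), (3, 2)], *w).shape)  # (4, 8)
```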
We construct a surrogate loss to directly optimise the significance metric used in particle physics. We evaluate our loss function for a simple event classification task using a linear model and show that it produces decision boundaries that change according to the cross sections of the processes involved. We find that the models trained with the new loss have higher signal efficiency for similar values of estimated signal significance compared to ones trained with a cross-entropy loss, showing promise to improve sensitivity of particle physics searches at colliders.
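A common way to build such a surrogate (not necessarily the authors' exact construction) is to form soft event counts from sigmoid outputs and insert them into the median discovery significance Z = sqrt(2((s+b)ln(1+s/b) - s)); the hypothetical PyTorch sketch below then minimizes -Z, with per-process weights standing in for cross sections.

```python
import torch

def asimov_significance(scores, is_signal, w_sig, w_bkg):
    """Differentiable surrogate for the median discovery significance.

    Soft event counts s and b are sums of sigmoid outputs, weighted by
    per-process event weights (e.g. cross section x luminosity / N_gen).
    """
    p = torch.sigmoid(scores)
    s = (p * is_signal).sum() * w_sig
    b = (p * (1 - is_signal)).sum() * w_bkg + 1e-6  # avoid division by zero
    return torch.sqrt(2 * ((s + b) * torch.log1p(s / b) - s))

scores = torch.randn(1000, requires_grad=True)
labels = (torch.rand(1000) < 0.3).float()
loss = -asimov_significance(scores, labels, w_sig=0.1, w_bkg=1.0)  # maximize Z
loss.backward()
```

Because s and b carry the process weights, changing the assumed cross sections changes the gradient and hence the learned decision boundary, which matches the behaviour reported in the abstract.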
The surface net radiation (Rn) represents the balance of the radiative budget on the land surface and drives many physical and biological processes. An accurate and long-term product for global daily coverage of Rn at a high spatial resolution is needed for a variety of applications at regional and local scales. This study proposes two algorithms, called the downward shortwave radiation (DSR)-based algorithm and the top-of-atmosphere (TOA)-based algorithm, to estimate Rn by using Landsat data. The DSR-based algorithm consists of three conditional models and was developed based on the analysis of the relationship between Rn and shortwave radiation as well as ancillary information from ground measurements and various datasets. The TOA-based algorithm was developed by linking Rn to TOA observations from Landsat sensors and ancillary information. Both algorithms were implemented by using the random forest method. Their validation against ground measurements showed that the DSR-based algorithm outperformed the TOA-based algorithm in terms of accuracy, with a determination coefficient (R2) of 0.93, a root-mean-squared error (RMSE) of 17.58 W m−2, and a bias of −4.27 W m−2, and it was stable under various conditions. We then applied the DSR-based algorithm to generate a product of global daily Rn, called the High-resolution Global LAnd Surface Satellite (Hi-GLASS) product, from 2013 to 2018 at a spatial resolution of 30 m under clear sky, based on remotely sensed products, including the DSR from GLASS, the normalized difference vegetation index (NDVI) obtained from Landsat, surface broadband albedo from Hi-GLASS, and meteorological factors from MERRA2 reanalysis data. Following its validation using in-situ observations from 2013 to 2018, the overall accuracy of the daily Rn acquired by Hi-GLASS under clear sky was found to be satisfactory, with an R2 of 0.90 and an RMSE of 25.03 W m−2. Moreover, compared with the daily Rn obtained from the GLASS product at a spatial resolution of 5 km, that obtained by Hi-GLASS can better characterize the surface by providing more details and capturing the variations in the measurements, especially very large and very small values. However, due to limitations of the available datasets and the algorithm, the product does not yet cover cloudy skies or high-latitude areas for most regions. Moreover, the influence of topography on values of Rn has not been thoroughly considered. Nonetheless, values of Rn under clear sky obtained from Hi-GLASS offer promise for use in a wide range of areas, and efforts are underway to improve this product.
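For orientation, a minimal random forest regression of this kind might look like the following; the predictors and data are synthetic placeholders, not the GLASS/Landsat/MERRA2 inputs used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical predictors: DSR, NDVI, albedo, air temperature (toy units).
X = rng.uniform(size=(2000, 4))
# Synthetic Rn target with some noise, for illustration only.
rn = 0.7 * X[:, 0] - 0.2 * X[:, 2] + 0.1 * X[:, 1] + rng.normal(0, 0.02, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, rn, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R2 on held-out data: {model.score(X_te, y_te):.3f}")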
Zero-shot remote sensing scene classification aims to solve the scene classification problem on unseen categories and has attracted considerable research attention in the remote sensing field. Existing methods mostly use shallow networks for visual and semantic feature learning, and the semantic encoder networks are usually fixed during the zero-shot learning process, thus failing to capture powerful feature representations for classification. In this work, we introduce a vision-language model for remote sensing scene classification based on contrastive vision-language supervision. Our method is capable of learning semantic-aware visual representations using a contrastive vision-language loss in the embedding space. By pretraining on large-scale image–text datasets, our baseline method shows good transfer ability on remote sensing scenes. To enable model training in zero-shot settings, we introduce a pseudo-labeling technique that can automatically generate pseudo labels from unlabeled data. A curriculum learning strategy is developed to boost the performance of zero-shot remote sensing scene classification with multiple stages of model finetuning. We conducted experiments on four benchmark datasets and showed considerable performance improvement on both zero-shot and few-shot remote sensing scene classification. The proposed RS-CLIP method achieved zero-shot classification accuracies of 95.94%, 95.97%, 85.76%, and 87.52% on the novel classes of the UCM-21, WHU-RS19, NWPU-RESISC45, and AID-30 datasets, respectively. Our code will be released at https://github.com/lx709/RS-CLIP.
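A generic sketch of the pseudo-labeling step, assuming a CLIP-style confidence threshold (the threshold rule and value are our assumption, not the paper's exact procedure):

```python
import numpy as np

def pseudo_labels(img_emb, txt_emb, tau=0.9):
    """Assign pseudo labels to unlabeled images from CLIP-style similarities.

    img_emb: (N, d) image embeddings; txt_emb: (C, d) class-prompt embeddings.
    Keeps only predictions whose softmax confidence exceeds tau (assumed rule).
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = 100.0 * img @ txt.T                  # scaled cosine similarity
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    conf, labels = probs.max(axis=1), probs.argmax(axis=1)
    keep = np.where(conf > tau)[0]
    return labels[keep], keep

rng = np.random.default_rng(2)
labels, kept = pseudo_labels(rng.normal(size=(8, 16)), rng.normal(size=(4, 16)))
print(labels, kept)
```

In a curriculum setting, one would typically start with a high threshold and relax it across the finetuning stages so the model sees easier pseudo labels first.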
Bamboo, a fast-growing vegetation type with high carbon sequestration efficiency, is widely distributed across Asia, Central and South America, and Africa. However, mapping aboveground carbon (AGC) density (kgC m−2) in bamboo can be challenging due to the changing composition of old and new culms and the phenology of the canopy. In this study, we conducted a UAV-lidar survey over 120 ha of subalpine dwarf bamboo (Yushania niitakayamensis) vegetation in Central Taiwan. We destructively collected dwarf bamboo plants from seventy-four 1 × 1 m plots and derived 64 spatially corresponding lidar height and density distribution metrics to model dwarf bamboo AGC density. We applied five regression models (stepwise linear regression, principal component regression, partial least squares regression, elastic net, and multivariate adaptive regression splines [MARS]), of which MARS outperformed the others as judged by the model residuals. The metrics zmax (maximum of the lidar return height distribution), zq95 (95th percentile), and zq65 (65th percentile) were salient variables (p < 0.001), especially zq65, suggesting that conventional model specifications based on height percentiles near the canopy top may overlook structure near the canopy bottom, or that the result reflects insufficient point density. Finally, we used MARS to map the dwarf bamboo AGC density of the study area. We found that the spatial variation of AGC in dwarf bamboo may be related to topographic characteristics and/or microclimate. This study proposes a regression model that integrates UAV-lidar metrics for precise subalpine dwarf bamboo carbon density mapping, aiding regional spatial carbon-cycle monitoring.
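The named height metrics are simple order statistics of the return heights; a minimal sketch with synthetic return heights is:

```python
import numpy as np

def lidar_height_metrics(z):
    """Height-distribution metrics from lidar return heights z in a plot.

    Mirrors the metrics named above: zmax, zq95, zq65 (names as in the text).
    """
    return {
        "zmax": float(np.max(z)),
        "zq95": float(np.percentile(z, 95)),
        "zq65": float(np.percentile(z, 65)),
    }

rng = np.random.default_rng(3)
z = rng.gamma(shape=2.0, scale=0.4, size=5000)  # toy return heights (m)
print(lidar_height_metrics(z))
```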
Improving our understanding of how the ocean absorbs carbon dioxide is critical to climate change mitigation efforts. We, a group of early career ocean professionals working in Canada, summarize current research and identify steps forward to improve our understanding of the marine carbon sink in Canadian national and offshore waters. We have compiled an extensive collection of reported surface ocean air–sea carbon dioxide exchange values within each of Canada's three adjacent ocean basins. We review the current understanding of air–sea carbon fluxes and identify major challenges limiting our understanding in the Pacific, the Arctic, and the Atlantic Ocean. We focus on ways of reducing uncertainty to inform Canada's carbon stocktake, establish baselines for marine carbon dioxide removal projects, and support efforts to mitigate and adapt to ocean acidification. Future directions recommended by this group include investing in maturing and building capacity in the use of marine carbon sensors, improving ocean biogeochemical models fit-for-purpose in regional and ocean carbon dioxide removal applications, creating transparent and robust monitoring, verification, and reporting protocols for marine carbon dioxide removal, tailoring community-specific approaches to co-generate knowledge with First Nations, and advancing training opportunities for early career ocean professionals in marine carbon science and technology.
In real-world human-robot systems, it is essential for a robot to comprehend human objectives and respond accordingly while performing an extended series of motor actions. Although human objective alignment has recently emerged as a promising paradigm in the realm of physical human-robot interaction, its application is typically confined to generating simple motions due to inherent theoretical limitations. In this work, our goal is to develop a general formulation to learn manipulation functional modules and long-term task goals simultaneously from physical human-robot interaction. We show the feasibility of our framework in enabling robots to align their behaviors with the long-term task objectives inferred from human interactions.
Species transport models typically combine partial differential equations (PDEs) with relations from hindered transport theory to quantify electromigrative, convective, and diffusive transport through complex nanoporous systems; however, these formulations are frequently substantial simplifications of the governing dynamics, leading to the poor generalization performance of PDE-based models. Given the growing interest in deep learning methods for the physical sciences, we develop a machine learning-based approach to characterize ion transport across nanoporous membranes. Our proposed framework centers around attention-enhanced neural differential equations that incorporate electroneutrality-based inductive biases to improve generalization performance relative to conventional PDE-based methods. In addition, we study the role of the attention mechanism in illuminating physically-meaningful ion-pairing relationships across diverse mixture compositions. Further, we investigate the importance of pre-training on simulated data from PDE-based models, as well as the performance benefits from hard vs. soft inductive biases. Our results indicate that physics-informed deep learning solutions can outperform their classical PDE-based counterparts and provide promising avenues for modelling complex transport phenomena across diverse applications.
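One plausible form of an electroneutrality-based soft inductive bias (the abstract mentions both hard and soft variants; this is an assumed soft version, not necessarily the authors' exact term) is a penalty on the net charge of the predicted ion concentrations, added to the data-fitting loss:

```python
import torch

def electroneutrality_penalty(conc, charge):
    """Soft inductive bias: penalize deviation from sum_i z_i * c_i = 0.

    conc: (batch, n_ions) predicted concentrations; charge: (n_ions,) valences.
    Added to the data-fitting loss with some weight (illustrative only).
    """
    return (conc @ charge).pow(2).mean()

conc = torch.rand(32, 4, requires_grad=True)     # e.g. Na+, K+, Cl-, SO4^2-
charge = torch.tensor([1.0, 1.0, -1.0, -2.0])
loss = electroneutrality_penalty(conc, charge)
loss.backward()
print(loss.item())
```

A hard variant would instead parameterize one ion's concentration so that the charge balance holds exactly by construction.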
Fangjun Hu, Gerasimos Angelatos, Saeed A. Khan
et al.
The expressive capacity of physical systems employed for learning is limited by the unavoidable presence of noise in their extracted outputs. Though present in physical systems across both the classical and quantum regimes, the precise impact of noise on learning remains poorly understood. Focusing on supervised learning, we present a mathematical framework for evaluating the resolvable expressive capacity (REC) of general physical systems under finite sampling noise, and provide a methodology for extracting its extrema, the eigentasks. Eigentasks are a native set of functions that a given physical system can approximate with minimal error. We show that the REC of a quantum system is limited by the fundamental theory of quantum measurement, and obtain a tight upper bound for the REC of any finitely-sampled physical system. We then provide empirical evidence that extracting low-noise eigentasks can lead to improved performance for machine learning tasks such as classification, displaying robustness to overfitting. We present analyses suggesting that correlations in the measured quantum system enhance learning capacity by reducing noise in eigentasks. The applicability of these results in practice is demonstrated with experiments on superconducting quantum processors. Our findings have broad implications for quantum machine learning and sensing applications.
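Extracting eigentasks reduces, at its core, to a generalized eigenproblem that ranks readout combinations by signal-to-noise ratio; the sketch below shows that generic linear-algebra step with synthetic stand-in matrices (the exact construction of the signal and noise matrices in the paper may differ, so treat this as an assumption).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
# Synthetic stand-ins: G = Gram matrix of mean readout features (signal),
# V = covariance of their finite-sampling fluctuations (noise).
A = rng.normal(size=(10, 10))
G = A @ A.T
B = rng.normal(size=(10, 10))
V = B @ B.T + 1e-3 * np.eye(10)

# Generalized eigenproblem G r = beta^2 V r: the eigenvectors r give readout
# combinations (eigentasks) ordered by their signal-to-noise ratio beta^2.
beta2, R = eigh(G, V)
print(beta2[::-1][:3])  # top-3 signal-to-noise eigenvalues
```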
George Manolakos, Pantelis Manousselis, Danai Roumelioti
et al.
Here we present an overview of various works, to which many collaborators have contributed, concerning the fruitful interplay between noncommutativity and physics. In brief, we present the features that noncommutativity introduces in both gravity and particle physics from a matrix-model perspective, with the notion of noncommutative gauge theories playing the most central role in the whole picture. Within the framework of noncommutativity, we also examine the possibility of unifying the two sectors (gravity and particle physics) in a single configuration.
Athiwat Wattanapituksakul, Rasmi Shoocongdej, Cyler Conrad
Ban Rai Rockshelter in northwest Thailand, dating to the Terminal Pleistocene and Middle Holocene, includes evidence for hunter-gatherer exploitation of mammals, birds, reptiles, fish, and arthropods. Abundant faunal remains, identified throughout the site deposits, include macaques (<i>Macaca</i> sp.) and Sambar deer (<i>Rusa unicolor</i>), but these identifications are influenced by an assemblage largely composed of preserved tooth elements and fragmented bone. Area 3 at Ban Rai yielded the largest abundance and diversity of faunal remains recovered and identified in this study. Here, we examine the zooarchaeological assemblage from Ban Rai Rockshelter to understand long-term change in hunter-gatherer subsistence, as mediated by site preservation, during and after the Pleistocene–Holocene transition. Our results support the exploitation of arboreal taxa during the Early and Middle Holocene in northwest Thailand.
Evaluation of climate model simulations is a crucial task in climate research. Here, a new statistical framework is proposed for evaluation of simulated temperature responses to climate forcings against temperature reconstructions derived from climate proxy data for the last millennium. The framework includes two types of statistical models, each of which is based on the concept of latent (unobservable) variables: <i>confirmatory factor analysis</i> (CFA) models and <i>structural equation modelling</i> (SEM) models. Each statistical model presented is developed for use with data from a single region, which can be of any size. The ideas behind the framework arose partly from a statistical model used in many detection and attribution (D&A) studies. Focusing on climatological characteristics of <i>five specific</i> forcings of natural and anthropogenic origin, the present work theoretically motivates an extension of the statistical model used in D&A studies to CFA and SEM models, which allow, for example, for non-climatic noise in observational data without assuming the additivity of the forcing effects. The application of the ideas of CFA is exemplified in a small numerical study, whose aim was to check the assumptions typically placed on ensembles of climate model simulations when constructing mean sequences. The result of this study indicated that some ensembles for some regions may not satisfy the assumptions in question.
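For reference, the standard CFA measurement model that such frameworks build on can be written as follows (a generic form, not the paper's exact specification):

```latex
% Generic CFA measurement model: observed series x (e.g. simulated and
% reconstructed temperatures) load on latent forcing-response factors xi,
% with non-climatic noise delta absorbed in the error term.
x = \Lambda \xi + \delta, \qquad
\operatorname{Cov}(x) = \Lambda \Phi \Lambda^{\top} + \Theta
```

Here \Lambda collects the factor loadings, \Phi is the covariance of the latent factors \xi, and \Theta is the (often diagonal) covariance of the noise \delta; allowing \Theta to be nonzero is what accommodates non-climatic noise in the observational series.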
Sara Gonzalez-Rodriguez, Maria Luisa Fernandez-Marcos
Sorption of oxyanions by soils and mineral surfaces is of interest due to their role as nutrients or pollutants. Volcanic soils are variable charge soils, rich in active forms of aluminum and iron, and capable of sorbing anions. Sorption and desorption of vanadate, arsenate, and chromate by two African andosols was studied in laboratory experiments. Sorption isotherms were determined by equilibrating at 293 K soil samples with oxyanion solutions of concentrations between 0 and 100 mg L<sup>−1</sup> V, As, or Cr, equivalent to 0−2.0 mmol V L<sup>−1</sup>, 0−1.3 mmol As L<sup>−1</sup>, and 0−1.9 mmol Cr L<sup>−1</sup>, in NaNO<sub>3</sub>; V, As, or Cr were determined by ICP-mass spectrometry in the equilibrium solution. After sorption, the soil samples were equilibrated with 0.02 M NaNO<sub>3</sub> to study desorption. The isotherms were fitted to mathematical models. After desorption with NaNO<sub>3</sub>, desorption experiments were carried out with a 1 mM phosphate solution. The sorption of vanadate and arsenate was greater than 90% of the amount added, while the chromate sorption was much lower (19–97%). The sorption by the Silandic Andosol is attributed to non-crystalline Fe and Al, while in the Vitric Andosol, crystalline iron species play a relevant role. The V and Cr sorption isotherms fitted the Freundlich model, while the As sorption isotherms conformed to the Temkin model. For the highest concentrations of oxyanions in the equilibrating solution, the sorbed concentrations were 37–38 mmol V kg<sup>−1</sup>, 25 mmol As kg<sup>−1</sup>, and 7.2–8.8 mmol Cr kg<sup>−1</sup>. The desorption was low for V and As and high for Cr. The comparison of the sorption and desorption isotherms reveals a pronounced hysteresis for V in both andosols and for Cr in the Silandic Andosol. Phosphate induced almost no V desorption, moderate As desorption, and considerable Cr desorption.
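Fitting such isotherms is a standard nonlinear least-squares task; a minimal sketch with toy equilibrium data (Freundlich q = K_F c^(1/n), Temkin q = B ln(A c)) might be:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):    # q = Kf * c^(1/n)
    return kf * c ** (1.0 / n)

def temkin(c, a, b):         # q = B * ln(A * c)
    return b * np.log(a * c)

# Toy equilibrium data: solution concentration c (mmol/L), sorbed q (mmol/kg).
c = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 1.6])
q = np.array([4.0, 6.5, 12.0, 17.5, 23.0, 30.0])

(kf, n), _ = curve_fit(freundlich, c, q, p0=[20.0, 1.5])
(a, b), _ = curve_fit(temkin, c, q, p0=[50.0, 5.0])
print(f"Freundlich: Kf={kf:.2f}, n={n:.2f}; Temkin: A={a:.2f}, B={b:.2f}")
```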
The American Physical Society calls on its members to improve the diversity of physics by supporting an inclusive culture that encourages women and Black, Indigenous, and people of color to become physicists. In the current educational system, a student is unlikely to become a physicist unless they share the attitudes about what it means to learn and do physics held by most professional physicists. Evidence shows that college physics courses and degree programs do not support students in developing these attitudes; rather, physics education filters out students who do not enter college physics courses with them. To better understand the role of attitudes in the lack of diversity in physics, we investigated the intersecting relationships between racism and sexism in inequities in student attitudes about learning and doing physics using a critical quantitative framework. The analyses used hierarchical linear models to examine students' attitudes as measured by the Colorado Learning Attitudes about Science Survey. The data came from the LASSO database and included 2170 students in 46 calculus-based mechanics courses and 2503 students in 49 algebra-based mechanics courses taught at 18 institutions. Like prior studies, we found that attitudes either did not change or slightly decreased for most groups. Results identified large differences across intersecting race and gender groups, representing educational debts society owes these students. White students, particularly White men in calculus-based courses, tended to have more expert-like attitudes than any other group of students. Instruction that addresses society's educational debts can help move physics toward an inclusive culture supportive of diverse students and professionals.
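A minimal sketch of a hierarchical (mixed-effects) model of this kind, with synthetic data and students nested in courses (the variable names are hypothetical, not the study's actual model specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
# Toy data: post-course attitude score, students nested in 20 courses.
df = pd.DataFrame({
    "course": rng.integers(0, 20, n),
    "pre": rng.normal(60, 10, n),
})
df["post"] = 0.8 * df["pre"] + rng.normal(0, 5, n) + df["course"] * 0.2

# A random intercept per course captures course-level variation.
model = smf.mixedlm("post ~ pre", df, groups=df["course"]).fit()
print(model.params)
```

The study's actual models would add student-level race and gender indicators (and their intersections) as fixed effects on top of this nesting structure.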
Chinmaya Mahesh, Kristin Dona, David W. Miller
et al.
Data-intensive science is increasingly reliant on real-time processing capabilities and machine learning workflows, in order to filter and analyze the extreme volumes of data being collected. This is especially true at the energy and intensity frontiers of particle physics where bandwidths of raw data can exceed 100 Tb/s of heterogeneous, high-dimensional data sourced from hundreds of millions of individual sensors. In this paper, we introduce a new data-driven approach for designing and optimizing high-throughput data filtering and trigger systems such as those in use at physics facilities like the Large Hadron Collider (LHC). Concretely, our goal is to design a data-driven filtering system with a minimal run-time cost for determining which data event to keep, while preserving (and potentially improving upon) the distribution of the output as generated by the hand-designed trigger system. We introduce key insights from interpretable predictive modeling and cost-sensitive learning in order to account for non-local inefficiencies in the current paradigm and construct a cost-effective data filtering and trigger model that does not compromise physics coverage.
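A toy illustration of the two ingredients named here, cost-sensitive weighting and a cheap interpretable model, might look like the following (entirely synthetic, and not the paper's pipeline):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
# Toy events: features X, label y = 1 if the hand-designed trigger keeps it.
X = rng.normal(size=(5000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Cost-sensitive weighting: discarding a kept event is costlier than the
# reverse, so kept events get a larger sample weight.
weights = np.where(y == 1, 5.0, 1.0)

# A shallow tree keeps the per-event decision cost low (few comparisons).
clf = DecisionTreeClassifier(max_depth=4).fit(X, y, sample_weight=weights)
keep = clf.predict(X)
print(f"kept fraction: {keep.mean():.3f} (trigger: {y.mean():.3f})")
```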
M. A. Stelmaszczuk-Górska, E. Aguilar-Moreno, S. Casteleyn
et al.
With new Earth Observation (EO) and Geoinformation (GI) data sources increasingly becoming available, ever more new skills for data collection, processing, analysis and application are required. They are needed not only by scientists, but also by practitioners working in businesses and in public and private EO*GI and related sectors. Aligning the continuously evolving skill sets demanded by the market with existing academic and vocational training programmes is not an easy task. Training programmes should be grounded in the real needs of the sector and its labour market. To do this, it is necessary to identify the knowledge and skills needed and to map their interconnections in specific frameworks, which can later be used to define new curricula or job-oriented learning paths. This paper presents such a framework for the EO*GI sector, based on a Body of Knowledge (BoK): a comprehensive set of concepts with an underlying semantic structure that supports both academia and industry. Creating and updating the BoK is supported by an editing tool, the Living Textbook, and by experts in the EO*GI domain, who contribute to the BoK's enrichment.