High-resolution population maps play a critical role in addressing the growing risks of urban disasters. This study develops a transferable, building-scale population spatialization framework for residential areas, built entirely on freely accessible open data. The framework avoids dependence on costly or sensitive fine-grained demographic datasets and overcomes the limitations of census data, which are updated infrequently and available only at coarse spatial scales. Using 10-meter SDGSAT-1 NTL data, we applied a statistical modeling approach directly at the community level within residential areas, effectively resolving the scale inconsistency that often arises when coarse-scale models are downscaled to finer resolutions. We further introduced a Building Residential Weight index that integrates building capacity, occupancy rate, and functional attributes. This index enables the population of each community to be proportionally allocated to its buildings, producing a detailed and realistic building-level population distribution. Model evaluation experiments demonstrate that the Random Forest algorithm achieved the highest modeling accuracy in this study, with an R² of 0.779, an improvement of more than 0.55 over widely used global population datasets such as WorldPop, LandScan, and GHS-Pop. The generated building-level population distribution maps provide a high-resolution spatial foundation for megacity disaster risk management, resource allocation, and urban planning.
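The allocation step described above, splitting each community's modeled population across its buildings in proportion to a Building Residential Weight, can be sketched as follows. The weight's functional form and all field names here are assumptions for illustration, not the paper's definition.

```python
# Hypothetical sketch of the proportional allocation step: each community's
# modeled population is distributed to its buildings according to a
# Building Residential Weight (BRW). All names and the BRW form are
# illustrative assumptions, not the paper's exact formulation.

def building_weight(floor_area, floors, occupancy_rate, residential_share):
    """BRW ~ capacity (floor area x floors) scaled by occupancy and the
    residential share of the building's function (assumed form)."""
    return floor_area * floors * occupancy_rate * residential_share

def allocate_population(community_pop, buildings):
    """Split a community's population across its buildings in proportion
    to their weights."""
    weights = [building_weight(**b) for b in buildings]
    total = sum(weights)
    return [community_pop * w / total for w in weights]

if __name__ == "__main__":
    buildings = [
        dict(floor_area=500, floors=6, occupancy_rate=0.9, residential_share=1.0),
        dict(floor_area=500, floors=3, occupancy_rate=0.9, residential_share=1.0),
    ]
    print(allocate_population(900, buildings))  # 2:1 split -> [600.0, 300.0]
```

The allocation is scale-free in the weights, so only the relative BRW values matter; any rescaling of the weight formula leaves the building-level populations unchanged.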
We study classical continuous automorphisms of the torus (cat maps) from the viewpoint of the Koopman theory. We find analytical formulae for Koopman modes defined coherently on the whole of the torus, and their decompositions associated with the partition of the torus into ergodic components. The spectrum of the Koopman operator is studied in four cases of cat maps: cyclic, quasi-cyclic, critical (transition from quasi-cyclic to chaotic behaviour) and chaotic. The synthetic spectrum associated with the ergodic decomposition is also studied.
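A minimal illustration of the Koopman viewpoint on a cat map: for $x \mapsto Ax \bmod 1$ with $A \in \mathrm{SL}(2,\mathbb Z)$, the Koopman operator $(Uf)(x) = f(Ax \bmod 1)$ maps the Fourier mode $e_k(x) = e^{2\pi i\, k\cdot x}$ to $e_{A^{T}k}$, i.e. it permutes the integer Fourier lattice. The specific matrix (Arnold's cat map) and conventions below are illustrative, not tied to the four cases studied in the paper.

```python
import numpy as np

# Sketch: a cat map x -> A x (mod 1), A in SL(2, Z), has Koopman operator
# (U f)(x) = f(A x mod 1). On the Fourier mode e_k(x) = exp(2*pi*i*k.x)
# one gets U e_k = e_{A^T k}, so U acts by permuting Fourier indices.

A = np.array([[2, 1], [1, 1]])            # Arnold's cat map (det = 1, chaotic)

def koopman_index_orbit(k, steps):
    """Orbit of a Fourier index under k -> A^T k."""
    orbit = [tuple(k)]
    k = np.array(k)
    for _ in range(steps):
        k = A.T @ k
        orbit.append(tuple(k))
    return orbit

def verify_fourier_relation(k, n=64):
    """Numerically check U e_k = e_{A^T k} on an n x n grid of the torus."""
    xs = np.arange(n) / n
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    pts = np.stack([X, Y], axis=-1)       # grid points as row vectors
    e = lambda k, p: np.exp(2j * np.pi * (p @ np.array(k)))
    mapped = (pts @ A.T) % 1.0            # (A x) mod 1, row-vector convention
    lhs = e(k, mapped)                    # (U e_k)(x)
    rhs = e(A.T @ np.array(k), pts)       # e_{A^T k}(x)
    return np.allclose(lhs, rhs)
```

Because the action on modes is a lattice permutation, spectral properties of $U$ (cyclic, quasi-cyclic, chaotic behaviour) can be read off from the orbit structure of $k \mapsto A^{T}k$.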
This qualitative study examines how children living in a public housing neighborhood engage in multimodal, embodied meaning-making to restory their community. Focusing on two participants and in partnership with The Kids Club, this paper explores children’s spatial reclamation through embodied and spatialized literacies, complicating stories where children assert whose stories matter and why. Drawing on nexus analysis and narrative inquiry, this study conceptualizes the body as central to cognition and comprehension through texts in action. The sisters spatially reclaim neighborhood narratives via walking tours, heart maps, and photographs that function as multimodal action texts. These practices invite a rethinking of comprehension beyond traditional textual modes, illuminating how children navigate and transform literacy landscapes. This work contributes to conversations about equity in literacy environments and calls on educators and researchers to honor children’s multimodal literacy practices as vital forms of critical comprehension, storytelling, and belonging.
Medical treatments using potent neutralizing SARS-CoV-2 antibodies have achieved remarkable improvements in clinical symptoms, changing the outlook for patients with severe COVID-19. We previously reported an antibody, NT-108, with potent neutralizing activity. However, the structural and functional basis for the neutralizing activity of NT-108 has not yet been elucidated. Here, we demonstrated the therapeutic effects of NT-108 in a hamster model and its protective effects at low doses. Furthermore, we determined the cryo-EM structure of NT-108 in complex with the SARS-CoV-2 spike protein. Constructing NT-108 as a single-chain Fv improved the cryo-EM maps by preventing the preferred orientations induced by the Fab. The footprint of NT-108 illuminates how escape mutations such as E484K evade class 2 antibody recognition without attenuating ACE2 affinity. The functional and structural basis for the potent neutralizing activity of NT-108 provides insights into the rational design of therapeutic antibodies.
We present new Galactic dust reddening maps of the high Galactic latitude sky using DESI imaging and spectroscopy. We directly measure the reddening of 2.6 million stars by comparing the observed stellar colors in $g-r$ and $r-z$ from DESI imaging with the synthetic colors derived from DESI spectra from the first two years of the survey. The reddening in the two colors is on average consistent with the Fitzpatrick (1999) extinction curve with $R_\mathrm{V}=3.1$. We find that our reddening maps differ significantly from the commonly used Schlegel et al. (1998) (SFD) reddening map (by up to 80 mmag in $E(B-V)$), and we attribute most of this difference to systematic errors in the SFD map. To validate the reddening map, we select a galaxy sample with extinction correction based on our reddening map, and this yields significantly better uniformity than the SFD extinction correction. Finally, we discuss the potential systematic errors in the DESI reddening measurements, including the photometric calibration errors that are the limiting factor on our accuracy. The $E(g-r)$ and $E(r-z)$ maps presented in this work, and for convenience their corresponding $E(B-V)$ maps with SFD calibration, are publicly available.
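The per-star measurement described above reduces to simple arithmetic: the color excess is the observed color minus the synthetic (spectrum-based, dust-free) color, and $E(B-V)$ follows from the difference of the band extinction coefficients. The coefficients below are approximate published values for the DECam $g$, $r$, $z$ bands under a Fitzpatrick (1999) curve with $R_\mathrm{V}=3.1$; treat them as placeholders, not this work's calibration.

```python
# Illustrative sketch of the reddening measurement. The extinction
# coefficients A_band / E(B-V) below are approximate values for the DECam
# g, r, z bands (Fitzpatrick 1999, R_V = 3.1), used here as placeholders.

R = {"g": 3.214, "r": 2.165, "z": 1.211}   # A_band / E(B-V), approximate

def color_excess(obs_color, synth_color):
    """E(color) = observed color - synthetic (dust-free) color."""
    return obs_color - synth_color

def ebv_from_excess(excess, band1, band2):
    """Convert a color excess, e.g. E(g-r), to E(B-V)."""
    return excess / (R[band1] - R[band2])

if __name__ == "__main__":
    e_gr = color_excess(1.25, 1.04)        # 0.21 mag of g-r reddening
    print(round(ebv_from_excess(e_gr, "g", "r"), 3))
```

Averaging such per-star excesses over sky pixels is what builds a reddening map; consistency between the $E(g-r)$- and $E(r-z)$-derived $E(B-V)$ is the test of the assumed extinction curve.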
This study aimed to delineate groundwater recharge zones using a combination of analytical hierarchy process (AHP), fuzzy-AHP (FAHP), and frequency ratio (FR) models, and to compare the effectiveness of these models in mapping groundwater recharge potential zones. To achieve these objectives, nine groundwater influencing factors were considered: geology, soil type, lineament density, elevation, slope, topographic wetness index, drainage density, land use/land cover, and rainfall. Thematic maps for all these factors were generated from satellite and conventional data in the ArcGIS environment. A weight was assigned to each thematic layer based on its significance to recharge. All thematic layers were combined using AHP model-I (weighted linear combination), AHP model-II (weighted sum), fuzzy-AHP overlay, and the FR-based model in ArcGIS. The findings revealed that 15% and 39% of the study area have high recharge potential according to AHP-based model-I and model-II, respectively. The FAHP model demarcated 43% of the area as high recharge zones, while the FR model demarcated 42%. The majority of high groundwater recharge areas were found in the central part of the study area, while the southern part was demarcated as a moderate recharge zone; the eastern and western parts were demarcated as low recharge potential zones. To validate these models, the study used receiver operating characteristic (ROC) curves, which revealed that AHP model-II had the highest accuracy (AUC = 89%), followed by the FAHP model (AUC = 88%), AHP model-I (AUC = 84%), and FR (AUC = 81%).
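The weighted linear combination (WLC) overlay used in the AHP step can be sketched as a weighted sum of reclassified score rasters. The layer names, scores, and weights below are invented for the example, not the study's values.

```python
import numpy as np

# Minimal WLC sketch of the AHP overlay step: each reclassified thematic
# raster (suitability scores, e.g. 1-5) is multiplied by its AHP weight and
# the products are summed into a recharge-potential index. Layer names,
# scores and weights are illustrative only.

def wlc_overlay(layers, weights):
    """Weighted sum of equally shaped score rasters; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    out = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, raster in layers.items():
        out += weights[name] * raster
    return out

if __name__ == "__main__":
    layers = {
        "geology":  np.array([[5.0, 3.0], [1.0, 2.0]]),
        "slope":    np.array([[4.0, 4.0], [2.0, 1.0]]),
        "rainfall": np.array([[3.0, 5.0], [3.0, 2.0]]),
    }
    weights = {"geology": 0.5, "slope": 0.3, "rainfall": 0.2}
    print(wlc_overlay(layers, weights))
```

The "weighted sum" variant differs only in whether the input scores are normalized before combination; thresholding the resulting index into classes (low/moderate/high) yields the recharge potential zones.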
Abderrazak Khediri, Ayoub Yahiaoui, Mohamed Ridda Laouar et al.
Blackout events in smart grids can have significant impacts on individuals, communities and businesses, as they can disrupt the power supply and cause damage to the grid. In this paper, a new proactive approach to an early warning system for predicting blackout events in smart grids is presented. The system is based on deep learning models: convolutional neural networks (CNN) and deep self-organizing maps (DSOM), and is designed to analyse data from various sources, such as power demand, generation, transmission, distribution and weather forecasts. The system performance is evaluated using a dataset of time windows and labels, where the labels indicate whether a blackout event occurred within a given time window. It is found that the system is able to achieve an accuracy of 98.71% and a precision of 98.65% in predicting blackout events. The results suggest that the early warning system presented in this paper is a promising tool for improving the resilience and reliability of electrical grids and for mitigating the impacts of blackout events on communities and businesses.
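The dataset construction described above, time windows with binary labels, can be sketched as a sliding-window pass over the multivariate series. The parameters (window length, prediction horizon) are illustrative, not the paper's settings.

```python
# Hypothetical sketch of the windowing/labeling setup: a multivariate time
# series (demand, generation, weather, ...) is cut into fixed-length
# windows, and each window is labeled 1 if a blackout event occurred within
# the following horizon. Window and horizon lengths are illustrative.

def make_windows(series, events, window, horizon):
    """series: list of feature vectors per time step;
    events: set of time steps at which a blackout occurred.
    Returns (windows, labels)."""
    windows, labels = [], []
    for start in range(len(series) - window - horizon + 1):
        end = start + window
        windows.append(series[start:end])
        hit = any(t in events for t in range(end, end + horizon))
        labels.append(1 if hit else 0)
    return windows, labels

if __name__ == "__main__":
    series = [[float(i)] for i in range(10)]
    ws, ys = make_windows(series, events={7}, window=3, horizon=2)
    print(ys)  # windows whose horizon covers step 7 get label 1
```

The labeled pairs then feed the CNN/DSOM classifiers; the reported accuracy and precision are computed against exactly such window-level labels.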
We propose a new generative model of projected cosmic mass density maps inferred from weak gravitational lensing observations of distant galaxies (weak lensing mass maps). We construct the model based on a neural style transfer so that it can transform Gaussian weak lensing mass maps into the deeply non-Gaussian counterparts predicted by ray-tracing lensing simulations. We develop an unpaired image-to-image translation method with Cycle-Consistent Generative Adversarial Networks (Cycle GAN), which learns an efficient mapping from an input domain to a target domain. Our model is designed to enjoy important advantages: it is trainable with no need for paired simulation data, flexible enough to make the input domain visually meaningful, and able to rapidly produce a map with a larger sky coverage than the training data without additional learning. Using 10,000 lensing simulations, we find that appropriate labeling of training data based on field variance allows the model to reproduce the correct scatter in summary statistics of weak lensing mass maps. Compared with a popular log-normal model, our model better predicts the statistical nature of three-point correlations and the local properties of rare high-density regions. We also demonstrate that our model can produce a continuous map with a sky coverage of $\sim166\, \mathrm{deg}^2$, but with non-Gaussian features similar to training data covering $\sim12\, \mathrm{deg}^2$, in a GPU minute. Hence, our model can be beneficial to the massive production of synthetic weak lensing mass maps, which is of great importance for future precise real-world analyses.
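The field-variance labeling mentioned above can be sketched as tagging each training map by the quantile bin of its pixel variance, so the generator is conditioned on a variance class. The bin count and toy maps are illustrative assumptions.

```python
import numpy as np

# Sketch of labeling training maps by field variance: each map gets an
# integer class according to which quantile bin its pixel variance falls
# into. The bin count and the synthetic maps are arbitrary choices.

def variance_labels(maps, n_bins=4):
    """Assign each map an integer label by quantile of its pixel variance."""
    variances = np.array([m.var() for m in maps])
    edges = np.quantile(variances, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(variances, edges)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = [rng.normal(0, s, size=(8, 8)) for s in (0.1, 0.5, 1.0, 2.0)]
    print(variance_labels(maps))
```

Conditioning on such labels is one way to make the model reproduce the field-to-field scatter of summary statistics rather than collapsing to an average realization.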
Morphology and color patterns hold fundamental insights into the early formation history of high-z galaxies. However, 2D reconstruction of rest-frame (RF) color maps of such systems from imaging data is a non-trivial task. This is mainly because the spectral energy distribution (SED) of high-sSFR (starburst) galaxies near and far is spatially inhomogeneous and thus the common practice of applying a spatially constant "morphological" k-correction can lead to serious observational biases. In this study we use the nearby blue compact galaxy Haro11 to illustrate how the spatial inhomogeneity of the SED impacts the morphology and color maps in the observer's frame (ObsF) visual and NIR, and potentially affects the physical characterization of distant starburst galaxies with the JWST and Euclid. Based on MUSE spectroscopy and spectral modeling, we first examine the elements shaping the spatially varying optical SED of Haro11, namely intrinsic stellar age gradients, strong nebular emission and its spatial decoupling from the ionizing stellar background, and differing extinction patterns in the stellar and nebular component both spatially and in their amount. Our simulations show, inter alia, that an optically bright yet dusty star-forming (SF) region may evade detection whereas a gas-evacuated (thus, potentially Lyman continuum photon-leaking) region with weaker SF activity can dominate the ObsF (RF UV) morphology of a high-z galaxy. We also show that ObsF color maps are affected by strong emission lines moving in and out of filter passbands depending on z, and, if taken at face value, can lead to erroneous conclusions about the nature, evolutionary status and dust content of a galaxy. A significant additional problem stems from the uncertain prominence of the 2175 Å extinction bump that translates to appreciable inherent uncertainties in RF color maps of high-z galaxies. (abridged)
Let $X$ be a closed smooth manifold, $G$ be a simple connected compact real Lie group, $M (G)$ be the group of all smooth maps from $X$ to $G$, and $M_0 (G)$ be its connected component for the $\mathcal C^\infty$-compact open topology. It is shown that maximal normal subgroups of $M_0 (G)$ are precisely the inverse images of the centre $Z(G)$ of $G$ by the evaluation homomorphisms $M_0 (G) \to G, \hskip.1cm \gamma \mapsto \gamma(a)$, for $a \in X$. This in turn is a consequence of a result on the group $\mathcal C^\infty_{n, G}$ of germs at the origin $O$ of $\mathbf R^n$ of smooth maps $\mathbf R^n \to G$: this group has a unique maximal normal subgroup, which is the inverse image of $Z(G)$ by the evaluation homomorphism $\mathcal C^\infty_{n, G} \to G, \hskip.1cm \underline{\gamma} \mapsto \underline{\gamma}(O)$. This article provides corrections for part of an earlier article [Harp--88].
Previously, we systematically constructed explicit real algebraic functions represented as compositions of smooth real algebraic maps, with canonical projections, whose images are domains bounded by hypersurfaces of degree 1 or 2. Here we give new explicit examples in which each hypersurface is the product of a connected component of a hyperbola and a copy of the $1$-dimensional affine space. As a related future direction, we also discuss the problem of obtaining the zero sets of some real polynomials explicitly from increasing sequences of real numbers. This is motivated by a problem in the theory of smooth functions first proposed by Sharko: can we construct nice smooth functions whose Reeb graphs are prescribed? The Reeb graph of a smooth function is the naturally obtained graph whose underlying space is the quotient space of the manifold by the connected components of the preimages. The author previously considered variants respecting the topologies of the preimages and obtained several results. Our work is also motivated by real algebraic geometry, pioneered by Nash. The existence of real algebraic structures on smooth manifolds and on certain general sets is known, and several approximations of smooth maps by real algebraic maps are already known. Our interest lies in explicit construction, which is difficult.
Romulus Costache, Hazem Ghassan Abdo, Arun Pratap Mishra et al.
In this work, the vulnerability to flooding in the Prahova River basin was calculated and analyzed using advanced methods and techniques. Two hybrid models, Iterative Classifier Optimizer – Multiclass Alternating Decision Tree – Certainty Factor (ICO-LADT-CF) and Fuzzy-Analytical Hierarchy Process – Certainty Factor (FAHP-CF), were generated, using as input data the values of 10 flood predictors and 158 points where historical floods occurred. In the first step, the Certainty Factor values were calculated and then used in the Fuzzy-Analytical Hierarchy Process and Multiclass Alternating Decision Tree models; the Multiclass Alternating Decision Tree model was optimized with the help of the Iterative Classifier Optimizer. In both ensemble models, the slope angle was the most important flood conditioning factor. Moreover, according to the Certainty Factor modelling, 8 classes/categories achieved the maximum value of 1. Next, the susceptibility to floods across the study area was derived: on average, about 20% of the study area has high or medium susceptibility to flash floods. Evaluating the quality of the models through the Receiver Operating Characteristic (ROC) curve yielded the following results: Success Rate for the Flood Potential Index (FPI), Area Under Curve = 0.985 (ICO-LADT-CF) and 0.967 (FAHP-CF); Prediction Rate for the FPI, Area Under Curve = 0.952 (ICO-LADT-CF) and 0.913 (FAHP-CF).
At the same time, the accuracies of the models were 0.943 (ICO-LADT-CF) and 0.931 (FAHP-CF) on the training dataset, and 0.935 (ICO-LADT-CF) and 0.926 (FAHP-CF) on the validation dataset. The main conclusion is that the two ensemble models outperform the machine learning models previously applied to the same study area.
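The Certainty Factor step described above can be sketched with the standard formulation from the susceptibility literature: for each class of a predictor, CF compares the conditional flood density in that class with the prior density over the whole study area. The paper's exact implementation may differ; the numbers below are invented.

```python
# Hedged sketch of the Certainty Factor (CF) computation, standard form
# from the flood/landslide susceptibility literature: pp_a is the flood
# density within a predictor class, pp_s the prior density over the whole
# study area. The example values are invented.

def certainty_factor(pp_a, pp_s):
    """CF in [-1, 1]; positive values indicate above-average flood density."""
    if pp_a >= pp_s:
        return (pp_a - pp_s) / (pp_a * (1.0 - pp_s))
    return (pp_a - pp_s) / (pp_s * (1.0 - pp_a))

if __name__ == "__main__":
    # class density 0.02 floods/cell vs. prior 0.01 -> strong positive CF
    print(round(certainty_factor(0.02, 0.01), 3))
```

Classes with CF close to 1 (as reported for 8 classes in this study) are those whose flood density far exceeds the basin-wide prior; the per-class CF values are then what the FAHP and decision-tree models consume as inputs.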
Aishworya Shrestha, Katarina Hoernke, Thomas Timberlake et al.
Background Young people will suffer most from climate change yet are rarely engaged in dialogue about it. Citizen science offers a method for collecting policy-relevant data, whilst promoting awareness and capacity building. We tested the feasibility and acceptability of engaging Nepalese adolescents in climate change and health-related citizen science. Methods We purposively selected 33 adolescents from two secondary schools in one remote and one relatively accessible district of Nepal. We contextualised existing apps and developed bespoke apps to survey climate hazards, waste and water management, local biodiversity, nutrition and sociodemographic information. We analysed and presented quantitative data using a descriptive analysis. We captured perceptions and learnings via focus group discussions and analysed qualitative data using thematic analysis. We shared findings with data collectors using tables, graphs, data dashboards and maps. Results Adolescents collected 1667 biodiversity observations, identified 72 climate-change related hazards, and mapped 644 geolocations. They recorded 286 weights, 248 heights and 340 dietary recalls. Adolescents enjoyed learning how to collect the data and interpret the findings and gained an appreciation of local biodiversity which engendered ‘environmental stewardship’. Data highlighted the prevalence of failing crops and landslides, revealed both under- and over-nutrition and demonstrated that children consume more junk foods than adults. Adolescents learnt about the impacts of climate change and the importance of eating a diverse diet of locally grown foods. A lack of a pre-established sampling frame, multiple records of the same observation and spurious nutrition data entries by unsupervised adolescents limited data quality and utility. Lack of internet access severely impacted feasibility, especially of apps which provide online feedback. 
Conclusions Citizen science was largely acceptable, educational and empowering for adolescents, although not always feasible without internet access. Future projects could improve data quality and integrate youth leadership training to enable climate-change advocacy with local leaders.
Mohamed Mahmoud Sebbab, Abdelhadi El Ouahidi, Mehdi Ousbih et al.
The purpose of this paper is to identify, quantify and delineate areas with suitable aggregate resources in the Precambrian massif of Ifni and the limestone plateau of Lakhssas (southwest Morocco). To fulfill this objective, the geotechnical parameters of the various geological outcrops of the region were studied through the analysis of 42 rock samples (carbonate, magmatic, detrital and volcano-detrital). Initially, we subjected these samples to a series of laboratory tests (impact resistance (LA), wear resistance (MDE), density, porosity, absorption) to classify them according to geotechnical standards. Then, a geospatial database was created to exploit these geotechnical data within a geographical information system (GIS) and produce various thematic maps. Based on the results of this study, all geotechnical classes defined by the standards (A to E for the European standard and 1A to 6D for the Moroccan standard) are present, with good to very good geomechanical properties (LA between 12% and 35%, MDE between 5% and 30%). This classification allowed us to use GIS to identify and quantify potential areas for exploitation by assigning five categories of geotechnical suitability (high (4), medium (3), low (2), very low (1) and others (0)), and to show that approximately 72% of the study area belongs to the high, medium and low categories. The combination of laboratory results and GIS yielded a geotechnical map that regional authorities and stakeholders can use for sound management of quarrying and the rational use of the national natural heritage.
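The threshold-based suitability scoring can be sketched as binning each sample into the five categories from its Los Angeles abrasion (LA) and micro-Deval (MDE) results. The thresholds below are invented for the example and do not reproduce the European or Moroccan standards.

```python
# Illustrative sketch of the suitability scoring: samples are assigned one
# of the five categories (4 = high ... 0 = others) from their LA and MDE
# test results. Thresholds are invented, not the standards' values.

def suitability(la, mde):
    """Return a 0-4 suitability score from LA and MDE values (%)."""
    worst = max(la, mde)                  # the poorer result governs
    if worst <= 15:
        return 4                          # high
    if worst <= 25:
        return 3                          # medium
    if worst <= 35:
        return 2                          # low
    if worst <= 45:
        return 1                          # very low
    return 0                              # others

if __name__ == "__main__":
    print([suitability(12, 5), suitability(30, 20), suitability(50, 10)])
```

Applying such a score per outcrop polygon and rasterizing the result is what produces the GIS suitability map from which the 72% figure is tallied.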
The pH effect on the surface and interfacial films on η-phase (MgZn2) in aqueous solutions under acidic, neutral, and alkaline conditions has been evaluated using time-of-flight secondary ion mass spectrometry (TOF-SIMS), atomic force microscopy (AFM) and scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDX). TOF-SIMS depth profiles reveal that under an acidic environment (pH 2), deep corrosion penetration occurs, with corrosion products dispersed over a considerable depth of the matrix cross-section. Under a near-neutral environment (pH 6), the corrosion film is stratified into two layers of different compositions; in a slightly alkaline environment (pH 10) the film is not distinctly differentiated, whereas in a very alkaline environment (pH 13) a compact, hydroxide-rich film develops. TOF-SIMS surface and depth profile maps were consistent with the depth profile plots. SEM and AFM images reveal that surface roughness increased with decreasing pH, from the alkaline to the acidic environments. EDX elemental composition analysis also indicated a severe drop in the zinc content of the film in the alkaline environment. Overall, enrichment of metallic zinc occurs following the initial magnesium dissolution; its stability is greatly affected by the near-surface pH of the bulk solution, giving rise to the different film structures.
Materials of engineering and construction. Mechanics of materials, Industrial electrochemistry
S. Mazdak Abulnaga, Oded Stein, Polina Golland et al.
Although shape correspondence is a central problem in geometry processing, most methods for this task apply only to two-dimensional surfaces. The neglected task of volumetric correspondence--a natural extension relevant to shapes extracted from simulation, medical imaging, and volume rendering--presents unique challenges that do not appear in the two-dimensional case. In this work, we propose a method for mapping between volumes represented as tetrahedral meshes. Our formulation minimizes a distortion energy designed to extract maps symmetrically, i.e., without dependence on the ordering of the source and target domains. We accompany our method with theoretical discussion describing the consequences of this symmetry assumption, leading us to select a symmetrized ARAP energy that favors isometric correspondences. Our final formulation optimizes for near-isometry while matching the boundary. We demonstrate our method on a diverse geometric dataset, producing low-distortion matchings that align closely to the boundary.
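The symmetrized ARAP energy mentioned above can be sketched in a hedged, schematic form that may differ from the paper's exact definition. With per-tetrahedron Jacobians $J_t$ and volumes $v_t$ of a map $f$, the standard ARAP energy and one natural symmetrization are:

```latex
E(f) \;=\; \sum_{t} v_t \,\min_{R_t \in SO(3)} \bigl\| J_t - R_t \bigr\|_F^2,
\qquad
E_{\mathrm{sym}}(f) \;=\; E(f) + E\bigl(f^{-1}\bigr).
```

Such a symmetrized energy vanishes exactly when $f$ is an isometry and is invariant under swapping source and target, which is the independence from domain ordering that the formulation requires.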
Francesco Caleca, Veronica Tofani, Samuele Segoni et al.
Landslides are a worldwide natural hazard that causes more damage and casualties than most other hazards, so social and economic losses can be reduced through landslide quantitative risk assessment (QRA). In the last two decades, many quantitative analyses at various scales have been attempted; nevertheless, the major difficulty of QRA lies in determining how precise and reliable an assessment must be to remain useful. For this reason, in this paper we analyzed several freely available datasets, together with products of previous research, to assess the soundness of the outcomes of a recent QRA of slow-moving landslides in the Arno River basin (Central Italy). The validation was carried out by comparing the abovementioned datasets with two components of the selected QRA (hazard and risk). The results showed a robust correlation between most of the testing datasets and the risk components, supporting the accuracy of the selected QRA.