Results for "Cartography"
Showing 20 of ~102635 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Julio C. Serrano, Joonas Kevari, Rumy Narayan
Systematic literature reviews in the social sciences overwhelmingly follow arborescent logics -- hierarchical keyword filtering, linear screening, and taxonomic classification -- that suppress the lateral connections, ruptures, and emergent patterns characteristic of complex research landscapes. This research note presents the Rhizomatic Research Agent (V3), a multi-agent computational pipeline grounded in Deleuzian process-relational ontology, designed to conduct non-linear literature analysis through 12 specialized agents operating across a seven-phase architecture. The system was developed in response to the methodological groundwork established by Narayan (2023), who employed rhizomatic inquiry in her doctoral research on sustainable energy transitions but relied on manual, researcher-driven exploration. The Rhizomatic Research Agent operationalizes the six principles of the rhizome -- connection, heterogeneity, multiplicity, asignifying rupture, cartography, and decalcomania -- into an automated pipeline integrating large language model (LLM) orchestration, dual-source corpus ingestion from OpenAlex and arXiv, SciBERT semantic topography, and dynamic rupture detection protocols. Preliminary deployment demonstrates the system's capacity to surface cross-disciplinary convergences and structural research gaps that conventional review methods systematically overlook. The pipeline is open-source and extensible to any phenomenon zone where non-linear knowledge mapping is required.
Zhe Zhou, Hao Wu, Zhenyu Zhang et al.
Saddle-point extraction is essential for accurately identifying topographic features and landforms and for conducting geomorphological mapping. However, the widely used positive–negative terrain method (PNTM), like many extraction techniques, is often plagued by a substantial number of false saddle points. To address this challenge, this study presents a novel model, PNTM-CNN, that combines the PNTM with a convolutional neural network (CNN). In this approach, candidate saddle points are first identified using the PNTM and then refined using a CNN that integrates multiscale topographic features. The experimental results indicate that the PNTM-CNN model, which leverages four scales of features (elevation, aspect, curvature, slope, and hillshade), effectively reduces the occurrence of false saddle points, achieving a precision of 89%, a recall of 83%, and an F1 score of 85%. This performance significantly exceeds that of the traditional moving-window analysis and topological-association methods. Although the automation level of the PNTM-CNN model requires improvement, the integration of deep learning methods offers new insights for addressing complex topographic feature extraction challenges and shows promising application potential.
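The candidate-identification stage above can be illustrated with a toy check that is not the paper's PNTM: a common heuristic flags a DEM cell as a saddle candidate when, walking its 8-neighbour ring in circular order, the elevations cross the centre value at least four times (two "up" sectors and two "down" sectors). The function name and the miniature DEM are illustrative assumptions.

```python
def is_candidate_saddle(dem, r, c):
    """Return True if dem[r][c] looks like a saddle on its 3x3 window."""
    centre = dem[r][c]
    # 8-neighbour offsets in clockwise circular order
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    signs = []
    for dr, dc in ring:
        diff = dem[r + dr][c + dc] - centre
        if diff != 0:                      # ignore flat ties
            signs.append(1 if diff > 0 else -1)
    if len(signs) < 4:
        return False
    # count sign changes around the closed ring
    changes = sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)
    return changes >= 4

# Toy DEM: high ridges to the N/S, low valleys to the E/W of the centre cell.
dem = [
    [5, 9, 5],
    [2, 4, 2],
    [5, 9, 5],
]
print(is_candidate_saddle(dem, 1, 1))  # → True
```

In the paper's pipeline such ring-based candidates would then be filtered by the CNN; a plain peak or pit yields zero sign changes and is rejected outright.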
Han Hu, Ying Jiang, Zeyuan Dai et al.
Tunnel mapping systems are essential for tunnel inspection, integrating sensors like LiDAR, cameras, and odometers to enhance data accuracy. However, calibration is challenging due to mechanical constraints and repetitive sensor installations, especially for LiDAR-Camera alignment. Existing methods struggle in tunnels with poor lighting and low texture, and they fail to address irregular vibrations from the flashing light system, causing instability. We propose a robust online calibration technique for LiDAR-Camera extrinsic parameters. By establishing a reversible mapping through surface parameterization, our approach ensures accurate cross-modality alignment. Additionally, we use depth constraints to stabilize adjacent camera stations, which are typically short-edge connections and prone to instability in photogrammetric bundle adjustment. This effectively mitigates irregular vibration effects. Validation in real-world tunnels confirms persistent vibration issues despite mechanical reinforcement. Our algorithm achieves precise point cloud and image alignment, reducing back-projection errors by over 50% and significantly improving data fusion accuracy in challenging conditions.
Imane Hamdi, Mariam Chabou Othmani, Larbi Bengana
This paper examines how urban renewal strategies can effectively transform a self-built informal settlement, such as Boubsila (commune of Bourouba), into a sustainable and inclusive urban space. To achieve this objective, the study advocates the use of spatial techniques such as geographic information systems (GIS) and cartography, as well as conducting field surveys to understand the spatial configuration of the neighbourhood and the needs of its residents. The integration of geo-referenced data will provide a better understanding of the physical situation of the district and explore both the quantitative and qualitative aspects of urban renewal. This multi-dimensional approach aims to address the specific challenges of informal settlements and propose practical solutions for sustainable and inclusive urban development.
Chenglong Wang, Yuhao Kang, Zhaoya Gong et al.
The rapid development of generative artificial intelligence (GenAI) presents new opportunities to advance the cartographic process. Previous studies have either overlooked the artistic aspects of maps or faced challenges in creating both accurate and informative maps. In this study, we propose CartoAgent, a novel multi-agent cartographic framework powered by multimodal large language models (MLLMs). This framework simulates three key stages in cartographic practice: preparation, map design, and evaluation. At each stage, different MLLMs act as agents with distinct roles to collaborate, discuss, and utilize tools for specific purposes. In particular, CartoAgent leverages MLLMs' visual aesthetic capability and world knowledge to generate maps that are both visually appealing and informative. By separating style from geographic data, it can focus on designing stylesheets without modifying the vector-based data, thereby ensuring geographic accuracy. We applied CartoAgent to a specific task centered on map restyling-namely, map style transfer and evaluation. The effectiveness of this framework was validated through extensive experiments and a human evaluation study. CartoAgent can be extended to support a variety of cartographic design decisions and inform future integrations of GenAI in cartography.
Alessandro De Angelis
In 1492, for the first time, an unknown ocean opened up before sailors: weeks of navigation and no way to pinpoint their location. Since ancient times, navigators had known how to determine latitude using the North Star, but the "problem of longitude" was different. More than a century later, Galileo Galilei discovered Jupiter's satellites in Padua and quickly realized that a sailor who could observe their eclipses would know his own longitude. Yet his brilliant insight was 400 years ahead of the technology of his time. Impractical at sea, on land the idea became a formidable tool for cartography and ushered in the age of the image of the world. Today the technique can be realized thanks to artificial satellites, and the Tuscan genius's name has reached space with the European satellite system named Galileo. An exhibition in Paris, organized by the Permanent Representation of Italy to the International Organizations, Sorbonne University, and the Galileo Museum in Florence, and directed by Asia Ruffo di Calabria of the Musée des Arts et Métiers, Quentin Cheval-Galland of Sorbonne University, and Alessandro De Angelis, allowed visitors to observe inventions of the time and some of Galileo's writings on the theme of geolocation. The exhibition was held in Paris in June 2024. It was replicated in Prague in October 2024, in Amsterdam in December 2025, and at the Perimeter Institute in Waterloo, Canada, in February 2025.
Weishuai Xu, Lei Zhang, Hua Wang
The convergence zone holds significant importance in deep-sea underwater acoustic propagation, playing a pivotal role in remote underwater acoustic detection and communication. Despite the adaptability and predictive power of machine learning, its practical application in predicting the convergence zone remains largely unexplored. This study aimed to address this gap by developing a high-resolution ocean front-based model for convergence zone prediction. Out of 24 machine learning algorithms tested through K-fold cross-validation, the multilayer perceptron–random forest hybrid demonstrated the highest accuracy, showing its superiority in predicting the convergence zone within a complex ocean front environment. The research findings emphasized the substantial impact of ocean fronts on the convergence zone’s location concerning the sound source. Specifically, they highlighted that in relatively cold (or warm) water, the intensity of the ocean front significantly influences the proximity (or distance) of the convergence zone to the sound source. Furthermore, among the input features, the turning depth emerged as a crucial determinant, contributing more than 25% to the model’s effectiveness in predicting the convergence zone’s distance. The model achieved an accuracy of 82.43% in predicting the convergence zone’s distance with an error of less than 1 km. Additionally, it attained a 77.1% accuracy in predicting the convergence zone’s width within a similar error range. Notably, this prediction model exhibits strong performance and generalizability, capable of discerning evolving trends in new datasets when cross-validated using in situ observation data and information from diverse sea areas.
Zishuo Liu, Haishan Xia, Tong Zhang
Urban rail transit (URT) systems play an evident role in shaping city spatial structures; however, the principles and mechanisms behind this influence are not fully understood. This paper reviews research progress on the coupling relationship between URT and urban space, focusing on big data analysis methods and the timeliness and sequence of coupling effects. It highlights the importance of the temporal dimension in coupling analysis. By thoroughly exploiting data value and extracting key elements, big data technology imparts temporal attributes to these elements, exploring their interaction and influence mechanisms over different time sequences. The paper also discusses the potential application of big data to urban planning to support sustainable urban development. Finally, the paper outlines future research directions, including the deepened application of big data to urban spatial analysis and the role of new data sources in understanding and shaping the coupling relationship between URT and urban space. This analysis offers new perspectives and methodologies for urban development and transportation planning.
Zhiwei Wei, Nai Yang
The popularity of tag clouds has sparked significant interest in the geographic research community, leading to the development of map-based adaptations known as intrinsic tag maps. However, existing methodologies for tag maps primarily focus on tag layout at specific scales, which may result in large empty areas or close proximity between tags when navigating across multiple scales. This issue arises because initial tag layouts may not ensure an even distribution of tags with varying sizes across the region. To address this problem, we incorporate the negative spatial auto-correlation index into tag maps to assess the uniformity of tag size distribution. Subsequently, we integrate this index into a TIN-based intrinsic tag map layout approach to enhance its ability to support multi-scale visualization. This enhancement involves iteratively filtering out candidate tags and selecting optimal tags that meet the defined index criteria. Experimental findings from two representative areas (the USA and Italy) demonstrate the efficacy of our approach in enhancing multi-scale visualization capabilities, albeit with trade-offs in compactness and time efficiency. Specifically, when retaining the same number of tags in the layout, our approach achieves higher compactness but requires more time. Conversely, when reducing the number of tags in the layout, our approach exhibits reduced time requirements but lower compactness. Furthermore, we discuss the effectiveness of various applied strategies aligned with existing approaches to generate diverse intrinsic tag maps tailored to user preferences. Additional details and resources can be found on our project website: https://github.com/TrentonWei/Multi-scale-TagMap.git.
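The "negative spatial auto-correlation index" mentioned above can be illustrated with a minimal global Moran's I, a standard spatial auto-correlation statistic (the paper's exact index may differ); a strongly negative value means large and small tags are evenly interspersed. The tag sizes and neighbour graph below are made-up assumptions.

```python
def morans_i(values, neighbours):
    """Global Moran's I with binary (0/1) weights from a neighbour dict."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    # numerator sums over all directed neighbour pairs (i, j)
    num = sum(dev[i] * dev[j] for i, nbrs in neighbours.items() for j in nbrs)
    den = sum(d * d for d in dev)
    w = sum(len(nbrs) for nbrs in neighbours.values())  # total weight
    return (n / w) * (num / den)

# Hypothetical layout: five tags on a line, alternating large and small,
# each tag neighbouring the ones beside it (both directions listed).
sizes = [10, 1, 10, 1, 10]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(round(morans_i(sizes, neighbours), 2))  # → -1.0
```

In a TIN-based layout the neighbour dict would come from the triangulation edges; the iterative filtering described above would keep candidate tags only while the index stays below the defined criterion.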
M. Berrocoso, A. Fernández-Ros, M. E. Ramírez et al.
Since 1987, Spain has continuously developed several scientific projects in the Earth Sciences, mainly in Geodesy, Geochemistry, Geology and Volcanology. The need for a geodetic reference frame for hydrographic and topographic mapping motivated the organization of the earliest campaigns, whose main goals were to update the existing cartography and to make new maps of the area. During this period, new techniques arose in Space Geodesy that improved the classical methodology and made possible its application to other fields such as tectonics or volcanism. Spanish Antarctic geodetic activities from the 1987/1988 to 2006/2007 campaigns are described, and a geodetic network and a levelling network are presented. The first network, RGAE, was designed and established to define a reference frame in the region formed by the South Shetland Islands, the Bransfield Sea and the Antarctic Peninsula, whereas the second, REGID, was planned to monitor the volcanic activity on Deception Island. Finally, the horizontal and vertical deformation models are described, as well as the strategy followed when computing an experimental geoid.
Jun Cao, Tanhua Jin, Tao Shou et al.
Car-dominated daily travel has caused many severe and urgent urban problems across the world, and such travel patterns have been found to be related to the built environment. However, few existing studies have uncovered the nonlinear relationship between the built environment and car dependency using a machine learning method, thus failing to provide policymakers with nuanced evidence-based guidance on reducing car dependency. Using data from Puget Sound regional household travel surveys, this study analyzes the complicated relationship between car dependency and the built environment using the gradient boost decision tree method. The results show that people living in high-density areas are less likely to rely on private cars than those living in low-density neighborhoods. Both threshold and nonlinear effects are observed in the relationships between the built environment and car dependency. Increasing road density promotes car usage when the road density is below 6 km/km2. However, the positive association between road density and car use is not observed in areas with high road density. Increasing pedestrian-oriented road density decreases the likelihood of using cars as the main mode. Such a negative effect is most effective when the pedestrian-oriented road density is over 14.5 km/km2. More diverse land use also discourages people’s car use, probably because those areas are more likely to promote active modes. Destination accessibility has an overall negative effect and a significant threshold effect on car dependency. These findings can help urban planners formulate tailored land-use interventions to reduce car dependency.
Qinjun Wang, Jingjing Xie, Jingyi Yang et al.
Fine-grained sediments are Quaternary sediments with grain sizes of no more than 2 mm. They are the first to mobilize on contact with water, their stability is related to the initial water volume that triggers a debris flow, and they thus play an important role in the early warning of debris flow hazards. The permeability coefficient is the internal controlling factor of fine-grained sediment stability. However, there is no hyperspectral model for detecting the fine-grained sediment permeability coefficient over large areas, which seriously hinders progress in the early warning of debris flow hazards. It is therefore of great significance to establish a hyperspectral detection model for the permeability coefficient of fine-grained sediments. Taking Beichuan County, Southwestern China as the case study, a permeability coefficient hyperspectral detection model was established. The results show that eight bands are sensitive to the permeability coefficient, with a correlation coefficient (R) of 0.6343. A t-test on the model shows that the P-values for the sensitive bands are all less than 0.05, indicating that the established model has good predictive ability, with a precision of 85.83%. These sensitive bands also capture the spectral characteristics of the permeability coefficient. The model therefore provides a scientific basis for fine-grained sediment stability detection over large areas and lays a theoretical foundation for the early warning of debris flow hazards.
Xinkai Liu, Lingyun Ji, Chen Zhang et al.
High-quality, normalized differential vegetation index (NDVI) time-series data are fundamental for environmental remote sensing applications; however, their quality is often influenced by complicated factors such as atmospheric aerosols and cloud coverage. Hence, in the current study, a robust reconstruction method based on envelope detection and the Savitzky-Golay filter (ED-SG) was developed to reduce noise in the NDVI time-series. To verify the performance of ED-SG, simulation experiments were implemented and NDVI time-series samples were selected for different land cover types derived from MOD09GQ, Sentinel-2 and Landsat 8 OLI of the Yangtze River Basin between December 2018 and December 2019. The experimental results yielded an agreement coefficient and variance of 0.9599 and 0.0006, respectively, on the simulated time-series. Additionally, the smoothness metrics of evergreen broadleaf forests, evergreen needleleaf forests, deciduous broadleaf forests, herbaceous, and croplands were 0.0019, 0.0017, 0.0012, 0.0012, and 0.0013, respectively. Ultimately, the reconstructed time-series metrics showed significant improvements in robustness and smoothness over conventional methods. Moreover, the simple mechanism of the ED-SG model enabled it to run effectively in the Google Earth Engine over the NDVI time-series of the whole Yangtze River Basin.
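The Savitzky-Golay smoothing step used by ED-SG can be sketched minimally (the envelope-detection stage is omitted): each sample is replaced by the centre value of a low-order polynomial fitted to its sliding window. The window size, polynomial order, and synthetic cloud-contaminated NDVI series below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def savgol_smooth(y, window=7, order=2):
    """Least-squares polynomial smoothing with edge padding."""
    half = window // 2
    padded = np.pad(y, half, mode="edge")
    x = np.arange(window) - half            # local coordinates, centre at 0
    out = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        coeffs = np.polyfit(x, padded[i:i + window], order)
        out[i] = np.polyval(coeffs, 0.0)    # value of the fit at the centre
    return out

# Noisy synthetic NDVI season: a smooth curve plus cloud-like dropouts.
t = np.linspace(0, 2 * np.pi, 46)           # ~8-day composites over a year
clean = 0.4 + 0.3 * np.sin(t)
noisy = clean.copy()
noisy[::9] -= 0.25                          # sporadic cloud contamination
smoothed = savgol_smooth(noisy)
print(np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2))  # → True
```

In practice one would use an optimized implementation (e.g. a precomputed convolution kernel, as in `scipy.signal.savgol_filter`); the explicit per-window fit above just makes the mechanics visible.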
Mabrouk Laâbar, Mongi Sghaier
La présente communication interroge le changement institutionnel induit par le grand projet de développement agro-pastoral et de promotion des initiatives locales pour le sud-est (PRODESUD) dans le système de gouvernance des parcours collectifs du sud de la Tunisie. L’analyse institutionnelle que nous proposons pour la discussion de cette problématique comprend deux étapes. La première fait intervenir l’outil de grammaire institutionnel développé par Crawford et Ostrom (1995, 2005) pour l’identification et la structuration des nouvelles règles de gestion soutenues par PRODESUD toute au long de la période 2003-2020. La deuxième étape propose la discussion de la conformité de ces nouvelles règles aux principes de bonne gouvernance identifiés par la théorie des communs (Ostrom, 1990, 2000, 2009). Les résultats de l’analyse montrent que le projet PRODESUD a apporté un bon nombre de solutions pratiques permettant à une conciliation assez intéressante entre les conditions socio-écologiques contraignantes des grands parcours du sud tunisien et les principes de bonne gouvernance énoncés par Ostrom (1990).
Edoardo Legnaro, Christos Efthymiopoulos
The focus of this paper is on inclination-only dependent lunisolar resonances, which shape the dynamics of a MEO (Medium Earth Orbit) object over secular time scales (i.e. several decades). Following the formalism of arXiv:2107.14507, we discuss an analytical model yielding the correct form of the separatrices of each one of the major lunisolar resonances in the "action" space $(i, e)$ (inclination, eccentricity) for any given semi-major axis $a$. We then highlight how our method is able to predict and explain the main structures found numerically in Fast Lyapunov Indicator (FLI) cartography. We focus on explaining the dependence of the FLI maps on the initial phase of the argument of perigee $ω$ and of the longitude of the ascending node $Ω$ of the object and of the moon $Ω_L$. In addition, on the basis of our model, we discuss the role played by the $Ω-Ω_L$ and the $2 Ω-Ω_L$ resonances, which overlap with the inclination-only dependent ones as they sweep the region for increasing values of $a$, generating large domains of chaotic motion. Our results provide a framework useful in designing low-cost satellite deployment or space debris mitigation strategies, exploiting the natural dynamics of lunisolar resonances that increase an object's eccentricity up until it reaches a domain where friction leads to atmospheric re-entry.
Saeid Gholinejad, Elahe Khesali
Fire, especially wildfire, is one of the main threats to vegetation cover and animal life and has attracted considerable attention from environmental researchers. To better manage fire crises and take the measures necessary to compensate for their damage, it is essential to have detailed information about burn severity levels. Accordingly, satellite images and their spectral indices have been widely used in the literature as powerful tools for producing burn severity information. Despite the efficiency of previously proposed methods, their need for ground reference data in the thresholding step poses serious challenges. To address this problem, this study presents an automatic procedure based on change-point analysis for thresholding the differenced normalized burn ratio (dNBR) and its variant, dNBR2. In this procedure, a mean-shift-based change-point analysis is performed on the dNBR and dNBR2 images to classify them into burn severity levels. Experiments conducted on parts of Alaska and California in the United States illustrated the high efficiency of the proposed method. Moreover, as an applied experiment, the severity of the fires that occurred in 2020 in the Khaeiz protected area in Iran was estimated and compared with local reports.
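The unsupervised thresholding idea can be sketched in miniature (this is not the authors' implementation): dNBR is the pre-fire minus post-fire normalized burn ratio, and a single mean-shift change point is placed in the sorted pixel values where splitting into two segments minimises the within-segment sum of squared errors; the value at the split becomes a burned/unburned threshold with no ground reference data. The sample values below are made up.

```python
def mean_shift_changepoint(values):
    """Best single split of a sequence into two mean-shifted segments."""
    xs = sorted(values)

    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    # choose the split index minimising total within-segment squared error
    best_k = min(range(1, len(xs)), key=lambda k: sse(xs[:k]) + sse(xs[k:]))
    return (xs[best_k - 1] + xs[best_k]) / 2  # threshold between segments

# Hypothetical dNBR samples: unburned pixels near 0, burned pixels near 0.6.
dnbr = [0.02, -0.01, 0.05, 0.04, 0.01, 0.55, 0.62, 0.58, 0.66, 0.60]
threshold = mean_shift_changepoint(dnbr)
print(round(threshold, 2))  # → 0.3
```

Applying the same split recursively within each segment would yield multiple severity classes rather than a single burned/unburned cut.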
David H. Weinberg, Jon A. Holtzman, Jennifer A. Johnson et al.
We apply a novel statistical analysis to measurements of 16 elemental abundances in 34,410 Milky Way disk stars from the final data release (DR17) of APOGEE-2. Building on recent work, we fit median abundance ratio trends [X/Mg] vs. [Mg/H] with a 2-process model, which decomposes abundance patterns into a "prompt" component tracing core collapse supernovae and a "delayed" component tracing Type Ia supernovae. For each sample star, we fit the amplitudes of these two components, then compute the residuals Δ[X/H] from this two-parameter fit. The rms residuals range from ~0.01-0.03 dex for the most precisely measured APOGEE abundances to ~0.1 dex for Na, V, and Ce. The correlations of residuals reveal a complex underlying structure, including a correlated element group comprised of Ca, Na, Al, K, Cr, and Ce and a separate group comprised of Ni, V, Mn, and Co. Selecting stars poorly fit by the 2-process model reveals a rich variety of physical outliers and sometimes subtle measurement errors. Residual abundances allow comparison of populations controlled for differences in metallicity and [α/Fe]. Relative to the main disk (R=3-13 kpc, |Z|<2 kpc), we find nearly identical abundance patterns in the outer disk (R=15-17 kpc), 0.05-0.2 dex depressions of multiple elements in LMC and Gaia Sausage/Enceladus stars, and wild deviations (0.4-1 dex) of multiple elements in ωCen. Residual abundance analysis opens new opportunities for discovering chemically distinctive stars and stellar populations, for empirically constraining nucleosynthetic yields, and for testing chemical evolution models that include stochasticity in the production and redistribution of elements.
Rafael C. Cardoso, Angelo Ferrando, Fabio Papacchini et al.
In this paper, we describe the strategies used by our team, MLFC, that led us to achieve the 2nd place in the 15th edition of the Multi-Agent Programming Contest. The scenario used in the contest is an extension of the previous edition (14th) "Agents Assemble" wherein two teams of agents move around a 2D grid and compete to assemble complex block structures. We discuss the languages and tools used during the development of our team. Then, we summarise the main strategies that were carried over from our previous participation in the 14th edition and list the limitations (if any) of using these strategies in the latest contest edition. We also developed new strategies that were made specifically for the extended scenario: cartography (determining the size of the map); formal verification of the map merging protocol (to provide assurances that it works when increasing the number of agents); plan cache (efficiently scaling the number of planners); task achievement (forming groups of agents to achieve tasks); and bullies (agents that focus on stopping agents from the opposing team). Finally, we give a brief overview of our performance in the contest and discuss what we believe were our shortcomings.
Page 9 of 5132