Results for "Dynamic and structural geology"
Showing 20 of ~2,095,121 results · from DOAJ, arXiv, CrossRef, Semantic Scholar
I. Galeczka, I. Galeczka, F. Óskarsson et al.
<p>Well IDDP-2 was drilled through deepening of RN-15, one of the geothermal wells producing from the Reykjanes field (Iceland). It was drilled in 2016 and 2017 as part of the Iceland Deep Drilling Project (IDDP), whose aim has been to assess the economic viability of utilizing supercritical fluids. Well testing and temperature and pressure logging of RN-15/IDDP-2, together with sampling of the discharge fluid, were carried out in 2022 to investigate the well's production properties and its feeding aquifers. The chemical composition of the discharge samples obtained during the 2022 RN-15/IDDP-2 flow test suggests a slightly higher reservoir temperature of 294 °C compared to 290 °C in 2016. Most of the major non-volatiles lie at the low end of the concentration range calculated for the Reykjanes deep reservoir. The RN-15/IDDP-2 feeding aquifer is, however, enriched in gases such as CO<span class="inline-formula"><sub>2</sub></span>, H<span class="inline-formula"><sub>2</sub></span>S, N<span class="inline-formula"><sub>2</sub></span>, and H<span class="inline-formula"><sub>2</sub></span> compared to their content before drilling. In general, the temperature, gas content and discharge composition suggest that part of the fluid entering the well originates at greater depths than the main feed zones of other Reykjanes wells. Additional research is required to evaluate whether the discharge of the well carries the signature of a fluid originating at depths corresponding to supercritical conditions of seawater (<span class="inline-formula"><i>T</i></span> <span class="inline-formula">></span> 403 °C, <span class="inline-formula"><i>P</i></span> <span class="inline-formula">></span> 285 bar).</p>
Jiahao Yuan, Xingzhe Sun, Xing Yu et al.
The LLMSR@XLLM25 shared task formulates a low-resource structural reasoning problem that challenges LLMs to generate interpretable, step-by-step rationales with minimal labeled data. We present Less is More, the third-place winning approach in LLMSR@XLLM25, which focuses on structured reasoning from only 24 labeled examples. Our approach leverages a multi-agent framework with reverse-prompt induction, retrieval-augmented reasoning synthesis via GPT-4o, and dual-stage reward-guided filtering to distill high-quality supervision across three subtasks: question parsing, CoT parsing, and step-level verification. All modules are fine-tuned from Meta-Llama-3-8B-Instruct under a unified LoRA+ setup. By combining structure validation with reward filtering across few-shot and zero-shot prompts, our pipeline consistently improves structured reasoning quality. These results underscore the value of controllable data distillation in enhancing structured inference under low-resource constraints. Our code is available at https://github.com/JhCircle/Less-is-More.
Huifang Lyu, James Alvey, Noemi Anau Montel et al.
Simulation-based inference (SBI) is emerging as a new statistical paradigm for addressing complex scientific inference problems. By leveraging the representational power of deep neural networks, SBI can extract the most informative simulation features for the parameters of interest. Sequential SBI methods extend this approach by iteratively steering the simulation process towards the most relevant regions of parameter space. This is typically implemented through an algorithmic structure, in which simulation and network training alternate over multiple rounds. This strategy is particularly well suited for high-precision inference in high-dimensional settings, which are commonplace in physics applications with growing data volumes and increasing model fidelity. Here, we introduce dynamic SBI, which implements the core ideas of sequential methods in a round-free, asynchronous, and highly parallelisable manner. At its core is an adaptive dataset that is iteratively transformed during inference to resemble the target observation. Simulation and training proceed in parallel: trained networks are used both to filter out simulations incompatible with the data and to propose new, more promising ones. Compared to round-based sequential methods, this asynchronous structure can significantly reduce simulation costs and training overhead. We demonstrate that dynamic SBI achieves significant improvements in simulation and training efficiency while maintaining inference performance. We further validate our framework on two challenging astrophysical inference tasks: characterising the stochastic gravitational wave background and analysing strong gravitational lensing systems. Overall, this work presents a flexible and efficient new paradigm for sequential SBI.
Shulun Chen, Wei Shao, Flora D. Salim et al.
Supporting decision-making has long been a central vision in the field of spatio-temporal intelligence. While prior work has improved the timeliness and accuracy of spatio-temporal forecasting, converting these forecasts into actionable strategies remains a key challenge. A main limitation is the decoupling of the prediction and the downstream decision phases, which can significantly degrade downstream efficiency. For example, in emergency response, the priority is successful resource allocation and intervention, not just incident prediction. To this end, we propose the Adaptive Spatio-Temporal Early Decision model (ASTER), which reforms the forecasting paradigm from event anticipation to actionable decision support. This framework ensures that information is directly used for decision-making, thereby maximizing overall effectiveness. Specifically, ASTER introduces a new Resource-aware Spatio-Temporal interaction module (RaST) that adaptively captures long- and short-term dependencies under dynamic resource conditions, producing context-aware spatio-temporal representations. To directly generate actionable decisions, we further design a Preference-oriented decision agent (Poda) based on multi-objective reinforcement learning, which transforms predictive signals into resource-efficient intervention strategies by deriving optimal actions under specific preferences and dynamic constraints. Experimental results on four benchmark datasets demonstrate the state-of-the-art performance of ASTER in improving both early prediction accuracy and resource allocation outcomes across six downstream metrics.
E. Lodes, D. Scherler, D. Scherler et al.
<p>While landscapes are broadly sculpted by tectonics and climate, on a catchment scale, sediment size can regulate hillslope denudation rates and thereby influence the location of topographic highs and valleys. In this work, we used in situ <span class="inline-formula"><sup>10</sup>Be</span> cosmogenic radionuclide analysis to measure the denudation rates of bedrock, boulders, and soil in three granitic landscapes with different climates in Chile. We hypothesize that bedrock and boulders affect differential denudation by denuding more slowly than the surrounding soil; the null hypothesis is that no difference exists between soil and boulder or bedrock denudation rates. To evaluate denudation rates, we present a simple model that assesses differential denudation of boulders and the surrounding soil by evaluating boulder protrusion height against a two-stage erosion model and measured <span class="inline-formula"><sup>10</sup>Be</span> concentrations of boulder tops. We found that hillslope bedrock and boulders consistently denude more slowly than soil in two out of three of our field sites, which have a humid and a semi-arid climate: denudation rates range from <span class="inline-formula">∼5</span> to 15 m Myr<span class="inline-formula"><sup>−1</sup></span> for bedrock and boulders and from <span class="inline-formula">∼8</span> to 20 m Myr<span class="inline-formula"><sup>−1</sup></span> for soil. Furthermore, across a bedrock ridge at the humid site, denudation rates increase with increasing fracture density. At our lower-sloping field sites, boulders and bedrock appear to be similarly immobile based on similar <span class="inline-formula"><sup>10</sup>Be</span> concentrations. 
However, in the site with a Mediterranean climate, steeper slopes allow for higher denudation rates for both soil and boulders (<span class="inline-formula">∼40</span>–140 m Myr<span class="inline-formula"><sup>−1</sup></span>), while the bedrock denudation rate remains low (<span class="inline-formula">∼22</span> m Myr<span class="inline-formula"><sup>−1</sup></span>). Our findings suggest that unfractured bedrock patches and large hillslope boulders affect landscape morphology by inducing differential denudation in lower-sloping landscapes. If sustained for long enough, such differential denudation should lead to topographic highs and lows controlled by bedrock exposure and hillslope sediment size, which are both a function of fracture density. We further examined our field sites for fracture control on landscape morphology by comparing fracture, fault, and stream orientations, with the hypothesis that fracturing leaves bedrock more susceptible to denudation. Similar orientations of fractures, faults, and streams further support the idea that tectonically induced bedrock fracturing guides fluvial incision and accelerates denudation by reducing hillslope sediment size.</p>
Yaniv Darvasi, Amotz Agnon
Conventional geophysical methods are suitable for estimating the thicknesses of subsoil layers. By combining several geophysical methods, the uncertainties can be assessed; hence, the reliability of the results increases, yielding a more accurate engineering solution. To estimate the base of an abandoned landfill, we collected data using classical approaches (high-resolution seismic reflection and refraction) together with more modern methods, including passive surface wave analysis and horizontal-to-vertical spectral ratio (HVSR) measurements. To evaluate the thickness of the landfill, three different datasets were acquired along each of the two seismic lines, and five different processing methods were applied for each of the two arrays. The results of all the classical methods are very consistent and mostly converge to clear outcomes. However, since the shear wave velocity of the landfill is relatively low (<150 m/s), the uncertainty of the HVSR results is significant. All these methods are engineering-oriented, environmentally friendly, and relatively low-cost. They may be jointly interpreted to better assess uncertainties and therefore enable an efficient solution for environmental or engineering purposes.
Petar Veličković
In many ways, graphs are the main modality of data we receive from nature. This is due to the fact that most of the patterns we see, both in natural and artificial systems, are elegantly representable using the language of graph structures. Prominent examples include molecules (represented as graphs of atoms and bonds), social networks and transportation networks. This potential has already been seen by key scientific and industrial groups, with already-impacted application areas including traffic forecasting, drug discovery, social network analysis and recommender systems. Further, some of the most successful domains of application for machine learning in previous years -- images, text and speech processing -- can be seen as special cases of graph representation learning, and consequently there has been significant exchange of information between these areas. The main aim of this short survey is to enable the reader to assimilate the key concepts in the area, and position graph representation learning in a proper context with related fields.
Annika Jacobsen, Erik van Dijk, Halima Mouhib et al.
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory-level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which is where the previous topics meet to explore three-dimensional protein structures through computational analysis. We provide an overview of existing computational techniques to validate, simulate, predict and analyse protein structures. More importantly, it aims to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. Within the living cell, protein molecules perform specific functions, typically by interacting with other proteins, DNA, RNA or small molecules. They take on a specific three-dimensional structure, encoded by their amino acid sequence, which allows them to function within the cell. Hence, the understanding of a protein's function is tightly coupled to its sequence and its three-dimensional structure. Before going into protein structure analysis and prediction, and protein folding and dynamics, here we give a short and concise introduction to the basics of protein structures.
Rui Li, Dong Gong, Wei Yin et al.
Multi-frame depth estimation generally achieves high accuracy relying on the multi-view geometric consistency. When applied in dynamic scenes, e.g., autonomous driving, this consistency is usually violated in the dynamic areas, leading to corrupted estimations. Many multi-frame methods handle dynamic areas by identifying them with explicit masks and compensating the multi-view cues with monocular cues represented as local monocular depth or features. The improvements are limited due to the uncontrolled quality of the masks and the underutilized benefits of the fusion of the two types of cues. In this paper, we propose a novel method to learn to fuse the multi-view and monocular cues encoded as volumes without needing the heuristically crafted masks. As unveiled in our analyses, the multi-view cues capture more accurate geometric information in static areas, and the monocular cues capture more useful contexts in dynamic areas. To let the geometric perception learned from multi-view cues in static areas propagate to the monocular representation in dynamic areas and let monocular cues enhance the representation of multi-view cost volume, we propose a cross-cue fusion (CCF) module, which includes the cross-cue attention (CCA) to encode the spatially non-local relative intra-relations from each source to enhance the representation of the other. Experiments on real-world datasets prove the significant effectiveness and generalization ability of the proposed method.
Juami H. M. van Gils, Maurits Dijkstra, Halima Mouhib et al.
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory-level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which is where the previous topics meet to explore three-dimensional protein structures through computational analysis. We provide an overview of existing computational techniques to validate, simulate, predict and analyse protein structures. More importantly, it aims to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. In the previous chapter, "Molecular Dynamics", we considered protein simulations from a dynamical point of view, using Newton's laws. In the current chapter, we first take a step back and return to the bare minimum needed to simulate proteins, showing that proteins may be simulated in a simpler fashion, using the partition function directly. This means we do not have to calculate explicit forces, velocities or momenta, and do not even consider time explicitly. Instead, we rely on the fact that most systems we will want to simulate are in a dynamic equilibrium, and that we want to find the most stable states in such systems by determining the relative stabilities between those states.
P. S. Pashchenko, O. Karamushka
Problem Statement and Purpose. The structure of fractured coal in zones of geological disturbance is one of the factors behind the origin of gas-dynamic phenomena, including spontaneous combustion of coal seams. The purpose of this work is to identify disturbance zones in coal seams that are prone to spontaneous combustion, using the developed device and methods for selecting coal samples to determine their propensity to spontaneous combustion, and determining disturbance zones in coal seams from the structural characteristics of the coal. Data & Methods. 22 samples of medium-catagenesis and low-catagenesis coal from two mines of the Donetsk-Makivsky geological-industrial region of Donbas and 10 samples of medium-catagenesis coal from one mine of the Central region were studied. Disturbance zones prone to spontaneous combustion in coal seams were identified by applying the developed device and methods for selecting coal samples to determine their propensity to spontaneous combustion, with the disturbance zones determined from the structural characteristics of the coal. Optical examination of the rocks under an MBS‑1 microscope was used to determine the structural characteristics of the coal. Results. A device for collecting point coal samples in mine workings, and a corresponding method for selecting coal samples to determine their propensity to spontaneous combustion, were developed for the experimental studies aimed at identifying disturbance zones as potential zones of spontaneous combustion in coal seams. The developed device and methods were tested at a mine of the Central geological-industrial region of Donbas.
Samples were collected with the hand-held point-sampling device at a depth of 1146 m, 10 pieces in total. The proportion of coal quasi-crystals in the samples, analysed under the MBS‑1 optical microscope, was 2.4–4.6%, which means that all samples came from a disturbance zone according to the method for determining disturbance zones in coal seams from the structural characteristics of the coal. Accordingly, a plicative (fold) disturbance was identified at the mine of the Central geological-industrial region by comparing the results of microscopic examination of coal from different zones of the coal seams. The data obtained by applying the developed device and methods for selecting coal samples to determine their propensity to spontaneous combustion, and for determining disturbance zones from the structural characteristics of the coal, can be used to forecast disturbance zones that may be associated with the origin of gas-dynamic phenomena and that are potential zones of spontaneous combustion in coal seams.
A. Pavlychenko, A. Ihnatov, I. Askerov
Purpose. Development of structural foundations for downhole devices (hydraulic hammers) to create dynamic loads on the drilling technological tool and study of the basic physical and chemical processes occurring in their hydraulic circuit and the bottomhole zone of the well. Methodology. The study of the patterns of constructive and technological interaction of the main parts and assemblies of the hydraulic percussion mechanism was carried out when testing its physical model included in the functional diagram of a special drilling stand equipped with appropriate power and hydraulic units, as well as a control and measuring unit. Studies of the physicochemical properties of flushing fluids and their influence on the course of rock mass destruction processes, intensified by the application of generated dynamic loads, were carried out using standardized devices for monitoring the parameters of special process fluids, as well as by bench destruction of experimental rock blocks with the appropriate complex technical measurements. Findings. A basic structural diagram of a hydraulic percussion device has been developed, with its physical embodiment, in which a significantly different execution and functioning of the interacting nodes creates the prerequisites for effective control of the machine in question. The presence of mutual consistency of the circulation processes implemented in the hydraulic hammer model allows its use in various modifications of the regime and technological support of progressive methods of wellbore formation. The proposed hydraulic hammer contains in its design a fairly large number of unified parts and is practically devoid of the presence of wear elements, which ensures its significant motor resource and high maintainability. 
Simultaneously combining the hydraulic hammer with activated flushing fluids in the technological scheme of bottomhole assemblies yields a significant increase in the mechanical penetration rate, owing to the reduced surface tension and favourable rheological characteristics of the dispersion medium for rock destruction. Originality. The use of alternating pressure chambers in the design of the hydraulic hammer makes it possible to form a single closed circulation system whose energy performance can be effectively regulated, with operational transitions between different drilling modes. Practical value. The proposed layout of the hydraulic percussion machine expands the areas of its application in the technical and technological schemes of downhole drilling tools, and supports measures to eliminate downhole accidents and complications. Key words: well, hydraulic hammer, drilling, surfactant, rock destruction, flushing fluid, mechanical speed, bottom hole.
D. Aaisyah, S. Sahari, A. Shah et al.
The coronavirus disease 2019 (COVID-19) pandemic has highlighted the need for virtual field libraries accessible to users, which is possible if the field data are collected, archived, and processed to retain the actual field configurations. Therefore, we demonstrate the veracity of geological mapping with an Unmanned Aerial Vehicle (UAV) coupled with traditional field site visits in Brunei Darussalam, SE Asia. We selected two geological field sites that expose faulted Miocene sedimentary rocks. We visited these sites with geology undergraduates to teach them the field components of the courses on Field Mapping and Structural Geology at Universiti Brunei Darussalam. A field assignment was given to students, to be submitted at the end of the fieldwork. The same exercise was repeated in the classroom with the UAV-aided field data as virtual field exercises. The geological outcrop details were captured at kilometer and millimeter (mm) scales in both static and dynamic modes of operation. The drone-based imagery was used to generate 3D point clouds from 67 oriented photographs to recreate the outcrop details. We found that both the traditional and drone-based field data are highly useful for capturing kilometer- to mm-scale details. Our results also revealed that students were very engaged during the virtual field exercises and completed the field assignment with care, which was largely missing during the onsite field exercises. We think this is partly because of the relaxed state of mind available in a classroom environment, where the hot, sunny and humid weather of tropical Brunei was avoidable. The virtual field exercises have opened a new arena of field geology, in which technology enhances the usability and accessibility of field-based courses that are often disrupted for various reasons. However, traditional field visits should not be completely replaced by virtual ones.
A. Anduaga, A. Anduaga
<p>This paper examines how ionospheric physics emerged as a research speciality in Britain, Germany, and the United States in the first four decades of the 20th century. It argues that the formation of this discipline can be viewed as the confluence of four deep-rooted traditions in which scientists and engineers transformed, from within, research areas connected to radio wave propagation and geomagnetism. These traditions include Cambridge school's mathematical physics, Göttingen's mathematical physics, laboratory-based experimental physics, and Humboldtian-style terrestrial physics. Although focused on ionospheric physics, the paper pursues the idea that a dynamic conception of scientific tradition will provide a new perspective for the study of geosciences history.</p>
T. Alberti, R. V. Donner, R. V. Donner et al.
<p>Atmosphere and ocean dynamics display many complex features and are characterized by a wide variety of processes and couplings across different timescales. Here we demonstrate the application of multivariate empirical mode decomposition (MEMD) to investigate the multivariate and multiscale properties of a reduced order model of the ocean–atmosphere coupled dynamics. MEMD provides a decomposition of the original multivariate time series into a series of oscillating patterns with time-dependent amplitude and phase by exploiting the local features of the data and without any a priori assumptions on the decomposition basis. Moreover, each oscillating pattern, usually named multivariate intrinsic mode function (MIMF), represents a local source of information that can be used to explore the behavior of fractal features at different scales by defining a sort of multiscale and multivariate generalized fractal dimensions. With these two complementary approaches, we show that the ocean–atmosphere dynamics presents a rich variety of features, with different multifractal properties for the ocean and the atmosphere at different timescales. For weak ocean–atmosphere coupling, the resulting dimensions of the two model components are very different, while for strong coupling for which coupled modes develop, the scaling properties are more similar especially at longer timescales. The latter result reflects the presence of a coherent coupled dynamics. Finally, we also compare our model results with those obtained from reanalysis data demonstrating that the latter exhibit a similar qualitative behavior in terms of multiscale dimensions and the existence of a scale dependency of the statistics of the phase-space density of points for different regions, which is related to the different drivers and processes occurring at different timescales in the coupled atmosphere–ocean system. Our approach can therefore be used to diagnose the strength of coupling in real applications.</p>
Daniel Candeloro Cunha, Breno Vincenzo de Almeida, Heitor Nigro Lopes et al.
This paper proposes two novel approaches to perform more suitable sensitivity analyses for discrete topology optimization methods. To properly support them, we introduce a more formal description of the Bi-directional Evolutionary Structural Optimization (BESO) method, in which the sensitivity analysis is based on finite variations of the objective function. The proposed approaches are compared to a naive strategy; to the conventional strategy, referred to as First-Order Continuous Interpolation (FOCI) approach; and to a strategy previously developed by other researchers, referred to as High-Order Continuous Interpolation (HOCI) approach. The novel Woodbury approach provides exact sensitivity values and is a better alternative to HOCI. Although HOCI and Woodbury approaches may be computationally prohibitive, they provide useful expressions for a better understanding of the problem. The novel Conjugate Gradient Method (CGM) approach provides sensitivity values with arbitrary precision and is computationally viable for a small number of steps. The CGM approach is a better alternative to FOCI since, for appropriate initial conditions, it is always more accurate than the conventional strategy. The standard compliance minimization problem with volume constraint is considered to illustrate the methodology. Numerical examples are presented together with a broad discussion about BESO-type methods.
Page 7 of 104,757