R. Burkard, M. dell’Amico, Silvano Martello
Results for "Earthwork. Foundations"
Showing 20 of ~389,965 results · from arXiv, DOAJ, Semantic Scholar
Vik Pant, Eric Yu
Strategic coopetition in multi-stakeholder systems requires understanding how cooperation persists through time without binding contracts. This technical report extends computational foundations for strategic coopetition to sequential interaction dynamics, bridging conceptual modeling (i* framework) with game-theoretic reciprocity analysis. We develop: (1) bounded reciprocity response functions mapping partner deviations to finite conditional responses, (2) memory-windowed history tracking capturing cognitive limitations over the k most recent periods, (3) structural reciprocity sensitivity derived from interdependence matrices, where behavioral responses are amplified by structural dependencies, and (4) trust-gated reciprocity, where trust modulates reciprocity responses. The framework applies to both human stakeholder interactions and multi-agent computational systems. Comprehensive validation across 15,625 parameter configurations demonstrates robust reciprocity effects, with all six behavioral targets exceeding their thresholds: cooperation emergence (97.5%), defection punishment (100%), forgiveness dynamics (87.9%), asymmetric differentiation (100%), trust-reciprocity interaction (100%), and bounded responses (100%). Empirical validation using the Apple iOS App Store ecosystem (2008-2024) achieves 43/51 applicable points (84.3%), reproducing documented cooperation patterns across five ecosystem phases. Statistical significance is confirmed at p < 0.001 with Cohen's d = 1.57. This report concludes the Foundations Series (TR-1 through TR-4), which adopts a uniaxial treatment in which agents choose cooperation levels along a single continuum. Companion work on interdependence (arXiv:2510.18802), trust (arXiv:2510.24909), and collective action (arXiv:2601.16237) has been released previously as preprints. The Extensions Series (TR-5 through TR-8) introduces a biaxial treatment in which cooperation and competition are independent dimensions.
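The abstract's four ingredients (bounded response, k-period memory window, structural sensitivity, trust gating) can be illustrated in a few lines. This is a purely hypothetical sketch, not the report's actual response functions: the parameter names `beta`, `cap`, and `k` and the linear form are invented for illustration only.

```python
import numpy as np

def reciprocity_response(history, trust, beta=0.8, cap=1.0, k=5):
    """Illustrative bounded, memory-windowed, trust-gated reciprocity:
    react to the mean partner deviation over the last k periods,
    scale the reaction by trust, and clip it to a finite cap."""
    window = history[-k:]                          # k-period memory window
    mean_dev = float(np.mean(window)) if window else 0.0
    raw = beta * mean_dev                          # proportional conditional response
    return trust * float(np.clip(raw, -cap, cap))  # trust gates, cap bounds

# A partner that defected recently draws a finite, trust-scaled response;
# extreme deviations are clipped rather than matched without bound.
print(reciprocity_response([0.0, 0.0, -0.5, -1.0, -1.0], trust=0.6))
print(reciprocity_response([-10.0] * 5, trust=1.0))
```

Note how the cap keeps responses bounded regardless of how large the partner's deviation is, which is the property the abstract's "bounded responses (100%)" target refers to.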
Leo Segre, Or Hirschorn, Shai Avidan
Foundation models are vital tools in various computer vision applications. They take as input a single RGB image and output a deep feature representation that is useful for many downstream applications. However, when we have multiple views of the same 3D scene, they operate on each image independently and do not always produce consistent features for the same 3D point. We propose a way to convert a foundation model into a Multi-View Foundation Model. Such a model takes as input a set of images and outputs a feature map for each image such that the features of corresponding points are as consistent as possible. This approach bypasses the need to build a consistent 3D model of the features and allows direct manipulation in the image space. Specifically, we show how to augment Transformer-based foundation models (e.g., DINO, SAM, CLIP) with intermediate 3D-aware attention layers that help match features across different views. As leading examples, we show surface normal estimation and multi-view segmentation tasks. Quantitative experiments show that our method improves feature matching considerably compared to current foundation models.
Shirui Zhou, Shiteng Zheng, Junfang Tian et al.
The Intelligent Driver Model (IDM), proposed in 2000, has become a foundational tool in traffic flow modeling, renowned for its simplicity, computational efficiency, and ability to capture diverse traffic dynamics. Over the past 25 years, IDM has significantly advanced car-following theory and found extensive application in intelligent transportation systems, including driver assistance systems and autonomous vehicle control. However, IDM's deterministic framework and simplified assumptions face limitations in addressing real-world complexities such as stochastic variability, driver heterogeneity, and mixed traffic conditions. This paper provides a systematic review and critical reflection on IDM's theoretical foundations, academic influence, practical applications, and model extensions. While highlighting IDM's contributions, we emphasize the need to extend the model into a modular and extensible framework. Future directions include integrating stochastic elements, human behavioral insights, and hybrid modeling approaches that combine physics-based structures with data-driven methodologies. By reimagining IDM as a flexible modeling basis, this paper aims to inspire its continued development to meet the demands of intelligent, connected, and increasingly complex traffic systems.
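For readers unfamiliar with the model under review, the standard IDM acceleration law (Treiber et al., 2000) fits in a few lines; the parameter values below are common illustrative defaults, not values taken from this review.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # safe time headway (s)
                     a=1.0,      # maximum acceleration (m/s^2)
                     b=2.0,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum standstill gap (m)
                     delta=4.0): # free-flow acceleration exponent
    """IDM: acceleration of a follower with speed v, leader speed v_lead,
    and bumper-to-bumper gap (m)."""
    dv = v - v_lead                                   # approach rate
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# On a free road (huge gap) only the free-flow term matters;
# at standstill behind a stopped leader at gap s0, acceleration is zero.
print(idm_acceleration(v=20.0, v_lead=20.0, gap=1e9))
print(idm_acceleration(v=0.0, v_lead=0.0, gap=2.0))
```

The single deterministic equation is exactly what makes IDM cheap and interpretable, and also why the extensions the review calls for (stochastic terms, driver heterogeneity) typically enter as perturbations of these parameters.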
K. Meyer, S. Platnick, G. T. Arnold et al.
<p>Satellite remote sensing retrievals of cloud effective radius (CER) are widely used for studies of aerosol–cloud interactions. Such retrievals, however, rely on forward radiative transfer (RT) calculations using simplified assumptions that can lead to retrieval errors when the real atmosphere deviates from the forward model. Here, coincident airborne remote sensing and in situ observations obtained during NASA's ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) field campaign are used to evaluate retrievals of CER for marine boundary layer stratocumulus clouds and to explore impacts of forward RT model assumptions and other confounding factors. Specifically, spectral CER retrievals from the Enhanced MODIS Airborne Simulator (eMAS) and the Research Scanning Polarimeter (RSP) are compared with polarimetric retrievals from RSP and with CER derived from droplet size distributions (DSDs) observed by the Phase Doppler Interferometer (PDI) and a combination of the Cloud and Aerosol Spectrometer (CAS) and the Two-Dimensional Stereo Probe (2D-S). The sensitivities of the eMAS and RSP spectral retrievals to assumptions about the DSD effective variance (CEV) and liquid water complex index of refraction are explored. CER and CEV inferred from eMAS spectral reflectance observations of the backscatter glory provide additional context for the spectral CER retrievals. The spectral and polarimetric CER retrieval agreement is case dependent, and updating the retrieval RT assumptions, including using RSP polarimetric CEV retrievals as a constraint, yields mixed results that are tied to differing sensitivities to vertical heterogeneity. Moreover, the in situ cloud probes, often used as the benchmark for remote sensing CER retrieval assessments, themselves do not agree, with PDI DSDs yielding CER values 1.3–1.6 <span class="inline-formula">µ</span>m larger than CAS and with CEV roughly 50 %–60 % smaller than CAS. 
Implications for the interpretation of spectral and polarimetric CER retrievals and their agreement are discussed.</p>
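As background for the retrieval comparisons above, the cloud effective radius (CER) and effective variance (CEV) are standard ratios of moments of the droplet size distribution n(r). A minimal sketch, using a hypothetical gamma-shaped DSD rather than ORACLES data:

```python
import numpy as np

# Hypothetical discretized droplet size distribution n(r).
r = np.linspace(0.5, 60.0, 1200)       # droplet radius (micrometers)
n = r**2 * np.exp(-r / 3.0)            # illustrative gamma-shaped DSD

# Effective radius: third moment of the DSD over the second moment.
# (The grid spacing cancels in the ratio, so plain sums suffice.)
r_eff = np.sum(r**3 * n) / np.sum(r**2 * n)

# Effective variance: relative width of the DSD about r_eff (dimensionless).
v_eff = np.sum((r - r_eff)**2 * r**2 * n) / (r_eff**2 * np.sum(r**2 * n))

print(round(float(r_eff), 2), round(float(v_eff), 3))
```

Because both quantities are area-weighted (r² appears in every moment), two probes that sample the large-droplet tail differently, as reported above for PDI versus CAS/2D-S, will disagree on CER and CEV even when measuring the same cloud.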
M. Schleiss
<p>An experimental study aimed at identifying special rainfall regimes with the help of co-located disdrometers is performed. Eight potentially special events (i.e., four number-controlled events and four size-controlled events) are identified and examined. However, a detailed cross-check with additional, independent radar measurements reveals no clear evidence of special rainfall dynamics. The research underscores the difficulty of experimentally confirming seemingly straightforward questions about rainfall patterns and dynamics that have been theorized in the literature for several decades but never formally validated experimentally. The study also questions the reliability of previous claims and serves as a reminder to approach such problems with more caution, emphasizing the need for rigorous uncertainty analysis and multiple cross-checks between sensors to avoid misinterpretation.</p>
J. Ericksen, T. P. Fischer, G. M. Fricke et al.
<p>We report in-plume carbon dioxide (CO<sub>2</sub>) concentrations and carbon isotope ratios during the 2021 eruption of Tajogaite volcano, island of La Palma, Spain. CO<sub>2</sub> measurements inform our understanding of volcanic contributions to the global climate carbon cycle and the role of CO<sub>2</sub> in eruptions. Traditional ground-based methods of CO<sub>2</sub> collection are difficult and dangerous, and as a result only about 5 % of volcanoes have been directly surveyed. We demonstrate that unpiloted aerial system (UAS) surveys allow for fast and relatively safe measurements. Using CO<sub>2</sub> concentration profiles we estimate the total flux during several measurements in November 2021 to be 1.76 ± 0.20 × 10<sup>3</sup> to 2.23 ± 0.26 × 10<sup>4</sup> t d<sup>−1</sup>. Carbon isotope ratios of plume CO<sub>2</sub> indicate a deep magmatic source, consistent with the intensity of the eruption. Our work demonstrates the feasibility of UASs for CO<sub>2</sub> surveys during active volcanic eruptions, particularly for deriving rapid emission estimates.</p>
Christian Schlarmann, Matthias Hein
Multi-modal foundation models combining vision and language models, such as Flamingo or GPT-4, have recently gained enormous interest. Alignment of foundation models is used to prevent models from providing toxic or harmful output. While malicious users have successfully tried to jailbreak foundation models, an equally important question is whether honest users could be harmed by malicious third-party content. In this paper we show that imperceptible attacks on images, designed to change the caption output of a multi-modal foundation model, can be used by malicious content providers to harm honest users, e.g., by guiding them to malicious websites or broadcasting fake information. This indicates that countermeasures against adversarial attacks should be used by any deployed multi-modal foundation model.
Peter Henderson, Xuechen Li, Dan Jurafsky et al.
Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea and more work is needed to develop policies that address the potential harms of foundation models.
Shashank Gupta, Philipp Hager, Jin Huang et al.
Since its inception, the field of unbiased learning to rank (ULTR) has remained very active and has seen several impactful advancements in recent years. This tutorial provides both an introduction to the core concepts of the field and an overview of recent advancements in its foundations along with several applications of its methods. The tutorial is divided into four parts: Firstly, we give an overview of the different forms of bias that can be addressed with ULTR methods. Secondly, we present a comprehensive discussion of the latest estimation techniques in the ULTR field. Thirdly, we survey published results of ULTR in real-world applications. Fourthly, we discuss the connection between ULTR and fairness in ranking. We end by briefly reflecting on the future of ULTR research and its applications. This tutorial is intended to benefit both researchers and industry practitioners who are interested in developing new ULTR solutions or utilizing them in real-world applications.
Alessandro Betti, Marco Gori, Stefano Melacci
The remarkable progress in computer vision over the last few years is, by and large, attributed to deep learning, fueled by the availability of huge sets of labeled data, and paired with the explosive growth of the GPU paradigm. While subscribing to this view, this book criticizes the supposed scientific progress in the field and proposes the investigation of vision within the framework of information-based laws of nature. Specifically, the present work poses fundamental questions about vision that remain far from understood, leading the reader on a journey populated by novel challenges resonating with the foundations of machine learning. The central thesis is that for a deeper understanding of visual computational processes, it is necessary to look beyond the applications of general purpose machine learning algorithms and focus instead on appropriate learning theories that take into account the spatiotemporal nature of the visual signal.
S. Pfreundschuh, S. Fox, P. Eriksson et al.
<p>Accurate measurements of ice hydrometeors are required to improve the representation of clouds and precipitation in weather and climate models. In this study, a newly developed, synergistic retrieval algorithm that combines radar with passive millimeter and sub-millimeter observations is applied to observations of three frontally generated, mid-latitude cloud systems in order to validate the retrieval and assess its capabilities to constrain the properties of ice hydrometeors. To account for uncertainty in the assumed shapes of ice particles, the retrieval is run multiple times while the shape is varied. Good agreement with in situ measurements of ice water content and particle concentrations for particle maximum diameters larger than <span class="inline-formula">200</span> <span class="inline-formula">µ</span>m is found for one of the flights for the large plate aggregate and the six-bullet rosette shapes. The variational retrieval fits the observations well, although small systematic deviations are observed for some of the sub-millimeter channels pointing towards issues with the sensor calibration or the modeling of gas absorption. For one of the flights the quality of the fit to the observations exhibits a weak dependency on the assumed ice particle shape, indicating that the employed combination of observations may provide limited information on the shape of ice particles in the observed clouds. Compared to a radar-only retrieval, the results show an improved sensitivity of the synergistic retrieval to the microphysical properties of ice hydrometeors at the base of the cloud.</p> <p>Our findings indicate that the synergy between active and passive microwave observations may improve remote-sensing measurements of ice hydrometeors and thus help to reduce uncertainties that affect currently available data products. 
Due to the increased sensitivity to their microphysical properties, the retrieval may also be a valuable tool to study ice hydrometeors in field campaigns. The good fits obtained to the observations increase confidence in the modeling of clouds in the Atmospheric Radiative Transfer Simulator and the corresponding single scattering database, which were used to implement the retrieval forward model. Our results demonstrate the suitability of these tools to produce realistic simulations for upcoming sub-millimeter sensors such as the Ice Cloud Image or the Arctic Weather Satellite.</p>
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
Foundation models, such as GPT, CLIP, and DINO, have achieved revolutionary progress in the past several years and are commonly believed to be a promising approach for general-purpose AI. In particular, self-supervised learning is adopted to pre-train a foundation model using a large amount of unlabeled data. A pre-trained foundation model is like an "operating system" of the AI ecosystem. Specifically, a foundation model can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on foundation models have mainly focused on pre-training a better foundation model to improve its performance on downstream tasks in non-adversarial settings, leaving its security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for pre-trained foundation models, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of foundation models.
S. Letizia, L. Zhan, G. V. Iungo
<p>The LiDAR Statistical Barnes Objective Analysis (LiSBOA), presented in <span class="cit" id="xref_text.1"><a href="#bib1.bibx42">Letizia et al.</a> (<a href="#bib1.bibx42">2021</a>)</span>, is a procedure for the optimal design of lidar scans and calculations over a Cartesian grid of the statistical moments of the velocity field. Lidar data collected during a field campaign conducted at a wind farm in complex terrain are analyzed through LiSBOA for two different tests. For both case studies, LiSBOA is leveraged for the optimization of the azimuthal step of the lidar and the retrieval of the mean equivalent velocity and turbulence intensity fields. In the first case, the wake velocity statistics of four utility-scale turbines are reconstructed on a 3D grid, showing LiSBOA's ability to capture complex flow features, such as high-speed jets around the nacelle and the wake turbulent-shear layers. For the second case, the statistics of the wakes generated by four interacting turbines are calculated over a 2D Cartesian grid and compared to the measurements provided by the nacelle-mounted anemometers. Maximum discrepancies, as low as 3 % for the mean velocity (with respect to the free stream velocity) and turbulence intensity (in absolute terms), endorse the application of LiSBOA for lidar-based wind resource assessment and diagnostic surveys for wind farms.</p>
Xingqiang Song, C. Carlsson, R. Kiilsgaard et al.
Life cycle assessment (LCA) is becoming an increasingly important environmental systems analysis tool in the construction sector for the identification of measures and strategies to reduce the environmental impact of buildings throughout the whole value chain. Geotechnical processes, such as earthworks, ground improvement and foundation construction, are often energy- and resource-intensive. Geotechnical works can thus play an important role in moving towards more sustainable building construction practices. This article reviews recent applications of LCA of buildings, including foundations as the focus or part of the system studied, based on the ISO 14040/44 standards. The system boundaries of geotechnical works are defined and a conceptual model for LCA of geotechnical works in building construction is proposed. The results of the literature review showed that the application of LCA to the building substructure is currently under development, but still in a fragmented state. There is a need for a unified framework for LCA of geotechnical works in building construction, especially regarding the definition of the functional unit, the choice of system boundaries, the appropriateness of inventory data, and the selection of impact categories. The conceptual model focuses on the demonstration of inventory flows and system boundaries and can serve as a basis for scope definition in future LCA studies of geotechnical works in building construction. It may also support effective communication between different actors and stakeholders regarding environmental sustainability in the construction sector.
D. Peduto, M. Korff, G. Nicodemo et al.
The analysis and prediction of damage to buildings resting on highly compressible fine-grained “soft soils” containing (organic) clay and peat are key issues to be addressed for a proper management of subsidence-affected urban areas. Among the probabilistic approaches suggested in literature, those oriented to the generation of empirical fragility curves are particularly promising provided that a comprehensive dataset for both the subsidence-related intensity (SRI) parameters and the corresponding damage severity to buildings is available. Following this line of thought, in the present paper, a rich sample of more than seven hundred monitored (by remote sensing) and surveyed masonry buildings – mainly resting with their (shallow or piled) foundations on soft soils – is analysed in four urban areas of The Netherlands. Probabilistic functions in the form of fragility curves for building damage are retrieved for three different SRI parameters (i.e., differential settlement, rotation and deflection ratio) derived from the processing of Synthetic Aperture Radar (SAR) images by way of a differential interferometric (DInSAR) technique in combination with the severity levels of the damage recorded from the visual inspection of over 700 masonry buildings. As a novelty with respect to earlier similar studies, the work points out the methodological steps to be followed in order to identify the most appropriate SRI parameter among the selected ones. Thus, the objective of the paper is to improve the existing geotechnical forecasting tools for subsidence-affected urban areas, in order to target areas that require more detailed investigations/analyses and/or to select/prioritize foundation repairing/replacing measures.
D. Harman
Information retrieval, the science behind search engines, had its birth in the late 1950s. Its forebears came from library science, mathematics and linguistics, with later input from computer science. The early work dealt with finding better ways to index text, and then using new algorithms to search these (mostly) automatically built indexes. Like all computer applications, however, the theory and ideas were limited by lack of computer power, and additionally by lack of machine-readable text. But each decade saw progress, and by the 1990s, it had flowered. This monograph tells the story of the early history of information retrieval (up until 2000) in a manner that presents the technical context, the research and the early commercialization efforts. Donna Harman (2019), "Information Retrieval: The Early Years", Foundations and Trends® in Information Retrieval: Vol. 13, No. 5, pp. 425–577. DOI: 10.1561/1500000065. Full text available at: http://dx.doi.org/10.1561/1500000065
A. Philip Dawid
We develop a mathematical and interpretative foundation for the enterprise of decision-theoretic statistical causality (DT), which is a straightforward way of representing and addressing causal questions. DT reframes causal inference as "assisted decision-making", and aims to understand when, and how, I can make use of external data, typically observational, to help me solve a decision problem by taking advantage of assumed relationships between the data and my problem. The relationships embodied in any representation of a causal problem require deeper justification, which is necessarily context-dependent. Here we clarify the considerations needed to support applications of the DT methodology. Exchangeability considerations are used to structure the required relationships, and a distinction drawn between intention to treat and intervention to treat forms the basis for the enabling condition of "ignorability". We also show how the DT perspective unifies and sheds light on other popular formalisations of statistical causality, including potential responses and directed acyclic graphs.
A. Kylling, H. Ardeshiri, M. Cassiani et al.
<p>Atmospheric turbulence and in particular its effect on tracer dispersion may be measured by cameras sensitive to the absorption of ultraviolet (UV) sunlight by sulfur dioxide (<span class="inline-formula">SO<sub>2</sub></span>), a gas that can be considered a passive tracer over short transport distances. We present a method to simulate UV camera measurements of <span class="inline-formula">SO<sub>2</sub></span> with a 3D Monte Carlo radiative transfer model which takes input from a large eddy simulation (LES) of a <span class="inline-formula">SO<sub>2</sub></span> plume released from a point source. From the simulated images the apparent absorbance and various plume density statistics (centre-line position, meandering, absolute and relative dispersion, and skewness) were calculated. These were compared with corresponding quantities obtained directly from the LES. Mean differences of centre-line position, absolute and relative dispersions, and skewness between the simulated images and the LES were generally found to be smaller than or about the voxel resolution of the LES. Furthermore, sensitivity studies were made to quantify how changes in solar azimuth and zenith angles, aerosol loading (background and in plume), and surface albedo impact the UV camera image plume statistics. Changing the values of these parameters within realistic limits has negligible effects on the centre-line position, meandering, absolute and relative dispersions, and skewness of the <span class="inline-formula">SO<sub>2</sub></span> plume. Thus, we demonstrate that UV camera images of <span class="inline-formula">SO<sub>2</sub></span> plumes may be used to derive plume statistics of relevance for the study of atmospheric turbulent dispersion.</p>
H. Harandizadeh, M. M. Toufigh, V. Toufigh
Page 24 of 19,499