M. Hennessy
Results for "Earthwork. Foundations"
Showing 20 of ~639,000 results · from CrossRef, arXiv, DOAJ, Semantic Scholar
Nicolas Rühling
Answer Set Programming (ASP) is a powerful tool for solving real-world problems. However, many problems involve numeric values and complex constraints beyond the capabilities of standard ASP solvers. Hybrid solvers like CLINGCON and CLINGO[DL] address this by using specialized methods for specific constraints. However, these solvers lack a strong theoretical foundation. This issue was first addressed by introducing the Logic of Here-and-There with constraints (HT_C) as an extension of the Logic of Here-and-There (HT) and its non-monotonic extension, Equilibrium Logic. Nowadays, HT serves as a logical foundation for ASP and has facilitated a broader understanding of this paradigm. The idea is that HT_C (and other extensions) play an analogous role for hybrid ASP. Many open questions remain about these logics, regarding both their fundamental characteristics and their practical use in solvers, i.e., how they can guide the implementation. A formal understanding of these hybrid logics is also needed to better understand the inherent structure of the (real-world) problems they are applied to and to improve their representations in ASP. As an example of an application of ASP, we use product configuration.
Yukun Zhou, Paul Nderitu, Jocelyn Hui Lin Goh et al.
Medical foundation models, pre-trained with large-scale clinical data, demonstrate strong performance in diverse clinically relevant applications. RETFound, trained on nearly one million retinal images, exemplifies this approach for retinal imaging. However, the emergence of increasingly powerful and considerably larger generalist foundation models such as DINOv2 and DINOv3 raises the question of whether domain-specific pre-training remains essential, and if so, what gap persists. To investigate this, we systematically evaluated the adaptability of DINOv2 and DINOv3 in retinal image applications, compared to two specialist RETFound models, RETFound-MAE and RETFound-DINOv2. We assessed performance on ocular disease detection and systemic disease prediction using two adaptation strategies: fine-tuning and linear probing. Data efficiency and adaptation efficiency were further analysed to characterise trade-offs between predictive performance and computational cost. Our results show that although scaling generalist models yields strong adaptability across diverse tasks, RETFound-DINOv2 consistently outperforms these generalist foundation models in ocular-disease detection and oculomics tasks, demonstrating stronger generalisability and data efficiency. These findings suggest that specialist retinal foundation models remain the most effective choice for clinical applications, while the narrowing gap with generalist foundation models suggests that continued data and model scaling can deliver domain-relevant gains and position them as strong foundations for future medical foundation models.
Saurav Ghosh, Niloy Deb Roy Mishu
Quantum computing poses fundamental risks to classical blockchain systems by undermining widely used cryptographic primitives. In response, two major research directions have emerged: post-quantum blockchains, which integrate quantum-resistant algorithms, and quantum blockchains, which leverage quantum properties such as entanglement and quantum key distribution. This survey reviews key developments in both areas, analyzing their cryptographic foundations, architectural designs, and implementation challenges. It provides a comparative overview of technical proposals, highlights trade-offs in security, scalability, and deployment, and identifies open research problems across hardware, consensus, and network design. The goal is to offer a structured and comprehensive reference for advancing secure blockchain systems in the quantum era.
Jeffrey Gu, Serena Yeung-Levy
Large pre-trained models, or foundation models, have shown impressive performance when adapted to a variety of downstream tasks, often outperforming specialized models. Hypernetworks, neural networks that generate some or all of the parameters of another neural network, have become an increasingly important technique for conditioning and generalizing implicit neural representations (INRs), which represent signals or objects such as audio or 3D shapes using a neural network. However, despite the potential benefits of incorporating foundation models in hypernetwork methods, this research direction has not been investigated, likely due to the dissimilarity of the weight generation task with other visual tasks. To address this gap, we (1) show how foundation models can improve hypernetworks with Transformer-based architectures and (2) provide an empirical analysis of the benefits of foundation models for hypernetworks through the lens of the generalizable INR task, showing that leveraging foundation models improves performance, generalizability, and data efficiency across a variety of algorithms and modalities. We also examine the design space of foundation-model-based hypernetworks, including the choice of foundation model, the algorithm, and the effect of scaling foundation models.
Adrian Mirza, Nawaf Alampara, Martiño Ríos-García et al.
Foundation models have shown remarkable success across scientific domains, yet their impact in chemistry remains limited due to the absence of diverse, large-scale, high-quality datasets that reflect the field's multifaceted nature. We present the ChemPile, an open dataset containing over 75 billion tokens of curated chemical data, specifically built for training and evaluating general-purpose models in the chemical sciences. The dataset mirrors the human learning journey through chemistry -- from educational foundations to specialized expertise -- spanning multiple modalities and content types including structured data in diverse chemical representations (SMILES, SELFIES, IUPAC names, InChI, molecular renderings), scientific and educational text, executable code, and chemical images. ChemPile integrates foundational knowledge (textbooks, lecture notes), specialized expertise (scientific articles and language-interfaced data), visual understanding (molecular structures, diagrams), and advanced reasoning (problem-solving traces and code) -- mirroring how human chemists develop expertise through diverse learning materials and experiences. Constructed through hundreds of hours of expert curation, the ChemPile captures both foundational concepts and domain-specific complexity. We provide standardized training, validation, and test splits, enabling robust benchmarking. ChemPile is openly released via HuggingFace with a consistent API, permissive license, and detailed documentation. We hope the ChemPile will serve as a catalyst for chemical AI, enabling the development of the next generation of chemical foundation models.
Chandrasekhar Gokavarapu
This paper establishes the homological and geometric foundations of non-commutative n-ary Gamma-semirings, unifying two previously distinct directions in Gamma-algebra: the derived Gamma-geometry developed for the commutative ternary case and the structural and spectral theory for general non-commutative n-ary systems. We introduce categories of left, right, and bi-Gamma-modules that respect positional asymmetry and prove that they form additive and exact categories in Quillen's sense. Within this setting, we construct projective and injective resolutions, define the derived functors Ext^Gamma and Tor_Gamma, and establish long exact sequences and spectral balance theorems in the n-ary regime. By extending sheaf-theoretic and homological tools to the non-commutative Gamma-spectrum Spec_Gamma^nc(T), we obtain a coherent framework of non-commutative derived Gamma-geometry that parallels the classical paradigms of Grothendieck and Kontsevich in homological algebra and non-commutative geometry. The framework developed here establishes the foundational exact-categorical and homological structures that enable Morita-type analyses and spectral interpretations in the subsequent parts of this series.
G. Malollari, A. Ansmann, H. Baars et al.
Vertical profiles of aerosol properties are essential for assessing the impact of aerosols on cloud formation and the Earth's radiation budget. Lidars can provide profiles of the particle backscatter and extinction coefficients and the extinction-to-backscatter ratio (lidar ratio). An Ångström exponent has to be assumed when computing these profiles from nitrogen vibrational–rotational Raman signals. This assumption introduces uncertainties. An alternative approach is the rotational Raman lidar method, which does not need an Ångström exponent as input. This study presents a quantitative comparison between the pure rotational and vibrational–rotational Raman lidar approaches to assess the impact of the Ångström exponent assumption on the vibrational–rotational Raman lidar solutions. In this short article, we present four contrasting case studies based on observations of wildfire smoke, Saharan dust, residential wood combustion smoke, and a cirrus layer. The optical properties are derived at a wavelength of 532 nm, with the rotational Raman signals measured at 530 nm and the vibrational–rotational Raman signals measured at 607 nm. It was found that the use of an Ångström exponent, deviating by 1 from the true value, introduces relative uncertainties of 5 % and less (backscatter coefficient), 5 %–10 % (extinction coefficient), and around 10 % (lidar ratio) in the vibrational–rotational Raman lidar solutions.
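As a rough sketch of the Ångström-exponent sensitivity discussed in this abstract: aerosol extinction (or backscatter) coefficients are commonly assumed to follow a power law in wavelength, α(λ₂) = α(λ₁)·(λ₁/λ₂)^å, so an error Δå in the assumed exponent rescales a wavelength-converted coefficient by (λ₁/λ₂)^Δå. The function names below are illustrative and not taken from the paper's retrieval code; the full Raman retrieval propagates this error differently than the raw scaling factor shown here.

```python
def scale_coefficient(alpha_l1, l1_nm, l2_nm, angstrom):
    """Convert an extinction/backscatter coefficient from wavelength l1 to l2
    under the power-law (Angstrom) assumption: alpha ~ lambda**(-angstrom)."""
    return alpha_l1 * (l1_nm / l2_nm) ** angstrom

def relative_error_from_angstrom(l1_nm, l2_nm, delta_angstrom):
    """Relative error in the scaled coefficient when the assumed Angstrom
    exponent deviates from the true value by delta_angstrom."""
    return abs((l1_nm / l2_nm) ** delta_angstrom - 1.0)

# For the 532/607 nm pair used in the study, a deviation of 1 in the
# Angstrom exponent rescales the converted coefficient by roughly 12 %.
err = relative_error_from_angstrom(532.0, 607.0, 1.0)
```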
L. Wüst, P. Dewald, G. N. T. E. Türk et al.
Measurement of total peroxy nitrates (ΣPNs) and alkyl nitrates (ΣANs) by instruments that use thermal dissociation (TD) inlets to convert the organic nitrate to detectable NO2 may suffer from systematic bias (both positive and negative) resulting from unwanted secondary chemistry in the heated inlets. Here we review the sources of the bias and the methods used to reduce it and/or correct for it and report new experiments using (for the first time) atmospherically relevant, unsaturated, biogenic alkyl nitrates as well as two different peroxyacetyl nitrate (PAN) sources. We show that the commonly used commercial C3 alkyl nitrate (isopropyl nitrate, IPN) for characterising the chemistry of ANs is not appropriate for real-air samples that contain longer-chain nitrates. Mixing ratios of ANs generated in the NO3-induced oxidation of limonene are strongly positively biased in the presence of NO. By detecting NOx rather than NO2, we provide a simple solution to avoid the bias caused by the conversion of NO to NO2 by primary and secondary peroxy radicals resulting from the complex chemistry in the thermal degradation of long-chain alkyl nitrates in air at TD temperatures. We also show that using a photochemical source of PAN to characterise the TD inlets can result in a much stronger apparent bias from NO-to-NO2 conversion than for a diffusion source of synthesised ("pure") PAN at similar mixing ratios, especially if high acetone concentrations (and thus radical concentrations) are involved. This is explained by the presence of thermally labile trace gases such as peracetic acid (CH3C(O)OOH) and hydrogen peroxide (H2O2).
Wuri Proboretno, Budi Witjaksana, Hanie Teki Tjendani
Construction project management has three main goals that must be achieved: cost, quality, and time. The Plengsengan Afv. Kedungpeluk Sidoarjo project was planned to be completed within an estimated 120 calendar days, but the implementing contractor experienced delays of 6.98%. In this study, the researchers sought an early warning of poor performance in project completion, so that management policies and changes in implementation methods could be made and delays in project completion prevented. The Earned Value Method, which integrates cost and time performance, was used in the management of the Plengsengan Afv. Kedungpeluk Sidoarjo work. The Earned Value Method is calculated from factors that indicate project progress and performance, such as the Schedule Variance (SV), the Cost Performance Index (CPI), the Schedule Performance Index (SPI), and the forecast of the project completion schedule (ECD). The results show that the estimated time to complete the remaining work (ETS) at the end of the 8th-week review is 91 days, while the estimated time to complete the project (EAS) is 140 days; the rate of change in project completion time is thus 0.16% longer than planned. Based on the results of the analysis, construction service providers carrying out development work must apply good and efficient implementation methods at all stages of the work, with consistent supervision, so that delays do not occur in the construction of Plengsengan Afv. Kedungpeluk Sidoarjo.
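The Earned Value indicators named in this abstract follow standard formulas (SV = EV − PV, CPI = EV/AC, SPI = EV/PV, plus a time forecast that stretches the planned duration by the schedule performance). A minimal sketch with illustrative numbers, not the Sidoarjo project's actual figures:

```python
def evm_metrics(pv, ev, ac, planned_duration_days):
    """Standard Earned Value Method indicators.
    pv: planned value, ev: earned value, ac: actual cost (same currency units)."""
    sv = ev - pv                       # schedule variance (negative = behind schedule)
    cpi = ev / ac                      # cost performance index
    spi = ev / pv                      # schedule performance index
    # simple time forecast: planned duration divided by schedule performance
    forecast_days = planned_duration_days / spi
    return {"SV": sv, "CPI": cpi, "SPI": spi, "forecast_days": forecast_days}

# Illustrative values only: 85 earned against 100 planned after spending 90
m = evm_metrics(pv=100.0, ev=85.0, ac=90.0, planned_duration_days=120)
```

With SPI below 1, the forecast duration exceeds the 120-day plan, which is the early-warning signal the study relies on.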
Sean I. Young
In recent years, compression of large language models (LLMs) has emerged as an important problem to enable language model deployment on resource-constrained devices, reduce computational costs, and mitigate the environmental footprint of large-scale AI infrastructure. In this paper, we lay down the foundation for LLM quantization from a convex optimization perspective and propose a quantization technique that builds on this foundation for optimum quantization outcomes. Our quantization framework, CVXQ, scales to models containing hundreds of billions of weight parameters and provides users with the flexibility to compress models to any specified model size, post-training. A reference implementation of CVXQ can be obtained from github.com/seannz/cvxq.
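CVXQ's convex formulation is not reproduced here, but the post-training setting it operates in can be sketched with a plain uniform quantizer: weights are mapped to a small number of discrete levels, and the reconstruction error is what a method like CVXQ then minimizes under a model-size budget. Everything below is a generic baseline for illustration, not the paper's algorithm:

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform post-training quantization of a weight array
    (bits >= 2), returning the dequantized weights."""
    levels = 2 ** bits
    scale = np.max(np.abs(w)) / (levels / 2 - 1)
    q = np.clip(np.round(w / scale), -(levels // 2), levels // 2 - 1)
    return q * scale

w = np.linspace(-1.0, 1.0, 101)        # stand-in for a weight matrix
err4 = np.linalg.norm(w - uniform_quantize(w, 4))
err8 = np.linalg.norm(w - uniform_quantize(w, 8))
# more bits -> finer grid -> lower reconstruction error
```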
Roberto Dias Algarte
This article introduces a novel approach to the mathematical development of Ordinary Least Squares and Neural Network regression models, diverging from traditional methods in current Machine Learning literature. By leveraging Tensor Analysis and fundamental matrix computations, the theoretical foundations of both models are meticulously detailed and extended to their complete algorithmic forms. The study culminates in the presentation of three algorithms, including a streamlined version of the Backpropagation Algorithm for Neural Networks, illustrating the benefits of this new mathematical approach.
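As a concrete instance of the matrix-computation view of OLS described in this abstract, the closed-form estimator β = (XᵀX)⁻¹Xᵀy can be written in a few lines, using a least-squares solve rather than an explicit inverse for numerical stability. This is a generic sketch, not the article's notation:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary Least Squares with an intercept term:
    solves min_beta ||Xb @ beta - y||^2 via a stable least-squares solve."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta  # beta[0] = intercept, beta[1:] = slopes

# Recover y = 1 + 2x exactly from noiseless data:
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 1.0 + 2.0 * X[:, 0]
beta = ols_fit(X, y)
```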
S. Alage, V. Michoud, S. Harb et al.
Volatile organic compounds (VOCs) play a key role in tropospheric chemistry, giving rise to secondary products such as highly oxygenated organic molecules (HOMs) and secondary organic aerosols (SOAs). HOMs, a group of low-volatility gas-phase products, are formed through the autoxidation process of peroxy radicals (RO2) originating from the oxidation of VOCs. The measurement of HOMs is made by a NO3− ToFCIMS instrument, which also detects other species like small highly oxygenated VOCs (e.g., dicarboxylic acids) and sulfuric acid (H2SO4). The instrument response to HOMs is typically estimated using H2SO4, as HOMs are neither commercially available nor easily synthesized in the laboratory. The resulting calibration factor is then applied to quantify all species detected using this technique. In this study, we explore the sensitivity of the instrument to commercially available small organic compounds, primarily dicarboxylic acids, given the limitations associated with producing known amounts of HOMs for calibration. We compare these single-compound calibration factors to the one obtained for H2SO4 under identical operational conditions. The study found that the sensitivity of the NO3− ToFCIMS varies depending on the specific type of organic compound, illustrating how a single calibration factor derived from sulfuric acid is clearly inadequate for quantifying all detected species using this technique. The results highlighted substantial variability in the calibration factors for the tested organic compounds, with 4-nitrocatechol exhibiting the highest sensitivity and pyruvic acid the lowest. The obtained sulfuric acid calibration factor agreed well with the previous values from the literature. In summary, this research emphasized the need to develop reliable and precise calibration methods for progressively oxygenated reaction products measured with a NO3− chemical-ionization mass spectrometer (CIMS), for example, HOMs.
Z. Shi, Z. Shi, Y. Wen et al.
The squall line is a type of convective system that is characterized by storm cells arranged in a line or band pattern and is usually associated with disastrous weather. The identification and tracking of squall lines thus play important roles in early warning systems for meteorological disasters. Here, a clustering-based identification and tracking algorithm for squall lines is presented based on weather radar data. A clustering analysis is designed to distinguish the strong echo area and estimate the feature values, including the reflectivity value, length, width, area, endpoints, central axes, and centroid. The linearly arranged clusters are merged to improve the identification of squall line development. The three-dimensional structure and movement tracking of the squall line are obtained using the centroid and velocity of the squall lines identified in each layer. The results demonstrate that the method can effectively identify and track one or more squall lines across the radar surveillance area. The results also show that the recognition accuracy rate for the single scan elevation of this method is 95.06 %, and the false-positive rate is 3.17 %. This method improves the accuracy of squall line identification in the development stage of squall lines and still works efficiently even when high interference contamination occurs.
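The first step of the pipeline this abstract describes — isolating the strong echo area and estimating features such as area, centroid, length, and width — can be sketched with plain NumPy. The 40 dBZ threshold and the 2σ axis-length convention below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def strong_echo_features(refl, threshold=40.0):
    """Mask pixels above a reflectivity threshold (dBZ) and compute simple
    shape features of the strong echo area."""
    ys, xs = np.nonzero(refl >= threshold)
    if ys.size == 0:
        return None
    coords = np.stack([ys, xs], axis=1).astype(float)
    centroid = (ys.mean(), xs.mean())
    # rough length/width from the principal axes of the pixel cloud (~2 sigma each way)
    cov = np.cov(coords.T) if coords.shape[0] > 1 else np.zeros((2, 2))
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    length, width = 4.0 * np.sqrt(np.maximum(eigvals, 0.0))
    return {"area": ys.size, "centroid": centroid,
            "length": length, "width": width}

# A synthetic horizontal band of strong echo:
refl = np.zeros((20, 20))
refl[10, 2:18] = 50.0
feats = strong_echo_features(refl)
```

A line-shaped cluster shows up as a large length-to-width ratio, which is the kind of criterion a squall-line identifier can threshold on.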
Maria Papageorgiou, D. Fraser
Arguments by Sorkin (Impossible measurements on quantum fields. In: Directions in general relativity: proceedings of the 1993 International Symposium, Maryland, vol 2, pp 293–305, 1993) and Borsten et al. (Phys Rev D 104(2), 2021. https://doi.org/10.1103/PhysRevD.104.025012) establish that a natural extension of quantum measurement theory from non-relativistic quantum mechanics to relativistic quantum theory leads to the unacceptable consequence that expectation values in one region depend on which unitary operation is performed in a spacelike separated region. Sorkin [1] labels such scenarios ‘impossible measurements’. We explicitly present these arguments as a no-go result with the logical form of a reductio argument and investigate the consequences for measurement in quantum field theory (QFT). Sorkin-type impossible measurement scenarios clearly illustrate the moral that Microcausality is not by itself sufficient to rule out superluminal signalling in relativistic quantum theories that use Lüders’ rule. We review three different approaches to formulating an account of measurement for QFT and analyze their responses to the ‘impossible measurements’ problem. Two of the approaches are: a measurement theory based on detector models proposed in Polo-Gómez et al. (Phys Rev D, 2022. https://doi.org/10.1103/physrevd.105.065003) and a measurement framework for algebraic QFT proposed in Fewster and Verch (Commun Math Phys 378(2):851–889, 2020). Of particular interest for foundations of QFT is that they share common features that may hold general morals about how to represent measurement in QFT. 
These morals are about the role that dynamics plays in eliminating ‘impossible measurements’, the abandonment of the operational interpretation of local algebras $\mathcal{A}(O)$ as representing possible operations carried out in region O, and the interpretation of state update rules. Finally, we examine the form that the ‘impossible measurements’ problem takes in histories-based approaches and we discuss the remaining challenges.
Lai Yu
J. Demmel, Ioana Dumitriu, Ryan Schneider
We present a randomized, inverse-free algorithm for producing an approximate diagonalization of any $n \times n$ matrix pencil (A, B). The bulk of the algorithm rests on a randomized divide-and-conquer eigensolver for the generalized eigenvalue problem originally proposed by Ballard, Demmel and Dumitriu (Technical Report 2010). We demonstrate that this divide-and-conquer approach can be formulated to succeed with high probability provided the input pencil is sufficiently well-behaved, which is accomplished by generalizing the recent pseudospectral shattering work of Banks, Garza-Vargas, Kulkarni and Srivastava (Foundations of Computational Mathematics 2023). In particular, we show that perturbing and scaling (A, B) regularizes its pseudospectra, allowing divide-and-conquer to run over a simple random grid and in turn producing an accurate diagonalization of (A, B) in the backward error sense. The main result of the paper states the existence of a randomized algorithm that with high probability (and in exact arithmetic) produces invertible S, T and diagonal D such that $\|A - SDT^{-1}\|_2 \le \varepsilon$ and $\|B - ST^{-1}\|_2 \le \varepsilon$ in at most $O(\log^2(n/\varepsilon)\, T_{\text{MM}}(n))$ operations, where $T_{\text{MM}}(n)$ is the asymptotic complexity of matrix multiplication. This not only provides a new set of guarantees for highly parallel generalized eigenvalue solvers but also establishes nearly matrix multiplication time as an upper bound on the complexity of inverse-free, exact-arithmetic matrix pencil diagonalization.
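The guarantee in this abstract is stated in terms of backward error, and checking that criterion for a candidate factorization is straightforward. The sketch below only verifies the bound for given S, T, D; it does not implement the randomized divide-and-conquer algorithm itself:

```python
import numpy as np

def pencil_backward_error(A, B, S, D, T):
    """Spectral-norm backward errors ||A - S D T^{-1}||_2 and ||B - S T^{-1}||_2
    for a candidate diagonalization of the matrix pencil (A, B)."""
    Tinv = np.linalg.inv(T)
    return (np.linalg.norm(A - S @ D @ Tinv, 2),
            np.linalg.norm(B - S @ Tinv, 2))

# Build an exactly diagonalizable pencil and confirm near-zero backward error:
rng = np.random.default_rng(0)
n = 5
S = rng.normal(size=(n, n))
T = rng.normal(size=(n, n))
D = np.diag(rng.normal(size=n))
A = S @ D @ np.linalg.inv(T)
B = S @ np.linalg.inv(T)
errA, errB = pencil_backward_error(A, B, S, D, T)
```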
Fei Yan, S. Venegas-Andraca, K. Hirota
Yingwei Song, Min Wang
M. Yaghoubi, A. Arulrajah, M. Disfani et al.
Portland cement is traditionally used as a binder in ground improvement projects on soft soil foundations. The use of cement in ground improvement projects, however, is fraught with both financial and environmental concerns due to its relatively high cost, the use of natural resources, and the high carbon footprint from cement production. Attempts are being made to find alternative environmentally friendly binders with a low carbon footprint using industrial by-products such as fly ash (FA) and slag (S). Using waste by-products such as FA and S to produce geopolymer binders, as novel green cementitious materials, may provide an environmentally friendly and effective ground improvement option. In this study, the effect of adding geopolymers to a soft soil was investigated for use in deep soil mixing (DSM) applications. The soil was a soft marine clay known as Coode Island Silt (CIS). Different combinations of FA and S with six combinations of sodium- and potassium-based liquid alkaline activators (L) were added to the soil to study the effects on its engineering and chemical properties. These changes were evaluated via unconfined compression strength (UCS) tests, scanning electron microscopy (SEM) imaging, and energy-dispersive X-ray spectroscopy (EDS) tests. The tests were conducted after 3, 7, 14 and 28 days of curing. Based on the results, the important role of L in strength development was studied, and the combination of 30% NaOH with 70% Na2SiO3 was found to achieve the highest strengths. Furthermore, increasing the S content was found to result in significant improvements in strength. The excellent correlation between strength and stiffness shown in the results is expected to help in the development of relationships for strength prediction of these green binders in geotechnical applications. This study shows that FA- and S-based geopolymers can be used as sustainable binders in DSM projects, with significant environmental benefits.
Page 19 of 31950