<p>The Eppley Precision Infrared Radiometer (PIR) is widely used for broadband (3.5–50 <span class="inline-formula">µ</span>m), thermal infrared wavelength measurements of the downwelling and upwelling radiation from the atmosphere and surface, respectively. The field of view of the instrument is 2<span class="inline-formula"><i>π</i></span> steradians with a receiver that has an approximate cosine response. In this paper we examine four equations suggested by the literature that have been used to transfer irradiance calibrations from our standard PIRs that are calibrated at the World Radiation Center to field units used for network operations. We first discuss various equations used to convert the resistance measurements of the thermistors to temperatures of the body and dome that are used in the derivation of incoming irradiance. We then use the four related, but distinct, equations for the transfer of the calibration from standard PIRs to field instruments. A clear choice for the preferred equation to use for calibration and transfer of calibration to field PIRs emerges from this study.</p>
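The resistance-to-temperature step can be sketched with a Steinhart–Hart-type conversion (a hedged illustration: the default coefficients below are generic 10 kΩ thermistor values, not Eppley PIR calibration constants, and the paper compares several such equations):

```python
import math

def thermistor_temperature_K(resistance_ohm, a=1.0295e-3, b=2.391e-4, c=1.568e-7):
    """Steinhart-Hart conversion: 1/T = a + b*ln(R) + c*ln(R)**3, with T in kelvin.
    The default coefficients are generic thermistor values, for illustration only;
    calibrated PIR body and dome thermistors would use their own constants."""
    ln_r = math.log(resistance_ohm)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3)
```

With these illustrative coefficients a 10 kΩ reading maps to roughly 298 K (about 25 °C), the usual reference point for such thermistors.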
From the very beginning, Quantum Mechanics has been accompanied by crucial foundational questions: the possibility of visualizing physical processes, the limits of measurement epitomized by the Heisenberg uncertainty principle, the existence of a deeper underlying reality with additional degrees of freedom, the role of measurements, and the status of locality. Long regarded as philosophical speculations, these issues were progressively reformulated into precise mathematical statements and ultimately subjected to experimental verification. The trajectory proved unpredictable: questions once dismissed as metaphysical gave rise to experimental platforms, which in turn matured into devices and technologies powering quantum computation, communication, and sensing. Yet this development is not unidirectional: advances in technology also feed back into foundations, enabling tests of principles that were previously out of reach, for example, whether quantum superposition persists at larger and larger scales and whether reality, gravity included, is fundamentally quantum. In this way, the dialogue between foundational inquiry and technological progress continues to shape both our theoretical understanding and the practical realization of quantum phenomena.
A community-based initiative in Ghana has mapped and partially excavated an earthwork site in the Oti region. Radiocarbon dating shows that the site was occupied between the fifteenth and eighteenth centuries AD, while archaeo- and ethnobotanical research connects historical plant use with modern practices, contributing to our understanding of West African earthworks.
To confirm the applicability of the theoretical method for synthesizing optimal motion control of a device for transporting small-sized cargo, the question arises of experimentally testing such control in practice. This paper describes a procedure for an experimental study of the position stabilization of a device for transporting small-sized cargo, along with methods for assessing the quality of that stabilization.
The expected outcome was experimental data verifying the quality of the developed control for 11 sets of PID controller coefficients. From these, the controller coefficients that performed best during stabilization of the device's position were then selected. Experimental data on the device's operation with minimal error relative to the theoretical data were also obtained.
The experimental study used a physical model of a two-wheeled device for transporting small-sized cargo. The quality of the device's position control was tested on eleven sets of PID controller coefficients. Arrays of experimental data on the device's operation were collected, compared with the theoretical data, and the quality of the position stabilization process was assessed.
Comparing the theoretical and experimental data yielded the maximum and root-mean-square errors of the device's tilt angle, as well as the maximum and root-mean-square errors of its tilt angular velocity. The damping decrement of the oscillations ranged from 0.25 to 2.11. Among all the solutions, the best from a practical standpoint was the following set of PID controller coefficients: proportional k1 = −2.112, integral k2 = −1.756, derivative k3 = −1.38·10⁻⁷. This result corresponds to the largest damping decrement (λ = 2.11) and supports both the effectiveness of the optimal control synthesis procedure and the completion of the experimental verification task.
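The two quantities at the center of this study, the discrete PID control law and the logarithmic damping decrement, can be sketched as follows (a minimal sketch: the plant itself is not modeled, the time step is an assumed value, and only the coefficient values come from the study):

```python
import math

def pid_step(error, state, k1, k2, k3, dt):
    """One discrete PID update: u = k1*e + k2*integral(e) + k3*de/dt.
    `state` carries the running integral and previous error between calls."""
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return k1 * error + k2 * state["integral"] + k3 * derivative

def log_decrement(peaks):
    """Average logarithmic decrement from successive oscillation peak amplitudes."""
    return sum(math.log(peaks[i] / peaks[i + 1])
               for i in range(len(peaks) - 1)) / (len(peaks) - 1)

# Coefficients selected in the study; dt = 0.01 s is an assumption for the sketch.
state = {"integral": 0.0, "prev_error": 0.0}
u = pid_step(1.0, state, k1=-2.112, k2=-1.756, k3=-1.38e-7, dt=0.01)
```

A peak sequence whose amplitude shrinks by a factor of e^2.11 per period yields the study's best decrement, λ = 2.11, from `log_decrement`.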
<p>Particulate nitrate is a major component of ambient aerosol around the world, present in inorganic form, mainly as ammonium nitrate, and also as organic nitrate. It is of increasing importance to monitor ambient particulate nitrate, a reservoir of urban nitrogen oxides that can be transported downwind and harm ecosystems. The unit-mass-resolution time-of-flight aerosol chemical speciation monitor equipped with capture vaporizer (CV-UMR-ToF-ACSM) is designed to quantitatively monitor ambient PM<span class="inline-formula"><sub>2.5</sub></span> composition. In this paper, we describe a method for separating the organic and ammonium nitrate components measured by CV-UMR-ToF-ACSM based on evaluating the <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M4" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mn mathvariant="normal">2</mn><mo>+</mo></msubsup><mspace linebreak="nobreak" width="0.125em"/><mo>/</mo><mspace linebreak="nobreak" width="0.125em"/><msup><mi mathvariant="normal">NO</mi><mo>+</mo></msup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="55pt" height="15pt" class="svg-formula" dspmath="mathimg" md5hash="553175b34927ea0e1dd52e22771ba7ff"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00004.svg" width="55pt" height="15pt" src="amt-18-3051-2025-ie00004.png"/></svg:svg></span></span> ratio (<span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M5" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mi>x</mi><mo>+</mo></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="24pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="028ac11a5d00255f5cd2bebfa53fb902"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00005.svg" width="24pt" 
height="14pt" src="amt-18-3051-2025-ie00005.png"/></svg:svg></span></span> ratio). This method includes modifying the ACSM fragmentation table, time averaging, and data filtering. By using the measured <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M6" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mi>x</mi><mo>+</mo></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="24pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="2a6a55d4caf2332280d6029154572194"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00006.svg" width="24pt" height="14pt" src="amt-18-3051-2025-ie00006.png"/></svg:svg></span></span> ratio of <span class="inline-formula">NH<sub>4</sub>NO<sub>3</sub></span> and a plausible range of <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M8" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mi>x</mi><mo>+</mo></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="24pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="eb65f99e471aaeebbcaa19a5428d6f6a"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00007.svg" width="24pt" height="14pt" src="amt-18-3051-2025-ie00007.png"/></svg:svg></span></span> ratio for organic nitrate aerosol, the measured particulate nitrate can be split into inorganic and organic fractions. Data pre-treatment filters concentrations of particulate nitrate below 0.6–2.0 <span class="inline-formula">µg m<sup>−3</sup></span>, depending on the time averaging. 
The method detection limit, when considering <span class="inline-formula">±10</span> % absolute uncertainty of organic nitrate fraction, is found to be 2 <span class="inline-formula">µg m<sup>−3</sup></span> (120 <span class="inline-formula">min</span> averaging) to 10 <span class="inline-formula">µg m<sup>−3</sup></span> (10 <span class="inline-formula">min</span> averaging) for total particulate nitrate concentration and 10 % (120 <span class="inline-formula">min</span>) to 20 % (10 <span class="inline-formula">min</span>) for organic nitrate fraction. We show that this method is able to distinguish periods with inorganic or organic nitrate as major components at a rural site in the Netherlands. A comparison to a high-resolution time-of-flight aerosol mass spectrometer equipped with a standard vaporizer (SV-HR-ToF-AMS) and positive matrix factorization (PMF) method shows similar response of increasing particulate organic nitrate fraction with uncertainties mainly from sensitivity to fragmentation table correction when obtaining the <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M17" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mn mathvariant="normal">2</mn><mo>+</mo></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="24pt" height="15pt" class="svg-formula" dspmath="mathimg" md5hash="bf0393145799d670464806da7619ecf1"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00008.svg" width="24pt" height="15pt" src="amt-18-3051-2025-ie00008.png"/></svg:svg></span></span> signal. 
We propose that researchers use this <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M18" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msubsup><mi mathvariant="normal">NO</mi><mi>x</mi><mo>+</mo></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="24pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="9729d4d8d125869e8a23e995affb01bc"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-18-3051-2025-ie00009.svg" width="24pt" height="14pt" src="amt-18-3051-2025-ie00009.png"/></svg:svg></span></span> ratio method for CV-UMR-ToF-ACSM (adapting the appropriate fragmentation table and data pre-treatment for each specific application) to quantify the particulate organic<span id="page3052"/> nitrate fraction at existing monitoring sites in order to improve understanding of nitrate formation and speciation.</p>
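The splitting step can be illustrated with the standard ratio-of-ratios formula for a two-component nitrate mixture (a sketch: the calibration values used below are placeholders, not CV-UMR-ToF-ACSM ratios, which must be determined per instrument as the abstract describes):

```python
def organic_nitrate_fraction(r_obs, r_an, r_on):
    """Fraction of particulate nitrate attributed to organic nitrate, from
    NO2+/NO+ ratios: r_obs is the measured ambient ratio, r_an the ratio for
    pure NH4NO3 (from calibration), r_on the assumed organic-nitrate ratio."""
    return (r_obs - r_an) * (1.0 + r_on) / ((r_on - r_an) * (1.0 + r_obs))
```

By construction the fraction is 0 when the measured ratio equals the NH4NO3 calibration ratio and 1 when it equals the assumed organic-nitrate ratio; measured ratios between the two endpoints give intermediate fractions.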
In this paper, we study axiomatic foundations of Bayesian persuasion, where a principal (i.e., sender) delegates the task of choice making after informing a biased agent (i.e., receiver) about the payoff relevant uncertain state (see, e.g., Kamenica and Gentzkow (2011)). Our characterizations involve novel models of Bayesian persuasion, where the principal can steer the agent's bias after acquiring costly information. Importantly, we provide an elicitation method using only observable menu-choice data of the principal, which shows how to construct the principal's subjective costs of acquiring information even when he anticipates managing the agent's bias.
We propose two quantum experiments (modified Bell tests) that could detect contextual hidden variables underlying quantum mechanics. The experiments are inspired by hydrodynamic pilot-wave systems that mimic a wide range of quantum effects and exhibit a classical analog of contextuality. To justify the experiments, we show that contextual hidden variables are 'physics as usual' if a unification between quantum mechanics and general relativity is possible. Accordingly, contextual theories can bypass Bell's theorem in a way that is both local and non-conspiratorial. We end with a note on the relevance of exploratory experiments in the foundations of quantum physics.
<p>Temperature and water vapor profiles are essential to climate change studies and weather forecasting. Hyperspectral instruments are of great value for retrieving temperature and water vapor profiles, enabling accurate monitoring of their changes. Successful retrievals of temperature and water vapor profiles require accuracy of hyperspectral radiometer measurements. In this study, the radiometric accuracy of an airborne hyperspectral microwave radiometer, the High Spectral Resolution Airborne Microwave Sounder (HiSRAMS), and a ground-based hyperspectral infrared radiometer, the Atmospheric Emitted Radiance Interferometer (AERI), is simultaneously assessed by performing radiative closure tests under clear-sky conditions in Ottawa, Canada. As an airborne instrument, HiSRAMS has two radiometers measuring radiance in the oxygen band (49.6–58.3 GHz) and water vapor band (175.9–184.6 GHz) for zenith-pointing and nadir-pointing observations. AERI provides ground-based, zenith-pointing radiance measurements between 520 and 1800 cm<span class="inline-formula"><sup>−1</sup></span>. A systematic warm radiance bias is present in AERI observations in the window band. Upon removal of this bias, improved radiative closure was attained in the window band. The brightness temperature (BT) bias in nadir-pointing HiSRAMS observations is smaller than at the zenith. A novel but straightforward method is developed to diagnose the radiometric accuracy of the two instruments in comparison based on the relationship between radiometric bias and optical depth. Compared to AERI, HiSRAMS demonstrates similar radiometric accuracy for nadir-pointing measurements but exhibits relatively poor accuracy for zenith-pointing measurements, which requires further characterization. Future work on temperature and water vapor concentration retrievals using HiSRAMS and AERI is warranted.</p>
<p>We discuss robust estimations for the variance of normally distributed random variables in the presence of interference. The robust estimators are based on either ranking or the geometric mean. For the interference models used, estimators based on the geometric mean outperform the rank-based ones in both mitigating the effect of interference and reducing the statistical error when there is no interference. One reason for this is that estimators using the geometric mean do not suffer from the “heavy tail” phenomenon like the rank-based estimators do. The ratio of the standard deviation over the mean of the power random variable is sensitive to interference. It can thus be used as a criterion to combine the sample mean with a robust estimator to form a hybrid estimator. We apply the estimators to the Arecibo incoherent scatter radar signals to determine the total power and Doppler velocities in the ionospheric E-region altitudes. Although all the robust estimators selected deal with light contamination well, the hybrid estimator is most effective in all circumstances. It performs well in suppressing heavy contamination and is as efficient as the sample mean in reducing the statistical error. Accurate incoherent scatter radar measurements, especially at nighttime and at E-region altitudes, can improve studies of ionospheric dynamics and composition.</p>
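A minimal sketch of the hybrid idea, assuming exponentially distributed power for the clean signal (for that distribution the std/mean ratio is approximately 1 and the exp(γ) factor corrects the geometric mean's bias; the switching threshold below is illustrative, not the paper's tuned criterion):

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def geometric_mean_power(samples):
    """Geometric-mean-based estimator of the mean power; exp(EULER_GAMMA)
    corrects the bias of the log-mean for exponentially distributed power."""
    return math.exp(sum(math.log(s) for s in samples) / len(samples) + EULER_GAMMA)

def hybrid_power(samples, threshold=1.5):
    """Hybrid estimator: use the statistically efficient sample mean unless the
    std/mean ratio (near 1 for clean exponential power) signals interference,
    in which case fall back to the robust geometric-mean estimator."""
    m = statistics.fmean(samples)
    if statistics.pstdev(samples) / m > threshold:
        return geometric_mean_power(samples)
    return m
```

A strong outlier inflates the std/mean ratio, trips the criterion, and makes the hybrid return a value well below the contaminated sample mean.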
There has been a surge of recent interest in the Wigner's Friend paradox, sparking several novel thought experiments and no-go theorems. The main narrative has been that Wigner's Friend highlights a counterintuitive feature that is unique to quantum theory, and which is closely related to the quantum measurement problem. Here, we challenge this view. We argue that the gist of the Wigner's Friend paradox can be reproduced without assuming quantum physics, and that it underlies a much broader class of enigmas in the foundations of physics and philosophy. To show this, we first consider several recently proposed Extended Wigner's Friend scenarios, and demonstrate that some of their implications for the absoluteness of observations can be reproduced by classical thought experiments that involve the duplication of agents. Crucially, some of these classical scenarios are technologically much easier to implement than their quantum counterparts. Then, we argue that the essential structural ingredient of all these scenarios is a feature that we call "Restriction A": essentially, that a physical theory cannot give us a probabilistic description of the observations of all agents. Finally, we argue that this difficulty is at the core of other puzzles in the foundations of physics and philosophy, and demonstrate this explicitly for cosmology's Boltzmann brain problem. Our analysis suggests that Wigner's Friend should be studied in a larger context, addressing a frontier of human knowledge beyond quantum foundations: to obtain reliable predictions for experiments in which these predictions can be privately but not intersubjectively verified.
The rapid advancement of foundation models (large-scale neural networks trained on diverse, extensive datasets) has revolutionized artificial intelligence, enabling unprecedented progress across domains such as natural language processing, computer vision, and scientific discovery. However, the substantial parameter count of these models, often reaching billions or trillions, poses significant challenges in adapting them to specific downstream tasks. Low-Rank Adaptation (LoRA) has emerged as a highly promising approach for mitigating these challenges, offering a parameter-efficient mechanism to fine-tune foundation models with minimal computational overhead. This survey provides the first comprehensive review of LoRA techniques beyond large language models, extending to general foundation models and covering the technical foundations, emerging frontiers, and applications of low-rank adaptation across multiple domains. Finally, it discusses key challenges and future research directions in theoretical understanding, scalability, and robustness, serving as a valuable resource for researchers and practitioners working on efficient foundation model adaptation.
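The low-rank update at the heart of LoRA can be sketched in a few lines (a minimal NumPy illustration with arbitrary dimensions; real implementations attach such updates to individual layers of a deep network and train only the two small factors):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass with a frozen weight W plus the low-rank update alpha * (B @ A).
    A has shape (r, d_in), B has shape (d_out, r), with rank r << min(d_in, d_out),
    so only r * (d_in + d_out) parameters are trained instead of d_in * d_out."""
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 2
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
x = rng.normal(size=(4, d_in))
# With B initialized to zero, the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Zero-initializing one factor is the standard trick that makes fine-tuning start from the unmodified pretrained behavior.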
Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for impact in many scientific areas with practical benefits, such as in supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. By synthesizing a range of theoretical frameworks and application domains, this thesis aims to advance the machine learning foundations of multisensory AI. In the first part, we present a theoretical framework formalizing how modalities interact with each other to give rise to new information for a task. These interactions are the basic building blocks in all multimodal problems, and their quantification enables users to understand their multimodal datasets, design principled approaches to learn these interactions, and analyze whether their model has succeeded in learning. In the second part, we study the design of practical multimodal foundation models that generalize over many modalities and tasks, which presents a step toward grounding large language models to real-world sensory modalities. We introduce MultiBench, a unified large-scale benchmark across a wide range of modalities, tasks, and research areas, followed by the cross-modal attention and multimodal transformer architectures that now underpin many of today's multimodal foundation models. Scaling these architectures on MultiBench enables the creation of general-purpose multisensory AI systems, and we discuss our collaborative efforts in applying these models for real-world impact in affective computing, mental health, cancer prognosis, and robotics. Finally, we conclude this thesis by discussing how future work can leverage these ideas toward more general, interactive, and safe multisensory AI.
I have argued elsewhere that second order logic provides a foundation for mathematics much in the same way as set theory does, despite the fact that the former is second order and the latter first order, but second order logic is marred by reliance on ad hoc {\em large domain assumptions}. In this paper I argue that sort logic, a powerful extension of second order logic, provides a foundation for mathematics without any ad hoc large domain assumptions. The large domain assumptions are replaced by ZFC-like axioms. Despite this resemblance to set theory sort logic retains the structuralist approach to mathematics characteristic of second order logic. As a model-theoretic logic sort logic is the strongest logic. In fact, every model class definable in set theory is the class of models of a sentence of sort logic. Because of its strength sort logic can be used to formulate particularly strong reflection principles in set theory.
X. Calbet, C. Carbajal Henken, S. DeSouza-Machado et al.
<p>Water vapor concentration structures in the atmosphere are well approximated horizontally by Gaussian random fields at small scales (<span class="inline-formula">≲6</span> km). These Gaussian random fields have a spatial correlation in accordance with a structure function with a two-thirds slope, following the corresponding law from Kolmogorov's theory of turbulence. This is proven by showing that the horizontal structure functions measured by several satellite instruments and radiosonde measurements do indeed follow the two-thirds law. High-spatial-resolution retrievals of total column water vapor (TCWV) obtained from the Ocean and Land Color Instrument (OLCI) on board the Sentinel-3 series of satellites also qualitatively show a Gaussian random field structure.</p>
<p>As a consequence, the atmosphere has an inherently stochastic component associated with the horizontal small-scale water vapor features, which, in turn, can make deterministic forecasting or nowcasting difficult. These results can be useful in areas where high-resolution modeling of water vapor is required, such as the estimation of the water vapor variance within a region or when searching for consistency between different water vapor measurements in neighboring locations. In terms of weather forecasting or nowcasting, the water vapor horizontal variability could be important in estimating the uncertainty of the atmospheric processes driving convection.</p>
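The second-order structure function underlying the two-thirds law can be sketched as follows (a minimal sketch: lags are in grid units along one horizontal axis, whereas a real analysis would bin by physical separation and fit the slope on log-log axes):

```python
import numpy as np

def structure_function(field, max_lag):
    """Second-order structure function D(r) = <(f(x + r) - f(x))^2>,
    averaged along the second axis of a 2-D field, for integer lags 1..max_lag."""
    return np.array([np.mean((field[:, lag:] - field[:, :-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])
```

For Kolmogorov-type turbulence the abstract's result corresponds to D(r) ∝ r^(2/3), i.e. a two-thirds slope of log D(r) against log r at small separations.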
In this article, we analyse how decentralised digital infrastructures can bring about a fundamental change in the structure and dynamics of organisations. The works of R. H. Coase and M. Olson, on the nature of the firm and the logic of collective action, respectively, are revisited in light of these emerging digital foundations. We also analyse how these technologies can affect fundamental assumptions about the role of organisations (whether private or public) as mechanisms for the coordination of labour. We propose that these technologies can fundamentally affect: (i) the distribution of rewards within an organisation and (ii) the structure of its transaction costs. These changes bring the potential to address some of the trade-offs between the private and public sectors.
Both metamathematics and physics are posited to emerge from samplings by observers of the unique ruliad structure that corresponds to the entangled limit of all possible computations. The possibility of higher-level mathematics accessible to humans is posited to be the analog for mathematical observers of the perception of physical space for physical observers. A physicalized analysis is given of the bulk limit of traditional axiomatic approaches to the foundations of mathematics, together with explicit empirical metamathematics of some examples of formalized mathematics. General physicalized laws of mathematics are discussed, associated with concepts such as metamathematical motion, inevitable dualities, proof topology and metamathematical singularities. It is argued that mathematics as currently practiced can be viewed as derived from the ruliad in a direct Platonic fashion analogous to our experience of the physical world, and that axiomatic formulation, while often convenient, does not capture the ultimate character of mathematics. Among the implications of this view is that only certain collections of axioms may be consistent with inevitable features of human mathematical observers. A discussion is included of historical and philosophical connections, as well as of foundational implications for the future of mathematics.
The recently proposed CP language adopts Compositional Programming: a new modular programming style that solves challenging problems such as the Expression Problem. CP is implemented on top of a polymorphic core language with disjoint intersection types called Fi+. The semantics of Fi+ employs an elaboration to a target language and relies on a sophisticated proof technique to prove the coherence of the elaboration. Unfortunately, the proof technique is technically challenging and hard to scale to many common features, including recursion or impredicative polymorphism. Thus, the original formulation of Fi+ does not support the latter two features, which creates a gap between theory and practice, since CP fundamentally relies on them. This paper presents a new formulation of Fi+ based on a type-directed operational semantics (TDOS). The TDOS approach was recently proposed to model the semantics of languages with disjoint intersection types (but without polymorphism). Our work shows that the TDOS approach can be extended to languages with disjoint polymorphism and model the full Fi+ calculus. Unlike the elaboration semantics, which gives the semantics to Fi+ indirectly via a target language, the TDOS approach gives a semantics to Fi+ directly. With a TDOS, there is no need for a coherence proof. Instead, we can simply prove that the semantics is deterministic. The proof of determinism only uses simple reasoning techniques, such as straightforward induction, and is able to handle problematic features such as recursion and impredicative polymorphism. This removes the gap between theory and practice and validates the original proofs of correctness for CP. We formalized the TDOS variant of the Fi+ calculus and all its proofs in the Coq proof assistant.
<p>Hurricane Matthew (2016) was observed by the ground-based polarimetric Next Generation Weather Radar (NEXRAD) in Miami (KAMX) and the National Oceanic and Atmospheric Administration WP-3D (NOAA P-3) airborne tail Doppler radar near the coast of the southeastern United States for several hours, providing a novel opportunity to evaluate and compare single- and multiple-Doppler wind retrieval techniques for tropical cyclone flows. The generalized velocity track display (GVTD) technique can retrieve a subset of the wind field from a single ground-based Doppler radar under the assumption of nearly axisymmetric rotational wind, but it has been shown to have errors from the aliasing of unresolved wind components. An improved technique that mitigates errors due to storm motion is derived in this study, although some spatial aliasing remains due to limited information content from the single-Doppler measurements. A spline-based variational wind retrieval technique called SAMURAI can retrieve the full three-dimensional wind field from airborne radar fore–aft pseudo-dual-Doppler scanning, but it has been shown to have errors due to temporal aliasing from the nonsimultaneous Doppler measurements. A comparison between the two techniques shows that the axisymmetric tangential winds are generally comparable between the two techniques, and the improved GVTD technique improves the accuracy of the retrieval. Fourier decomposition of asymmetric kinematic and convective structure shows more discrepancies due to spatial and temporal aliasing in the retrievals. The strengths and weaknesses of each technique for studying tropical cyclone structure are discussed and suggest that complementary information can be retrieved from both single- and dual-Doppler retrievals. 
Future improvements to the asymmetric flow assumptions in single-Doppler analysis and steady-state assumptions in pseudo-dual-Doppler analysis are required to reconcile differences in retrieved tropical cyclone structure.</p>
<p>Atmospheric aerosols have been known to be a major source of uncertainties in <span class="inline-formula">CO<sub>2</sub></span> concentrations retrieved from space. In this study, we investigate the added value of multi-angle polarimeter (MAP) measurements in the context of the Copernicus Anthropogenic Carbon Dioxide Monitoring (CO2M) mission. To this end, we compare aerosol-induced <span class="inline-formula">XCO<sub>2</sub></span> errors from standard retrievals using a spectrometer only (without MAP) with those from retrievals using both MAP and a spectrometer. MAP observations are expected to provide information about aerosols that is useful for improving <span class="inline-formula">XCO<sub>2</sub></span> accuracy. For the purpose of this work, we generate synthetic measurements for different atmospheric and geophysical scenes over land, based on which <span class="inline-formula">XCO<sub>2</sub></span> retrieval errors are assessed. We show that the standard <span class="inline-formula">XCO<sub>2</sub></span> retrieval approach that makes no use of auxiliary aerosol observations returns <span class="inline-formula">XCO<sub>2</sub></span> errors with an overall bias of 1.12 <span class="inline-formula">ppm</span> and a spread (defined as half of the 15.9–84.1 percentile range) of 2.07 <span class="inline-formula">ppm</span>. The latter is far higher than the required <span class="inline-formula">XCO<sub>2</sub></span> accuracy (0.5 <span class="inline-formula">ppm</span>) and precision (0.7 <span class="inline-formula">ppm</span>) of the CO2M mission. 
Moreover, these <span class="inline-formula">XCO<sub>2</sub></span> errors exhibit a significantly larger bias and scatter at high aerosol optical depth, high aerosol altitude, and low solar zenith angle, which could lead to worse performance in retrieving <span class="inline-formula">XCO<sub>2</sub></span> from polluted areas where <span class="inline-formula">CO<sub>2</sub></span> and aerosols are co-emitted. We proceed to determine MAP instrument specifications in terms of wavelength range, number of viewing angles, and measurement uncertainties that are required to achieve <span class="inline-formula">XCO<sub>2</sub></span> accuracy and precision targets of the mission. Two different MAP instrument concepts are considered in this analysis. We find that for either concept, MAP measurement uncertainties on radiance and degree of linear polarization should be no more than 3 % and 0.003, respectively. A retrieval exercise using MAP and spectrometer measurements of the synthetic scenes is carried out for each of the two MAP concepts. The resulting <span class="inline-formula">XCO<sub>2</sub></span> errors have an overall bias of <span class="inline-formula">−0.004</span> <span class="inline-formula">ppm</span> and a spread of 0.54 <span class="inline-formula">ppm</span> for one concept, and a bias of 0.02 <span class="inline-formula">ppm</span> and a spread of 0.52 <span class="inline-formula">ppm</span> for the other concept. Both are compliant with the CO2M mission requirements; the very low bias is especially important for proper emission estimates. For the test ensemble, we find effectively no dependence of the <span class="inline-formula">XCO<sub>2</sub></span> errors on aerosol optical depth, altitude of the aerosol layer, and solar zenith angle. 
These results indicate a major improvement in the retrieved <span class="inline-formula">XCO<sub>2</sub></span> accuracy with respect to the standard retrieval approach, which could lead to a higher data yield, better global coverage, and a more comprehensive determination of <span class="inline-formula">CO<sub>2</sub></span> sinks and sources. As such, this outcome underlines the contribution of, and therefore the need for, a MAP instrument aboard the CO2M mission.</p>
The linear (Winkler) foundation is a simple model widely used for decades to account for the surface response of elastic bodies. It models the response as purely local, linear, and perpendicular to the surface. We extend this model to the case where the foundation is made of a structured material such as a polymer network, which has characteristic scales of length and time. We use the two-fluid model of viscoelastic structured materials to treat a film of finite thickness, supported on a rigid solid and subjected to a concentrated normal force at its free surface. We obtain the foundation modulus (Winkler constant) as a function of the film's thickness, intrinsic correlation length, and viscoelastic moduli, for three choices of boundary conditions. The results can be used to readily extend earlier applications of the Winkler model to more complex, microstructured substrates. They also provide a way to extract the intrinsic properties of such complex materials from mechanical surface measurements.
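Schematically, the Winkler relation that this work generalizes can be written as follows (the symbols here are our notation: K for the foundation modulus, w for the normal surface displacement, and h, ξ, G(ω) for the film thickness, intrinsic correlation length, and viscoelastic modulus on which the extended K is shown to depend):

```latex
% Local (Winkler) response: surface pressure proportional to normal displacement,
% with a foundation modulus that depends on the film's structure and rheology.
p(\mathbf{x}, t) = K\, w(\mathbf{x}, t),
\qquad
K = K\big(h,\ \xi,\ G(\omega)\big)
```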