Esraa Elelimy, David Szepesvari, Martha White, et al.
In the traditional view of reinforcement learning, the agent's goal is to find an optimal policy that maximizes its expected sum of rewards. Once the agent finds this policy, the learning ends. This view contrasts with \emph{continual reinforcement learning}, where learning does not end, and agents are expected to continually learn and adapt indefinitely. Despite the clear distinction between these two paradigms of learning, much of the progress in continual reinforcement learning has been shaped by foundations rooted in the traditional view of reinforcement learning. In this paper, we first examine whether the foundations of traditional reinforcement learning are suitable for the continual reinforcement learning paradigm. We identify four key pillars of the traditional reinforcement learning foundations that are antithetical to the goals of continual learning: the Markov decision process formalism, the focus on atemporal artifacts, the expected sum of rewards as an evaluation metric, and episodic benchmark environments that embrace the other three foundations. We then propose a new formalism that sheds the first and the third foundations and replaces them with the history process as a mathematical formalism and a new definition of deviation regret, adapted for continual learning, as an evaluation metric. Finally, we discuss possible approaches to shed the other two foundations.
<p>A dilute plasma is continuously maintained in the troposphere by ionizing particle radiation from galactic cosmic rays and radon decay. Small ions in the 1–2 nm size range play an important role in atmospheric processes such as ion-induced nucleation of aerosol particles. Consequently, there is a need for precise and robust instruments to measure small ions both for atmospheric observations and for laboratory experiments that simulate the atmosphere. Here, we describe the design and performance of the Cluster Ion Counter (CIC, Airel OÜ), which measures the number concentrations of positively and negatively charged ions and particles below 5 nm mobility diameter simultaneously, with low noise and fast time response. The CIC is primarily designed as a robust, low-maintenance instrument prioritizing ease of operation and broad applicability, including laboratory experiments; long-term unattended field measurements; and mobile, airborne, and battery-powered setups. The main application of the device is to study the temporal development of total cluster ion concentrations while also providing some information about the ion mobility distribution. The detection efficiency is above 80 % for ions and charged particles between 1.2 and 2.0 nm and above 90 % between 2.0 and 3.0 nm. The ion concentrations measured by the CIC agree well with reference instruments. The noise level (1 <span class="inline-formula"><i>σ</i></span> of background measurements) is typically between 20 and 30 ions cm<span class="inline-formula"><sup>−3</sup></span> at 1 Hz sampling rate and an airflow rate of 7 L min<span class="inline-formula"><sup>−1</sup></span> per analyzer. The noise level improves when higher flow rates and longer sampling periods are used. The CIC responds rapidly, with 1 s time resolution, to pulses of ionization produced in the CLOUD chamber by a CERN particle beam.</p>
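The quoted noise figures scale in a simple way with flow rate and averaging time. The back-of-envelope sketch below illustrates that scaling for an electrometer-based ion counter; the 0.5 fA current-noise figure, singly charged ions, full collection efficiency, and white-noise 1/sqrt(t) averaging are all assumptions for illustration, not specifications of the CIC.

```python
# Back-of-envelope sketch: mapping electrometer current noise to ion
# concentration noise for a gas-ion analyzer. The numbers and scaling
# assumptions (singly charged ions, full collection, white noise that
# averages down as 1/sqrt(t)) are illustrative, not CIC specifications.
E = 1.602e-19  # elementary charge (C)

def conc_noise(i_noise_fA, flow_lpm, t_avg_s):
    """1-sigma concentration noise (ions cm^-3) for a given current
    noise (fA), sample flow (L min^-1), and averaging time (s)."""
    q_cm3_s = flow_lpm * 1000.0 / 60.0           # L/min -> cm^3/s
    n_1s = (i_noise_fA * 1e-15) / (E * q_cm3_s)  # noise at 1 s averaging
    return n_1s / t_avg_s ** 0.5                 # white-noise averaging

# with ~0.5 fA current noise at 7 L/min, the 1 Hz concentration noise
# lands in the reported 20-30 ions cm^-3 range
noise_1hz = conc_noise(0.5, 7.0, 1.0)
```

Under these assumptions, doubling the flow rate or quadrupling the averaging time each halve the noise, consistent with the qualitative behavior reported above.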
This article was motivated by the discovery of a potential new foundation for mainstream mathematics. The goals are to clarify the relationships between primitives, foundations, and deductive practice; to understand how to determine what is, or isn't, a foundation; and to get clues as to how a foundation can be optimized for effective human use. For this we turn to the history and professional practice of the subject. We have no aspirations to Philosophy. The first section gives a short abstract discussion, focusing on the significance of consistency. The next briefly describes foundations, explicit and implicit, at a few key periods in mathematical history. We see, for example, that at the primitive level human intuitions are essential, but can be problematic. We also see that traditional axiomatic set theories, Zermelo-Fraenkel with Choice (ZFC) in particular, are not quite consistent with mainstream practice. The final section sketches the proposed new foundation and gives the basic argument that it is uniquely qualified to be considered \emph{the} foundation of mainstream deductive mathematics. The ``coherent limit axiom'' characterizes the new theory among ZFC-like theories. This axiom plays a role in recursion, but it is implicitly assumed in mainstream work and so does not provide new leverage there. In principle it should settle set-theory questions such as the continuum hypothesis.
Earthwork-related locations (ERLs), such as construction sites, earth dumping grounds, and concrete mixing stations, are major sources of urban dust pollution (particulate matter). Effective management of ERLs is crucial and requires timely and efficient tracking of these locations throughout the city. This work aims to identify and classify urban ERLs using GPS trajectory data from over 16,000 construction waste hauling trucks (CWHTs), together with 58 urban features spanning geographic, land cover, POI, and transport dimensions. We compare several machine learning models and examine the impact of various spatiotemporal features on classification performance using real-world data from Chengdu, China. The results demonstrate that 77.8% classification accuracy can be achieved with a limited number of features. This classification framework was implemented in the Alpha MAPS system in Chengdu, which in December 2023 identified 724 construction sites/earth dumping grounds, 48 concrete mixing stations, and 80 truck parking locations in the city, enabling local authorities to manage urban dust pollution effectively at low personnel cost.
The Stratified Foundations are a restriction of naive set theory in which the comprehension scheme is restricted to stratifiable propositions. It is known that this theory is consistent and that its proofs strongly normalize. Deduction modulo is a formulation of first-order logic with a general notion of cut. It is known that proofs normalize in a theory modulo if it has some kind of many-valued model called a pre-model. We show in this paper that the Stratified Foundations can be presented in deduction modulo and that the method used in the original normalization proof can be adapted to construct a pre-model for this theory.
<p>Radar Doppler spectra observations provide a wealth of information about cloud and precipitation microphysics and dynamics. The interpretation
of these measurements depends on our ability to simulate these observations
accurately using a forward model. The effect of small-scale turbulence on the radar Doppler spectra shape has been traditionally treated by implementing the convolution process on the hydrometeor reflectivity spectrum and environmental turbulence. This approach assumes that all the particles in the radar sampling volume respond the same to turbulent-scale velocity fluctuations and neglects the particle inertial effect. Here, we investigate the inertial effects of liquid-phase particles on the forward modeled radar Doppler spectra. A physics-based simulation (PBS) is developed to demonstrate that big droplets, with large inertia, are unable to follow the rapid change of the velocity field in a turbulent environment. These findings are incorporated into a new radar Doppler spectra simulator. Comparison between the traditional and newly formulated radar Doppler spectra simulators indicates that the conventional simulator leads to an unrealistic broadening of the spectrum, especially in a strong turbulent environment. This study provides clear evidence to illustrate the droplet inertial effect on radar Doppler spectrum and develops a physics-based simulator framework to accurately emulate the Doppler spectrum for a given droplet size distribution (DSD) in a turbulence field. The proposed simulator has various potential applications for the cloud and precipitation studies, and it provides a valuable tool to decode the cloud microphysical and dynamical properties from Doppler radar observation.</p>
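The conventional convolution treatment criticized above can be sketched as follows. The velocity grid, the toy quiet-air spectrum, and the turbulence width are illustrative assumptions; the point is that a single Gaussian kernel smears the entire spectrum, i.e. every droplet is assumed to respond identically to turbulent velocity fluctuations.

```python
import numpy as np

def broaden_spectrum(v, s_quiet, sigma_t):
    """Convolve a quiet-air Doppler spectrum with a Gaussian turbulence
    kernel of spectral width sigma_t (m/s). This is the traditional
    treatment: one kernel for all particles, no inertial effects."""
    kernel = np.exp(-0.5 * ((v - v.mean()) / sigma_t) ** 2)
    kernel /= kernel.sum()  # unit area, so total spectral power is conserved
    return np.convolve(s_quiet, kernel, mode="same")

# toy quiet-air spectrum: a narrow peak at a 2 m/s fall velocity
v = np.linspace(-5.0, 10.0, 301)
s = np.exp(-0.5 * ((v - 2.0) / 0.1) ** 2)
# strong turbulence (sigma_t = 1 m/s) broadens the spectrum markedly
s_broad = broaden_spectrum(v, s, sigma_t=1.0)
```

In this formulation the broadening grows without bound as the turbulence intensity increases, which is exactly the unrealistic behavior for large, inertia-dominated droplets that the proposed physics-based simulator corrects.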
Conceptual modeling is a strongly interdisciplinary field of research. Although numerous proposals for axiomatic foundations of the main ideas of the field exist, there is still a lack of understanding of core concepts such as system, process, event, and data. Against the background of the tremendously growing importance of digital phenomena, we argue that axiomatic foundations are needed for our discipline. Beyond this general call, we provide a particular case study using HERAKLIT. This modeling infrastructure encompasses the architecture, statics, and dynamics of computer-integrated systems. The case study illustrates what axiomatically well-founded conceptual models may look like in practice. We argue that axiomatic foundations have positive effects not only for theoretical research but also for empirical research because, for instance, assumed axioms can be tested explicitly. It is now time to spark the discussion on axiomatic foundations of our field.
S. Martínez-Alonso, M. N. Deeter, B. C. Baier, et al.
<p>AirCore in situ vertical profiles sample the atmosphere from near the surface to the lower stratosphere, making them ideal for the validation of satellite tropospheric trace gas data. Here we present intercomparison results of AirCore carbon monoxide (CO) measurements with respect to retrievals from MOPITT (Measurements of Pollution In The Troposphere; version 8) and TROPOMI (TROPOspheric Monitoring Instrument), on board the NASA Terra and ESA Sentinel-5 Precursor satellites, respectively. Mean MOPITT/AirCore total column bias values and their standard deviation (0.4 <span class="inline-formula">±</span> 5.5, 1.7 <span class="inline-formula">±</span> 5.6, and 0.7 <span class="inline-formula">±</span> 6.0 for MOPITT thermal-infrared, near-infrared, and multispectral retrievals, respectively; all in %)
are similar to results obtained in MOPITT/NOAA aircraft flask data comparisons from this study and from previous validation efforts. MOPITT CO retrievals are systematically validated using in situ vertical profiles from a variety of aircraft campaigns. Because most aircraft vertical profiles do not sample the troposphere's entire vertical extent, they must be extended upwards in order to be usable in validation. Here we quantify, for the first time, the error introduced in MOPITT CO validation by the use of such upward-extended aircraft vertical profiles, by comparing validation results of MOPITT with respect to full and truncated AirCore CO vertical profiles. Our results indicate that the error is small, affects mostly upper tropospheric retrievals (at 300 hPa: <span class="inline-formula">∼</span> 2.6, 0.8, and 3.2 percentage points for MOPITT thermal-infrared, near-infrared, and multispectral, respectively), and may have resulted in the overestimation of MOPITT retrieval biases in that region. TROPOMI can retrieve CO under both clear and cloudy conditions. The latter is achieved by quantifying interfering trace gases and parameters describing the cloud contamination of the measurements together with the CO column; then, the reference CO profiles used in the retrieval are scaled based on estimated above-cloud CO rather than on estimated total CO. We use AirCore measurements as the reference to evaluate the error introduced by this approach in cloudy TROPOMI retrievals over land after accounting for TROPOMI's vertical sensitivity to CO (relative bias and its standard deviation <span class="inline-formula">=</span> 2.02 % <span class="inline-formula">±</span> 11.13 %). We also quantify the null-space error, which accounts for differences between the shape of TROPOMI reference profiles and that of AirCore measured profiles (for TROPOMI cloudy <span class="inline-formula"><i>e</i><sub>null</sub>=0.98</span> % <span class="inline-formula">±</span> 2.32 %).</p>
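Comparisons of this kind must account for the satellite's vertical sensitivity before in situ and retrieved profiles are differenced. A minimal sketch of the standard averaging-kernel smoothing follows; the 4-level profile and kernel choices are invented for illustration (MOPITT, for instance, actually applies its kernels to log10 volume mixing ratios).

```python
import numpy as np

def smooth_profile(x_true, x_prior, A):
    """Simulate what a retrieval would report for an in situ profile
    x_true, given the retrieval prior x_prior and averaging-kernel
    matrix A (standard smoothing: x = x_a + A (x_true - x_a))."""
    return x_prior + A @ (x_true - x_prior)

# toy 4-level CO profile (ppbv): with no sensitivity (A = 0) the
# retrieval simply returns the prior; with perfect sensitivity (A = I)
# it returns the in situ profile unchanged
x_true = np.array([100.0, 90.0, 70.0, 50.0])
x_prior = np.array([80.0, 80.0, 80.0, 80.0])
no_sens = smooth_profile(x_true, x_prior, np.zeros((4, 4)))
full_sens = smooth_profile(x_true, x_prior, np.eye(4))
```

The same operation underlies the truncation experiment above: truncating the in situ profile and extending it upwards changes x_true only where the kernel rows still carry weight, which is why the resulting error concentrates in the upper troposphere.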
In their natural state, thixotropic phenomena in soils most often arise under external loads of a dynamic nature: earthquakes, vibration from moving vehicles, and from operating machines and mechanisms. Studies of soil thixotropy indicate that, under dynamic loading, dispersed soils of various types, genesis, and age can lose strength and pass into a liquid state. The effect of dynamic loading is especially significant in the case of weakly compacted, water-saturated sandy and clayey soils.
Dynamic loading can cause either compaction of the soil, with a corresponding increase in its strength, or a loss of strength. In practice, one more often encounters strength loss and liquefaction of soils under dynamic action. The processes occurring in water-saturated sands are the best studied. The liquefied state under vibration arises from the destruction of contacts between individual grains, and the strength of the sandy soil drops to practically zero. After the vibration load is removed, the sand grains move downward under their own weight, compressing the liquid phase. The more fine-dispersed particles and organic matter the sands contain, the longer they remain in a liquid-like state and the more slowly the water is expelled and the soil compacts. The potential for liquefaction of sandy soils is determined not so much by natural porosity as by the stress state of the soil and the character of the vibration action.
<p>For the past 17 years, the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) instrument on the Canadian SCISAT satellite
has been measuring profiles of atmospheric ozone. The latest operational
versions of the level 2 ozone data are versions 3.6 and 4.1. This study
characterizes how both products compare with correlative data from other
limb-sounding satellite instruments, namely MAESTRO, MLS, OSIRIS, SABER, and
SMR. In general, v3.6, with respect to the other instruments, exhibits a
smaller bias (which is on the order of <span class="inline-formula">∼</span> 3 %) in the middle
stratosphere than v4.1 (<span class="inline-formula">∼</span> 2 %–9 %); however, the bias exhibited
in the v4.1 data tends to be more stable, i.e. not changing significantly
over time in any altitude region. In the lower stratosphere, v3.6 has a
positive bias of about 3 %–5 % that is stable to within
<span class="inline-formula">±</span>1 % per decade, and v4.1 has a bias on the order of <span class="inline-formula">−</span>1 % to <span class="inline-formula">+</span>5 % and is also stable to within <span class="inline-formula">±</span>1 % per decade. In the middle stratosphere, v3.6 has a positive bias of <span class="inline-formula">∼</span> 3 % with a significant negative drift on the order of 0.5 %–2.5 % per decade, and v4.1 has a positive bias of 2 %–9 % that is stable to within <span class="inline-formula">±</span>0.5 % per decade. In the upper stratosphere, v3.6 has a positive bias that increases with altitude up to <span class="inline-formula">∼</span> 16 % and a significant negative drift on the order of 2 %–3 % per decade, and v4.1 has a positive bias that increases with altitude up to <span class="inline-formula">∼</span> 15 % and is stable to within <span class="inline-formula">±</span>1 % per decade. Estimates indicate that both versions 3.6 and 4.1 have precision values on the order of 0.1–0.2 ppmv below 20 km and above 45 km (<span class="inline-formula">∼</span> 5 %–10 %, depending on altitude). Between 20 and 45 km, the
estimated v3.6 precision of <span class="inline-formula">∼</span> 4 %–6 % is better than the
estimated v4.1 precision of <span class="inline-formula">∼</span> 6 %–10 %.</p>
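Bias and drift numbers of the kind quoted above are commonly obtained by regressing relative differences against time, with the slope converted to percent per decade. The sketch below illustrates that approach; the synthetic difference series is invented for the example and is not ACE-FTS data, and the exact regression method of the study may differ.

```python
import numpy as np

def bias_and_drift(t_years, rel_diff_pct):
    """Fit relative differences (%) against time with a straight line:
    the intercept at the mean epoch is the mean bias, and the slope
    times 10 is the drift in % per decade."""
    t0 = t_years - t_years.mean()            # center time axis
    slope, intercept = np.polyfit(t0, rel_diff_pct, 1)
    return intercept, slope * 10.0

# synthetic comparison series: 3 % bias with a -1 % per decade drift
# plus measurement noise
rng = np.random.default_rng(0)
t = np.linspace(2004.0, 2020.0, 200)
d = 3.0 - 0.1 * (t - t.mean()) + rng.normal(0.0, 0.5, t.size)
bias, drift = bias_and_drift(t, d)
```

A drift is then called "stable to within ±1 % per decade" when the fitted slope (with its uncertainty) stays inside that envelope.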
<p>Raindrop size distribution (DSD) observations during the passage of landfalling tropical cyclone Nivar by impact (JWD) and laser (LPM and PARSIVEL) disdrometers are used to unveil the DSD characteristics in the eyewall as well as the inner and outer rainbands. Disdrometer measurements collected at the same location are used to study the effect of wind, measuring principle, and hardware processing on the DSDs and, in turn, on estimated rain integral and polarimetric parameters. The concentration of raindrops of diameters between 0.7 and 1.5 mm increases with rain rate (<span class="inline-formula"><i>R</i></span>) in all the regions of Nivar, while the magnitude of the increase is higher in the eyewall than in the inner and outer rainbands. The DSD characteristics reveal that for a given <span class="inline-formula"><i>R</i></span>, relatively larger reflectivity (<span class="inline-formula"><i>Z</i></span>) and mass-weighted mean diameter (<span class="inline-formula"><i>D</i><sub>m</sub></span>) are found in the outer rainband, and smaller <span class="inline-formula"><i>Z</i></span> and <span class="inline-formula"><i>D</i><sub>m</sub></span> are found in the eyewall than in other regions of a tropical cyclone (TC). Raindrops of 3 mm diameter are observed frequently in the inner and outer rainbands; however, they are infrequent in the eyewall at <span class="inline-formula"><i>R</i></span> greater than 5 mm h<span class="inline-formula"><sup>−1</sup></span>. The DSDs and the estimated rain integral and polarimetric parameters are distinctly different for the various disdrometers under similar environmental conditions. Raindrops greater than 3 mm in size are infrequent in the JWD recordings, while they are frequent in the LPM and PARSIVEL, indicating that the LPM and PARSIVEL overestimate the raindrop size when the fall path deviates from nadir due to horizontal wind.
The wind effect on the recorded DSD, as well as on the estimated rain integral and polarimetric parameters, is not uniform across the various regions of Nivar for the different disdrometers, as the measuring principle and hardware processing further influence these effects. Along with the differences in measured DSD spectra, the resonance effects at X band for raindrops greater than 3 mm cause variations in the estimated polarimetric parameters between the disdrometers.</p>
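The rain integral parameters discussed above follow from moments of the measured DSD. A sketch using the standard moment formulas is given below; the exponential example DSD and the fall-speed power law v = 3.78 D^0.67 m/s are common textbook choices, not values taken from this study.

```python
import numpy as np

def dsd_parameters(D_mm, N_m3mm, dD_mm):
    """Integral parameters from a binned raindrop size distribution
    N(D) in m^-3 mm^-1 with bin centers D (mm) and width dD (mm)."""
    m3 = np.sum(N_m3mm * D_mm**3 * dD_mm)
    m4 = np.sum(N_m3mm * D_mm**4 * dD_mm)
    Dm = m4 / m3                              # mass-weighted mean diameter (mm)
    Z = np.sum(N_m3mm * D_mm**6 * dD_mm)      # Rayleigh reflectivity (mm^6 m^-3)
    v = 3.78 * D_mm**0.67                     # empirical fall speed (m/s)
    # rain rate in mm/h: R = 6e-4 * pi * sum(N D^3 v dD)
    R = 6e-4 * np.pi * np.sum(N_m3mm * D_mm**3 * v * dD_mm)
    return Dm, 10.0 * np.log10(Z), R

# exponential (Marshall-Palmer-type) example DSD with Lambda = 2 mm^-1,
# for which the untruncated Dm is 4 / Lambda = 2 mm
D = np.arange(0.1, 8.0, 0.1)
N = 8000.0 * np.exp(-2.0 * D)
Dm, dBZ, R = dsd_parameters(D, N, 0.1)
```

Because Z weights the sixth moment of the spectrum, over-sized large drops in a disdrometer recording inflate Z and Dm far more than R, which is one reason the inter-instrument differences above matter for polarimetric parameter estimation.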
<p>Accurate knowledge of cloud properties is essential to the measurement of atmospheric composition from space. In this work we assess the quality of the cloud data from three Copernicus Sentinel-5 Precursor (S5P) TROPOMI cloud products: (i) S5P OCRA/ROCINN_CAL (Optical Cloud Recognition Algorithm/Retrieval of Cloud Information using Neural Networks; Clouds-As-Layers), (ii) S5P OCRA/ROCINN_CRB (Clouds-as-Reflecting Boundaries), and (iii) S5P FRESCO-S (Fast Retrieval Scheme for Clouds from Oxygen absorption bands – Sentinel). Target properties of this work are cloud-top height and cloud optical thickness (OCRA/ROCINN_CAL), cloud height (OCRA/ROCINN_CRB and FRESCO-S), and radiometric cloud fraction (all three algorithms). The analysis combines (i) the examination of cloud maps for artificial geographical patterns, (ii) the comparison to other satellite cloud data (MODIS, NPP-VIIRS, and OMI <span class="inline-formula">O<sub>2</sub></span>–<span class="inline-formula">O<sub>2</sub></span>), and (iii) ground-based validation with respect to correlative observations (30 April 2018 to 27 February 2020) from the Cloudnet network of ceilometers, lidars, and radars. The zonal mean latitudinal variation of S5P cloud properties is similar to that of other satellite data. S5P OCRA/ROCINN_CAL agrees well with NPP VIIRS cloud-top height and cloud optical thickness and with Cloudnet cloud-top height, especially for the low (mostly liquid) clouds. For the high clouds, S5P OCRA/ROCINN_CAL cloud-top height is below the cloud-top height of VIIRS and of Cloudnet, while its cloud optical thickness is higher than that of VIIRS. S5P OCRA/ROCINN_CRB and S5P FRESCO cloud heights are well below the Cloudnet cloud mean height for the low clouds but match on average better with the Cloudnet cloud mean height for the higher clouds. In contrast to S5P OCRA/ROCINN_CRB and S5P FRESCO, S5P OCRA/ROCINN_CAL is well able to match the lowest CTH mode of the Cloudnet observations.
Peculiar geographical patterns are identified in the cloud products and will be mitigated in future releases of the cloud data products.</p>
<p>This two-part study explores hyperspectral (300–700 <span class="inline-formula">nm</span>) aerosol optical measurements obtained from in situ sampling methods employed during
the May–June 2016 Korea–United States Ocean Color (KORUS-OC) cruise conducted in concert with the broader air quality campaign (KORUS-AQ). Part 1
focused on the hyperspectral measurement of extinction coefficients (<span class="inline-formula"><i>σ</i><sub>ext</sub></span>) using the recently developed in situ Spectral Aerosol
Extinction (SpEx) instrument and showed that second-order polynomials provided a better fit to the measured spectra than power law fits. Two-dimensional mapping of the second-order polynomial coefficients (<span class="inline-formula"><i>a</i><sub>1</sub></span>, <span class="inline-formula"><i>a</i><sub>2</sub></span>) was used to explore the information content of the spectra. Part 2
expands on that work by applying a similar analytical approach to filter-based measurements of aerosol hyperspectral total absorption
(<span class="inline-formula"><i>σ</i><sub>abs</sub></span>) and soluble absorption from filters extracted with either deionized water (<span class="inline-formula"><i>σ</i><sub>DI-abs</sub></span>) or methanol
(<span class="inline-formula"><i>σ</i><sub>MeOH-abs</sub></span>). As was found for <span class="inline-formula"><i>σ</i><sub>ext</sub></span>, second-order polynomials provided a better fit to all three absorption spectra
sets. Averaging the measured <span class="inline-formula"><i>σ</i><sub>ext</sub></span> from Part 1 over the filter sampling intervals in this work, hyperspectral single-scattering
albedo (<span class="inline-formula"><i>ω</i></span>) was calculated. Water-soluble aerosol composition from the DI extracts was used to examine relationships with the various
measured optical properties. In particular, both <span class="inline-formula"><i>σ</i><sub>DI-abs</sub></span>(365 <span class="inline-formula">nm</span>) and <span class="inline-formula"><i>σ</i><sub>MeOH-abs</sub></span>(365 <span class="inline-formula">nm</span>) were found
to be best correlated with oxalate (<span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M16" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">C</mi><mn mathvariant="normal">2</mn></msub><msubsup><mi mathvariant="normal">O</mi><mn mathvariant="normal">4</mn><mrow><mn mathvariant="normal">2</mn><mo>-</mo></mrow></msubsup></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="35pt" height="17pt" class="svg-formula" dspmath="mathimg" md5hash="e3aae9c010c3846ca08906ead7560a82"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-14-715-2021-ie00001.svg" width="35pt" height="17pt" src="amt-14-715-2021-ie00001.png"/></svg:svg></span></span>), but elevated soluble absorption was found from two chemically and optically distinct
populations of aerosols. The more photochemically aged aerosols of those two groups exhibited partial spectra (i.e., the longer wavelengths of the
spectral range were below detection) while the less-aged aerosol of the other group exhibited complete spectra across the wavelength range. The
chromophores of these groups may have derived from different sources and/or atmospheric processes, such that photochemical age may have been only
one factor contributing to the differences in the observed spectra. The differences in the spectral properties of these groups were evident in
(<span class="inline-formula"><i>a</i><sub>1</sub></span>, <span class="inline-formula"><i>a</i><sub>2</sub></span>) maps. The results of the two-dimensional mapping shown in Parts 1 and 2 suggest that this spectral characterization may offer new
methods to relate in situ aerosol optical properties to their chemical and<span id="page716"/> microphysical characteristics. However, a key finding of this work is
that mathematical functions (whether power laws or second-order polynomials) extrapolated from a few wavelengths or a subrange of wavelengths fail to
reproduce the measured spectra over the full 300–700 <span class="inline-formula">nm</span> wavelength range. Further, the <span class="inline-formula"><i>σ</i><sub>abs</sub></span> and <span class="inline-formula"><i>ω</i></span> spectra
exhibited distinctive spectral features across the UV and visible wavelength range that simple functions and extrapolations cannot reproduce. These
results show that in situ hyperspectral measurements provide valuable new data that can be probed for additional information relating in situ
aerosol optical properties to the underlying physicochemical properties of ambient aerosols. It is anticipated that future studies examining in situ
aerosol hyperspectral properties will not only improve our ability to use optical data to characterize aerosol physicochemical properties, but that
such in situ tools will be needed to validate hyperspectral remote sensors planned for space-based observing platforms.</p>
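The second-order polynomial representation used in Parts 1 and 2 fits ln σ against ln λ, with a pure power law (the Ångström-exponent form) as the first-order special case, and the single-scattering albedo follows from co-sampled extinction and absorption. The sketch below illustrates both; the 532 nm reference wavelength and the synthetic curved spectrum are assumptions for illustration, not choices confirmed by the study.

```python
import numpy as np

def fit_spectrum(wl_nm, sigma, order=2, wl0=532.0):
    """Fit ln(sigma) as a polynomial in ln(wl / wl0). order=1 is a pure
    power law (Angstrom-exponent form); order=2 adds the curvature
    coefficient a2 used in the (a1, a2) maps. Returns coefficients
    highest power first, as numpy.polyfit does."""
    x = np.log(wl_nm / wl0)
    return np.polyfit(x, np.log(sigma), order)

def ssa(sigma_ext, sigma_abs):
    """Single-scattering albedo from extinction and absorption."""
    return (sigma_ext - sigma_abs) / sigma_ext

# synthetic curved spectrum ln(sigma) = 2.0 - 1.5 x - 0.4 x^2: a pure
# power-law fit cannot capture the curvature term a2
wl = np.linspace(300.0, 700.0, 81)
sig = np.exp(2.0 - 1.5 * np.log(wl / 532.0) - 0.4 * np.log(wl / 532.0) ** 2)
a2, a1, a0 = fit_spectrum(wl, sig, order=2)
```

Note that this is exactly why the work cautions against extrapolation: even a well-fitting polynomial over a wavelength subrange encodes no information about spectral features outside the fitted range.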
<p>Following the release of the version 4 Cloud-Aerosol Lidar with Orthogonal
Polarization (CALIOP) data products from the Cloud-Aerosol Lidar and
Infrared Pathfinder Satellite Observations (CALIPSO) mission, a new version
(version 4; V4) of the CALIPSO Imaging Infrared Radiometer (IIR) Level 2 data
products has been developed. The IIR Level 2 data products include cloud
effective emissivities and cloud microphysical properties such as effective
diameter and ice or liquid water path estimates. Dedicated retrievals for
water clouds were added in V4, taking advantage of the high sensitivity of
the IIR retrieval technique to small particle sizes. This paper (Part I)
describes the improvements in the V4 algorithms compared to those used in
the version 3 (V3) release, while results will be presented in a companion
(Part II) paper. The IIR Level 2 algorithm has been modified in the V4 data
release to improve the accuracy of the retrievals in clouds of very small
(close to 0) and very large (close to 1) effective emissivities. To reduce
biases at very small emissivities that were made evident in V3, the
radiative transfer model used to compute clear-sky brightness temperatures
over oceans has been updated and tuned for the simulations using Modern-Era Retrospective analysis for Research and
Applications version 2 (MERRA-2)
data to match IIR observations in clear-sky conditions. Furthermore, the
clear-sky mask has been refined compared to V3 by taking advantage of
additional information now available in the V4 CALIOP 5 km layer products
used as an input to the IIR algorithm. After sea surface emissivity
adjustments, observed and computed brightness temperatures differ by less
than <span class="inline-formula">±0.2</span> K at night for the three IIR channels centered at 08.65,
10.6, and 12.05 <span class="inline-formula">µ</span>m, and inter-channel biases are reduced from several
tenths of a kelvin in V3 to less than 0.1 K in V4. We have also improved
retrievals in ice clouds having large emissivity by refining the
determination of the radiative temperature needed for emissivity
computation. The initial V3 estimate, namely the cloud centroid temperature
derived from CALIOP, is corrected using a parameterized function of
temperature difference between cloud base and top altitudes, cloud
absorption optical depth, and CALIOP multiple scattering correction factor.
As shown in Part II, this improvement reduces the low biases at large
optical depths that were seen in V3 and increases the number of retrievals.
As in V3, the IIR microphysical retrievals use the concept of microphysical
indices applied to the pairs of IIR channels at 12.05 and 10.6 <span class="inline-formula">µ</span>m
and at 12.05 and 08.65 <span class="inline-formula">µ</span>m. The V4 algorithm uses ice look-up
tables (LUTs) built using two ice habit models from the recent “TAMUice2016” database, namely the single-hexagonal-column model and the<span id="page3254"/> eight-element
column aggregate model, from which bulk properties are synthesized using a
gamma size distribution. Four sets of effective diameters derived from a
second approach are also reported in V4. Here, the LUTs are analytical
functions relating microphysical index applied to IIR channels 12.05 and
10.6 <span class="inline-formula">µ</span>m and effective diameter as derived from in situ
measurements at tropical and midlatitudes during the Tropical Composition,
Cloud, and Climate Coupling (TC4) and Small Particles in Cirrus
Science and Operations Plan (SPARTICUS)
field experiments.</p>
A number of software foundations have been created as legal instruments to better articulate the structure, collaboration, and financial model of Open Source Software (OSS) projects. Some examples are the Apache, Linux, and Mozilla foundations. However, the mission and support provided by these foundations differ widely among them. In this paper we study the role of foundations in OSS development. We analyze the nature, activities, role, and governance of 101 software foundations and then look more closely at the 27 whose concrete goal is the development and evolution of specific open source projects (rather than just generic actions to promote the free software movement or similar). Our results reveal the existence of a significant number of foundations whose sole purpose is promoting the free software movement and/or that limit themselves to core legal aspects but play no role in the day-to-day operations of the project (e.g., umbrella organizations for a large variety of projects). Therefore, while useful, foundations do not remove the need for specific projects to develop their own governance, contribution, and development policies. A website to help projects choose the foundation that best fits their needs is also available.
We provide a theoretical foundation for the notion of the quantum coherent state of the electrostatic field of a static external charge distribution introduced in a 1998 paper and rederive formulae there for the inner products of a pair of such states. Contrary to what one might expect, these inner products are non-zero whenever the total charges of the two charge distributions are equal, even if the charge distributions themselves differ. We actually display two different frameworks for these same coherent states, in the second of which Gauss's law only holds in expectation value. We propose an experiment capable of ruling that out. The first framework leads to a 'product picture' for full QED -- i.e. a reformulation of standard QED in which it has a total Hamiltonian, arising as a sum of a free electromagnetic Hamiltonian, a free charged-matter Hamiltonian and an interaction term, acting on a 'physical subspace' of the full tensor product of charged-matter and electromagnetic-field Hilbert spaces. (The traditional Coulomb gauge formulation of QED isn't a product picture because, in it, the longitudinal part of the electric field is a function of the charged matter operators.) We do this for both Maxwell-Dirac and Maxwell-Schrödinger QED. For all states in the physical subspace of each of these systems, the charged matter is entangled with longitudinal photons and Gauss's law holds on the physical subspace as an operator equation; albeit the electric field operator and the Hamiltonian, while self-adjoint on the physical subspace, fail to be self-adjoint on the full tensor-product Hilbert space. Analogues of our coherent state inner products and of the product picture play a role in the author's matter-gravity entanglement hypothesis. Also, the product picture amounts to a temporal gauge quantization of QED which appears to be free from the difficulties of previous versions.
This paper is a programmatic article presenting an outline of a new view of the foundations of quantum mechanics and quantum field theory. In short, the proposed foundations are given by the following statements:
* Coherent quantum physics is physics in terms of a coherent space consisting of a line bundle over a classical phase space and an appropriate coherent product.
* The kinematical structure of quantum physics and the meaning of the fundamental quantum observables are given by the symmetries of this coherent space, their infinitesimal generators, and associated operators on the quantum space of the coherent space.
* The connection of quantum physics to experiment is given through the thermal interpretation. The dynamics of quantum physics is given (for isolated systems) by the Ehrenfest equations for q-expectations.
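The Ehrenfest equations referred to in the final statement take, in conventional quantum-mechanical notation (which may differ from the paper's q-expectation notation), the standard form:

```latex
% Ehrenfest equation for the expectation value of an observable A
\frac{\mathrm{d}}{\mathrm{d}t}\langle A \rangle
\;=\; \frac{i}{\hbar}\,\bigl\langle [H, A] \bigr\rangle
\;+\; \Bigl\langle \frac{\partial A}{\partial t} \Bigr\rangle .
```

For observables without explicit time dependence the second term vanishes, and the dynamics of expectation values is governed entirely by the commutator with the Hamiltonian.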
<p>Nitrous oxide (<span class="inline-formula">N<sub>2</sub>O</span>) is an important greenhouse gas and it can also
generate nitric oxide, which depletes ozone in the stratosphere. It is a
common target species of ground-based Fourier transform infrared (FTIR) near-infrared (TCCON) and
mid-infrared (NDACC) measurements. Both the TCCON and NDACC networks provide a
long-term global distribution of atmospheric <span class="inline-formula">N<sub>2</sub>O</span> mole fraction. In this
study, the dry-air column-averaged mole fractions of <span class="inline-formula">N<sub>2</sub>O</span> (<span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M4" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="7083feeaa337c360bc1dec6cdd9e436c"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00001.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00001.png"/></svg:svg></span></span>) from
the TCCON and NDACC measurements are compared against each other at seven
sites around the world (Ny-Ålesund, Sodankylä, Bremen, Izaña,
Réunion, Wollongong, Lauder) in the time period of 2007–2017. The mean
differences in <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M5" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="5e6a681c49fd20b61f27782a4f0ae370"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00002.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00002.png"/></svg:svg></span></span> between TCCON and NDACC (NDACC–TCCON) at these
sites are between <span class="inline-formula">−3.32</span> and 1.37 ppb (<span class="inline-formula">−1.1</span> %–0.5 %) with standard
deviations between 1.69 and 5.01 ppb (0.5 %–1.6 %), which are within the
uncertainties of the two datasets. The NDACC <span class="inline-formula">N<sub>2</sub>O</span> retrieval has good
sensitivity throughout the troposphere and stratosphere, while the TCCON
retrieval underestimates a deviation from the a priori in the troposphere and
overestimates it in the stratosphere. As a result, the TCCON <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M9" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="50b5fa68b9780aad29d3bc59a335671d"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00003.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00003.png"/></svg:svg></span></span>
measurement is strongly affected by its a priori profile.</p>
<p><span id="page1394"/>Trends and seasonal cycles of <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M10" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="99677be2b065f598f9fe943d745811ab"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00004.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00004.png"/></svg:svg></span></span> are derived from the TCCON and NDACC
measurements and the nearby surface flask sample measurements and compared
with the results from GEOS-Chem model a priori and a posteriori simulations.
The trends and seasonal cycles from FTIR measurement at Ny-Ålesund and
Sodankylä are strongly affected by the polar winter and the polar vortex.
The a posteriori <span class="inline-formula">N<sub>2</sub>O</span> fluxes in the model are optimized based on surface
<span class="inline-formula">N<sub>2</sub>O</span> measurements with a 4D-Var inversion method. The <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M13" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="e10d4b76078a1e8806f098c6d853566d"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00005.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00005.png"/></svg:svg></span></span> trends
from the GEOS-Chem a posteriori simulation (<span class="inline-formula">0.97±0.02</span> (<span class="inline-formula">1<i>σ</i></span>) ppb yr<span class="inline-formula"><sup>−1</sup></span>) are close to those from the NDACC (<span class="inline-formula">0.93±0.04</span> ppb yr<span class="inline-formula"><sup>−1</sup></span>) and
the surface flask sample measurements (<span class="inline-formula">0.93±0.02</span> ppb yr<span class="inline-formula"><sup>−1</sup></span>). The
<span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M21" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="466b2088eb3ba38a6fcc0d0b8ea69279"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00006.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00006.png"/></svg:svg></span></span> trend from the TCCON measurements is slightly lower (<span class="inline-formula">0.81±0.04</span> ppb yr<span class="inline-formula"><sup>−1</sup></span>) due to the underestimation of the trend in the TCCON a priori. The
<span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M24" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="90450f00fd870e5f84133e6e1a36cf6c"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00007.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00007.png"/></svg:svg></span></span> trends from the GEOS-Chem a priori simulation are about 1.25 ppb yr<span class="inline-formula"><sup>−1</sup></span>, and our study confirms that the
<span class="inline-formula">N<sub>2</sub>O</span> fluxes from the a priori
inventories are overestimated. The seasonal cycles of <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M27" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="bf7b54d1602258a6bd4e0a2baf736945"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00008.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00008.png"/></svg:svg></span></span> from the
FTIR measurements and the model simulations are close to each other in the
Northern Hemisphere with a maximum in August–October and a minimum in
February–April. However, in the Southern Hemisphere, the modeled <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M28" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="da6994d65f4a61e38d189bbe5fbdd62a"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00009.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00009.png"/></svg:svg></span></span>
values show a minimum in February–April while the FTIR <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M29" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="8900c9f9c19d990507a65d899d2e82ec"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00010.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00010.png"/></svg:svg></span></span> retrievals show
different patterns. By comparing the partial column-averaged <span class="inline-formula">N<sub>2</sub>O</span> from the
model and NDACC for three vertical ranges (surface–8, 8–17, 17–50 km), we
find that the discrepancy in the <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M31" display="inline" overflow="scroll" dspmath="mathml"><mrow class="chem"><msub><mi mathvariant="normal">X</mi><mrow><msub><mi mathvariant="normal">N</mi><mn mathvariant="normal">2</mn></msub><mi mathvariant="normal">O</mi></mrow></msub></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="25pt" height="14pt" class="svg-formula" dspmath="mathimg" md5hash="0d208196834a80a82d174963af43b993"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-12-1393-2019-ie00011.svg" width="25pt" height="14pt" src="amt-12-1393-2019-ie00011.png"/></svg:svg></span></span> seasonal cycle between the model
simulations and the FTIR measurements in the Southern Hemisphere is mainly
due to their stratospheric differences.</p>
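<p>As a quick sanity check, the conversion between the absolute (ppb) and relative (%) difference ranges reported above can be sketched in a few lines of Python. The background value of roughly 320 ppb is an assumed typical column-averaged N<sub>2</sub>O mole fraction for 2007–2017, used here only for illustration; the abstract itself reports only the resulting ranges.</p>

```python
# Hedged sketch: relate the reported NDACC-TCCON mean differences in ppb
# to approximate relative differences in percent. The ~320 ppb background
# is an assumed typical X_N2O value for 2007-2017, not taken from the paper.
BACKGROUND_PPB = 320.0

def relative_diff_percent(diff_ppb: float, background_ppb: float = BACKGROUND_PPB) -> float:
    """Relative difference (%) of an absolute ppb bias with respect to background."""
    return 100.0 * diff_ppb / background_ppb

# Reported site-to-site range of mean differences: -3.32 to 1.37 ppb
low = relative_diff_percent(-3.32)
high = relative_diff_percent(1.37)
print(f"{low:.2f} % to {high:.2f} %")  # -> -1.04 % to 0.43 %
```

<p>The result is broadly consistent with the −1.1 %–0.5 % range quoted above; the residual spread reflects the fact that the actual background mole fraction varies by site and year.</p>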
Finetuning and Naturalness are extra-empirical theory assessments that reflect our expectation of how scientific theories should provide an intuitive understanding of the foundations underlying the observed phenomena. Recently, the absence of new physics at the LHC and the theoretical evidence for a multiverse of alternative physical realities, predicted by our best fundamental theories, have cast doubt on the validity of these concepts. In this essay we argue that the discussion about Finetuning should not predominantly concentrate on the desired features a fundamental theory is expected to have, but rather on the question of what a theory needs in order to qualify as fundamental in the first place. By arguing that a fundamental description of the Universe should possess zero entropy, we develop a 'holistic' concept for the most fundamental layer of reality: the fundamental description of the Universe is the Universe itself, understood as an entangled quantum state. Adopting a universal applicability of quantum mechanics, in this framework the behavior of subsystems can be understood as the perspectival experience of an entangled quantum Universe perceived through the "lens of decoherence". In this picture the fundamental reality is non-local, and finetuned coincidences in effective theories may be understood in a way similar to EPR correlations. This notion provides a fresh view on the topic of Naturalness and Finetuning, since it suggests that Finetuning problems and hints of anthropic explanations are an artefact of theories built up on subsystems rather than on the fundamental description. Recent work in quantum gravity aiming at an understanding of spacetime geometry from entanglement entropy could be interpreted as a first sign of such a paradigm shift.