<p>The CO<span class="inline-formula"><sub>2</sub></span> molar fraction in standard gas mixtures is known to deviate as a result of adsorption/desorption to/from the inner surface of a high-pressure cylinder and thermal diffusion fractionation caused by the temperature distribution in the cylinder. This deviation reduces the consistency of atmospheric CO<span class="inline-formula"><sub>2</sub></span> observations, because the standard gas mixtures are used to calibrate all measurement systems for precise CO<span class="inline-formula"><sub>2</sub></span> observations. To maintain the consistency of CO<span class="inline-formula"><sub>2</sub></span> values over the long term, a quantitative understanding of the deviations in the CO<span class="inline-formula"><sub>2</sub></span> molar fraction in a standard gas mixture is needed. Thus far, this understanding has not been achieved sufficiently well, because the contribution of thermal diffusion fractionation is less well understood than that of adsorption/desorption. In this study, offsets of 0.013 <span class="inline-formula">±</span> 0.015 and <span class="inline-formula">−</span>0.014 <span class="inline-formula">±</span> 0.011 <span class="inline-formula">µmol</span> mol<span class="inline-formula"><sup>−1</sup></span> were observed in the outflowing gas from horizontally and vertically positioned cylinders, respectively, at a flow rate of 0.080 L min<span class="inline-formula"><sup>−1</sup></span>. These offsets are attributed to thermal diffusion effects, which diluted and enriched the CO<span class="inline-formula"><sub>2</sub></span> molar fraction by <span class="inline-formula">−</span>0.045 <span class="inline-formula">µmol</span> mol<span class="inline-formula"><sup>−1</sup></span> (horizontal cylinder) and 0.048 <span class="inline-formula">µmol</span> mol<span class="inline-formula"><sup>−1</sup></span> (vertical cylinder) as the relative pressure dropped to 0.03. 
In experiments at the same flow rate, the adsorption/desorption effect enriched the CO<span class="inline-formula"><sub>2</sub></span> molar fraction by 0.06 <span class="inline-formula">µmol</span> mol<span class="inline-formula"><sup>−1</sup></span> (horizontal cylinder) and 0.10 <span class="inline-formula">µmol</span> mol<span class="inline-formula"><sup>−1</sup></span> (vertical cylinder). Therefore, attention should be paid to both thermal diffusion fractionation and adsorption/desorption effects for precise calibration of long-term observations of CO<span class="inline-formula"><sub>2</sub></span> molar fractions, although past studies have ignored the contribution of thermal diffusion fractionation at the low flow rates (<span class="inline-formula"><</span> 0.3 L min<span class="inline-formula"><sup>−1</sup></span>) examined in this study. Furthermore, the deviation of the CO<span class="inline-formula"><sub>2</sub></span> molar fraction depends only on the pressure relative to the initial pressure of the cylinder. This result suggests that the recommendation by the World Meteorological Organization (WMO) to replace the standard gas mixture once the cylinder pressure drops to 2 MPa needs to be revised.</p>
<p>Rain gauge measurements are one of the primary techniques used to estimate a precipitation field, but they require careful quality control. This paper describes a modified RainGaugeQC system, which is applied to real-time quality control of rain gauge measurements made every 10 min. This system works operationally at the national meteorological and hydrological service in Poland. The RainGaugeQC algorithms, which have been significantly modified, are described in detail. The modifications were made primarily to control data from non-professional measurement networks, which may be of lower quality than professional data, especially in the case of personal stations. Accordingly, the modifications were aimed at more sophisticated data control: applying weather radar data and taking into account various aspects of data quality, such as consistency analysis of data time series and bias detection. The effectiveness of the modified system was verified based on independent measurement data from manual rain gauges, which are considered one of the most accurate measurement instruments, although they mostly provide daily totals. In addition, an analysis of two case studies is presented. This analysis highlights various issues involved in using non-professional data to generate multi-source estimates of the precipitation field.</p>
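Gauge quality control of this kind typically combines simple plausibility tests with time-series consistency checks. The following is a minimal sketch of two such checks; the function names and thresholds are hypothetical illustrations, not the operational RainGaugeQC algorithms.

```python
# Two illustrative 10 min gauge QC checks (hypothetical thresholds,
# not the operational RainGaugeQC implementation).

def range_check(values_mm, max_10min_mm=30.0):
    """Flag physically implausible 10 min totals (missing, negative, or extreme)."""
    return [v is None or v < 0.0 or v > max_10min_mm for v in values_mm]

def stuck_gauge_check(values_mm, window=12):
    """Flag a gauge whose non-zero reading repeats identically over a whole
    window (two hours of 10 min data) -- a classic sign of a faulty sensor."""
    flags = [False] * len(values_mm)
    for i in range(len(values_mm) - window + 1):
        chunk = values_mm[i:i + window]
        if chunk[0] not in (0.0, None) and all(v == chunk[0] for v in chunk):
            for j in range(i, i + window):
                flags[j] = True
    return flags

series = [0.2, 0.2, 1.4, -9999.0, 0.0, 0.0]
print(range_check(series))  # the -9999 sentinel is flagged
```

A real system would add the radar-based spatial comparison and bias detection described above; those need neighbouring-station and radar fields and are omitted here.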
<p>This study presents an algorithm for the detection of fog and low stratus (FLS) over Europe based on the infrared bands of the SEVIRI (Spinning Enhanced Visible and InfraRed Imager) instrument on board the Meteosat Second Generation geostationary satellites. As the method operates based on the SEVIRI infrared observations only, it is expected to be stationary in time and thus can provide a coherent and detailed view of FLS development over large areas over the 24 h day cycle. The algorithm is based on a gradient boosted tree machine learning model that is trained with ground truth observations from METeorological Aerodrome Report (METAR) stations and the SEVIRI observations at bands centered at 8.7, 10.8, 12.0, and 13.4 <span class="inline-formula">µ</span>m wavelengths. The METAR data used here comprise a total number of 2 544 400 data points spread over the winters (i.e., 1 September to 31 May) of the years 2016–2022 and 356 locations across Europe. Among them, the data points corresponding to 276 stations and the winters of 2016–2018 and 2019–2021 (<span class="inline-formula">∼</span> 45 % of all data points) were used to train the algorithm. The remaining data points comprise four independent datasets which were used to validate the algorithm's performance and applicability to time spans and locations within the study area (i.e., Europe) that extend beyond those covered by the data points used for the algorithm training, as well as to compare the algorithm's accuracy at the locations of METAR stations with that of the existing state-of-the-art daytime FLS detection algorithm Satellite-based Operational Fog Observation Scheme (SOFOS). Validation of the algorithm against the METAR data showed that the algorithm is well suited for the detection of FLS. 
Specifically, the algorithm is found to detect FLS with probability of detection (POD) values ranging from 0.70 to 0.82 (for different inter-comparison approaches) and false alarm ratios (FARs) between 0.21 and 0.31. These numbers are very close to those achieved by SOFOS for differentiating FLS from other sky conditions at the tested locations and time spans. These results also showed that the technique's applicability in the study region extends beyond the particular locations and time spans covered by the data points used for training the algorithm.</p>
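The POD and FAR values quoted above are the standard scores derived from a 2×2 contingency table of detections against observations. A minimal sketch, with purely hypothetical counts chosen to land near the reported range:

```python
# Standard 2x2 contingency-table scores used to validate FLS detection
# against METAR observations (counts below are illustrative only).

def pod(hits, misses):
    """Probability of detection: fraction of observed FLS cases detected."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of FLS detections that were wrong."""
    return false_alarms / (hits + false_alarms)

hits, misses, false_alarms = 820, 180, 220  # hypothetical counts
print(f"POD = {pod(hits, misses):.2f}")  # 0.82
print(f"FAR = {far(hits, false_alarms):.2f}")  # 0.21
```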
<p>An equation for the Absolute Cavity Pyrgeometer (ACP) is
derived from application of Kirchhoff's law and the addition of a convection
term to account for the thermopile being open to the environment, unlike a domed radiometer. The equation is then used to investigate four methods to
characterise key instrumental parameters using laboratory and field
measurements. The first uses solar irradiance to estimate the thermopile
responsivity, the second uses a minimisation method that solves for the thermopile responsivity and transmission of the cavity, and the third and
fourth revisit the Reda et al. (2012) linear least squares calibration
technique. Data were collected between January and November 2020, when the ACP96 and two IRIS radiometers monitoring terrestrial irradiances were
available. The results indicate good agreement with IRIS irradiances using
the new equation. The analysis also indicates that while the thermopile
responsivity, concentrator transmission and emissivity of an ACP can be
determined independently, the impact of the convection term for this open
instrument is minor in steady-state conditions but significant when the base of the instrument is subjected to rapid artificial cooling or
heating. Using laboratory characterisation of the transmission and
emissivity, together with use of an estimated solar calibration of the thermopile, generated mean differences of less than 1.5 Wm<span class="inline-formula"><sup>−2</sup></span> to the two IRIS radiometers. A minimisation method using each IRIS radiometer as the reference also provided similar results, and the derived thermopile
responsivity was within 0.3 <span class="inline-formula">µ</span>V W<span class="inline-formula"><sup>−1</sup></span> m<span class="inline-formula"><sup>2</sup></span> of the infrared responsivity estimate of 10.5 <span class="inline-formula">µ</span>V W<span class="inline-formula"><sup>−1</sup></span> m<span class="inline-formula"><sup>2</sup></span> derived
from a nominal solar calibration, and provided irradiances within <span class="inline-formula">±2</span> %
of the terrestrial irradiance measured by the reference pyrgeometers
traceable to the International System of Units (SI). The linear least squares calibration method introduced by Reda et al. (2012), which relies on rapid cooling of
the ACP base, was found to produce consistent results when used with the new equation, but the results depended on the assumed temperature of the air above the thermopile. This study demonstrates the potential of the ACP as another
independent reference radiometer for terrestrial irradiance once the
magnitude of the convection coefficient and any potential variations in it have been resolved.</p>
Titi Sari Nurul Rachmawati, Hyung Cheol Park, Sunkuk Kim
Risks are involved in every aspect of earthwork projects. This paper specifically discusses the cost risk associated with the volume calculation of such projects. In the design phase, it is not possible to accurately predict the quantity of each soil type beneath the site. As a result, there are uncertainties in the excavation cost that may cause cost overruns. An innovative method is therefore needed to forecast, control, monitor, and manage excavation cost from the design phase to completion. Excavation volume, however, can be calculated accurately using a digital surface model, which can be acquired using GPS and unmanned aerial vehicles (UAVs). This paper proposes a simulation model that is able to analyze, control, and monitor the cost based on excavation volume, so that stakeholders can obtain the actual volume quickly and accurately. Monte Carlo simulation is applied to the excavation volume per soil type, resulting in a range of possible outcomes for the excavation cost. The developed model was verified by applying it to an actual case project. Throughout the project, the cost was successfully monitored and maintained below the maximum expected cost. However, the final actual cost in the last simulation almost reached the maximum expected cost, underscoring the need for cost monitoring. By periodically comparing the simulation results to the actual excavated volume obtained from the UAV, the proposed model can assist stakeholders in controlling the cost overrun risk and developing strategies over the earthwork life cycle.
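The Monte Carlo step can be sketched as follows. The soil types, volume ranges, unit costs, and percentiles below are purely illustrative assumptions, not values from the case project.

```python
import random

# Illustrative Monte Carlo sketch of the excavation-cost idea: sample
# uncertain volumes per soil type and accumulate cost over many trials.

def simulate_cost(soil_types, n_trials=10_000, seed=42):
    rng = random.Random(seed)
    costs = []
    for _ in range(n_trials):
        total = 0.0
        for name, (low_m3, high_m3, unit_cost) in soil_types.items():
            volume = rng.uniform(low_m3, high_m3)  # uncertain in-situ volume
            total += volume * unit_cost
        costs.append(total)
    costs.sort()
    return {
        "expected": sum(costs) / n_trials,
        "p05": costs[int(0.05 * n_trials)],   # optimistic cost
        "p95": costs[int(0.95 * n_trials)],   # ~"maximum expected cost"
    }

soils = {  # (min volume m3, max volume m3, cost per m3) -- all hypothetical
    "topsoil": (8_000, 12_000, 2.5),
    "clay":    (20_000, 30_000, 4.0),
    "rock":    (1_000, 6_000, 15.0),
}
result = simulate_cost(soils)
print(result)
```

As the project proceeds, the volume ranges would be narrowed using the UAV-derived digital surface model and the simulation re-run, tightening the cost envelope.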
<p>The chemistry and reaction kinetics of reactive species
dominate changes to the composition of complex chemical systems, including
Earth's atmosphere. Laboratory experiments to identify reactive species and
their reaction products, and to monitor their reaction kinetics and product
yields, are key to our understanding of complex systems. In this work we
describe the development and characterisation of an experiment using laser
flash photolysis coupled with time-resolved mid-infrared (mid-IR) quantum
cascade laser (QCL) absorption spectroscopy, with initial results reported
for measurements of the infrared spectrum, kinetics, and product yields for
the reaction of the <span class="inline-formula">CH<sub>2</sub>OO</span> Criegee intermediate with <span class="inline-formula">SO<sub>2</sub></span>. The
instrument presented has high spectral (<span class="inline-formula"><</span> 0.004 cm<span class="inline-formula"><sup>−1</sup>)</span> and
temporal (<span class="inline-formula"><</span> 5 <span class="inline-formula">µ</span>s) resolution and is able to monitor kinetics
with a dynamic range extending to at least 20 000 s<span class="inline-formula"><sup>−1</sup></span>. Results obtained at 298 K
and pressures between 20 and 100 Torr gave a rate coefficient for the
reaction of <span class="inline-formula">CH<sub>2</sub>OO</span> with <span class="inline-formula">SO<sub>2</sub></span> of (3.83 <span class="inline-formula">±</span> 0.63) <span class="inline-formula">×</span> 10<span class="inline-formula"><sup>−11</sup></span> cm<span class="inline-formula"><sup>3</sup></span> s<span class="inline-formula"><sup>−1</sup></span>, which compares well to the current IUPAC
recommendation of <span class="inline-formula"><math xmlns="http://www.w3.org/1998/Math/MathML" id="M16" display="inline" overflow="scroll" dspmath="mathml"><mrow><mfenced close=")" open="("><mrow><msubsup><mn mathvariant="normal">3.70</mn><mrow><mo>-</mo><mn mathvariant="normal">0.40</mn></mrow><mrow><mo>+</mo><mn mathvariant="normal">0.45</mn></mrow></msubsup></mrow></mfenced></mrow></math><span><svg:svg xmlns:svg="http://www.w3.org/2000/svg" width="58pt" height="22pt" class="svg-formula" dspmath="mathimg" md5hash="d583b63ae9fb740190bf16448b830808"><svg:image xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="amt-15-2875-2022-ie00001.svg" width="58pt" height="22pt" src="amt-15-2875-2022-ie00001.png"/></svg:svg></span></span> <span class="inline-formula">×</span> 10<span class="inline-formula"><sup>−11</sup></span> cm<span class="inline-formula"><sup>3</sup></span> s<span class="inline-formula"><sup>−1</sup></span>. A limit of
detection of 4.0 <span class="inline-formula">×</span> 10<span class="inline-formula"><sup>−5</sup></span>, in absorbance terms, can be achieved,
which equates to a limit of detection of <span class="inline-formula">∼</span> 2 <span class="inline-formula">×</span> 10<span class="inline-formula"><sup>11</sup></span> cm<span class="inline-formula"><sup>−3</sup></span> for <span class="inline-formula">CH<sub>2</sub>OO</span>, monitored at 1285.7 cm<span class="inline-formula"><sup>−1</sup></span>, based on
the detection path length of (218 <span class="inline-formula">±</span> 20) cm. Initial results, directly
monitoring <span class="inline-formula">SO<sub>3</sub></span> at 1388.7 cm<span class="inline-formula"><sup>−1</sup></span>, demonstrate that <span class="inline-formula">SO<sub>3</sub></span> is the
reaction product for <span class="inline-formula">CH<sub>2</sub>OO</span> <span class="inline-formula">+</span> <span class="inline-formula">SO<sub>2</sub></span>. The use of mid-IR QCL
absorption spectroscopy offers significant advantages over alternative
techniques commonly used to determine reaction kinetics, such as
laser-induced fluorescence (LIF) or ultraviolet absorption spectroscopy,
owing to the greater number of species to which IR measurements can be
applied. There are also significant advantages over alternative IR
techniques, such as step-scan FT-IR, owing to the coherence and increased
intensity and spectral resolution of the QCL source and in terms of cost.
The instrument described in this work has potential applications in
atmospheric chemistry, astrochemistry, combustion chemistry, and in the
monitoring of trace species in industrial processes and medical diagnostics.</p>
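Kinetics of this kind are conventionally analysed under pseudo-first-order conditions: with SO<sub>2</sub> in large excess, CH<sub>2</sub>OO decays as exp(−k′t) with k′ = k<sub>2</sub>[SO<sub>2</sub>], so a log-linear fit of the decay and a division by [SO<sub>2</sub>] recover the bimolecular rate coefficient. The sketch below assumes that standard analysis on a synthetic, noiseless decay; it is not the authors' fitting code, and the SO<sub>2</sub> concentration is hypothetical.

```python
import math

# Pseudo-first-order analysis sketch: log-linear least-squares fit of a
# synthetic CH2OO decay, then division by [SO2] to recover k2.

def fit_pseudo_first_order(times_s, signal):
    """Least-squares slope of ln(signal) vs time -> decay rate k' (s^-1)."""
    logs = [math.log(s) for s in signal]
    n = len(times_s)
    t_mean = sum(times_s) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_s, logs))
             / sum((t - t_mean) ** 2 for t in times_s))
    return -slope

# Synthetic decay: k2 = 3.8e-11 cm3 s-1, [SO2] = 2.5e14 cm-3 -> k' = 9500 s-1
k2_true, so2 = 3.8e-11, 2.5e14
times = [i * 1e-5 for i in range(1, 30)]  # 10 us steps, within the 5 us resolution
signal = [math.exp(-k2_true * so2 * t) for t in times]
k_prime = fit_pseudo_first_order(times, signal)
print(f"k2 = {k_prime / so2:.2e} cm3 s-1")  # recovers 3.80e-11
```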
<p>Sentinel-2 satellite imagery has been shown to be
capable of detecting and quantifying methane emissions from oil and gas
production. However, current methods lack performance calibration with
ground-truth testing. This study developed a multi-band–multi-pass–multi-comparison-date methane retrieval algorithm that enhances Sentinel-2 sensitivity to methane plumes. The method was calibrated
using data from a large-scale controlled-release test in Ehrenberg, Arizona,
in fall 2021, with three algorithm parameters tuned based on the true
emission rates. The tuned parameters are the pixel-level concentration upper-bound threshold used during extreme value removal, the number of comparison
dates, and the pixel-level methane concentration percentage threshold used when
determining the spatial extent of a plume. We found that a low value of the
upper-bound threshold during extreme value removal can result in false
negatives. A high number of comparison dates helps enhance the algorithm
sensitivity to the plumes in the target date, but values in excess of
12 d are neither necessary nor computationally efficient. A high percentage
threshold when determining the spatial extent of a plume helps enhance the
quantification accuracy, but it may harm the yes/no detection accuracy. We
found that there is a trade-off between quantification accuracy and
detection accuracy. In a scenario with the highest quantification accuracy,
we achieved the lowest quantification error and had zero false-positive
detections; however, the algorithm missed three true plumes, which reduced the
yes/no detection accuracy. In contrast, all of the true plumes were
detected in the highest detection accuracy scenario, but the emission rate
quantification had higher errors. We illustrated a two-step method that
updates the emission rate estimates in an interim step, which improves
quantification accuracy while keeping high yes/no detection accuracy. We
also validated the algorithm's ability to detect true positives and true
negatives in two application studies.</p>
<p>Aeolus carries the Atmospheric LAser Doppler INstrument (ALADIN), the first high-spectral-resolution lidar (HSRL) in space. Although ALADIN is optimized to measure winds, its two measurement channels can also be used to derive optical properties of atmospheric particles, including a direct retrieval of the lidar ratio.</p>
<p>This paper presents the standard correct algorithm and the Mie correct algorithm, the two main algorithms of the optical properties product called the Level-2A product, as they are implemented in version 3.12 of the processor, corresponding to the data labelled Baseline 12. The theoretical basis is the same as in <span class="cit" id="xref_text.1"><a href="#bib1.bibx13">Flamant et al.</a> (<a href="#bib1.bibx13">2008</a>)</span>. Here, we also show the in-orbit performance of these algorithms. We also explain the adaptation of the calibration method, which is needed to cope with unforeseen variations of the instrument radiometric performance due to the in-orbit strain of the primary mirror under varying thermal conditions. Then we discuss the limitations of the algorithms and future improvements.</p>
<p>We demonstrate that the L2A product provides valuable information about airborne particles; in particular, we demonstrate the capacity to retrieve a useful lidar ratio from Aeolus observations. This is illustrated using Saharan dust aerosol observed in June 2020.</p>
S. Kremser, J. S. Tradowsky
et al.
Upper-air measurements of essential climate variables (ECVs), such as
temperature, are crucial for climate monitoring and climate change detection.
Because of the internal variability of the climate system, many decades of
measurements are typically required to robustly detect any trend in the
climate data record. It is imperative for the records to be temporally
homogeneous over many decades to confidently estimate any trend.
Historically, records of upper-air measurements were primarily made for
short-term weather forecasts and as such are seldom suitable for studying
long-term climate change as they lack the required continuity and
homogeneity. Recognizing this, the Global Climate Observing System (GCOS)
Reference Upper-Air Network (GRUAN) has been established to provide
reference-quality measurements of climate variables, such as temperature,
pressure, and humidity, together with well-characterized and traceable
estimates of the measurement uncertainty. To ensure that GRUAN data products
are suitable to detect climate change, a scientifically robust instrument
replacement strategy must always be adopted whenever there is a change in
instrumentation. By fully characterizing any systematic differences between
the old and new measurement system a temporally homogeneous data series can
be created. One strategy is to operate both the old and new instruments in
tandem for some overlap period to characterize any inter-instrument biases.
However, this strategy can be prohibitively expensive at measurement sites
operated by national weather services or research institutes. An alternative
strategy that has been proposed is to alternate between the old and new
instruments, so-called interlacing, and then statistically derive the
systematic biases between the two instruments. Here we investigate the
feasibility of such an approach specifically for radiosondes, i.e. flying the
old and new instruments on alternating days. Synthetic data sets are used to
explore the applicability of this statistical approach to radiosonde change
management.
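The interlacing idea can be illustrated with a toy synthetic experiment, in the spirit of the synthetic data sets mentioned above: the old and new sondes sample the same slowly varying "atmosphere" on alternating days, and the inter-instrument bias is estimated from the difference of the two sample means. All numbers below (bias, noise, seasonal cycle) are hypothetical, not the study's simulation setup.

```python
import math
import random

# Toy interlacing sketch: alternating-day flights of two instruments,
# one carrying a known systematic bias, over a seasonal temperature cycle.

def simulate_interlacing(n_days=2000, bias_k=0.3, noise_k=1.5, seed=1):
    rng = random.Random(seed)
    old, new = [], []
    for day in range(n_days):
        truth = 220.0 + 5.0 * math.sin(2 * math.pi * day / 365.25)  # seasonal cycle
        if day % 2 == 0:  # old instrument days
            old.append(truth + rng.gauss(0.0, noise_k))
        else:             # new instrument days, with a systematic bias
            new.append(truth + bias_k + rng.gauss(0.0, noise_k))
    return sum(new) / len(new) - sum(old) / len(old)

print(f"estimated bias: {simulate_interlacing():.2f} K")  # close to 0.30 K
```

Because the seasonal signal is nearly identical on even and odd days, it largely cancels in the difference of means; the residual uncertainty is set by the flight-to-flight noise and the number of flights.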
<p>Tropospheric clouds are a very important component of the climate system and
the hydrological cycle in the Arctic and sub-Arctic. Liquid water path
(LWP) is one of the key parameters of clouds urgently needed for a variety of
studies, including the snow cover and climate modelling at northern
latitudes. A joint analysis was made of the LWP values obtained from observations by
the SEVIRI satellite instrument and from ground-based observations by the
RPG-HATPRO microwave radiometer near St Petersburg, Russia (60° N,
30° E). The time period of selected data sets spans 2
years (December 2012–November 2014) excluding winter months, since the
specific requirements for SEVIRI observations restrict measurements at
northern latitudes in winter when the solar zenith angle is too large. The
radiometer measurement site is located very close to the shore of the Gulf of
Finland, and our study has revealed considerable differences between the LWP
values obtained by SEVIRI over land and over water areas in the region under
investigation. Therefore, special attention was paid to the analysis of
the LWP spatial distributions derived from SEVIRI observations at scales from
15 to 150 km in the vicinity of St Petersburg. Good agreement between the
daily median LWP values obtained from the SEVIRI and the RPG-HATPRO
observations was shown: the rms difference was estimated at
0.016 kg m<sup>−2</sup> for a warm season and 0.048 kg m<sup>−2</sup> for a cold
season. Over 7 months (February–May and August–October), the SEVIRI
and the RPG-HATPRO instruments revealed similar diurnal variations in LWP,
while considerable discrepancies between the diurnal variations obtained by
the two instruments were detected in June and July. On the basis of
reanalysis data, it was shown that the LWP diurnal cycles are
characterised by considerable interannual variability.</p>
Producing geometric designs and images on materials, such as pottery, basketry, and bead artwork, as well as the human body, is elemental and widespread among Amazonian Indigenous peoples. In this article, we examine the different geometric forms identified in the precolonial geoglyph architecture of southwestern Amazonia in the context of geometric design making and relational ontologies. Our aim is to explore earthwork iconography through the lens of Amerindian visual arts and movement. Combining ethnographic and archaeological data from the Upper Purus, Brazil, the article shows how ancient history and socio‐cosmology are deeply “written” onto the landscape in the form of geometric earthworks carved out of the soil, which materialize interactions between nonhuman and human actors. We underline skills in visualization, imaginative practices, and movement as ways to promote well‐balanced engagements with animated life forms. Here, iconography inserted in the landscape is both a form of writing and also emerges as an agent, affecting people through visual and corporal practices. [geometric designs, earthworks, visualization, movement, Amazonia]
Rain time series records are generally studied using rainfall rate
or accumulation parameters, which are estimated for a fixed duration
(typically 1 min, 1 h or 1 day). In this study we use the concept of
<q>rain events</q>. The aim of the first part of this paper is to establish a
parsimonious characterization of rain events, using a minimal set of
variables selected among those normally used for the characterization of
these events. A methodology is proposed, based on the combined use of a
genetic algorithm (GA) and self-organizing maps (SOMs). It can be
advantageous to use an SOM, since it allows a high-dimensional data space to
be mapped onto a two-dimensional space while preserving, in an unsupervised
manner, most of the information contained in the initial space topology. The
2-D maps obtained in this way allow the relationships between variables to be
determined and redundant variables to be removed, thus leading to a minimal
subset of variables. We verify that such 2-D maps make it possible to
determine the characteristics of all events, on the basis of only five
features (the event duration, the peak rain rate, the rain event depth, the
standard deviation of the event rain rate and the absolute rain rate
variation of the order of 0.5). From this minimal subset of variables,
hierarchical cluster analyses were carried out. We show that clustering into
two classes allows the conventional convective and stratiform classes to be
determined, whereas classification into five classes allows this convective–stratiform
classification to be further refined. Finally, our study made
it possible to reveal the presence of some specific relationships between
these five classes and the microphysics of their associated rain events.
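The five retained descriptors can be computed directly from an event's rain-rate series. In the sketch below, the "absolute rain rate variation of the order of 0.5" is given one plausible reading, the mean of |ΔR|^0.5 between consecutive samples; that reading is an assumption for illustration, not the paper's exact definition.

```python
# Sketch: the five retained event descriptors from a 1 min rain-rate
# series (mm/h). The order-0.5 variation is a hypothetical reading.

def event_features(rates_mm_h, dt_min=1.0):
    duration_min = len(rates_mm_h) * dt_min
    peak = max(rates_mm_h)                                   # peak rain rate
    depth_mm = sum(r * dt_min / 60.0 for r in rates_mm_h)    # event depth
    mean = sum(rates_mm_h) / len(rates_mm_h)
    std = (sum((r - mean) ** 2 for r in rates_mm_h) / len(rates_mm_h)) ** 0.5
    var05 = (sum(abs(b - a) ** 0.5 for a, b in zip(rates_mm_h, rates_mm_h[1:]))
             / max(len(rates_mm_h) - 1, 1))                  # order-0.5 variation
    return duration_min, peak, depth_mm, std, var05

print(event_features([0.5, 2.0, 12.0, 6.0, 1.0, 0.2]))
```

Feature vectors of this kind would then feed the SOM for redundancy analysis and the hierarchical clustering into two or five classes.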
Radiative fluxes at the top of the atmosphere (TOA) from the Clouds and the
Earth's Radiant Energy System (CERES) instrument are fundamental variables
for understanding the Earth's energy balance and how it changes with time.
TOA radiative fluxes are derived from the CERES radiance measurements using
empirical angular distribution models (ADMs). This paper evaluates the
accuracy of CERES TOA fluxes using direct integration and flux consistency
tests. Direct integration tests show that the overall bias in regional
monthly mean TOA shortwave (SW) flux is less than 0.2 Wm<sup>−2</sup> and the RMSE is less than 1.1 Wm<sup>−2</sup>. The bias and RMSE are very similar
between Terra and Aqua. The bias in regional monthly mean
TOA LW fluxes is less than 0.5 Wm<sup>−2</sup> and the RMSE is less than 0.8 Wm<sup>−2</sup> for both Terra and Aqua. The accuracy of the TOA
instantaneous flux is assessed by performing tests using fluxes inverted from
nadir- and oblique-viewing angles using CERES along-track observations and
temporally and spatially matched MODIS observations, and using fluxes
inverted from multi-angle MISR observations. The averaged TOA instantaneous
SW flux uncertainties from these two tests are about 2.3 % (1.9 Wm<sup>−2</sup>)
over clear ocean, 1.6 % (4.5 Wm<sup>−2</sup>) over clear land, and 2.0 % (6.0 Wm<sup>−2</sup>) over clear snow/ice; and are about 3.3 % (9.0 Wm<sup>−2</sup>), 2.7 %
(8.4 Wm<sup>−2</sup>), and 3.7 % (9.9 Wm<sup>−2</sup>) over ocean, land, and snow/ice
under all-sky conditions. The TOA SW flux uncertainties are generally larger
for thin broken clouds than for moderate and thick overcast clouds. The TOA
instantaneous daytime LW flux uncertainties derived from the CERES-MODIS test
are 0.5 % (1.5 Wm<sup>−2</sup>), 0.8 % (2.4 Wm<sup>−2</sup>), and 0.7 % (1.3 Wm<sup>−2</sup>)
over clear ocean, land, and snow/ice; and are about 1.5 % (3.5 Wm<sup>−2</sup>),
1.0 % (2.9 Wm<sup>−2</sup>), and 1.1 % (2.1 Wm<sup>−2</sup>) over ocean, land, and
snow/ice under all-sky conditions. The TOA instantaneous nighttime LW flux
uncertainties are about 0.5–1 % (< 2.0 Wm<sup>−2</sup>) for all surface types.
Flux uncertainties caused by errors in scene identification are also assessed
by using the collocated CALIPSO, CloudSat, CERES and MODIS data product.
Errors in scene identification tend to underestimate TOA SW flux by about 0.6 Wm<sup>−2</sup> and overestimate
TOA daytime (nighttime) LW flux by 0.4 (0.2) Wm<sup>−2</sup> when all CERES viewing angles are considered.
The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds,
Land and Ocean) has been in use for cloud detection and cloud property
retrieval since the late 1980s. The physics of the APOLLO scheme still form
the backbone of a range of cloud detection algorithms for AVHRR (Advanced
Very High Resolution Radiometer) heritage instruments. The
APOLLO_NG (APOLLO_NextGeneration) cloud
processing scheme is a probabilistic interpretation of the original APOLLO
method. It builds upon the physical principles that have served well in the
original APOLLO scheme. Nevertheless, a couple of additional variables have
been introduced in APOLLO_NG. Cloud detection is no longer
performed as a binary yes/no decision based on these physical principles. It
is rather expressed as cloud probability for each satellite pixel.
Consequently, depending on the purpose, the outcome of the algorithm can be
tuned between reliably identifying clear pixels and reliably identifying
definitely cloudy pixels. The probabilistic
approach allows retrieving not only the cloud properties (optical depth,
effective radius, cloud top temperature and cloud water path) but also their
uncertainties. APOLLO_NG is designed as a standalone cloud
retrieval method robust enough for operational near-real-time use and for
application to large amounts of historical satellite data. The radiative
transfer solution is approximated by the same two-stream approach which also
had been used for the original APOLLO. This allows the algorithm to be
applied to a wide range of sensors without the necessity of sensor-specific
tuning. Moreover it allows for online calculation of the radiative transfer
(i.e., within the retrieval algorithm) giving rise to a detailed
probabilistic treatment of cloud variables. This study presents the
algorithm for cloud detection and cloud property retrieval together with the
physical principles from the APOLLO legacy it is based on. Furthermore a
couple of example results from NOAA-18 are presented.
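The tunable clear/cloudy trade-off described above amounts to choosing an operating point on a per-pixel cloud probability. A minimal sketch, with hypothetical probabilities and thresholds:

```python
# Sketch of the probabilistic tuning idea: each pixel carries a cloud
# probability, and the user picks the threshold for the purpose at hand
# (thresholds and probabilities below are hypothetical).

def cloud_mask(probabilities, threshold):
    """Binary mask derived from per-pixel cloud probabilities."""
    return [p >= threshold for p in probabilities]

probs = [0.02, 0.35, 0.80, 0.97]
print(cloud_mask(probs, 0.05))  # low threshold: only confident clear pixels pass
print(cloud_mask(probs, 0.95))  # high threshold: only confident cloudy pixels flagged
```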
We have used two methods for measuring emission factors (EFs) in real driving
conditions on five cars in a controlled environment: the stationary method,
where the investigated vehicle drives by the stationary measurement platform
and the composition of the plume is measured, and the chasing method, where
a mobile measurement platform drives behind the investigated vehicle. We
measured EFs of black carbon and particle number concentration. The
stationary method was tested for repeatability at different speeds and on a
slope. The chasing method was tested on a test track and compared to the
portable emission measurement system. We further developed the data
processing algorithm for both methods, trying to improve consistency,
determine the plume duration, limit the background influence and facilitate
automatic processing of measurements. The comparison of emission factors
determined by the two methods showed good agreement. EFs of a single car
measured with either method have a specific distribution with a
characteristic value and a long tail of super emissions. Measuring EFs at
different speeds or on slopes did not significantly change the EFs of the
individual cars; hence, we propose a new description of vehicle emissions that
is not related to kinematic or engine parameters, and we rather describe the
vehicle EF with a characteristic value and a super emission tail.
R. Schnitzhofer, A. Metzger, M. Breitenlechner
et al.
The CLOUD experiment (<b>C</b>osmics <b>L</b>eaving <b>OU</b>tdoor
<b>D</b>roplets) investigates the nucleation of new particles and how this
process is influenced by galactic cosmic rays in an electropolished,
stainless-steel environmental chamber at CERN (European Organization for
Nuclear Research). Since volatile organic compounds (VOCs) can act as
precursor gases for nucleation and growth of particles, great efforts have
been made to keep their unwanted background levels as low as possible and to
quantify them. In order to be able to measure a great set of VOCs
simultaneously in the low parts per trillion (pptv) range,
proton-transfer-reaction mass spectrometry (PTR-MS) was used. Initially the
total VOC background concentration strongly correlated with ozone in the
chamber and ranged from 0.1 to 7 parts per billion (ppbv). Plastic used as
sealing material in the ozone generator was found to be a major VOC source.
Especially oxygen-containing VOCs were generated together with ozone. These
parts were replaced by stainless steel after CLOUD3, which strongly reduced
the total VOC background. An additional ozone-induced VOC source is
surface-assisted reactions at the electropolished stainless steel walls. The
change in relative humidity (RH) from very dry to humid conditions increases
background VOCs released from the chamber walls. This effect is especially
pronounced when the RH is increased for the first time in a campaign. The
dead volume of inlet tubes for trace gases that were not continuously
flushed was also found to be a short-lived but strong VOC contamination source. For
lower ozone levels (below 100 ppbv) the total VOC contamination was usually
below 1 ppbv, making the chamber considerably cleaner than a comparable Teflon
chamber. On average about 75% of the total VOCs come from only five exact
masses (tentatively assigned as formaldehyde, acetaldehyde, acetone, formic
acid, and acetic acid), which have a rather high vapour pressure and are
therefore not important for nucleation and growth of particles.
The present paper addresses the detection of turbulence based on the Thorpe (1977) method applied to an atmosphere where saturation of water vapor occurs. The detection method proposed by Thorpe relies on the sorting in ascending order of a measured profile of a variable conserved through adiabatic processes (e.g. potential temperature). For saturated air, the reordering should be applied to a moist-conservative potential temperature, θ<sub>m</sub>, which is analogous to potential temperature for a dry (subsaturated) atmosphere. Here, θ<sub>m</sub> is estimated from the Brunt–Väisälä frequency derived by Lalas and Einaudi (1974) in a saturated atmosphere. The application to balloon data shows that the effective turbulent fraction of the troposphere can dramatically increase when saturation is taken into account. Preliminary results of comparisons with data simultaneously collected from the VHF Middle and Upper atmosphere radar (MUR, Japan) seem to give credence to the proposed approach.
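The sorting step at the heart of the Thorpe method can be sketched directly: the measured profile is reordered into its gravitationally stable state, and each sample's required vertical displacement is recorded. The profile below is synthetic; for saturated air the same reordering would be applied to θ<sub>m</sub> rather than θ.

```python
# Minimal sketch of the Thorpe (1977) sorting method on a synthetic
# potential-temperature profile.

def thorpe_displacements(theta, z):
    """Sort the profile into ascending (stable) order and return, for each
    level, the distance the sample must move: Thorpe displacement."""
    order = sorted(range(len(theta)), key=lambda i: theta[i])
    disp = [0.0] * len(theta)
    for rank, i in enumerate(order):
        disp[i] = z[rank] - z[i]  # where the sample belongs minus where it is
    return disp

z = [0.0, 10.0, 20.0, 30.0, 40.0]            # height (m)
theta = [300.0, 300.4, 300.3, 300.1, 300.6]  # K; overturn in the middle levels
d = thorpe_displacements(theta, z)
thorpe_length = (sum(x * x for x in d) / len(d)) ** 0.5  # rms displacement
print(d, thorpe_length)
```

Regions of non-zero displacement mark statically unstable overturns, and the rms displacement (Thorpe length) characterizes the turbulence scale.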