Abstract Earthwork Allocation and Transportation (EAT) significantly impacts construction cost in Railway Alignment (RA) optimization, but little research has been devoted to this problem. To this end, an RA-EAT bi-level optimization framework is developed. A concurrent RA-EAT design model is formulated at the upper level. At the lower level, the EAT system is modelled by incorporating the partition of earthwork allocation sections, the selection of soil waste/borrow pits, and the generation of access roads. To solve this bi-level model, a tailored solution strategy is proposed. Specifically, a candidate pool of borrow/waste pits is established via a moving-window method. Then, RA alternatives are generated using Particle Swarm Optimization (PSO). Afterward, the EAT model is solved with a hierarchical approach: EAT sections along an RA are determined with a divide-and-conquer method, and access roads are configured with a modified Dijkstra's algorithm. Thus, the complete RA-EAT solution can be iteratively evolved through the PSO rationale. Finally, the proposed method is verified and analyzed on a real-world case via a sensitivity analysis and a comparative experiment.
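The upper-level search described above can be sketched as a standard PSO loop in which each particle encodes an RA parameter vector and its fitness is returned by the lower-level EAT solver. The sketch below is a minimal, self-contained illustration: the quadratic `eat_cost` is a stand-in for the paper's EAT model, and all names, dimensions, and PSO coefficients are assumptions, not the authors' implementation.

```python
import random

random.seed(0)  # reproducible toy run

def eat_cost(alignment):
    # Stand-in for the lower-level EAT solver: a hypothetical quadratic
    # cost. The paper's model would instead partition allocation sections,
    # select borrow/waste pits, and route access roads for this alignment.
    return sum((x - 0.5) ** 2 for x in alignment)

def pso(dim=4, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO over alignment parameter vectors in [0, 1]^dim."""
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [eat_cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = eat_cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, cost = pso()
```

In the full bi-level scheme, the fitness call would itself run the divide-and-conquer section partition and the modified Dijkstra access-road routing.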
Abstract Earthwork productivity analysis is essential for successful construction projects. If productivity analysis results can be accessed anytime and anywhere, project management can be performed more efficiently. To this end, this paper proposes an earthwork productivity monitoring framework via a real-time scene-updating multi-vision platform. The framework consists of four main processes: 1) site-optimized database development; 2) real-time monitoring model updating; 3) multi-vision productivity monitoring; and 4) a web-based monitoring platform for Internet-connected devices. The experimental results demonstrated satisfactory performance, with an average macro F1-score of 87.3% for continuous site-specific monitoring, an average accuracy of 86.2% for activity recognition, and the successful operation of multi-vision productivity monitoring through a web-based platform in real time. The findings can help site managers understand real-time earthmoving operations while achieving better construction project and information management.
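The macro F1-score reported above is the unweighted mean of per-class F1 scores, which keeps frequent activity classes from dominating the metric. A minimal sketch (the activity labels are invented for illustration, not the paper's class set):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but true class was t
            fn[t] += 1  # missed an instance of t
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Averaging per-class scores means a rare activity (e.g. idling) contributes as much to the reported 87.3% as a dominant one (e.g. hauling).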
<p>This study introduces the primary products and features of the active-sensor-based Level 2 cloud microphysics products of the Japan Aerospace Exploration Agency (JAXA), i.e., the cloud radar standalone cloud product (CPR_CLP), the radar–lidar synergy cloud product (AC_CLP), and the radar–lidar–imager cloud product (ACM_CLP). By combining the 94 GHz Doppler cloud profiling radar (CPR), the 355 nm high-spectral-resolution lidar (Atmospheric Lidar, ATLID), and the Multi-Spectral Imager (MSI), these products provide a detailed view of the transitions of cloud particle categories and their size distributions. Simulated EarthCARE Level 1 data mimicking actual global observations were used to assess the performance of the JAXA Level 2 cloud microphysics products. Evaluation of the products revealed that the retrievals reasonably reproduced the vertical profile of the modeled microphysics. Further validation of the products is planned for post-launch calibration and validation. Velocity-related JAXA Level 2 products (i.e., CPR_VVL, AC_VVL, and ACM_VVL), such as hydrometeor fall speed and vertical air velocity, will be described in a future paper.</p>
Using Ni(II) as the catalyst, the electron-deficient 3,5-dimethylacryloylpyrazole olefin was reacted with C,N-diarylnitrones for only 10 min to prepare novel five-membered heterocyclic products, 4-3,5-dimethylacryloylpyrazole isoxazolidines, with 100% regioselectivity and up to 99% yield. Then, taking these cycloadducts as substrates, six kinds of derivatization reactions, such as ring-opening, nucleophilic substitution, addition-elimination, and reduction, were studied. Experimental results showed that all of these transformations afforded the target products at high conversion under mild conditions, providing basic methods for organic synthesis methodology research based on the isoxazolidine skeleton.
<p>A pre-deployment calibration and a field validation of two low-cost (LC) stations equipped with <span class="inline-formula">O<sub>3</sub></span> and <span class="inline-formula">NO<sub>2</sub></span> metal oxide sensors were carried out. Pre-deployment calibration was performed after developing and implementing a comprehensive calibration framework including several supervised learning models, such as univariate linear and non-linear algorithms, and multiple linear and non-linear algorithms. Univariate linear models included linear and robust regression, while univariate non-linear models included a support vector machine, random forest, and gradient boosting. Multiple models consisted of both parametric and non-parametric algorithms. Internal temperature, relative humidity, and gaseous interference compounds proved to be the most suitable predictors for multiple models, as they helped effectively mitigate the impact of environmental conditions and pollutant cross-sensitivity on sensor accuracy. A feature analysis, implementing dominance analysis, feature permutations, and the SHapley Additive exPlanations method, was also performed to provide further insight into the role played by each individual predictor and its impact on sensor performance. This study demonstrated that while multiple random forest (MRF) returned a higher accuracy than multiple linear regression (MLR), it did not accurately represent physical models beyond the pre-deployment calibration dataset, so a linear approach may overall be a more suitable solution. Furthermore, as well as being less computationally demanding and generally more suitable for non-experts, parametric models such as MLR have a defined equation that also includes a few parameters, which allows easy adjustments for possible changes over time. 
Thus, drift correction or periodic automatable recalibration operations can be easily scheduled, which is particularly relevant for <span class="inline-formula">NO<sub>2</sub></span> and <span class="inline-formula">O<sub>3</sub></span> metal oxide sensors. As demonstrated in this study, they performed well with the same linear model form but required unique parameter values due to intersensor variability.</p>
The growing penetration of inverter-based resources and associated controls necessitates system-wide electromagnetic transient (EMT) analyses. Today's EMT tools and methods were not designed for the scale of these analyses. In light of this emerging need, there is a great deal of interest in developing new techniques for fast and accurate EMT simulation of large power grids, the foundations of which will be built on current tools and methods. However, we find that educational texts covering the fundamentals and inner workings of current EMT tools are limited, so there is a lack of introductory material for students and professionals interested in researching the field. To that end, in this tutorial we introduce the principles of EMT analysis from the circuit-theoretic viewpoint, mimicking how time-domain analyses are performed in circuit simulation tools such as SPICE and Cadence. We perform EMT simulations for two examples, one linear and one nonlinear, including an induction motor (IM), from first principles. By the document's end, we anticipate that readers will have a \textit{basic} understanding of how power grid EMT tools work.
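The circuit-theoretic approach the tutorial describes rests on discretizing dynamic elements into companion models, as EMTP-style and SPICE-like tools do. As a hedged illustration, the sketch below steps a series R-C circuit with the trapezoidal rule, replacing the capacitor by an equivalent conductance plus a history current source; the element values and step size are arbitrary, and real tools assemble the same companion equations for full networks.

```python
def simulate_rc(R, C, V, dt, steps):
    """Trapezoidal-rule companion model of a series R-C circuit driven by
    a DC source V: the capacitor becomes a conductance Geq = 2C/dt in
    parallel with a history current source (the discretization used by
    EMTP-style tools). Returns the capacitor voltage at each step."""
    Geq = 2.0 * C / dt
    vc, ic = 0.0, 0.0          # capacitor voltage and current
    out = []
    for _ in range(steps):
        Ihist = ic + Geq * vc  # history current from the previous step
        # Nodal equation at the capacitor node:
        #   (V - vc)/R = Geq*vc - Ihist  ->  vc*(1/R + Geq) = V/R + Ihist
        vc = (V / R + Ihist) / (1.0 / R + Geq)
        ic = Geq * vc - Ihist  # trapezoidal capacitor current update
        out.append(vc)
    return out

# Charging a 1 F capacitor through 1 ohm toward 1 V, 1 ms steps
out = simulate_rc(R=1.0, C=1.0, V=1.0, dt=0.001, steps=5000)
```

At t = 5 s (5 time constants) the simulated voltage closely matches the analytic solution 1 - exp(-5), illustrating the second-order accuracy of the trapezoidal rule.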
The picture fuzzy set (PFS) was introduced by Cuong in 2014 as a generalization of intuitionistic fuzzy sets (Atanassov in Fuzzy Sets Syst 20(1):87–96, 1986) and fuzzy sets (Zadeh in Inf Control 8(3):338–353, 1965). The picture fuzzy number (PFN) is an ordered triple consisting of a membership degree, a neutral-membership degree, and a non-membership degree of a PFS. The PFN is a useful tool for studying real-life problems that involve uncertain information. In this paper, the main aim is to develop basic foundations that can become tools for future research related to PFNs and picture fuzzy calculus. We first establish a semi-linear space for PFNs by providing two new definitions of two basic operations, addition and scalar multiplication, such that the set of PFNs together with these two operations forms a semi-linear space. Moreover, we also provide some important properties and concepts, such as metrics, order relations between two PFNs, the geometric difference, and the multiplication of two PFNs. Next, we introduce picture fuzzy functions with a real domain, also known as picture fuzzy functions with time-varying values, called geometric picture fuzzy functions (GPFFs). In this framework, we give definitions of the limits of GPFFs and of sequences of PFNs. The important limit properties are also presented in detail. Finally, we prove that the metric semi-linear space of PFNs is complete, which is an important property in classical mathematical analysis.
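For concreteness, a PFN can be represented as a constrained triple. The operations below use one commonly cited picture fuzzy addition and scalar multiplication (probabilistic-sum/product form) purely as an illustration of closure under the constraint; the paper introduces its own new definitions, which may differ.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PFN:
    mu: float   # membership degree
    eta: float  # neutral-membership degree
    nu: float   # non-membership degree
    def __post_init__(self):
        # A PFN requires nonnegative degrees with mu + eta + nu <= 1.
        assert self.mu >= 0 and self.eta >= 0 and self.nu >= 0
        assert self.mu + self.eta + self.nu <= 1 + 1e-12

def add(a, b):
    # Probabilistic sum on membership, products on the other degrees.
    # Closure holds since (1-mu_a)(1-mu_b) >= eta_a*eta_b + nu_a*nu_b.
    return PFN(a.mu + b.mu - a.mu * b.mu, a.eta * b.eta, a.nu * b.nu)

def smul(k, a):
    # Scalar multiplication consistent with repeated addition, k >= 0.
    return PFN(1 - (1 - a.mu) ** k, a.eta ** k, a.nu ** k)
```

With these definitions, `smul(2, a)` agrees with `add(a, a)`, which is the kind of compatibility a semi-linear space structure requires.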
A recurring problem in game semantics is to enforce uniformity in strategies. Informally, a strategy is uniform when the Player's behaviour does not depend on the particular indexing of moves chosen by the Opponent. In game semantics, uniformity is used to define a resource modality !, that can be exploited for the semantics of programming languages. In this paper we give a new account of uniformity for strategies on event structures. This work is inspired by an older idea due to Melli\`es, that uniformity should be expressed as "bi-invariance" with respect to two interacting group actions. We explore the algebraic foundations of bi-invariance, adapt this idea to the language of event structures and define a general notion of uniform strategy in this context. Finally we revisit an existing approach to uniformity, and show how this arises as a special case of our constructions.
Abstract Discussions of the resiliency, sustainability, and agility of supply chains are important in the research and management of supply chains in these difficult times, considering the ongoing COVID-19 pandemic. A viable supply chain is often characterized by resiliency, sustainability, and agility in its network design. Resiliency is essential because disruptions and demand fluctuations are forced upon supply chains, and their effects on many managed supply chains are unknown. In addition, applying novel technologies in the supply chain, such as blockchain, the Internet of Things (IoT), and artificial intelligence (AI), as agility tools can assist and enable the transition to lean production. This special issue of the Foundations of Computing and Decision Sciences is dedicated to advancements in these fields. The special issue also covers instructional information about operations research (OR) techniques that are useful for addressing real-world applications of such challenges.
We aim to show from a new perspective that Quine’s ontological relativity, based largely on his so-called “proxy-function argument”, falls short of being a rigorously coherent philosophical conception, as it exhibits significant formal defects. This new perspective makes it possible to expose the shortcomings of Quine's position and suggests a possible reformulation of the original position. Moreover, we argue that his ontological relativity is inconsistent with the empirical data associated with some of our best physical theories, such as quantum mechanics. We refer to fundamental concepts of philosophy and the foundations of mathematics in order to clarify our critique of Quine’s position concerning the relation between formalized theories and both what we can know about the real world and how we come to know it.
<p>The Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) instrument operated from 25 October 1978 through 28 May 1979. Its Version 6 (V6)
profiles and their Level 3 or zonal Fourier coefficient products have been
characterized and archived in 2008 and in 2011, respectively. This paper
focuses on the value and use of daily ozone maps from Level 3, based on a
gridding of its zonal coefficients. We present maps of V6 ozone on pressure
surfaces and compare them with several rocket-borne chemiluminescent ozone
measurements that extend into the lower mesosphere. We illustrate how the
synoptic maps of V6 ozone and temperature are an important aid in
interpreting satellite limb-infrared emission versus local measurements,
especially when they occur during dynamically active periods of Northern
Hemisphere winter. A map sequence spanning the minor stratospheric warmings
of late January and early February characterizes the evolution of a low-ozone pocket (LOP) at that time. We also present time series of the
wintertime tertiary ozone maximum and its associated zonally varying
temperatures in the upper mesosphere. These examples provide guidance to
researchers for further exploratory analyses of the daily maps of middle
atmosphere ozone from LIMS.</p>
<p>Trade wind cumulus clouds have a significant impact on the Earth's radiative balance due to their ubiquitous presence and significant coverage in subtropical regions. Many numerical studies and field campaigns have focused on better understanding the thermodynamic, microphysical, and macroscopic properties of cumulus clouds with ground-based and satellite remote sensing as well as in situ observations. Aircraft flights have provided a significant contribution, but their resolution remains limited by rectilinear transects and fragmented temporal data for individual clouds. To provide a higher spatial and temporal resolution, remotely piloted aircraft (RPA) can now be employed for direct observations using numerous technological advances to map the microphysical cloud structure and to study entrainment mixing. In fact, the numerical representation of mixing processes between a cloud and the surrounding air has been a key issue in model parameterizations for decades. To better study these mixing processes as well as their impacts on cloud microphysical properties, the paper aims to improve exploration strategies that can be implemented by a fleet of RPA.</p>
<p>Here, we use a large-eddy simulation (LES) of shallow maritime cumulus clouds to design adaptive sampling strategies. An implementation of the RPA flight simulator within high-frequency LES outputs (every 5 s) allows tracking individual clouds. A rosette sampling strategy is used to explore clouds of different sizes that are static in time and space. The adaptive sampling carried out by these explorations is optimized using one or two RPA and with or without Gaussian process regression (GPR) mapping by comparing the results obtained with those of a reference simulation, in particular the total liquid water content (LWC) and the LWC distribution in a horizontal cross section. Also, a sensitivity test of length scale for GPR mapping is performed.
The results of exploring a static cloud are then extended to a dynamic case of a cloud evolving with time to assess the application of this exploration strategy to study the evolution of cloud heterogeneities.
While a single RPA coupled to GPR mapping remains insufficient to accurately reconstruct individual clouds, two RPA with GPR mapping adequately characterize cloud heterogeneities on scales small enough to quantify the variability of important parameters such as total LWC.</p>
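GPR mapping of the kind used in these explorations interpolates sparse along-track samples onto a regular grid via a covariance (kernel) model. The sketch below is a minimal pure-NumPy version with a squared-exponential kernel and a synthetic smooth "LWC" field; the sample locations, length scale, and field are invented for illustration and are not the study's configuration.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between point sets of shape (n,2), (m,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gpr_predict(X, y, Xs, noise=1e-4, ls=0.3):
    """GP regression mean at query points Xs from samples (X, y)."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))  # jitter for stability
    return rbf(Xs, X, ls) @ np.linalg.solve(K, y)

def field(p):
    # Hypothetical smooth LWC-like field in a unit cross section
    return np.exp(-((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2) / 0.1)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))   # sparse samples along mock RPA tracks
y = field(X)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 21),
                            np.linspace(0, 1, 21)), -1).reshape(-1, 2)
lwc_map = gpr_predict(X, y, grid)     # reconstructed cross-section map
```

The kernel length scale plays the same role as the one varied in the sensitivity test above: too short and the map reverts to the prior between tracks, too long and small-scale heterogeneities are smoothed away.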
<p>The microwave temperature profiler (MTP), an airborne
passive microwave radiometer, measures radiances, recorded as counts and
calibrated to brightness temperatures, in order to estimate temperature
profiles around flight altitude. From these data, quantities such as
potential temperature gradients and static stability, indicating the state
of the atmosphere, can be derived and used to assess important dynamical
processes (e.g., gravity waves or stability assessments). DLR has acquired a
copy of the MTP from NASA–JPL, which was designed as a wing-canister
instrument and is deployed on the German High Altitude
LOng range research aircraft (HALO). For this
instrument a thorough analysis of instrument characteristics has been made
in order to correctly determine the accuracy and precision of MTP
measurements.</p>
<p>Using a laboratory setup, the frequency response function and antenna
diagram of the instrument were carefully characterized. A cold chamber was
used to simulate the changing in-flight conditions and to derive noise
characteristics as well as reliable calibration parameters for brightness
temperature calculations, which are compared to those calculated from
campaign data.</p>
<p>The MTP shows quite large changes in the instrument state, imposing
considerable changes in calibration parameters over the course of a single
measurement flight; using a built-in heated target for calibration may yield
large errors in brightness temperatures due to a misinterpretation of the
measured absolute temperature. Applying the corrections presented herein to the
calibration parameter calculations, the measurement noise becomes the
dominant source of uncertainty and it is possible to measure the brightness
temperatures around flight level (closely related to the absolute
temperature close to the instrument) with a precision of 0.38 K.
Furthermore, radiative transfer simulations, using the Py4CAtS package in a
pencil-beam approach, indicate that the altitude range of the sensitivity of the
MTP instrument can be increased by applying a modified measurement strategy.</p>
<p>This is the first time such an extensive characterization of an MTP
instrument, including a thorough calibration strategy assessment, has been
published. The presented results, relevant for the wing-canister design of
the MTP instrument, are important when processing MTP data: knowledge of the
relevant uncertainties and instrument characteristics is essential for
retrieval setup and is mandatory to correctly identify and interpret
significant atmospheric temperature fluctuations.</p>
<p>The retrieval of turbulence parameters with profiling Doppler wind lidars (DWLs) is of high interest for boundary layer meteorology and its applications. DWLs provide wind measurements above the level of meteorological masts while being easier and less expensive to deploy. Velocity-azimuth display (VAD) scans can be used to retrieve the turbulence kinetic energy (TKE) dissipation rate through a fit of measured azimuth structure functions to a theoretical model. At the elevation angle of 35.3<span class="inline-formula"><sup>∘</sup></span> it is also possible to derive TKE. Modifications to existing retrieval methods are introduced in this study to reduce errors due to advection and enable retrievals with a low number of scans. Data from two experiments are utilized for validation: first, measurements at the Meteorological Observatory Lindenberg–Richard-Aßmann Observatory (MOL-RAO) are used for the validation of the DWL retrieval with sonic anemometers on a meteorological mast. Second, distributed measurements of three DWLs during the CoMet campaign with two different elevation angles are analyzed. For the first time, the ground-based DWL VAD retrievals of TKE and its dissipation rate are compared to in situ measurements of a research aircraft (here: DLR Cessna Grand Caravan 208B), which allows for measurements of turbulence above the altitudes that are in range for sonic anemometers.</p>
<p>From the validation against the sonic anemometers we confirm that lidar measurements can be significantly improved by the introduction of the volume-averaging effect into the retrieval. We introduce a correction for advection in the retrieval that only shows minor reductions in the TKE error for 35.3<span class="inline-formula"><sup>∘</sup></span> VAD scans. A significant bias reduction can be achieved with this advection correction for the TKE dissipation rate retrieval from 75<span class="inline-formula"><sup>∘</sup></span> VAD scans at the lowest measurement heights. Successive scans at 35.3 and 75<span class="inline-formula"><sup>∘</sup></span> from the CoMet campaign are shown to provide TKE dissipation rates with a good correlation of <span class="inline-formula"><i>R</i>>0.8</span> if all corrections are applied. The validation against the research aircraft encourages more targeted validation experiments to better understand and quantify the underestimation of lidar measurements in low-turbulence regimes and altitudes above tower heights.</p>
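The dissipation-rate retrieval described above fits measured structure functions to an inertial-range model; stripped of the volume-averaging and advection corrections that this study introduces, the core fit reduces to Kolmogorov's r^(2/3) scaling. A minimal sketch on synthetic data (the constant C2 = 2.0 and the through-origin least-squares fit are simplifying assumptions, not the paper's exact retrieval):

```python
import numpy as np

C2 = 2.0  # assumed Kolmogorov constant for the second-order structure function

def dissipation_from_structure_function(r, D):
    """Fit D(r) = C2 * eps**(2/3) * r**(2/3) by least squares in r**(2/3)
    and return the TKE dissipation rate eps."""
    x = r ** (2.0 / 3.0)
    slope = (x @ D) / (x @ x)        # through-origin least squares
    return (slope / C2) ** 1.5

# Synthetic check: structure function generated with eps = 1e-3 m^2 s^-3
eps_true = 1e-3
r = np.linspace(10.0, 200.0, 20)                 # separations in metres
D = C2 * eps_true ** (2.0 / 3.0) * r ** (2.0 / 3.0)
eps_hat = dissipation_from_structure_function(r, D)
```

In the actual retrieval, D(r) is built from azimuthal velocity differences within VAD scans, and the model is modified for volume averaging and advection before fitting.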
<p>Aerosol particles are essential constituents of the Earth's atmosphere, impacting the earth radiation balance directly by scattering and
absorbing solar radiation, and indirectly by acting as cloud condensation
nuclei. In contrast to most greenhouse gases, aerosol particles have short
atmospheric residence times, resulting in a highly heterogeneous distribution in space and time. There is a clear need to document this variability at
regional scale through observations involving, in particular, the in situ
near-surface segment of the atmospheric observation system. This paper provides the most extensive effort so far to document the variability of climate-relevant
in situ aerosol properties (namely wavelength dependent particle light
scattering and absorption coefficients, particle number concentration and
particle number size distribution) from all sites connected to the Global
Atmosphere Watch network. Data from almost 90 stations worldwide have been collected, quality-controlled, and reported for the 2017 reference year, providing an extended and robust view of the variability of these variables worldwide. The range of variability observed worldwide for light scattering and absorption coefficients, single-scattering albedo, and particle number concentration is presented together with preliminary information on their long-term trends and comparisons with model simulations for the different stations. The scope of the present paper
is also to provide the necessary suite of information, including data provision procedures, quality control and analysis, data policy, and usage of
the ground-based aerosol measurement network. It delivers to users of the World Data Centre on Aerosol the required confidence in data products, in the form of a fully characterized value chain, including uncertainty estimation and requirements for contributing to the global climate monitoring system.</p>
Abstract In earthquake engineering, pile foundations are designed to withstand the lateral loading that results from large displacements due to ground movement caused by strong earthquakes. Distress and failure of superstructures occur when the lateral load exceeds the ultimate lateral resistance of the piles. The aim of this study is to estimate the ultimate lateral resistance of piles, particularly in terms of the group effect induced by the pile arrangement. Several experimental and numerical analyses have been conducted on pile groups to investigate the group effect when the groups are subjected to uniform large horizontal ground movement. However, the ultimate lateral resistance of the pile groups in these studies was calculated by applying load to the piles. The present study directly assesses the ultimate lateral resistance of pile groups against ground movement by systematically varying the direction of the ground movement. Although the load bearing ratio of each pile in a pile group, defined as the ratio of the ultimate lateral resistance of each pile in a pile group to that of a single pile, is an important design criterion, it was difficult to assess in past works. This study focuses on the load bearing ratio of each pile against ground movement in various directions. The finite element method (FEM) provides options for simulating the pile-soil system with complex pile arrangements by taking the complicated geometry of the problem into account. The ultimate lateral resistance is examined here for pile groups consisting of a 2 × 2 arrangement of four piles, as well as two piles, three piles, four piles, and an infinite number of piles arranged in a row, through case studies in which the pile spacing is varied using the two-dimensional rigid plastic finite element method (RPFEM). 
The RPFEM was extended in this work to calculate not only the total ultimate lateral resistance of pile groups, but also the load bearing ratio of the piles in the group. The obtained results indicate that the load bearing ratio generally increases with an increase in pile spacing and converges to almost unity at a pile spacing ratio of 3.0 with respect to the pile diameter. Moreover, the group effect was further investigated by considering the failure mode of the ground around the piles.
Abstract Granular-continuum interfaces are widely present in geotechnical applications, including deep foundations, retaining structures, and anchoring applications. Interface mechanical properties are a function of the characteristics of the contacting soil and the opposing interface. Therefore, a robust understanding of granular-continuum interface behavior is essential to geotechnical practice. In this work, we did the following: (1) summarized the recent research on the effects of interface roughness, soil density, particle shape, and friction coefficient on interface behavior and strength; (2) simulated granular-continuum interface shear using the three-dimensional discrete element method (DEM); (3) compared the trends in DEM results to previously published physical experiments; and (4) investigated the microscale responses of the interface simulations. The DEM simulations were generally in good agreement with previously reported experimental results for similar interface roughness. DEM simulations give a bilinear strength-displacement trend consistent with that previously reported from physical experiments. The microscale investigations showed that, in the case of rougher interfaces, contact reorientation was the interface failure mechanism, and in the case of smoother interfaces, it was contact sliding. The mobilization of rougher interfaces tends to alter the force distribution in the surrounding soil.
<p>Low concentrations of ice-nucleating particles (INPs) are thought to be
important for the properties of mixed-phase clouds, but their detection is
challenging. Hence, there is a need for instruments where INP concentrations
of less than 0.01 L<sup>−1</sup> can be routinely and efficiently determined. The
use of larger volumes of suspension in drop assays increases the sensitivity
of an experiment to rarer INPs or rarer active sites due to the increase in
aerosol or surface area of particulates per droplet. Here we describe and
characterise the InfraRed-Nucleation by Immersed Particles Instrument
(IR-NIPI), a new immersion freezing assay that makes use of IR emissions to
determine the freezing temperature of individual 50 µL droplets each
contained in a well of a 96-well plate. Using an IR camera allows the
temperature of individual aliquots to be monitored. Freezing temperatures are
determined by detecting the sharp rise in well temperature associated with
the release of heat caused by freezing. In this paper we first present the
calibration of the IR temperature measurement, which makes use of the fact
that following ice nucleation aliquots of water warm to the ice–liquid
equilibrium temperature (i.e. 0 °C when water activity is
∼ 1), which provides a point of calibration for each individual
well in each experiment. We then tested the temperature calibration using
∼ 100 µm chips of K-feldspar, by immersing these chips
in 1 µL droplets on an established cold stage (µL-NIPI) as well
as in 50 µL droplets on IR-NIPI; the results were consistent with one
another, indicating no bias in the reported freezing temperature. In addition
we present measurements of the efficiency of the mineral dust NX-illite and a
sample of atmospheric aerosol collected on a filter in the city of Leeds.
NX-illite results are consistent with literature data, and the atmospheric INP
concentrations were in good agreement with the results from the µL-NIPI instrument. This demonstrates the utility of this approach, which
offers a relatively high throughput of sample analysis and access to low INP
concentrations.</p>
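The freezing-detection step, identifying the sharp warming caused by latent-heat release in a well, can be sketched as a simple threshold on consecutive temperature differences. The jump threshold and the synthetic well record below are illustrative assumptions, not the instrument's actual processing chain.

```python
def detect_freezing(temps, jump=0.5):
    """Return the index of the first sharp warming between consecutive
    samples in a cooling time series (latent-heat release on nucleation),
    or None if no freezing event is found. `jump` is in kelvin."""
    for i in range(1, len(temps)):
        if temps[i] - temps[i - 1] > jump:
            return i
    return None

# Synthetic well record: steady cooling at 0.5 K per sample, then a jump
# back toward the ice-liquid equilibrium when the aliquot freezes at -12.0
ramp = [0.0 - 0.5 * k for k in range(25)]             # cools to -12.0
record = ramp + [-2.0, -1.0, -0.5, -0.2, -0.1]        # warming after freezing
idx = detect_freezing(record)
freezing_temp = record[idx - 1] if idx is not None else None
```

The post-freeze plateau near 0 °C in each well is what provides the per-well calibration point described in the paper.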
P. J. J. Tol, T. A. van Kempen, R. M. van Hees et al.
<p>The shortwave infrared (SWIR) spectrometer module of the Tropospheric
Monitoring Instrument (TROPOMI), on board the ESA Copernicus Sentinel-5
Precursor satellite, is used to measure atmospheric CO and methane columns.
For this purpose, calibrated radiance measurements are needed that are
minimally contaminated by instrumental stray light. Therefore, a method has
been developed and applied in an on-ground calibration campaign to
characterize stray light in detail using a monochromatic quasi-point light
source. The dynamic range of the signal was extended to more than 7 orders of magnitude by performing measurements with different exposure times,
saturating detector pixels at the longer exposure times. Analysis of the
stray light indicates about 4.4 % of the detected light is correctable stray
light. An algorithm was then devised and implemented in the operational data
processor to correct in-flight SWIR observations in near-real time, based on
Van Cittert deconvolution. The stray light is approximated by a far-field
kernel independent of position and wavelength and an additional kernel
representing the main reflection. Applying this correction significantly reduces the stray-light signal; in a simulated dark forest scene close to bright clouds, for example, it does so by a factor of about 10. Simulations indicate that
this reduces the stray-light error sufficiently for accurate gas-column
retrievals. In addition, the instrument contains five SWIR diode lasers that
enable long-term, in-flight monitoring of the stray-light distribution.</p>
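Van Cittert deconvolution as described here removes stray light modelled as the true signal convolved with a far-field kernel: iterating x_(k+1) = y - K * x_k converges to the true signal when the kernel is weak. The 1-D sketch below uses an invented 1 % flat kernel purely for illustration; TROPOMI's operational kernels are 2-D and include a separate main-reflection term.

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=5):
    """Iteratively remove convolutional stray light under the model
    measured = signal + kernel * signal (far-field approximation).
    Converges when the kernel's total weight is well below 1."""
    est = measured.copy()
    for _ in range(n_iter):
        stray = np.convolve(est, kernel, mode="same")
        est = measured - stray   # x_{k+1} = y - K * x_k
    return est

# Synthetic check: a single bright detector pixel plus a weak stray kernel
signal = np.zeros(101)
signal[50] = 1.0
kernel = np.full(11, 0.001)      # hypothetical flat far-field kernel
measured = signal + np.convolve(signal, kernel, mode="same")
corrected = van_cittert(measured, kernel)
```

Because the fixed point of the iteration is exactly the uncontaminated signal, the residual error shrinks geometrically with the kernel's total weight at each iteration.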