<p>Trace gas measurements from the Total Carbon Column Observing Network (TCCON) are important for monitoring the global climate system and for validating satellite measurements. In the Arctic, ground-based data coverage is relatively limited due to the inherent challenges of conducting measurements in this region (e.g., remoteness, harsh weather). Additionally, solar absorption measurements require sunlight and are not possible during polar night. TCCON measurements from the Arctic sites are of significant value for the validation of satellite data products in this region, as these measurements can extend the spatiotemporal coverage in the Arctic. In this study, we investigate the TCCON methane (CH<span class="inline-formula"><sub>4</sub></span>) retrieval under polar vortex conditions. The CH<span class="inline-formula"><sub>4</sub></span> profile exhibits a distinct shape inside the vortex, which is related to the descent of stratospheric air inside the vortex. We show that the standard TCCON CH<span class="inline-formula"><sub>4</sub></span> prior does not sufficiently reproduce this profile shape, leading to air mass dependencies (AMDs), increased spectral residuals, and less sensitive averaging kernels. These effects can be explained by the imperfect vertical sensitivity, especially to the stratosphere. We further show that changes in the prior can improve the retrieval within the polar vortex. This leads to mean differences between 1 and 2 ppb in XCH<span class="inline-formula"><sub>4</sub></span> compared to the standard retrieval, as well as maximum differences of up to roughly 17 ppb. This paper highlights the importance of understanding the limitations of retrieval methods to avoid misinterpretation of data. Furthermore, it emphasizes the need to investigate the shape of trace gas profiles inside the polar vortex to improve profile-scaling retrievals (PSRs) in the Arctic, which could include in situ data campaigns focusing on inside-vortex air.</p>
We follow the Boltzmann-Clausius-Maxwell (BCM) proposal to establish the generalized second law (GSL), which is applicable to a system of any size, including a single-particle system as our example establishes, and which supersedes the celebrated second law (SL) of increase of entropy of an isolated system. The GSL is merely a consequence of the mechanical equilibrium (stable or unstable) principle (Mec-EQ-P) of analytical mechanics and the first law. We justify an irreversibility principle that covers all processes, spontaneous or not, and that admits both positive and negative nonequilibrium temperatures T defined by (dQ/dS)_E. Our novel approach to establishing the GSL/SL is the inverse of the one used in classical thermodynamics and clarifies the concept of spontaneous processes, so that dS≥0 for T>0 and dS<0 for T<0. Nonspontaneous processes such as the creation of internal constraints are not covered by the GSL/SL. Our demonstration establishes that Mec-EQ-P controls spontaneous processes and that temperature (positive and negative) must be considered an integral part of dissipation.
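The two sign cases stated above can be collected into a single inequality; the following is a compact restatement in our own notation (not taken verbatim from the source), with the nonequilibrium temperature defined as in the abstract:

```latex
T \equiv \left(\frac{\mathrm{d}Q}{\mathrm{d}S}\right)_{\!E},
\qquad
T\,\mathrm{d}S \;\ge\; 0
\quad \text{for spontaneous processes,}
```

which recovers dS ≥ 0 when T > 0 and dS ≤ 0 when T < 0.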
<p>The quantitative analysis of measurements with horizontally scanning aerosol lidar instruments faces two major challenges: the background correction can be affected by abnormal signal peaks, and the choice of a reference extinction coefficient <span class="inline-formula"><i>α</i><sub>ref</sub></span> is complicated if aerosols are ubiquitous in the sampled volume. Here, we present the newly developed multi-section method for the stable solution of extinction coefficient retrievals from horizontally scanning lidar measurements. The algorithm removes irregular peaks related to signal noise based on an experimentally derived fitting model. A representative value for <span class="inline-formula"><i>α</i><sub>ref</sub></span> is inferred from converging retrievals along different scan axes and over multiple scans of 10 to 15 min under the assumption that they are only related to ambient aerosols without distinct emission sources. Consequently, <span class="inline-formula"><i>α</i><sub>ref</sub></span> obtained through the multi-section method reflects typical atmospheric aerosols unaffected by emissions and noise. When comparing <span class="inline-formula"><i>α</i><sub>ref</sub></span> to the PM<span class="inline-formula"><sub>2.5</sub></span> mass concentrations at national monitoring stations near the measurement area, a significant correlation with an <span class="inline-formula"><i>r</i><sup>2</sup></span> value exceeding 0.74 was observed. The presented case studies show that the new method allows for the retrieval and visualization of spatio-temporal aerosol distributions and subsequent products such as PM<span class="inline-formula"><sub>2.5</sub></span> concentrations.</p>
The subgrade serves as the foundation of road construction, typically involving a significant amount of earthwork during its establishment. However, in coastal and desert areas, soil sources are often scarce. Local soil extraction significantly damages cultivated land, impacting the local ecological environment, while transporting soil over long distances inevitably raises construction costs. Fortunately, these regions often feature abundant deposits of fine sand, presenting an opportunity to utilize it as subgrade filler. This review comprehensively introduces the properties of fine sand as a raw material, its engineering applications, and the associated construction technologies. It focuses on the road-use characteristics and treatment technologies of fine sand filler and offers an outlook based on the characteristics and development trends of fine sand, providing a new perspective and basic material for the application of fine sand in subgrades. To foster the adoption of fine sand in subgrade construction, it is recommended to advance research on the evaluation and treatment of fine sand foundations, analyze its suitability and structural behavior as a filler, and refine construction methodologies and quality control measures specific to fine sand subgrades.
P. Kollias, B. Puidgomènech Treserras
et al.
<p>The Earth Clouds, Aerosols and Radiation (EarthCARE) satellite mission is a joint effort by the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA). The EarthCARE mission features the first spaceborne 94 GHz cloud-profiling radar (CPR) with Doppler capability. The raw CPR observations and auxiliary information are used as input to three Level-2 (L2) algorithms: (1) C-APC: Antenna Pointing Characterization; (2) C-FMR: CPR feature mask and reflectivity; (3) C-CD: Corrected CPR Doppler Measurements. These algorithms apply quality control and corrections to the CPR primary measurements and derive important geophysical variables, such as hydrometeor locations and best estimates of particle sedimentation velocities. The C-APC algorithm uses natural targets to derive any corrections to the raw CPR Doppler velocities needed to account for the CPR antenna pointing. The C-FMR product provides the feature mask based on reflectivity-only CPR measurements and quality-controlled radar-reflectivity profiles corrected for gaseous attenuation at 94 GHz. In addition, C-FMR provides best estimates of the path-integrated attenuation (PIA) and flags identifying the presence of multiple scattering in the CPR observations. Finally, the C-CD product provides the quality-controlled, bias-corrected mean Doppler velocity estimates (Doppler measurements corrected for antenna mispointing, non-uniform beam filling and velocity folding). In addition, the best estimate of the particle sedimentation velocity is derived using a novel technique.</p>
<p>In this work, we used a Zeppelin NT equipped with six sensor setups, each composed of four different low-cost electrochemical sensors (ECSs), to measure nitrogen oxides (NO and NO<span class="inline-formula"><sub>2</sub></span>), carbon monoxide, and O<span class="inline-formula"><sub><i>x</i></sub></span> (<span class="inline-formula">NO<sub>2</sub>+O<sub>3</sub></span>) in Germany. Additionally, a MIRO MGA laser absorption spectrometer was installed as a reference device for in-flight evaluation of the ECSs. We report not only the influence of temperature on the NO and NO<span class="inline-formula"><sub>2</sub></span> sensor outputs but also find a short-timescale (1 s) dependence of the sensors on the relative humidity gradient. To account for these dependencies, we developed a correction method that is independent of the reference instrument. After applying this correction to all individual sensors, we compare the sensor setups with each other and to the reference device. For the intercomparison of all six setups, we find good agreement with <span class="inline-formula"><i>R</i><sup>2</sup>≥0.8</span> but different precisions for each sensor, in the range from 1.45 to 6.32 ppb (parts per billion). The comparison to the reference device results in an <span class="inline-formula"><i>R</i><sup>2</sup></span> of 0.88 and a slope of 0.92 for NO<span class="inline-formula"><sub><i>x</i></sub></span> (<span class="inline-formula">NO+NO<sub>2</sub></span>). Furthermore, the average noise (1<span class="inline-formula"><i>σ</i></span>) of the NO and NO<span class="inline-formula"><sub>2</sub></span> sensors is reduced significantly from 6.25 and 7.1 to 1.95 and 3.32 ppb, respectively. Finally, we highlight the potential use of ECSs in airborne applications by identifying different pollution sources related to industrial and traffic emissions during multiple commercial and targeted Zeppelin flights in spring 2020.
These results are a first milestone towards the quality-assured use of low-cost sensors in airborne settings without a reference device, e.g., on unmanned aerial vehicles (UAVs).</p>
This article reviews the electrodynamic force law of Wilhelm Weber and its importance in electromagnetic theory. An introduction is given to Weber’s force, and it is shown how it has been utilised in the literature to explain electromagnetism as well as phenomena in other disciplines of physics, where the force law has connections to the nuclear force, gravity, cosmology, inertia and quantum mechanics. Further, criticism of Weber’s force is reviewed, and common misconceptions are addressed and rectified. It is found that, while the theory is not without criticism and has much room for improvement, within the limitations of its validity it is as successful as Maxwell’s theory in predicting certain phenomena. Moreover, it is discussed how Weber offers a valid alternative explanation of electromagnetic phenomena which can enrich and complement the field perspective of electromagnetism through a particle-based approach.
<p>Atmospheric observations in remote locations offer a possibility of exploring trace gas and particle concentrations in pristine environments. However,
data from remote areas are often contaminated by pollution from local
sources. Detecting this contamination is thus a central and frequently
encountered issue. Consequently, many different methods exist today to
identify local contamination in atmospheric composition measurement time
series, but no single method has been widely accepted. In this study, we
present a new method to identify primary pollution in remote atmospheric
datasets, e.g., from ship campaigns or stations with a low background signal compared to the contaminated signal. The pollution detection algorithm (PDA) identifies and flags periods of polluted data in five steps. The first and most important step identifies polluted periods based on the time derivative of the concentration: if this derivative exceeds a given threshold, data are flagged as polluted. Further pollution
identification steps are a simple concentration threshold filter, a
neighboring points filter (optional), a median filter, and a sparse data filter (optional). The PDA only relies on the target dataset itself and is
independent of ancillary datasets such as meteorological variables. All
parameters of each step are adjustable so that the PDA can be “tuned” to
be more or less stringent (e.g., flag more or fewer data points as contaminated).</p>
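The derivative-based first step described above can be sketched in a few lines. The following is a minimal illustration (the function name, threshold value, and example data are hypothetical; the actual PDA parameters are documented in the published code):

```python
import numpy as np

def flag_derivative(concentration, dt, threshold):
    """Flag samples whose time derivative exceeds a threshold,
    marking them as candidates for local pollution.

    concentration : 1-D array of measured values
    dt            : sampling interval (s)
    threshold     : derivative threshold (concentration units per s)
    """
    # Central-difference time derivative of the signal
    deriv = np.gradient(concentration, dt)
    # Flag samples where the absolute derivative exceeds the threshold
    return np.abs(deriv) > threshold

# A smooth background with one sharp pollution spike:
# the spike and its steep edges are flagged, the background is not
data = np.array([10.0, 10.1, 10.0, 55.0, 54.0, 10.2, 10.1])
flags = flag_derivative(data, dt=1.0, threshold=5.0)
```

In the full algorithm, such a flag mask would then be refined by the subsequent threshold, neighboring-point, median, and sparse-data filters.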
<p>The PDA was developed and tested with a particle number concentration
dataset collected during the Multidisciplinary drifting Observatory for the
Study of Arctic Climate (MOSAiC) expedition in the central Arctic. Using strict settings, we identified 62 % of the data as influenced by local
contamination. Using a second independent particle number concentration
dataset also collected during MOSAiC, we evaluated the performance of the
PDA against the same dataset cleaned by visual inspection. The two methods
agreed in 94 % of the cases. Additionally, the PDA was successfully
applied to a trace gas dataset (CO<span class="inline-formula"><sub>2</sub></span>), also collected during MOSAiC, and to another particle number concentration dataset, collected at the high-altitude background station Jungfraujoch, Switzerland. Thus, the PDA
proves to be a useful and flexible tool to identify periods affected by
local contamination in atmospheric composition datasets without the need for ancillary measurements. It is best applied to data representing primary
pollution. The user-friendly and open-access code enables reproducible application to a wide suite of different datasets. It is available at <a href="https://doi.org/10.5281/zenodo.5761101">https://doi.org/10.5281/zenodo.5761101</a> (Beck et al., 2021).</p>
According to Relational Quantum Mechanics (RQM) the wave function ψ is considered neither a concrete physical item evolving in spacetime, nor an object representing the absolute state of a certain quantum system. In this interpretative framework, ψ is defined as a computational device encoding observers’ information; hence, RQM offers a somewhat epistemic view of the wave function. This perspective seems to be at odds with the PBR theorem, a formal result excluding that wave functions represent knowledge of an underlying reality described by some ontic state. In this paper we argue that RQM is not affected by the conclusions of PBR’s argument; consequently, the alleged inconsistency can be dissolved. To do that, we will thoroughly discuss the very foundations of the PBR theorem, i.e. Harrigan and Spekkens’ categorization of ontological models, showing that their implicit assumptions about the nature of the ontic state are incompatible with the main tenets of RQM. Then, we will ask whether it is possible to derive a relational PBR-type result, answering in the negative. This conclusion shows some limitations of this theorem not yet discussed in the literature.
<p>We present a comparison between three absorption photometers that measured
the absorption coefficient (<span class="inline-formula"><i>σ</i><sub>abs</sub></span>) of ambient aerosol particles in
2012–2017 at SMEAR II (Station for Measuring Ecosystem–Atmosphere Relations II), a measurement station located in a boreal forest
in southern Finland. The comparison included an Aethalometer (AE31), a multi-angle absorption photometer (MAAP), and a particle soot absorption
photometer (PSAP). These optical instruments measured particles collected on
a filter, which is a source of systematic errors, since in addition to the
particles, the filter fibers also interact with light. To overcome this
problem, several algorithms have been suggested to correct the AE31 and PSAP
measurements. The aim of this study was to investigate how the different
correction algorithms affected the derived optical properties. We applied
the different correction algorithms to the AE31 and PSAP data and compared
the results against the reference measurements conducted by the MAAP. The
comparison between the MAAP and AE31 resulted in a multiple-scattering correction factor (<span class="inline-formula"><i>C</i><sub>ref</sub></span>) that is used in AE31 correction algorithms to
compensate for the light scattering by filter fibers. <span class="inline-formula"><i>C</i><sub>ref</sub></span> varies
between different environments, and our results are applicable to a boreal
environment. We observed a clear seasonal cycle in <span class="inline-formula"><i>C</i><sub>ref</sub></span>, which was
probably due to variations in aerosol optical properties, such as the
backscatter fraction and single-scattering albedo, and also due to
variations in the relative humidity (RH). The results showed that the
filter-based absorption photometers seemed to be rather sensitive to the
RH even if the RH was kept below the recommended value of 40 %. The
instruments correlated well (<span class="inline-formula"><i>R</i>≈0.98</span>), but the slopes of the
regression lines varied between the instruments and correction algorithms:
compared to the MAAP, the AE31 slightly underestimated <span class="inline-formula"><i>σ</i><sub>abs</sub></span>
(slopes between 0.96 and 1.00) and the PSAP slightly overestimated
<span class="inline-formula"><i>σ</i><sub>abs</sub></span> (slopes between 1.01 and 1.04 for the
recommended filter transmittance <span class="inline-formula">>0.7</span>). The instruments and
correction algorithms had a notable influence on the absorption
Ångström exponent: the median absorption Ångström exponent
varied between 0.93 and 1.54 for the different algorithms and instruments.</p>
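For context, the absorption Ångström exponent reported above is conventionally computed from σ<sub>abs</sub> at two wavelengths. A minimal sketch follows (the wavelengths are illustrative, not necessarily the channels of these instruments):

```python
import math

def absorption_angstrom_exponent(sigma_1, sigma_2, wl_1, wl_2):
    """Absorption Angstrom exponent (AAE) from absorption coefficients
    sigma_1 and sigma_2 measured at wavelengths wl_1 and wl_2, assuming
    the power-law dependence sigma_abs ~ wavelength**(-AAE)."""
    return -math.log(sigma_1 / sigma_2) / math.log(wl_1 / wl_2)

# An absorber whose sigma_abs is proportional to 1/wavelength has
# AAE = 1, the value commonly associated with fresh black carbon
aae = absorption_angstrom_exponent(2.0, 1.0, 450.0, 900.0)
```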
C. L. Sirmollo, D. R. Collins
et al.
<p>Environmental chambers are a commonly used tool for
studying the production and processing of aerosols in the atmosphere. Most
are located indoors and most are filled with air having prescribed
concentrations of a small number of reactive gas species. Here we describe
portable chambers that are used outdoors and filled with mostly ambient air.
Each all-Teflon<sup>®</sup> 1 m<span class="inline-formula"><sup>3</sup></span> Captive Aerosol Growth and
Evolution (CAGE) chamber has a cylindrical shape that rotates along its
horizontal axis. A gas-permeable membrane allows exchange of gas-phase
species between the chamber and surrounding ambient air with an exchange
time constant of approximately 0.5 h. The membrane is impermeable to
particles, and those that are injected into or nucleate in the chamber are
exposed to the ambient-mirroring environment until being sampled or lost to
the walls. The chamber and surrounding enclosure are made of materials that
are highly transmitting across the solar ultraviolet and visible wavelength
spectrum. Steps taken in the design and operation of the chambers to
maximize particle lifetime resulted in averages of 6.0, 8.2, and 3.9 h
for <span class="inline-formula">∼</span> 0.06, <span class="inline-formula">∼</span> 0.3, and
<span class="inline-formula">∼</span> 2.5 <span class="inline-formula">µ</span>m diameter particles, respectively. Two of the
newly developed CAGE chamber systems were characterized using data acquired
during a 2-month field study in 2016 in a forested area north of Houston,
TX, USA. Estimations of measured and unmeasured gas-phase species and of
secondary aerosol production in the chambers were made using a
zero-dimensional model that treats chemical reactions in the chamber and the
continuous exchange of gases with the surrounding air. Concentrations of NO,
NO<span class="inline-formula"><sub>2</sub></span>, NO<span class="inline-formula"><sub><i>y</i></sub></span>, O<span class="inline-formula"><sub>3</sub></span>, and several organic compounds measured in the
chamber were found to be in close agreement with those calculated from the
model, with all having near 1.0 best fit slopes and high <span class="inline-formula"><i>r</i><sup>2</sup></span> values. The
growth rates of particles in the chambers were quantified by tracking the
narrow modes that resulted from injection of monodisperse particles and from
occasional new particle formation bursts. Size distributions in the two
chambers were measured intermittently 24 h d<span class="inline-formula"><sup>−1</sup></span>. A bimodal diel
particle growth rate pattern was observed, with maxima of about
6 nm h<span class="inline-formula"><sup>−1</sup></span> in the late morning and early evening and minima of less than 1 nm h<span class="inline-formula"><sup>−1</sup></span> shortly before sunrise and sunset. A pattern change was observed
for hourly averaged growth rates between late summer and early fall.</p>
<p>We present a local-scale atmospheric inversion framework to estimate the
location and rate of methane (CH<span class="inline-formula"><sub>4</sub></span>) and carbon dioxide (CO<span class="inline-formula"><sub>2</sub></span>)
releases from point sources. It relies on mobile near-ground atmospheric
CH<span class="inline-formula"><sub>4</sub></span> and CO<span class="inline-formula"><sub>2</sub></span> mole fraction measurements across the corresponding
atmospheric plumes downwind of these sources, on high-frequency
meteorological measurements, and on a Gaussian plume dispersion model. The
framework exploits the scatter of the positions of the individual plume
cross sections, the integrals of the gas mole fractions above the background
within these plume cross sections, and the variations of these integrals from
one cross section to the other to infer the position and rate of the
releases. It has been developed and applied to provide estimates of brief
controlled CH<span class="inline-formula"><sub>4</sub></span> and CO<span class="inline-formula"><sub>2</sub></span> point source releases during a 1-week
campaign in October 2018 at the TOTAL experimental platform TADI in Lacq,
France. These releases typically lasted 4 to 8 min and covered a wide
range of rates (0.3 to 200 g CH<span class="inline-formula"><sub>4</sub></span>/s and 0.2 to 150 g CO<span class="inline-formula"><sub>2</sub></span>/s) to test
the capability of atmospheric monitoring systems to react fast to emergency
situations in industrial facilities. It also allowed testing of their
capability to provide precise emission estimates for the application of
climate change mitigation strategies. However, the low and highly varying
wind conditions during the releases added difficulties to the challenge of
characterizing the atmospheric transport over the very short duration of the
releases. We present our series of CH<span class="inline-formula"><sub>4</sub></span> and CO<span class="inline-formula"><sub>2</sub></span> mole fraction
measurements using instruments on board a car that drove along roads
<span class="inline-formula">∼50</span> to 150 m downwind of the 40 m <span class="inline-formula">×</span> 60 m area for
controlled releases along with the estimates of the release locations and
rates. The comparisons of these results to the actual position and rate of
the controlled releases indicate <span class="inline-formula">∼10</span> %–40 % average
errors (depending on the inversion configuration or on the series of tests)
in the estimates of the release rates and <span class="inline-formula">∼30</span>–40 m errors in
the estimates of the release locations. These results are shown to be
promising, especially since better results could be expected for longer
releases and under meteorological conditions more favorable to local-scale
dispersion modeling. However, the analysis also highlights the need for
methodological improvements to increase the skill for estimating the source
locations.</p>
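The forward model at the core of such a local-scale inversion can be sketched with the standard Gaussian plume formula. The following is a textbook form with hypothetical parameter values; the study's actual configuration, including how the plume spreads are obtained from the high-frequency meteorological measurements, is described in the paper:

```python
import math

def gaussian_plume(q, y, z, u, sigma_y, sigma_z, h=0.0):
    """Concentration (kg m^-3) from a continuous point source,
    including the ground-reflection term, at crosswind distance y
    and height z.

    q                : release rate (kg s^-1)
    u                : mean wind speed (m s^-1)
    sigma_y, sigma_z : plume spreads evaluated at the downwind
                       distance of the measurement transect (m)
    h                : effective release height (m)
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Plume-centerline enhancement for a 1 g/s ground-level release,
# sampled near the ground on a downwind road transect
c = gaussian_plume(q=1e-3, y=0.0, z=0.0, u=2.0, sigma_y=8.0, sigma_z=5.0)
```

An inversion of the kind described above would then adjust the release rate and source position so that modeled cross-plume integrals match the measured ones.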
<p>In this study we describe a methodology to create high-vertical-resolution SO<span class="inline-formula"><sub>2</sub></span> profiles from volcanic emissions. We
demonstrate the method's performance for the volcanic clouds following the
eruption of Sarychev in June 2009. The resulting profiles are based on a
combination of satellite SO<span class="inline-formula"><sub>2</sub></span> and aerosol retrievals together with
trajectory modelling. We use satellite-based measurements, namely lidar
backscattering profiles from the Cloud-Aerosol Lidar with Orthogonal
Polarization (CALIOP) satellite instrument, to create vertical profiles for
SO<span class="inline-formula"><sub>2</sub></span> swaths from the Atmospheric Infrared Sounder (AIRS) aboard the Aqua
satellite. Vertical profiles are created by transporting the air containing
volcanic aerosol seen in CALIOP observations using the
FLEXible PARTicle dispersion model (FLEXPART) while preserving the high vertical resolution using the
potential temperatures from the MERRA-2 (Modern-Era Retrospective analysis for Research and Applications) meteorological data for the original
CALIOP swaths. For the Sarychev eruption, air tracers from 75 CALIOP swaths
within 9 d after the eruption are transported forwards and backwards and
then combined at a point in time when AIRS swaths cover the complete
volcanic SO<span class="inline-formula"><sub>2</sub></span> cloud. Our method creates vertical distributions for
column density observations of SO<span class="inline-formula"><sub>2</sub></span> for individual AIRS swaths, using
height information from multiple CALIOP swaths. The resulting dataset gives
insight into the height distribution in the different sub-clouds of SO<span class="inline-formula"><sub>2</sub></span>
within the stratosphere. We have compiled a gridded high-vertical-resolution
SO<span class="inline-formula"><sub>2</sub></span> inventory that can be used in Earth system models, with a vertical
resolution of 1 K in potential temperature, 61 <span class="inline-formula">±</span> 56 m, or 1.8 <span class="inline-formula">±</span> 2.9 mbar.</p>
In this work, we establish a novel approach to the foundations of relativistic quantum theory, which is based on generalizing the quantum-mechanical Born rule for determining particle position probabilities to curved spacetime. A principal motivator for this research has been to overcome internal mathematical problems of relativistic quantum field theory (QFT) such as the ‘problem of infinities’ (renormalization), which axiomatic approaches to QFT have shown to be not only of mathematical but also of conceptual nature. The approach presented here is probabilistic by construction, can accommodate a wide array of dynamical models, does not rely on the symmetries of Minkowski spacetime, and respects the general principle of relativity. In the analytical part of this work, we consider the 1-body case under the assumption of smoothness of the mathematical quantities involved. This is identified as a special case of the theory of the general-relativistic continuity equation. While related approaches to the relativistic generalization of the Born rule assume the hypersurfaces of interest to be spacelike and the spacetime to be globally hyperbolic, we employ prior contributions by C. Eckart and J. Ehlers to show that the former condition is naturally replaced by a transversality condition and that the latter one is obsolete. We discuss two distinct formulations of the 1-body case, which, borrowing terminology from the non-relativistic analog, we term the Lagrangian and Eulerian pictures. We provide a comprehensive treatment of both. The main contribution of this work to the mathematical physics literature is the development of the Lagrangian picture. The Lagrangian picture shows how one can address the ‘problem of time’ in this approach and, therefore, serves as a blueprint for the generalization to many bodies and the case that the number of bodies is not conserved.
We also provide an example to illustrate how this approach can in principle be employed to model particle creation and annihilation.
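For orientation, the general-relativistic continuity equation invoked above takes the standard textbook form (the precise formulation and regularity assumptions used in the work may differ):

```latex
\nabla_{\mu} j^{\mu} = 0, \qquad j^{\mu} = \rho\, u^{\mu},
```

where ∇ is the Levi-Civita connection of the spacetime metric, ρ a (probability) density, and u^μ a timelike four-velocity field; integrating the current j^μ over a hypersurface transversal to the flow then yields a conserved total probability.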
Abstract Bisimulation is a concept that captures behavioural equivalence of states in a transition system. In [Linan Chen, Florence Clerc, and Prakash Panangaden, Bisimulation for Feller-Dynkin processes, in: Proceedings of the Thirty-Fifth Conference on the Mathematical Foundations of Programming Semantics, Electronic Notes in Theoretical Computer Science 347 (2019) 45–63], we proposed two equivalent definitions of bisimulation on continuous-time stochastic processes where the evolution is a flow through time. In the present paper, we develop the theory further: we introduce different concepts that correspond to different behavioural equivalences and compare them to bisimulation. In particular, we study the relation between bisimulation and symmetry groups of the dynamics. We also provide a game interpretation for two of the behavioural equivalences. We then compare those notions to their discrete-time analogues.
Abstract Spudcan retrieval from clay soils remains a major concern offshore as the extraction force required to overcome suction and soil resistance often exceeds the pulling capacity available on the mobile jack-up, causing extensive delays. Although methods to calculate extraction resistance have been recently suggested for seabeds of pure clay, to date there is no guidance available for the commonly encountered sand-over-clays. Based on failure mechanisms observed in half-spudcan visualisation tests, and calibrated against an extensive geotechnical centrifuge database of precisely measured extractions, this paper presents a method for calculating the force required to extract the spudcan foundations of mobile jack-up platforms after they have penetrated through a sand layer into underlying clay. Complexities, such as the strength degradation and strength recovery of the underlying clay soil that occur during spudcan installation and jack-up operations, are accounted for. Validation of the proposed method is demonstrated by retrospective prediction of the centrifuge testing database. The method outlined will allow operators of jack-up platforms to assess the extraction force prior to jack-up installation and to plan operational scenarios based on seabed conditions.
The determination of internal pile reactions is critical to designing and assessing the structural performance of deep foundations. Internal shear and moment profiles strongly depend on lateral pile-soil interaction, which in turn depends on pile and soil stiffnesses as well as the stiffness contrast between soft and stiff strata, such as occurs at a soil/rock interface. At zones of strong geomaterial stiffness contrast, Winkler-spring-type analyses predict abrupt changes in the internal pile reactions for laterally-loaded foundation elements. In particular, the sudden deamplification of internal moments when transitioning from a soft to stiff layer is accompanied by amplification of pile shear. This “shear spike” can result in bulky transverse reinforcement designs for drilled shaft rock sockets that pose constructability challenges due to reinforcement congestion, increasing the risk of defective concrete on the outside of the cage. This paper presents an experimental research program of three large-scale, instrumented drilled shafts with simulated rock sockets constructed from concrete. Each shaft had a different transverse reinforcement design intended to bound the amplitude of the predicted amplified shear demand, with a particular emphasis on performance of shafts with shear resistance less than the predicted demand and below the code minimum. Test results suggested that the shafts experienced a flexure-dominated failure irrespective of the transverse reinforcement detailing.