Laurie Needham, Murray Evans, Darren P. Cosker et al.
Abstract Human movement researchers are often restricted to laboratory environments and data capture techniques that are time- and/or resource-intensive. Markerless pose estimation algorithms show great potential to facilitate large-scale movement studies ‘in the wild’, i.e., outside of the constraints imposed by marker-based motion capture. However, the accuracy of such algorithms has not yet been fully evaluated. We computed 3D joint centre locations using several pre-trained deep-learning-based pose estimation methods (OpenPose, AlphaPose, DeepLabCut) and compared them to marker-based motion capture. Participants performed walking, running and jumping activities while marker-based motion capture data and multi-camera high-speed images (200 Hz) were captured. The pose estimation algorithms were applied to the 2D image data and 3D joint centre locations were reconstructed. Pose-estimation-derived joint centres demonstrated systematic differences at the hip and knee (~30–50 mm), most likely due to mislabelled ground truth data in the training datasets. Where systematic differences were lower, e.g., at the ankle, differences of 1–15 mm were observed, depending on the activity. Markerless motion capture represents a highly promising emerging technology that could free movement scientists from laboratory environments, but 3D joint centre locations are not yet consistently comparable to marker-based motion capture.
Abstract Based on the two postulates of the invariance of both the laws of physics and light speed in inertial frames, Einstein revolutionised the prevalent conceptions of space and time in the Special Theory of Relativity, which is, however, not without difficulties, owing to gaping holes in its philosophical foundations. The first postulate is an unjustified and clumsily drafted extension of Galilean Relativity, while the second postulate, if interpreted as meaning that the speed of light will not be altered by a moving frame, does not lead to the Lorentz transformations, but merely reveals that a light ray is not transported by a moving frame. Reference frames involving light propagation do not exhibit the equivalence inherent in inertial frames, and hence could be classified as seemingly inertial, i.e., a category of non-inertial frames. Either equivalence, including simultaneity in both moving and stationary frames, is a defining characteristic of inertial frames, or else, if inertial frames allow inequivalence, then they are not indistinguishable, which is a contradiction. Throughout, the definitions of “frames of reference,” “inertial frames,” and “simultaneity,” and the “legal” drafting of the two postulates, play a crucial role in the development and consistency of the theory, almost as much as the thought experiments and the mathematics.
The quadrupole Kozai mechanism, which describes the hierarchical three-body problem at leading order, is shown to be equivalent to a simple pendulum, where the change in the eccentricity squared equals the height of the pendulum above its lowest point: $e_{\text{max}}^2-e^2=h=l\left(1-\cos\theta\right)$. In particular, this yields useful expressions for the Kozai–Lidov cycle (KLC) period, and the maximal and minimal eccentricities, in terms of orbital constants. We derive the equivalence using the vector coordinates $\boldsymbol{\alpha}=\mathbf{j}+\mathbf{e}$, $\boldsymbol{\beta}=\mathbf{j}-\mathbf{e}$ for the inner Keplerian orbit, where $\mathbf{j}$ is the normalized specific angular momentum and $\mathbf{e}$ is the eccentricity vector. The equations of motion for $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ simplify to $\dot{\boldsymbol{\alpha}}=2\partial_{\boldsymbol{\alpha}}\varphi\times\boldsymbol{\alpha}$ and $\dot{\boldsymbol{\beta}}=2\partial_{\boldsymbol{\beta}}\varphi\times\boldsymbol{\beta}$, where $\varphi$ is the normalized averaged interaction potential; for the KLC quadrupole potential these equations are symmetric under the exchange of $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. Their constraints simplify to $\boldsymbol{\alpha}^2=\boldsymbol{\beta}^2=1$, and for a uniform distribution in phase space (at fixed energy) they are distributed uniformly and independently on the unit sphere.
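A familiar consequence of the quadrupole-order conserved quantities summarized above is the maximal eccentricity reached by an initially circular inner orbit, e_max = sqrt(1 − (5/3) cos² i₀), valid above the critical inclination arccos(sqrt(3/5)) ≈ 39.2°. The following is a minimal numerical sketch of this standard textbook result, not of the paper's vector formalism:

```python
import math

def kozai_emax(i0_deg):
    """Maximal eccentricity reached in quadrupole Kozai-Lidov cycles
    by an inner orbit that starts circular at inclination i0 (degrees).
    Standard result: e_max = sqrt(1 - (5/3) cos^2 i0) above the
    critical inclination arccos(sqrt(3/5)) ~ 39.2 deg, else 0."""
    c2 = math.cos(math.radians(i0_deg)) ** 2
    return math.sqrt(max(0.0, 1.0 - (5.0 / 3.0) * c2))
```

At i0 = 90° the eccentricity is driven arbitrarily close to 1, while below the critical inclination the circular orbit stays circular.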
Long-term observations and space missions have generated a wealth of data on the magnetic fields of the Earth and other solar system planets. planetMagfields is a Python package designed to gather all the planetary magnetic field data currently available in one place and to provide an easy interface for accessing the data. planetMagfields focuses on planetary bodies that generate their own magnetic field, namely Mercury, Earth, Jupiter, Saturn, Uranus, Neptune and Ganymede. planetMagfields provides functions to compute as well as plot the magnetic field on the planetary surface or at a distance above or below the surface. It also provides functions to filter the field to large or small scales, as well as to produce .vts files for visualizing the field in 3D using Paraview, VisIt or similar rendering software. Lastly, the planetMagfields repository also provides a Jupyter notebook for easy interactive visualizations.
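Planetary field models of this kind are built on spherical-harmonic (Gauss coefficient) expansions. As a purely illustrative, self-contained sketch of the underlying idea (this is not planetMagfields' actual API, which exposes higher-level planet objects and plotting helpers), the degree-1 dipole contribution to the radial surface field can be evaluated directly from the Schmidt-normalized Gauss coefficients:

```python
import math

def dipole_br(theta_deg, phi_deg, g10, g11=0.0, h11=0.0):
    """Radial magnetic field at the planetary surface from the degree-1
    (dipole) Schmidt-normalized Gauss coefficients, in the same units
    as the coefficients (e.g. nT). theta is colatitude, phi longitude:
    Br = 2*[g10*cos(theta) + (g11*cos(phi) + h11*sin(phi))*sin(theta)]."""
    th = math.radians(theta_deg)
    ph = math.radians(phi_deg)
    return 2.0 * (g10 * math.cos(th)
                  + (g11 * math.cos(ph) + h11 * math.sin(ph)) * math.sin(th))
```

For a purely axial dipole with g10 < 0 (as for Earth), the radial field at the north geographic pole is 2*g10, i.e. pointing inward.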
We present a statistical analysis of the ages and metallicities of triple stellar systems that are known to host exoplanets. With controversial cases disregarded, 27 such systems have been identified so far. Our analysis, based on an exploratory approach, shows that those systems are on average notably younger than stars situated in the solar neighborhood. Though the statistical significance of this result is not fully established, the most plausible explanation is a possible double selection effect, due to the relatively high mass of the planet-hosting stars of those systems (which spend less time on the main sequence than low-mass stars) and to the possibility that planets in triple stellar systems are orbitally unstable in the long term. The stellar metallicities are on average solar-like; however, owing to the limited amount of data, this result is not inconsistent with the previous finding that stars with planets tend to be metal-rich, as the deduced metallicity distribution is relatively broad.
As humankind prepares to establish outposts and infrastructure on the Moon, the ability to manufacture parts and buildings on-site is crucial. While transporting raw materials from Earth can be costly and time-consuming, in-situ resource utilization (ISRU) presents an attractive alternative. This review paper aims to provide a thorough examination of the current state and future potential of Lunar-based manufacturing and construction (LBMC), with a particular focus on the prospect of utilizing in-situ resources and additive manufacturing. The paper analyzes existing research on LBMC from various perspectives, including different manufacturing techniques and compositions, the potential of ISRU for LBMC, characterization of built parts and structures, the role of energy sources and efficiency, the impact of low-gravity and vacuum conditions, and the feasibility of using artificial intelligence, automation, and robotics. By synthesizing these findings, this review offers valuable insights into the challenges and opportunities that lie ahead for LBMC.
Joshua Collins, Martina Piemonte, Mark Taylor et al.
The ability to predict transformation behaviour during steel processing, such as primary heat treatments or welding, is extremely beneficial for tailoring microstructures and properties to a desired application. In this work, a model for predicting the continuous cooling transformation (CCT) behaviour of low-alloy steels is developed, using semi-empirical expressions for isothermal transformation behaviour. Coupling these expressions with Scheil’s additivity rule for converting isothermal to non-isothermal behaviour, continuous cooling behaviour can be predicted. The proposed model adds novel modifications to the Li model in order to improve CCT predictions through the addition of a carbon-partitioning model, thermodynamic boundary conditions, and a Koistinen–Marburger expression for martensitic behaviour. These modifications expanded predictions to include characteristic CCT behaviour, such as transformation suppression, and an estimation of the final constituent fractions. The proposed model has been shown to improve CCT predictions for EN3B, EN8, and SA-540 B24 steels by better reflecting experimental measurements. The proposed model was also adapted into a more complex simulation that considers the chemical heterogeneity of the examined SA-540 material, showing a further improvement to CCT predictions and demonstrating the versatility of the model. The model is rapid and open source.
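The two standard ingredients named above can be sketched directly: Scheil's additivity rule takes transformation to start once the accumulated fractional incubation ∫dt/τ(T) along the cooling path reaches 1, and the Koistinen–Marburger equation gives the martensite fraction formed below the martensite-start temperature. This is a minimal illustration of those textbook expressions, not the paper's full CCT model; the incubation function τ(T) and the cooling path passed in are hypothetical placeholders:

```python
import math

def scheil_start_time(tau_of_T, T_of_t, dt=0.01, t_max=1e4):
    """Scheil additivity rule: integrate dt / tau(T(t)) along the
    cooling path T_of_t (temperature as a function of time); the
    transformation is taken to start when the sum reaches 1.
    tau_of_T gives the isothermal incubation time (s) at temperature T."""
    s, t = 0.0, 0.0
    while t < t_max:
        s += dt / tau_of_T(T_of_t(t))
        t += dt
        if s >= 1.0:
            return t
    return None  # no transformation start within t_max

def km_martensite_fraction(Ms, T, alpha=0.011):
    """Koistinen-Marburger equation: martensite fraction after
    quenching to temperature T below the martensite-start
    temperature Ms (temperatures in deg C, alpha in 1/deg C)."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-alpha * (Ms - T))
```

As a sanity check, an isothermal hold at a temperature where τ = 10 s yields a Scheil start time of 10 s, recovering the isothermal limit.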
BACKGROUND AND PURPOSE The Global Quality Assurance of Radiation Therapy Clinical Trials Harmonization Group (GHG) is a collaborative group of Radiation Therapy Quality Assurance (RTQA) groups harmonizing and improving RTQA for multi-institutional clinical trials. The objective of the GHG organ-at-risk (OAR) Working Group was to unify OAR contouring guidance across RTQA groups by compiling a single reference list of OARs in line with AAPM TG-263 and ASTRO, together with peer-reviewed, anatomically defined contouring guidance for integration into clinical trial protocols, independent of the radiation therapy delivery technique. MATERIALS AND METHODS The multi-professional GHG OAR Working Group, comprising 22 members from 6 international RTQA groups and affiliated organizations, conducted the work in 3 stages: 1) clinical trial documentation review and identification of structures of interest; 2) review of existing contouring guidance and survey of proposed OAR contouring guidance; 3) review of survey feedback with recommendations for contouring guidance with standardized OAR nomenclature. RESULTS 157 clinical trials were examined and 222 OAR structures were identified. Duplicate, non-anatomical, and non-specific structures, structures with more specific alternative nomenclature, and structures identified by only one RTQA group were excluded, leaving 58 structures of interest. 6 OAR descriptions were accepted with no amendments, 41 required minor amendments, 6 required major amendments, 20 were developed as a result of feedback, and 5 structures were excluded in response to feedback. The final GHG consensus guidance includes 73 OARs with peer-reviewed descriptions (Appendix A). CONCLUSION We provide OAR descriptions with standardized nomenclature for use in clinical trials. A more uniform dataset supports the delivery of clinically relevant and valid conclusions from clinical trials.
We present spectroscopic confirmation of candidate strong gravitational lenses using the Keck Observatory and Very Large Telescope as part of our ASTRO 3D Galaxy Evolution with Lenses (AGEL) survey. We confirm that (1) search methods using convolutional neural networks (CNNs) with visual inspection successfully identify strong gravitational lenses and (2) the lenses are at higher redshifts relative to existing surveys due to the combination of deeper and higher-resolution imaging from DECam and spectroscopy spanning optical to near-infrared wavelengths. We measure 104 redshifts in 77 systems selected from a catalog in the DES and DECaLS imaging fields (r ≤ 22 mag). Combining our results with published redshifts, we present redshifts for 68 lenses and establish that CNN-based searches are highly effective for use in future imaging surveys with a success rate of at least 88% (defined as 68/77). We report 53 strong lenses with spectroscopic redshifts for both the deflector and source (z src > z defl), and 15 lenses with a spectroscopic redshift for either the deflector (z defl > 0.21) or source (z src ≥ 1.34). For the 68 lenses, the deflectors and sources have average redshifts and standard deviations of 0.58 ± 0.14 and 1.92 ± 0.59 respectively, and corresponding redshift ranges of z defl = 0.21–0.89 and z src = 0.88–3.55. The AGEL systems include 41 deflectors at z defl ≥ 0.5 that are ideal for follow-up studies to track how mass density profiles evolve with redshift. Our goal with AGEL is to spectroscopically confirm ∼100 strong gravitational lenses that can be observed from both hemispheres throughout the year. The AGEL survey is a resource for refining automated all-sky searches and addressing a range of questions in astrophysics and cosmology.
This work presents a view of geophysics with respect to some physical manifestations that carry information about the internal structure of the Earth. It covers basic concepts of seismology, the study of the elastic waves released by earthquakes; geomagnetism, the study of the magnetic field generated in the Earth's interior; and gravimetry, the interpretation of the gravimetric anomalies produced by heterogeneities in the Earth's internal structures.
[Abridged] Luminous hot stars dominate the stellar energy input to the interstellar medium throughout cosmological time, they are laboratories to test theories of stellar evolution and multiplicity, and they serve as luminous tracers of star formation in the Milky Way and other galaxies. Massive stars occupy well-defined loci in colour-colour and colour-magnitude spaces, enabling selection based on the combination of Gaia EDR3 astrometry and photometry and 2MASS photometry, even in the presence of substantial dust extinction. In this paper we devise an all-sky sample of such luminous OBA-type stars, designed to be quite complete rather than very pure, to serve as targets for spectroscopic follow-up with the SDSS-V survey. We estimate "astro-kinematic" distances by combining parallaxes and proper motions with a model for the expected velocity and density distribution of young stars; we show that this adds useful constraints on the stars' distances, and hence luminosities. With these distances we map the spatial distribution of a more stringently selected sub-sample across the Galactic disc, and find it to be highly structured, with distinct over- and under-densities. The most evident over-densities can be associated with the presumed spiral arms of the Milky Way, in particular the Sagittarius-Carina and Scutum-Centaurus arms. Yet, the spatial picture of the Milky Way's young disc structure emerging in this study is complex, and suggests that most young stars in our Galaxy ($t_{age}<t_{dyn}$) are not neatly organised into distinct spiral arms. The combination of the comprehensive spectroscopy to come from SDSS-V (yielding velocities, ages, etc.) with future Gaia data releases will be crucial to reveal the dynamical nature of the spiral arms themselves.
We describe the method used by the multi-band template analysis (MBTA) pipeline to compute the probability of astrophysical origin, p astro, of compact binary coalescence candidates in LIGO–Virgo data from the third observing run (O3). The calculation is performed as part of the offline analysis and is used to characterize candidate events, along with their source classification. The technical details and the implementation are described, as well as the results from the first half of the third observing run (O3a) published in GWTC-2.1. The performance of the method is assessed on injections of simulated gravitational-wave signals in O3a data using a parameterization of p astro as a function of the MBTA combined ranking statistic. Possible sources of statistical and systematic uncertainties are discussed, and their effect on p astro quantified.
Searches for extrasolar planets using the periodic Doppler shift of stellar spectral lines have recently achieved a precision of 60 cm s-1 (ref. 1), which is sufficient to find a 5-Earth-mass planet in a Mercury-like orbit around a Sun-like star. To find a 1-Earth-mass planet in an Earth-like orbit, a precision of ∼5 cm s-1 is necessary. The combination of a laser frequency comb with a Fabry–Pérot filtering cavity has been suggested as a promising approach to achieve such Doppler shift resolution via improved spectrograph wavelength calibration, with recent encouraging results. Here we report the fabrication of such a filtered laser comb with up to 40-GHz (∼1-Å) line spacing, generated from a 1-GHz repetition-rate source, without compromising long-term stability, reproducibility or spectral resolution. This wide-line-spacing comb, or ‘astro-comb’, is well matched to the resolving power of high-resolution astrophysical spectrographs. The astro-comb should allow a precision as high as 1 cm s-1 in astronomical radial velocity measurements.
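The ∼5 cm s-1 target quoted above follows from the reflex motion a Sun-like star acquires from an Earth-mass companion. A commonly used approximation for the radial-velocity semi-amplitude of a circular orbit is K ≈ 28.4 m s-1 (Mp sin i / MJup)(P / yr)^(-1/3)(M* / Msun)^(-2/3); the sketch below evaluates it for an Earth analogue (≈9 cm s-1), which motivates centimetre-per-second calibration:

```python
def rv_semi_amplitude(mp_mjup, period_yr=1.0, mstar_msun=1.0, sin_i=1.0):
    """Approximate radial-velocity semi-amplitude (m/s) induced on a
    host star by a planet on a circular orbit:
    K ~ 28.4 * (Mp sin i / M_Jup) * (P/yr)^(-1/3) * (M*/Msun)^(-2/3)."""
    return (28.4 * mp_mjup * sin_i
            * period_yr ** (-1.0 / 3.0)
            * mstar_msun ** (-2.0 / 3.0))

M_EARTH_IN_MJUP = 1.0 / 317.8  # Earth mass expressed in Jupiter masses
```

For an Earth-mass planet in a 1-yr orbit around a solar-mass star this gives K of roughly 0.09 m s-1, i.e. about 9 cm s-1, just above the 5 cm s-1 precision goal.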
Astronomical images are essential for exploring and understanding the universe. Optical telescopes capable of deep observations, such as the Hubble Space Telescope, are heavily oversubscribed in the astronomical community. Images also often contain additive noise, which makes de-noising a mandatory step in post-processing the data before further analysis. In order to maximise the efficiency and information gain in the post-processing of astronomical imaging, we turn to machine learning. We propose Astro U-net, a convolutional neural network for image de-noising and enhancement. For a proof of concept, we use Hubble Space Telescope images from the WFC3 instrument UVIS channel with the F555W and F606W filters. Our network is able to produce images with noise characteristics as if they were obtained with twice the exposure time, and with minimal bias or information loss. From these images, we are able to recover 95.9% of stars with an average flux error of 2.26%. Furthermore, the images have, on average, 1.63 times higher signal-to-noise ratio than the input noisy images, equivalent to the stacking of at least 3 input images, which means a significant reduction in the telescope time needed for future astronomical imaging campaigns.
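The equivalence between an SNR gain and a number of stacked exposures follows from the standard assumption that co-adding N frames with uncorrelated noise improves SNR by sqrt(N), so a gain g corresponds to at least ceil(g²) frames. A one-line sketch of this conversion:

```python
import math

def equivalent_stack_size(snr_gain):
    """Number of co-added exposures needed to match a given SNR gain,
    assuming uncorrelated noise, so that SNR scales as sqrt(N)."""
    return math.ceil(snr_gain ** 2)
```

For the quoted gain of 1.63, this gives ceil(2.66) = 3 input images, matching the figure in the abstract.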
Current observational data indicate that approximately 95% of the cosmological substratum is invisible and manifests itself only through its gravitational action. The most widely accepted conclusion, based on Einstein's theory of general relativity, is that these 95% form a non-baryonic "dark sector". This sector is usually divided into dark energy and dark matter. Dark energy is an exotic component with negative pressure that dynamically dominates the present Universe. In Einstein's theory, an effective negative pressure is needed to understand the accelerated expansion of the Universe, detected in 1998. Dark matter, on the other hand, is pressureless matter, needed to explain the origin of cosmic structures. The nature of dark matter and dark energy is the subject of intense study worldwide, from both theoretical and observational points of view. This article, based on a lecture for students of IFES Guarapari in October 2019, aims to give a general introduction to the current problems of cosmology.
Document clustering is generally the first step in topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents that preserve their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf, 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow similarities between articles to be calculated directly. In this paper, we describe in detail how we build a semantic representation for an article from the entities that are associated with it. Based on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions, labelled OCLC-31 (K-Means) and OCLC-Louvain (Louvain). We give the implementation details and a basic comparison with other clustering solutions reported in this special issue.
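The construction described above, an article vector built from the vectors of the entities associated with it, followed by pairwise similarity between articles, can be sketched minimally. The toy entity vectors and the use of a plain sum are illustrative assumptions, not the actual OCLC semantic matrix or weighting scheme:

```python
import math

def article_vector(entities, entity_vectors):
    """Represent an article as the sum of the vectors of the entities
    associated with it (entity_vectors maps entity name -> list of floats)."""
    dim = len(next(iter(entity_vectors.values())))
    v = [0.0] * dim
    for e in entities:
        for k, x in enumerate(entity_vectors[e]):
            v[k] += x
    return v

def cosine(u, v):
    """Cosine similarity between two article vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

The resulting similarity matrix over article pairs is the kind of input that similarity-based clustering methods such as K-Means (on the vectors) or Louvain (on a similarity graph) operate on.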