Dislocations in ceramics have enjoyed a long yet underappreciated research history. This brief historical overview and reflection on the current challenges provides new insights into using this line defect as a rediscovered tool for engineering functional ceramics.
Muhammed Adil Yatkin, Mihkel Korgesaar, Jani Romanoff
et al.
Current neural network (NN) models can learn patterns from data points with historical dependence. Specifically, in natural language processing (NLP), sequential learning has transitioned from recurrence-based to transformer-based architectures. However, it is unknown which NN architectures perform best on datasets containing deformation history due to mechanical loading. Thus, this study ascertains the appropriateness of 1D-convolutional, recurrent, and transformer-based architectures for predicting deformation localization based on earlier states in the form of deformation history. Following this investigation, the crucial incompatibilities between the mathematical computation of the prediction process in the best-performing NN architectures and the actual values dictated by the physical properties of the deformation paths are examined in detail.
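As a minimal sketch of why architecture matters for history-dependent data (this is illustrative, not from the paper; the function name and kernel values are assumptions), a causal 1D convolution is the building block that lets convolutional models condition a prediction only on earlier deformation states:

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1D convolution: the output at step t uses only inputs at steps <= t.

    Left-padding the sequence guarantees that no future deformation state
    leaks into the prediction for the current state.
    """
    k = len(kernel)
    xp = np.concatenate([np.zeros(k - 1), x])  # left-pad: no future leakage
    return np.array([sum(kernel[i] * xp[t + k - 1 - i] for i in range(k))
                     for t in range(len(x))])

# A lag-1 kernel reproduces the previous state: y[t] = x[t-1]
history = np.array([1.0, 2.0, 3.0, 4.0])
print(causal_conv1d(history, [0.0, 1.0]))  # → [0. 1. 2. 3.]
```

Recurrent and transformer models enforce the same causality differently (via hidden-state recursion and attention masking, respectively), which is part of what the architecture comparison probes.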
Andrés Aceña, Bruno Cardin Guntsche, Iván Gentile de Austria
We revisit the problem of the structure and physical properties of electrically charged static spherically symmetric solutions of the Einstein-Maxwell system of equations where the matter model is a polytropic gas. We consider a relativistic polytrope equation of state and take the electric charge density to be proportional to the rest mass density. We construct the families of solutions corresponding to various sets of parameters and analyze their stability and compliance with the causality requirement, with special emphasis on the possibility of constructing black hole mimickers. Concretely, we want to test how much electric charge a given object can hold and how compact it can be. We conclude that there is a microscopic bound on the charge density to rest mass density ratio coincident with the macroscopic bound regarding the extremal Reissner-Nordström black hole. The macroscopic charge to mass ratio for the object can exceed the corresponding microscopic ratio if the object is non-extremal. Crucially, the only way to obtain a black hole mimicker is by taking a subtle limit in which an electrically counterpoised dust solution is obtained.
The $γ$-ray deposition history in an expanding supernova (SN) ejecta has mostly been used to constrain models for Type Ia SNe. Here we expand this methodology to core-collapse SNe, including stripped envelope (SE; Type Ib/Ic/IIb) and Type IIP SNe. We construct bolometric light curves using photometry from the literature and we use the Katz integral to extract the $γ$-ray deposition history. We recover the tight range of $γ$-ray escape times, $t_0\approx30-45\,\textrm{d}$, for Type Ia SNe, and we find a new tight range, $t_0\approx80-140\,\textrm{d}$, for SE SNe. Type IIP SNe are clearly separated from other SNe types with $t_0\gtrsim400\,\textrm{d}$, and there is a possible negative correlation between $t_0$ and the synthesized $^{56}$Ni mass. We find that the typical masses of the synthesized $^{56}$Ni in SE SNe are larger than those in Type IIP SNe, in agreement with the results of Kushnir. This disfavours progenitors with the same initial mass range for these explosions. We recover the observed values of $ET$, the time-weighted integrated luminosity from cooling emission, for Type IIP SNe, and we find hints of non-zero $ET$ values in some SE SNe. We apply a simple $γ$-ray radiation transfer code to calculate the $γ$-ray deposition histories of models from the literature, and we show that the observed histories are a powerful tool for constraining models.
Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time.
Maude Wagner, Francine Grodstein, Karen Leffondre
et al.
Long-term behavioral and health risk factors constitute a primary focus of research on the etiology of chronic diseases. Yet, identifying the critical time windows during which risk factors have the strongest impact on disease risk is challenging. To assess the trajectory of association between an exposure history and an outcome, the weighted cumulative exposure index (WCIE) has been proposed, with weights reflecting the relative importance of exposures at different times. However, the WCIE is restricted to completely observed, error-free exposures, whereas exposures are often measured with error and intermittent missingness. Moreover, it rarely explores exposure histories that are very distant from the outcome, as usually sought in life-course epidemiology. We extend the WCIE methodology to (i) exposures that are intermittently measured with error, and (ii) contexts where the exposure time window precedes the outcome time window, using a landmark approach. First, the individual exposure history up to the landmark time is estimated using a mixed model that handles missing data and measurement error in the exposure, and the predicted complete error-free exposure history is derived. Then the WCIE methodology is applied to assess the trajectory of association between the predicted exposure history and the health outcome collected after the landmark time. In our context, the health outcome is a longitudinal marker analyzed using a mixed model. A simulation study first demonstrates the correct inference obtained with this approach. The methodology is then applied to the Nurses' Health Study (19,415 women) to investigate the association between BMI history (collected from midlife) and subsequent cognitive decline after age 70. In conclusion, this approach, which is easy to implement, provides a flexible tool for studying complex dynamic relationships and identifying critical time windows while accounting for exposure measurement errors.
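At its core, the WCIE is a weighted sum of past exposures. A minimal discretized sketch (the weight function, data, and function names below are illustrative assumptions; in the actual methodology the weight function is estimated from the data, typically on a spline basis):

```python
def wcie(exposure_history, weight):
    """Weighted cumulative exposure index for one subject.

    exposure_history: list of (lag, exposure) pairs, where lag is the time
                      before the outcome at which the exposure was measured.
    weight: callable giving the relative importance of exposure at each lag;
            supplied by hand here, estimated from data in the real method.
    """
    return sum(weight(lag) * x for lag, x in exposure_history)

# Illustrative BMI history at 5-year lags, weighting recent exposures more
history = [(0, 27.0), (5, 26.0), (10, 25.0)]  # (years before outcome, BMI)
recent_heavy = lambda lag: 1.0 / (1.0 + lag)
print(wcie(history, recent_heavy))
```

The extension described above replaces the raw `exposure_history` with the complete error-free history predicted by a mixed model before this weighted sum is taken.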
We have investigated the toroidal analog of ellipsoidal shells of matter, which are of great significance in Astrophysics. The exact formula for the gravitational potential $Ψ(R,Z)$ of a shell with a circular section at the pole of toroidal coordinates is first established. It depends on the mass of the shell, its main radius and axis-ratio $e$ (i.e. core-to-main radius ratio), and involves the product of the complete elliptic integrals of the first and second kinds. Next, we show that successive partial derivatives $\partial^{n+m} Ψ/\partial R^n \partial Z^m$ are also accessible by analytical means at that singular point, thereby enabling the expansion of the interior potential as a bivariate series. Then, we have generated approximations at orders $0$, $1$, $2$ and $3$, corresponding to increasing accuracy. Numerical experiments confirm the great reliability of the approach, in particular for small-to-moderate axis ratios ($e^2 \lesssim 0.1$ typically). In contrast with the ellipsoidal case (Newton's theorem), the potential is not uniform inside the shell cavity as a consequence of the curvature. We explain how to construct the interior potential of toroidal shells with a thick edge (i.e. tubes), and how a core stratification can be accounted for. This is a new step towards the full description of the gravitating potential and forces of tori and rings. Applications also concern electrically-charged systems, and thus go beyond the context of gravitation.
This talk sketches the main milestones of the path towards cubic kilometer neutrino telescopes. It starts with the first conceptual ideas in the late 1950s and describes the emergence of concepts for detectors with a realistic discovery potential in the 1970s and 1980s. After the pioneering project DUMAND close to Hawaii was terminated in 1995, the development was carried forward by NT200 in Lake Baikal, AMANDA at the South Pole and ANTARES in the Mediterranean Sea. In 2013, more than half a century after the first concepts, IceCube discovered extraterrestrial high-energy neutrinos and opened a new observational window to the cosmos - marking a milestone along a journey that is far from finished.
A new parametrization of the reionization history is presented to facilitate robust comparisons between different observations and with theory. The evolution of the ionization fraction with redshift can be effectively captured by specifying the midpoint, duration, and asymmetry parameters. Lagrange interpolating functions are then used to construct analytical curves that exactly fit corresponding ionization points. The shape parametrizations are excellent matches to theoretical results from radiation-hydrodynamic simulations. The comparative differences for reionization observables are: ionization fraction $|Δx_\text{i}| \lesssim 0.03$, 21cm brightness temperature $|ΔT_\text{b}| \lesssim 0.7\, \text{mK}$, Thomson optical depth $|Δτ| \lesssim 0.001$, and patchy kinetic Sunyaev-Zel'dovich angular power $|ΔD_\ell | \lesssim 0.1\, μ\text{K}^2$. This accurate and flexible approach will allow parameter-space studies and self-consistent constraints on the reionization history from 21cm, CMB, and high-redshift galaxies and quasars.
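As a sketch of the construction (the anchor-point convention below, 25/50/75 per cent ionization, and the exact definitions of duration and asymmetry are assumptions for illustration), a Lagrange polynomial through points derived from the shape parameters passes exactly through the specified ionization fractions:

```python
def lagrange(points):
    """Return the Lagrange interpolating polynomial through (z, x_i) points."""
    def p(z):
        total = 0.0
        for i, (zi, xi) in enumerate(points):
            term = xi
            for j, (zj, _) in enumerate(points):
                if j != i:
                    term *= (z - zj) / (zi - zj)  # basis polynomial factor
            total += term
        return total
    return p

def ionization_anchors(z_mid, duration, asym):
    """Anchor points from shape parameters (assumed convention):
    x_i = 0.5 at z_mid; duration = z(x_i=0.25) - z(x_i=0.75);
    asymmetry = (z25 - z_mid) / (z_mid - z75)."""
    z75 = z_mid - duration / (1.0 + asym)
    z25 = z75 + duration
    return [(z25, 0.25), (z_mid, 0.50), (z75, 0.75)]

# The interpolating curve exactly fits the corresponding ionization points
x_i = lagrange(ionization_anchors(7.5, 2.0, 1.5))
```

Exact fitting at the anchors is what makes the parametrization convenient for comparing observations that constrain the ionization fraction at specific redshifts.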
The limitations of demographic models, as well as the opportunities offered by evolutionary models, are reviewed. Flaws associated with demographic models include the confounding effects of plant architecture, the representation of heterogeneous individuals in populations, and changes in deme membership that confound the covariance structure. Trait-based evolutionary models include FoxPatch, which represents the seed behavior of weedy Setaria spp. with explicit rules and algorithms for predicting life-history processes.
J. J. Walmswell, J. J. Eldridge, B. J. Brewer
et al.
We propose a new method to infer the star formation histories of resolved stellar populations. With photometry one may plot observed stars on a colour-magnitude diagram (CMD) and then compare with synthetic CMDs representing different star formation histories. This has hitherto been accomplished by parametrising the model star formation history as a histogram, usually with the bin widths set by fixed increases in the logarithm of time. A best fit is then found with maximum likelihood methods, and we consider the different means by which a likelihood can be calculated. We then apply Bayesian methods by parametrising the star formation history as an unknown number of Gaussian bursts with unknown parameters. This parametrisation automatically provides a smooth function of time. A Reversible Jump Markov Chain Monte Carlo method is then used to find both the most appropriate number of Gaussians, thus avoiding overfitting, and the posterior probability distribution of the star formation rate. We apply our method to artificial populations and to observed data. We discuss the other advantages of the method: direct comparison of different parametrisations and the ability to calculate the probability that a given star is from a given Gaussian. This allows the investigation of possible sub-populations.
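Evaluating the burst parametrisation amounts to summing Gaussians; a minimal sketch (function and parameter names are illustrative assumptions, and the RJMCMC sampling over the number of bursts is omitted):

```python
import math

def star_formation_rate(t, bursts):
    """SFR at time t for a history modelled as a sum of Gaussian bursts.

    bursts: list of (amplitude, centre, width) triples. In the full method,
    RJMCMC varies both the number of bursts and their parameters; this
    sketch only evaluates the resulting smooth function of time.
    """
    return sum(A * math.exp(-0.5 * ((t - mu) / sigma) ** 2)
               for A, mu, sigma in bursts)

# Two hypothetical bursts: an early major episode and a recent minor one
bursts = [(5.0, 12.0, 1.5), (1.0, 0.5, 0.2)]  # (Msun/yr, Gyr ago, Gyr)
print(star_formation_rate(12.0, bursts))  # ≈ 5.0, peak of the early burst
```

The membership probability mentioned above follows naturally: a star's probability of belonging to a given Gaussian is that burst's term evaluated at the star's inferred age, normalized by the total.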
Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly non-classical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum Universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.
We present a quantitative star formation history of the nearby dwarf galaxy UGCA 92. This irregular dwarf is situated in the vicinity of the Local Group of galaxies, in a zone of strong Galactic extinction (the IC 342 group of galaxies). The galaxy was resolved into stars with HST/ACS, including the old red giant branch. We have constructed a model of the resolved stellar populations and measured the star formation rate and metallicity as functions of time. The main star formation activity occurred about 8-14 Gyr ago. These stars are mostly metal-poor, with a mean metallicity [Fe/H] ~ -1.5 -- -2.0 dex. About 84 per cent of the total stellar mass was formed during this event. There are also indications of recent star formation starting about 1.5 Gyr ago and continuing to the present. The star formation in this event shows a moderate enhancement from ~200 to 300 Myr ago. It is very likely that the ongoing star formation period has a higher metallicity of about -0.6 -- -0.3 dex. UGCA 92 is often considered to be a companion of the starburst galaxy NGC 1569. Comparing our star formation history of UGCA 92 with that of NGC 1569 reveals no causal or temporal connection between the recent star formation events in these two galaxies. We suggest that the starburst phenomenon in NGC 1569 is not related to the galaxy's closest dwarf neighbours and does not affect their star formation history.
Milan Bratko, Kelly Morrison, Ariana de Campos
et al.
We use a calorimetric technique operating in a sweeping magnetic field to study the thermomagnetic history-dependence of the magnetocaloric effect (MCE) in Mn0.985Fe0.015As. We study the magnetization history for which a "colossal" MCE has been reported when inferred indirectly via a Maxwell relation. We observe no colossal effect in the direct calorimetric measurement. We further examine the impact of the mixed-phase state on the MCE and show that the first-order contribution scales linearly with the phase fraction. This validates the various phase-fraction-based methods developed to remove the colossal peak anomaly from Maxwell-based estimates.
This review presents a personal view of the role that starbursts play in the star formation history of the universe. It focuses mainly on the properties of nearby starburst galaxies selected for their strong UV and/or FIR emission. The similarities between local starbursts and star-forming galaxies at high redshift are also presented. I also discuss the role that LIRGs, ULIRGs, and merging systems play in the formation and evolution of galaxies.
In choosing a family of histories for a system, it is often convenient to choose a succession of locations in phase space, rather than configuration space, for comparison to classical histories. Although there are no good projections onto phase space, several approximate projections have been used in the past; three of these are examined in this paper. Expressions are derived for the probabilities of histories containing arbitrary numbers of projections onto phase space, and the conditions for the decoherence of these histories are studied.
Karl Glazebrook, Ivan K. Baldry, Michael R. Blanton
et al.
We present a determination of the `Cosmic Optical Spectrum' of the Universe, i.e. the ensemble emission from galaxies, as determined from the red-selected Sloan Digital Sky Survey main galaxy sample and compare with previous results of the blue-selected 2dF Galaxy Redshift Survey. Broadly we find good agreement in both the spectrum and the derived star-formation histories. If we use a power-law star-formation history model where star-formation rate $\propto (1+z)^β$ out to z=1, then we find that $β$ of 2 to 3 is still the most likely model and there is no evidence for current surveys missing large amounts of star formation at high redshift. In particular `Fossil Cosmology' of the local universe gives measures of star-formation history which are consistent with direct observations at high redshift. Using the photometry of SDSS we are able to derive the cosmic spectrum in absolute units (i.e. W Å$^{-1}$ Mpc$^{-3}$) at 2--5 Å resolution and find good agreement with published broad-band luminosity densities. For a Salpeter IMF the best-fit stellar mass/light ratio is 3.7--7.5 $\Msun/\Lsun$ in the r-band (corresponding to $\omstars h = 0.0025$--0.0055), and from both the stellar emission history and the H$α$ luminosity density independently we find a cosmological star-formation rate of 0.03--0.04 h $\Msun$ yr$^{-1}$ Mpc$^{-3}$ today.
Recently, there has been great progress toward observationally determining the mean star formation history of the universe. When accurately known, the cosmic star formation rate could provide much information about Galactic evolution, if the Milky Way's star formation rate is representative of the average cosmic star formation history. A simple hypothesis is that our local star formation rate is proportional to the cosmic mean. In addition, to specify a star formation history, one must also adopt an initial mass function (IMF); typically it is assumed that the IMF is a smooth function which is constant in time. We show how to test directly the compatibility of all these assumptions, by making use of the local (solar neighborhood) star formation record encoded in the present-day stellar mass function. Present data suggests that at least one of the following is false: (1) the local IMF is constant in time; (2) the local IMF is a smooth (unimodal) function; and/or (3) star formation in the Galactic disk was representative of the cosmic mean. We briefly discuss how to determine which of these assumptions fail, and improvements in observations which will sharpen this test.