Laura Russo, Caleb Allen, Cameron S. Jorgensen et al.
Scientists have long been fascinated by magnetoreception, the innate capacity of many animals to sense and use the Earth's magnetic field for navigation. In eusocial insects like honey bees, magnetoreception has been linked to communication and foraging. However, little is known about magnetoreception's phylogenetic patterns and relationship to species traits and natural history. Here, we demonstrate that putative magnetoreception based on ferromagnetic particles is widespread across a diversity of bee species (72 out of 96 species tested), with no phylogenetic signal. We also detected such putative magnetoreception in non-bee outgroups, suggesting this magnetic capacity predates the evolution of the Anthophila. While magnetic signals were found across a diversity of life history traits, the strength of the magnetic signal varied within and between species, and increased with body size and social behavior.
Weiqin Chen, Xinjie Zhang, Dharmashankar Subramanian et al.
Transformer models (TMs) have exhibited remarkable in-context reinforcement learning (ICRL) capabilities, allowing them to generalize to and improve in previously unseen environments without re-training or fine-tuning. This is typically accomplished by imitating the complete learning histories of a source RL algorithm over a substantial number of pretraining environments, which, however, may transfer suboptimal behaviors inherited from the source algorithm/dataset. Therefore, in this work, we address the issue of inherited suboptimality from the perspective of dataset preprocessing. Motivated by the success of weighted empirical risk minimization, we propose a simple yet effective approach, learning history filtering (LHF), to enhance ICRL by reweighting and filtering the learning histories based on their improvement and stability characteristics. To the best of our knowledge, LHF is the first approach to avoid source suboptimality via dataset preprocessing, and it can be combined with the current state-of-the-art (SOTA) ICRL algorithms. We substantiate the effectiveness of LHF through a series of experiments on well-known ICRL benchmarks, encompassing both discrete environments and continuous robotic manipulation tasks, with three SOTA ICRL algorithms (AD, DPT, DICP) as backbones. LHF exhibits robust performance across a variety of suboptimal scenarios, as well as under varying hyperparameters and sampling strategies. Notably, the superior performance of LHF becomes more pronounced in the presence of noisy data, indicating the significance of filtering learning histories.
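A minimal sketch of the kind of history reweighting and filtering that LHF performs, assuming each learning history is summarized by its sequence of episode returns; the specific scores (net improvement and negative return-to-return variability), the softmax reweighting, and the keep fraction are illustrative choices, not the paper's exact criteria.

```python
import numpy as np

def score_history(returns, improvement_weight=1.0, stability_weight=1.0):
    """Score a learning history (sequence of episode returns) by how much it
    improves and how stable it is; both criteria are illustrative stand-ins."""
    returns = np.asarray(returns, dtype=float)
    improvement = returns[-1] - returns[0]          # net gain over the history
    stability = -np.std(np.diff(returns))           # penalize noisy trajectories
    return improvement_weight * improvement + stability_weight * stability

def filter_and_reweight(histories, keep_fraction=0.5, temperature=1.0):
    """Keep the top-scoring fraction of histories and return softmax sampling
    weights for the survivors (to be used when building the pretraining set)."""
    scores = np.array([score_history(h) for h in histories])
    cutoff = np.quantile(scores, 1.0 - keep_fraction)
    kept_idx = np.where(scores >= cutoff)[0]
    kept_scores = scores[kept_idx] / temperature
    weights = np.exp(kept_scores - kept_scores.max())
    weights /= weights.sum()
    return kept_idx, weights

# Toy example: three histories; the noisy, non-improving one is filtered out.
histories = [[0, 1, 3, 6, 9], [0, 5, 1, 4, 2], [0, 2, 4, 5, 7]]
idx, w = filter_and_reweight(histories, keep_fraction=0.67)
print(idx, w)
```

Filtering by a keep fraction rather than a fixed score threshold makes the sketch insensitive to the overall return scale of each pretraining environment.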
Recent cosmological data and astrophysical observations, such as the Hubble tension and the increasing preference from galaxy surveys for dynamical dark energy, have begun to challenge the standard $Λ$-cold dark matter cosmological model. Primordial magnetic fields (PMFs) offer a mechanism to alleviate these tensions within the framework of the standard model. These fields source excess small-scale baryon clumping, which can speed up recombination and shrink the comoving sound horizon at the surface of last scattering. Computing the modified recombination history requires coupling the radiative transport of Lyman-$α$ photons to compressible magnetohydrodynamic simulations. Since doing so is generically computationally intractable, we have developed a linearized treatment which self-consistently computes the modified recombination history in the presence of PMF-induced baryon clumping for fields with red-tilted spectra. The clumping factors we find are too small to alleviate outstanding cosmological tensions, but our general framework can be applied to other PMF spectra and provides a significant theoretical step towards a complete account of recombination in the presence of small-scale baryon clumping.
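For context, the small-scale baryon clumping referred to above is commonly quantified by a clumping factor (a standard convention, which may differ from the one adopted in the paper):
$$ b \;\equiv\; \frac{\langle n_b^2 \rangle}{\langle n_b \rangle^2} - 1 , $$
so that $b=0$ for homogeneous baryons, while $b>0$ enhances the two-body recombination rate, $\langle n_e n_p \rangle \propto (1+b)\langle n_b \rangle^2$, thereby speeding up recombination and shrinking the comoving sound horizon.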
Using a scale-free $N$-body simulation generated with the ABACUS $N$-body code, we test the robustness of halo mass accretion histories via their convergence to self-similarity. We compare two halo finders, ROCKSTAR and COMPASO. We find superior self-similarity in halo mass accretion histories determined using ROCKSTAR, with convergence to 5% or better between $\sim10^2$ and $10^5$ particles. For COMPASO we find weaker convergence over a similar range, only to the 10% level between $\sim10^2$ and $10^4$ particles. Furthermore, we find that the convergence to self-similarity improves as the simulation evolves, with the largest and deepest regions of convergence appearing after the scale factor has quadrupled from the time at which non-linear structures begin to form. With sufficient time evolution, halo mass accretion histories are converged to self-similarity within 5% with as few as $\sim70$ particles for COMPASO and within 2% with as few as $\sim30$ particles for ROCKSTAR.
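A minimal sketch of the kind of convergence test quoted above, assuming stacked median mass accretion histories are available on a common, self-similarly scaled time grid for each particle-count bin; the array names, toy numbers, and the 5% threshold are illustrative.

```python
import numpy as np

def converged_bins(mah_by_bin, mah_reference, particle_counts, threshold=0.05):
    """Return the particle-count bins whose stacked mass accretion history
    deviates from the reference history by at most `threshold` (e.g. 5%)
    at every point of the common scaled-time grid."""
    kept = []
    ref = np.asarray(mah_reference, float)
    for n, mah in zip(particle_counts, mah_by_bin):
        deviation = np.max(np.abs(np.asarray(mah, float) - ref) / ref)
        if deviation <= threshold:
            kept.append(n)
    return kept

# Toy example: three particle-count bins compared against a reference history.
reference = [1.0, 0.8, 0.5, 0.2]
histories = [[1.02, 0.82, 0.52, 0.21],   # converged at the 5% level
             [1.01, 0.79, 0.51, 0.20],   # converged at the 5% level
             [1.10, 0.90, 0.60, 0.26]]   # not converged at the 5% level
print(converged_bins(histories, reference, [1e4, 1e3, 1e2]))
```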
In this paper, we study how open-source large language models (LLMs) can be effectively deployed to improve query rewriting in conversational search, especially for ambiguous queries. We introduce CHIQ, a two-step method that leverages the capabilities of LLMs to resolve ambiguities in the conversation history before query rewriting. This approach contrasts with prior studies that predominantly use closed-source LLMs to directly generate search queries from the conversation history. We demonstrate on five well-established benchmarks that CHIQ achieves state-of-the-art results across most settings, showing highly competitive performance with systems leveraging closed-source LLMs. Our study provides a first step towards leveraging open-source LLMs in conversational search as a competitive alternative to the prevailing reliance on commercial LLMs. Data, models, and source code will be publicly available upon acceptance at https://github.com/fengranMark/CHIQ.
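A minimal sketch of a two-step "enhance the history, then rewrite" pipeline in the spirit of CHIQ; the `generate` function is a placeholder for any open-source LLM completion call, and the prompt wording and function names are illustrative assumptions, not the paper's implementation.

```python
# Two-step conversational query rewriting sketch: (1) resolve ambiguities in
# the conversation history, (2) rewrite the current question as a standalone
# search query from the disambiguated history.

def generate(prompt: str) -> str:
    """Placeholder for an open-source LLM call (e.g. a local inference server)."""
    raise NotImplementedError

def enhance_history(history: list[str], question: str) -> str:
    prompt = (
        "Rewrite the following conversation so that all ambiguous references "
        "(pronouns, ellipses) are resolved explicitly.\n\n"
        + "\n".join(history)
        + f"\nCurrent question: {question}"
    )
    return generate(prompt)

def rewrite_query(enhanced_history: str, question: str) -> str:
    prompt = (
        "Given this disambiguated conversation, produce a single self-contained "
        f"search query for the current question.\n\n{enhanced_history}\n"
        f"Current question: {question}"
    )
    return generate(prompt)

# Usage: query = rewrite_query(enhance_history(turns, q), q)
```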
Natural Language Processing (NLP) plays a pivotal role in the realm of Digital Humanities (DH) and serves as the cornerstone for advancing the structural analysis of historical and cultural heritage texts, particularly for the tasks of named entity recognition (NER) and relation extraction (RE). In our commitment to expediting research on ancient history and culture, we present the ``Chinese Historical Information Extraction Corpus'' (CHisIEC). CHisIEC is a meticulously curated dataset designed for developing and evaluating NER and RE tasks, offering a resource to facilitate research in the field. Covering data from 13 dynasties and a span of more than 1,830 years, CHisIEC epitomizes the extensive temporal range and text heterogeneity inherent in Chinese historical documents. The dataset encompasses four distinct entity types and twelve relation types, resulting in a meticulously labeled dataset comprising 14,194 entities and 8,609 relations. To establish the robustness and versatility of our dataset, we have undertaken comprehensive experimentation involving models of various sizes and paradigms. Additionally, we have evaluated the capabilities of Large Language Models (LLMs) on tasks related to ancient Chinese history. The dataset and code are available at \url{https://github.com/tangxuemei1995/CHisIEC}.
Data are given, commentary is supplied and explanations are provided regarding the technical, the organizational and, of course, the human history connected to the time of the research which resulted in the paper entitled "Soil sampling and Cs-137 analysis of the Chernobyl fallout in Greece", written by the late Professor S.E. Simopoulos. A Greek translation of that paper has been provided within an issued honorary volume (ISBN 978-960-254-714-4). The narration naturally starts with a review of the political, financial and social situation of Greece around 1986. Subsequently, an analysis is given of the then-available means, the persons involved, the methods used, the lessons learned and any other connection with the oral history of the NTUA's Nuclear Engineering Laboratory and other relevant Greek laboratories. For this history, written proof is now scarce and the persons available to pass it on are growing fewer and fewer. N.P. Petropoulos, now a Laboratory member and then a student of Professor S.E. Simopoulos, was in charge of preparing this text.
Maintaining core health and managing living stress among older individuals are crucial for their overall well-being. As people age, various factors impact their physical and mental health, making it essential to address these concerns comprehensively. In this discussion, we explore the significance of core health, the challenges older people face, and strategies to alleviate living stress, covering aspects such as physical activity, nutrition, social connections and mental health support.
This article aims to explore the anthropological foundations of early Buddhist medical thought by conducting a comprehensive analysis of Pāli texts and their relationship to the development of Indian traditional medicine, such as Āyurveda. The research investigates the possible existence of an ancient Buddhist medical system and compares it with contemporary medical systems, such as Hippocratic medicine. By examining the Bhesajjakkhandhaka and the Bhesajjamañjūsā, two Pāli texts that discuss medicine, the article seeks to outline the key elements of ancient Buddhist medical conceptions. Furthermore, it emphasizes the importance of understanding the evolution of Buddhist medical practices and their potential role in defining Indian traditional medicine. The findings could provide a foundation for historians of Indian medicine to delve into even more complex aspects of the medical tradition in ancient Buddhism.
Christopher Solinas, Douglas Rebstock, Nathan R. Sturtevant et al.
Historically applied exclusively to perfect information games, depth-limited search with value functions has been key to recent advances in AI for imperfect information games. Most prominent approaches with strong theoretical guarantees require subgame decomposition - a process in which a subgame is computed from public information and player beliefs. However, subgame decomposition can itself require non-trivial computations, and its tractability depends on the existence of efficient algorithms for either full enumeration or generation of the histories that form the root of the subgame. Despite this, no formal analysis of the tractability of such computations has been established in prior work, and application domains have often consisted of games, such as poker, for which enumeration is trivial on modern hardware. Applying these ideas to more complex domains requires understanding their cost. In this work, we introduce and analyze the computational aspects and tractability of filtering histories for subgame decomposition. We show that constructing a single history from the root of the subgame is generally intractable, and then provide a necessary and sufficient condition for efficient enumeration. We also introduce a novel Markov Chain Monte Carlo-based generation algorithm for trick-taking card games - a domain where enumeration is often prohibitively expensive. Our experiments demonstrate its improved scalability in the trick-taking card game Oh Hell. These contributions clarify when and how depth-limited search via subgame decomposition can be an effective tool for sequential decision-making in imperfect information settings.
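A minimal sketch of a Markov Chain Monte Carlo generator of subgame-root histories for a trick-taking setting, where a history is a deal of the unseen cards to the other players consistent with public information; the void-suit constraints, the uniform target distribution, and the card-swap proposal are illustrative simplifications, not the paper's algorithm.

```python
import random

def consistent(hands, voids):
    """True iff no player holds a card in a suit they have shown void in."""
    return all(card[0] not in voids.get(p, set())
               for p, hand in hands.items() for card in hand)

def initial_deal(unseen_cards, players, hand_sizes, voids, tries=10000):
    """Rejection-sample a starting deal that satisfies the void constraints."""
    for _ in range(tries):
        cards = list(unseen_cards)
        random.shuffle(cards)
        hands, i = {}, 0
        for p in players:
            hands[p] = cards[i:i + hand_sizes[p]]
            i += hand_sizes[p]
        if consistent(hands, voids):
            return hands
    raise ValueError("could not find a consistent starting deal")

def mcmc_sample_hands(unseen_cards, players, hand_sizes, voids, steps=1000):
    """Metropolis sampler over deals, targeting the uniform distribution on
    constraint-consistent deals; the swap proposal is symmetric, so a move
    is accepted iff the proposed deal remains consistent."""
    hands = initial_deal(unseen_cards, players, hand_sizes, voids)
    for _ in range(steps):
        a, b = random.sample(players, 2)
        ia = random.randrange(len(hands[a]))
        ib = random.randrange(len(hands[b]))
        hands[a][ia], hands[b][ib] = hands[b][ib], hands[a][ia]  # propose swap
        if not consistent(hands, voids):                         # reject
            hands[a][ia], hands[b][ib] = hands[b][ib], hands[a][ia]
    return hands

# Usage (cards as (suit, rank) pairs): two opponents, 'E' known void in spades.
unseen = [('S', r) for r in range(2, 6)] + [('H', r) for r in range(2, 6)]
print(mcmc_sample_hands(unseen, ['W', 'E'], {'W': 4, 'E': 4}, {'E': {'S'}}))
```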
We present a model-independent reconstruction of the early expansion and thermal histories of the universe, obtained from light element abundance measurements. The expansion history is tightly constrained around the onset of Big Bang Nucleosynthesis (BBN). The temperature of photons is additionally constrained around the time of neutrino decoupling. Allowing for perturbations to the standard expansion rate, we find that the radiation energy density is constrained to within 15% of its $Λ$CDM value, and only 1% extra matter energy density is allowed around the epoch of BBN. We introduce a new and general analytic fitting formula for the temperature variation, which is flexible enough to reproduce the signal of large classes of beyond-$Λ$CDM particle models that can alter the temperature through early-time energy injection. We present its constraints from BBN data and from measurements of the effective number of relativistic species and the helium-4 abundance probed by the Cosmic Microwave Background anisotropy. Our results provide clarity on the most fundamental properties of the early universe, reconstructed with minimal assumptions about the unknown physics that can occur at keV--MeV energy scales, and can be mapped to broad classes of models of interest to cosmology.
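For reference, the effective number of relativistic species mentioned above enters the radiation energy density through the standard relation (textbook cosmology, not the paper's new fitting formula):
$$ \rho_{\rm rad} \;=\; \rho_\gamma \left[ 1 + \frac{7}{8}\left( \frac{4}{11} \right)^{4/3} N_{\rm eff} \right], \qquad N_{\rm eff}^{\Lambda{\rm CDM}} \simeq 3.044 , $$
where $\rho_\gamma$ is the photon energy density after electron-positron annihilation.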
According to common wisdom, between a fraction of a mHz and a few Hz the spectral energy density of the inflationary gravitons can be safely disregarded even assuming the most optimistic sensitivities of the space-borne detectors. In this analysis we show that this conclusion is evaded if, prior to nucleosynthesis, the post-inflationary evolution includes a sequence of stages expanding either faster or slower than radiation. As a consequence, contrary to the conventional lore, it is shown that below a fraction of a Hz the spectral energy density of the relic gravitons may exceed (even by eight orders of magnitude) the signal obtained under the hypothesis of radiation dominance throughout the whole expansion history prior to the formation of light nuclei. Since the slopes and the amplitudes of the spectra specifically reflect both the inflationary dynamics and the subsequent decelerated evolution, it is possible to disentangle the contribution of the relic gravitons from other (late-time) bursts of gravitational radiation associated, for instance, with a putative strongly first-order phase transition at the TeV scale. Hence, any limit on the spectral energy density of the relic gravitons in the mHz range simultaneously constrains the post-inflationary expansion history and the inflationary initial data.
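For reference, the spectral energy density discussed above is conventionally defined as (conventions may differ, e.g. by a factor of $h_0^2$):
$$ \Omega_{\rm GW}(\nu,\tau) \;=\; \frac{1}{\rho_{\rm crit}} \, \frac{\mathrm{d}\rho_{\rm GW}}{\mathrm{d}\ln\nu} , $$
i.e. the energy density of the gravitational wave background per logarithmic frequency interval, in units of the critical energy density.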
I will discuss the six previous and present long-baseline neutrino experiments: two first-generation general experiments, K2K and MINOS, two specialized experiments, OPERA and ICARUS, and two second-generation general experiments, T2K and NOvA. The motivations for and goals of each experiment, the reasons for the choices that each experiment made, and the outcomes will be discussed.
Johannes Kepler described the Copernican universe as consisting of a central, small, brilliant sun with its planetary system, all surrounded by giant stars. These stars were far larger than, and much dimmer than, the sun -- his De Stella Nova shows that every visible star must exceed the size of the Earth's orbit, and the most prominent stars may exceed the size of the entire planetary system. His other writings, including his response to Ingoli, his Dissertatio cum Nuncio Sidereo, and his Epitome Astronomiae Copernicanae, also reflect this Copernican universe. To Kepler, such a universe was an illustration of divine power -- and solid evidence against the stars being suns, against the universe of Giordano Bruno. Kepler's starry universe was in fact the Copernican universe supported by observations of the stars, which showed them to have measurable apparent sizes. Not until the later seventeenth century were those apparent sizes shown to be spurious, allowing for a universe in which the stars were suns.
Thermal history models, which have been used to understand the geological history of Earth, are now being coupled to climate models to map the conditions that allow planets to maintain surface water over geologic time - a criterion considered crucial for life. However, the lack of intrinsic uncertainty assessment has blurred guidelines for how thermal history models can be used toward this end. A model, as a representation of something real, is not expected to be complete. Unmodeled effects are assumed to be small enough that the model maintains utility for the issue(s) it was designed to address. The degree to which this holds depends on how unmodeled factors affect the certainty of model predictions. We quantify this intrinsic uncertainty for several parameterized thermal history models (a widely used subclass of planetary models). Single perturbation analysis is used to determine the reactance time of different models. This provides a metric for how long it takes low-amplitude, unmodeled effects to decay or grow. Reactance time is shown to scale inversely with the strength of the dominant feedback (negative or positive) within a model. A perturbed physics analysis is then used to determine uncertainty shadows for model outputs. This provides probability distributions for model predictions and tests the structural stability of a model, that is, whether model predictions remain qualitatively similar, and within assumed model limits, in the face of intrinsic uncertainty. Once intrinsic uncertainty is accounted for, model outputs/predictions and comparisons to observational data should be treated in a probabilistic way.
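A minimal sketch of a single-perturbation analysis applied to a generic parameterized thermal history model of the form $C\,dT/dt = H(t) - Q(T)$; the specific heat-loss law, parameter values, and the 1/e decay criterion for the reactance time are illustrative assumptions, not the models or definitions used in the paper.

```python
import numpy as np

def integrate(T0, t, H, Q, C):
    """Forward-Euler integration of C dT/dt = H(t) - Q(T)."""
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T[i] = T[i - 1] + dt * (H(t[i - 1]) - Q(T[i - 1])) / C
    return T

def reactance_time(t, T_ref, T_pert):
    """Time for a small perturbation to decay to 1/e of its initial amplitude."""
    amp = np.abs(T_pert - T_ref)
    below = np.where(amp <= amp[0] / np.e)[0]
    return t[below[0]] - t[0] if below.size else np.inf

# Illustrative, mantle-like orders of magnitude (SI units).
C = 7e27                                      # effective heat capacity [J/K]
H = lambda time: 3e13 * np.exp(-time / 3e17)  # decaying radiogenic heating [W]
Q = lambda T: 3e13 * (T / 1600.0) ** 9        # strongly T-dependent heat loss [W]

t = np.linspace(0.0, 1.5e17, 20000)           # ~5 Gyr
T_ref = integrate(1600.0, t, H, Q, C)
T_pert = integrate(1600.0 + 10.0, t, H, Q, C) # single 10 K perturbation
print(f"reactance time ~ {reactance_time(t, T_ref, T_pert) / 3.15e13:.0f} Myr")
```

In this toy setup the strongly temperature-dependent heat-loss law acts as a negative feedback, so the perturbation decays; weakening that feedback lengthens the decay, consistent with the inverse scaling of reactance time with feedback strength described above.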
In this paper, I study the three chapters devoted to human physiognomy in the Garuḍapurāṇa. Two of the three come directly from Varāhamihira’s sixth-century Bṛhatsaṃhitā with the commentary (vivṛti) of the Kaśmirian Bhaṭṭotpala (fl. ca. 966 or 969 CE). I hope to make two research contributions. First, I aim to show that the date of this section of the Purāṇa, if not indeed of the entire Purāṇa, cannot be earlier than the sixth century and is probably after the tenth century. Second, I will illustrate how a text in different metres was normalised into the anuṣṭubh metre for ease of memory and recitation. I conclude with a discussion of the lessons we can learn from this kind of ancient Indian redaction process.
This article examines the cross-cultural influence at work in the absorption of the goddess Kāmākhyā (Assam) into the Brahmanic pantheon, through a correlation of textual and historical-religious pieces of evidence. This article is an enlarged and revised version of a paper that I presented on 18 September 2015 during the sixth Coffee Break Conference (17–19 September) held at the Italian Institute of Oriental Studies of ‘Sapienza’ University of Rome. In Assam, the cross-cultural interaction between local tribes and Indo-Aryan speakers began around 200 BCE–100 CE, when the Vedic culture had already changed from its earlier theological pattern. Therefore, after a long cross-cultural negotiation, the early medieval north-eastern purāṇas transformed the dakṣayajña myth, legitimising the temple of Kāmākhyā on Nīlācala as the greatest śākta pīṭha (seat of power), where the yoni (vulva) of Satī was preserved. In this way, the Purāṇas reconnected Nīlācala–Kāmākhyā not only to sexual symbolism but also to an ancient cremation ground and its death imaginary, a fact corroborated by the systematisation of the yoginī cult (ninth–eleventh century) into the Yoginī Kaula school. In this cross-cultural context, the early medieval Assamese dynasties emerged tied to the danger of liminal powers, linked to both the heterodox śākta-tantra sects and the tribal traditions that were harnessed by the kings through the exoteric and esoteric rituals practised at Kāmākhyā.
We present the results of a comparative study of amplitude calibrations for the East Asian VLBI Network (EAVN) at 22 and 43 GHz using two different methods, "a-priori" and "template spectrum" calibration, with particular attention to lower-declination sources. Using data sets from early EAVN observations, we investigated the elevation dependence of the gain values at the seven stations of KaVA (KVN and VERA Array) and three additional telescopes in Japan (Takahagi 32m, Yamaguchi 32m and Nobeyama 45m). By comparing the independently obtained gain values based on these two methods, we found that the gain values from each method were consistent within 10% at elevations higher than 10 degrees. We also found that the total flux densities of two images produced from the different amplitude calibrations agreed within 10% at both 22 and 43 GHz. Furthermore, by using the template spectrum method, additional radio telescopes can participate in KaVA (i.e. EAVN), providing a notable increase in sensitivity. Therefore, our results constrain the conditions needed to reliably measure VLBI amplitudes with EAVN and demonstrate the potential to extend the set of telescopes comprising EAVN.
Andreea S. Font, Ian G. McCarthy, Amandine M. C. Le Brun et al.
[Abridged] Typical disc galaxies forming in a LambdaCDM cosmology encounter a violent environment, where they often experience mergers with massive satellites. The fact that disc galaxies are ubiquitous in the local Universe suggests that a quiescent history is not necessary for their formation. Modern cosmological simulations can now obtain relatively realistic populations of disc galaxies, but it remains to be clarified how discs manage to survive massive mergers. Here we use a suite of high-resolution hydrodynamical simulations set in a LambdaCDM cosmology to elucidate the fate of discs encountering massive mergers. We extract a sample of approximately 100 disc galaxies and follow the changes in their post-merger morphologies, as tracked by their disc-to-total ratios (D/T). We also examine the relations between their present-day morphology, assembly history and gas fractions. We find that approximately half of present-day disc galaxies underwent at least one merger with a satellite of total mass exceeding the host system's stellar mass, a third had mergers with satellites of mass exceeding 3 times the host's stellar mass, and approximately one-sixth had mergers with satellites of mass exceeding 10 times the host's stellar mass. These mergers lead to a sharp, but often temporary, decrease in the D/T of the hosts, implying that discs are usually disrupted but then quickly re-grow. For this to happen, high cold gas fractions are required post-merger, as well as a relatively quiescent recent history (over a few Gyr before z=0). Our results show that discs can form via diverse merger pathways and that quiescent histories are not the dominant mode of disc formation.
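A minimal sketch of one common kinematic way to compute the disc-to-total ratio tracked above, counting stellar particles with orbital circularity $\epsilon = j_z/j_{\rm circ}$ above a threshold as disc material; the 0.7 threshold, and the use of a circularity cut at all, are illustrative conventions that may differ from the decomposition used in these simulations.

```python
import numpy as np

def disc_to_total(j_z, j_circ, mass, eps_threshold=0.7):
    """D/T = (mass of stars with j_z/j_circ > threshold) / (total stellar mass)."""
    eps = j_z / j_circ
    disc_mass = mass[eps > eps_threshold].sum()
    return disc_mass / mass.sum()

# Toy example: 5 star particles, two on near-circular (disc-like) orbits.
j_z    = np.array([0.95, 0.80, 0.10, -0.30, 0.40])
j_circ = np.array([1.00, 1.00, 1.00,  1.00, 1.00])
mass   = np.ones(5)
print(f"D/T = {disc_to_total(j_z, j_circ, mass):.2f}")   # -> 0.40
```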