Effective clinical history taking is a foundational yet underexplored component of clinical reasoning. While large language models (LLMs) have shown promise on static benchmarks, they often fall short in dynamic, multi-turn diagnostic settings that require iterative questioning and hypothesis refinement. To address this gap, we propose Note2Chat, a note-driven framework that trains LLMs to conduct structured history taking and diagnosis by learning from widely available medical notes. Instead of relying on scarce and sensitive dialogue data, we convert real-world medical notes into high-quality doctor-patient dialogues using a decision tree-guided generation and refinement pipeline. We then develop a three-stage fine-tuning strategy combining supervised learning, simulated data augmentation, and preference learning. Furthermore, we introduce a novel single-turn reasoning paradigm that reframes history taking as a sequence of single-turn reasoning problems. This design enhances interpretability and enables local supervision, dynamic adaptation, and greater sample efficiency. Experimental results show that our method substantially improves clinical reasoning, achieving gains of +16.9 F1 and +21.0 Top-1 diagnostic accuracy over GPT-4o. Our code and dataset can be found at https://github.com/zhentingsheng/Note2Chat.
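To make the single-turn reframing concrete, here is a minimal sketch of how a multi-turn history-taking dialogue could be decomposed into independent single-turn training examples; the data layout and field names are illustrative assumptions, not the paper's actual format.

    # Sketch: decompose a multi-turn history-taking dialogue into
    # independent single-turn reasoning examples. Each example conditions
    # on the accumulated dialogue so far and targets the doctor's next
    # utterance (a follow-up question or a final diagnosis).
    # NOTE: the field names ("role", "text") are illustrative assumptions.
    def to_single_turn_examples(dialogue):
        examples = []
        context = []
        for turn in dialogue:
            if turn["role"] == "doctor":
                examples.append({
                    "input": "\n".join(context),   # history so far
                    "target": turn["text"],        # next question/diagnosis
                })
            context.append(f'{turn["role"]}: {turn["text"]}')
        return examples

    dialogue = [
        {"role": "patient", "text": "I have had chest pain since yesterday."},
        {"role": "doctor", "text": "Does the pain radiate to your arm or jaw?"},
        {"role": "patient", "text": "Yes, to my left arm."},
        {"role": "doctor", "text": "Likely diagnosis: acute coronary syndrome."},
    ]
    print(len(to_single_turn_examples(dialogue)))  # 2 examples

Because each example is self-contained, supervision can be applied locally to every doctor turn rather than only to whole dialogues, which is where the sample-efficiency claim comes from.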
Multi-role pedagogical agents can create engaging and immersive learning experiences that help learners better understand historical knowledge. However, existing pedagogical agents often struggle with multi-role interactions due to complex controls, limited feedback forms, and difficulty dynamically adapting to user inputs. In this study, we developed a VR prototype with LLM-powered adaptive role-switching and action-switching pedagogical agents to help users learn about the history of the Pavilion of Prince Teng. A 2 $\times$ 2 between-subjects study was conducted with 84 participants to assess how adaptive role-switching and action-switching affect participants' learning outcomes and experiences. The results suggest that adaptive role-switching enhances participants' perception of the pedagogical agent's trustworthiness and expertise but may lead to inconsistent learning experiences. Adaptive action-switching increases participants' perceived social presence, expertise, and humanness. The study did not uncover any effects of role-switching and action-switching on usability, learning motivation, or cognitive load. Based on the findings, we propose five design implications for incorporating adaptive role-switching and action-switching into future VR history education tools.
Offline reinforcement learning (RL) can in principle synthesize behavior more optimal than any in a dataset consisting only of suboptimal trials. One way this can happen is by "stitching" together the best parts of otherwise suboptimal trajectories that overlap on similar states, creating new behaviors in which each individual state is in-distribution but the overall return is higher. However, in many interesting and complex applications, such as autonomous navigation and dialogue systems, the state is only partially observed. Worse, the state representation is often unknown or hard to define, so policies and value functions are conditioned on observation histories instead of states. In this setting it is not clear whether the same kind of "stitching" is feasible at the level of observation histories, since two distinct trajectories always have different histories, and thus the "similar states" that might enable effective stitching cannot be leveraged. Theoretically, we show that standard offline RL algorithms conditioned on observation histories suffer from poor sample complexity, in accordance with this intuition. We then identify sufficient conditions under which offline RL can still be efficient: intuitively, it must learn a compact representation of history comprising only the features relevant for action selection. We introduce a bisimulation loss that captures the extent to which this happens, and propose that offline RL can explicitly optimize this loss to improve worst-case sample complexity. Empirically, we show that across a variety of tasks either our proposed loss improves performance, or the value of this loss is already minimized as a consequence of standard offline RL, indicating that it correlates well with good performance.
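As a rough illustration of what a bisimulation loss over history representations can look like, here is a sketch in the spirit of standard on-policy bisimulation metrics, where embedding distances are regressed toward reward differences plus discounted next-embedding distances; it is an assumption-laden stand-in, not the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def bisimulation_loss(z, r, z_next, gamma=0.99):
        """z, z_next: (B, d) embeddings of observation histories and their
        successors; r: (B,) rewards. Pairs are formed by randomly permuting
        the batch. The distance between two history embeddings is pushed
        toward the difference in their rewards plus the discounted distance
        between their successor embeddings."""
        perm = torch.randperm(z.shape[0])
        dist = torch.norm(z - z[perm], dim=-1)
        target = (r - r[perm]).abs() + gamma * torch.norm(
            z_next - z_next[perm], dim=-1
        ).detach()  # stop-gradient through the bootstrapped target
        return F.mse_loss(dist, target)

Minimizing such a loss encourages the encoder to collapse histories that are behaviorally equivalent, which is precisely the compact, action-relevant representation the sufficient conditions call for.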
HNCO and SiO are well-known shock tracers and have been observed in nearby galaxies, including the nearby (D = 3.5 Mpc) starburst galaxy NGC 253. The simultaneous detection of these two species in regions with high star formation rates can be used to study the shock history of the gas. We perform a multi-line molecular study using these two shock tracers (SiO and HNCO) with the aim of characterizing the gas properties, and we explore the possibility of reconstructing the shock history in the Central Molecular Zone (CMZ) of NGC 253. Six SiO transitions and eleven HNCO transitions were imaged at high resolution, $1''.6$ (28 pc), with the Atacama Large Millimeter/submillimeter Array (ALMA) as part of the ALCHEMI Large Programme. Both non-LTE radiative transfer analysis and chemical modelling were performed in order to characterize the gas properties and to investigate the chemical origin of the emission. The non-LTE radiative transfer analysis, coupled with Bayesian inference, shows clear evidence that the gas traced by SiO has different densities and temperatures from the gas traced by HNCO, with an indication that shocks are needed to produce both species. Chemical modelling further confirms this scenario and suggests that fast and slow shocks are responsible for SiO and HNCO production, respectively, in most GMCs. We are also able to infer the physical characteristics of the shocks traced by SiO and HNCO for each GMC. Radiative transfer and chemical analysis of SiO and HNCO in the CMZ of NGC 253 reveal a complex picture whereby most of the GMCs are subjected to shocks. We speculate on the possible shock scenarios responsible for the observed emission and provide a potential history and timescale for each scenario. Higher spatial resolution observations of these two species are required in order to quantitatively differentiate between scenarios.
Monte Carlo simulation is increasingly used as a reliable and inexpensive way to determine detector efficiency without the need for a standard radioactive source. For accurate and precise results, it is important to model the detector with its real dimensions. Another parameter as important as detector modelling is the number of histories run by the simulation code, which is the subject of this study. We examine in detail the effect of the number of histories on the computed efficiency using different simulation codes. The results show that at least $10^7$ histories should be used in all three programs to keep the uncertainty below 1%. If the available computing facilities are sufficient, i.e. with a better-equipped and faster computer, this can be increased to $10^8$; going beyond that value, however, yields no further benefit, as the study shows.
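The $1/\sqrt{N}$ scaling behind this recommendation is easy to verify. The sketch below assumes an illustrative full-energy-peak efficiency of 5% (an assumption, not a value from the paper) and prints the binomial relative uncertainty for several history counts.

    import numpy as np

    def relative_uncertainty(n_histories, eff=0.05):
        # Binomial (1-sigma) relative statistical uncertainty of an
        # efficiency tally: sqrt((1 - eff) / (eff * N)), i.e. ~ 1/sqrt(N).
        return np.sqrt((1.0 - eff) / (eff * n_histories))

    for n in [10**5, 10**6, 10**7, 10**8]:
        print(f"N = {n:.0e}: {relative_uncertainty(n):.3%}")
    # N = 1e+05: 1.378%    N = 1e+06: 0.436%
    # N = 1e+07: 0.138%    N = 1e+08: 0.044%

Consistent with the abstract's conclusion, $10^7$ histories already put the statistical uncertainty well below 1% at this efficiency, and each further factor of 100 in histories buys only a factor of 10 in precision.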
We introduce the partially observable history process (POHP) formalism for reinforcement learning. POHP centers around the actions and observations of a single agent and abstracts away the presence of other players without reducing them to stochastic processes. Our formalism provides a streamlined interface for designing algorithms that defy categorization as exclusively single or multi-agent, and for developing theory that applies across these domains. We show how the POHP formalism unifies traditional models including the Markov decision process, the Markov game, the extensive-form game, and their partially observable extensions, without introducing burdensome technical machinery or violating the philosophical underpinnings of reinforcement learning. We illustrate the utility of our formalism by concisely exploring observable sequential rationality, examining some theoretical properties of general immediate regret minimization, and generalizing the extensive-form regret minimization (EFR) algorithm.
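Schematically (our notation, not necessarily the paper's), the object at the center of the formalism is the agent's action-observation history and a policy conditioned on it, \[ h_t = (a_1, o_1, a_2, o_2, \dots, a_t, o_t), \qquad a_{t+1} \sim \pi(\cdot \mid h_t), \] with everything outside the agent, including other players, folded into the process generating the observations $o_t$ rather than modeled as an explicit stochastic process.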
The spin distribution of massive black holes (MBHs) contains rich information on their assembly history. However, only limited information can be extracted from currently available spin measurements of MBHs, owing to the small sample size and large measurement uncertainties. Upcoming X-ray telescopes with improved spectral resolution and larger effective area are expected to provide new insights into the growth history of MBHs. Here we investigate, at a proof-of-concept level, how stringently the accretion history of MBHs can be constrained by spin measurements from future X-ray missions. We assume a toy model consisting of a two-phase accretion history: an initial coherent phase with a constant disk orientation, followed by a chaotic phase with random disk orientations in each accretion episode. By utilizing mock spin data generated from such models and performing Bayesian Markov Chain Monte Carlo simulations, we find that most accretion models of MBHs can be reconstructed provided that $\gtrsim100$ MBH spins are measured with an accuracy of $\lesssim0.1$. We also quantify the precision of the reconstructed parameters for various combinations of sample size and spin accuracy, and find that the sample size becomes the more crucial factor once the spin accuracy reaches $\sim 0.1$. To some extent, better spin accuracy can compensate for a small sample size and vice versa. Future X-ray missions, such as the Advanced Telescope for High Energy Astrophysics and the enhanced X-ray Timing and Polarimetry mission, may provide spin measurements of $\gtrsim100$ MBHs with an uncertainty of $\sim0.04-0.1$ and would thus put strong constraints on the MBH growth history.
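A back-of-the-envelope way to see the sample-size versus accuracy trade-off (our illustration, not the paper's calculation): if the intrinsic population scatter of the spins is $\sigma_{\rm pop}$ and each spin is measured with error $\sigma_{\rm meas}$, the uncertainty on a population-level parameter scales roughly as \[ \sigma \sim \sqrt{\frac{\sigma_{\rm pop}^2 + \sigma_{\rm meas}^2}{N}}, \] so once $\sigma_{\rm meas} \sim 0.1$ drops below the intrinsic scatter, further gains come mainly from increasing the sample size $N$, while a smaller $N$ can be partly offset by a smaller $\sigma_{\rm meas}$ and vice versa.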
Kathryn Garside, Aida Gjoka, Robin Henderson, et al.
Persistent homology is used to track the appearance and disappearance of features as we move through a nested sequence of topological spaces. Equating the nested sequence to a filtration and the appearance and disappearance of features to events, we show that simple event history methods can be used for the analysis of topological data. We propose a version of the well-known Nelson-Aalen cumulative hazard estimator for the comparison of topological features of random fields and for testing parametric assumptions. We suggest a Cox proportional hazards approach for the analysis of embedded metric trees. The Nelson-Aalen method is illustrated on globally distributed climate data and on the neutral hydrogen distribution in the Milky Way. The Cox method is used to compare vascular patterns in fundus images of the eyes of healthy patients and patients with diabetic retinopathy.
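For reference, the Nelson-Aalen estimator of the cumulative hazard is \[ \hat{H}(t) = \sum_{t_i \le t} \frac{d_i}{n_i}, \] where the $t_i$ are the observed event times (here, filtration values at which topological features disappear), $d_i$ is the number of events at $t_i$, and $n_i$ is the number of features still at risk, i.e. born but not yet dead, just before $t_i$.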
In 1614 Johann Georg Locher, a student of the Jesuit astronomer Christoph Scheiner, proposed a physical mechanism to explain how the Earth could orbit the sun. An orbit, Locher said, is a perpetual fall. He proposed this despite the fact that he rejected the Copernican system, citing problems with falling bodies and the sizes of stars under that system. In 1651 and again in 1680, Jesuit writers Giovanni Battista Riccioli and Athanasius Kircher, respectively, considered and rejected outright Locher's idea of an orbit as a perpetual fall. Thus this important concept of an orbit was proposed, considered, and rejected well before Isaac Newton would use an entirely different physics to make the idea that an orbit is a perpetual fall the common way of envisioning and explaining orbits.
Bryan A. Terrazas, Eric F. Bell, Bruno M. B. Henriques, et al.
We use the semi-analytic model developed by Henriques et al. (2015) to explore the origin of star formation history diversity for galaxies that lie at the centre of their dark matter haloes and have present-day stellar masses in the range 5-8 $\times$ 10$^{10}$ M$_{\odot}$, similar to that of the Milky Way. In this model, quenching is the dominant physical mechanism for introducing scatter in the growth histories of these galaxies. We find that present-day quiescent galaxies have a larger variety of growth histories than star-formers since they underwent 'staggered quenching' - a term describing the correlation between the time of quenching and present-day halo mass. While halo mass correlates broadly with quiescence, we find that quiescence is primarily a function of black hole mass, where galaxies quench when heating from their active galactic nuclei becomes sufficient to offset the redshift-dependent cooling rate. In this model, the emergence of a prominent quiescent population is the main process that flattens the stellar mass-halo mass relation at mass scales at or above that of the Milky Way.
The constant mean extrinsic curvature on a spacelike slice may constitute a physically preferred time coordinate, `York time'. One line of enquiry to probe this idea is to understand processes in our cosmological history in terms of York time. Following a review of the theoretical motivations, we focus on slow-roll inflation and the freezing and Hubble re-entry of cosmological perturbations. We show how the mathematical account of these processes is distinct from the conventional account in terms of standard cosmological or conformal time. We also consider the cosmological York-timeline more broadly and contrast it with the conventional cosmological timeline.
After the discovery of the Higgs boson on 4 July 2012 at the Large Hadron Collider, sited at the European CERN laboratory, we have entered a fascinating period for Particle Physics in which both theorists and experimentalists are devoted to fully understanding the features of this new particle and the possible consequences of the Higgs system for High Energy Physics, both within and beyond the Standard Model of fundamental particle interactions. This paper is a summary of the lectures given at the third IDPASC school (Santiago de Compostela, Spain, Feb. 2013), addressed to PhD students, and contains a short introduction to the main basic aspects of the Higgs boson in and beyond the Standard Model.
A continuous transition from an early Friedmann-like radiation era through to late-time cosmic acceleration, passing through a long Friedmann-like matter-dominated era followed by a second radiation-like phase, is realized in a modified theory of gravity containing a combination of a curvature-squared term, a linear term, a three-halves-power curvature term, and an ideal fluid. The history of cosmic evolution is thus explained by the modified theory of gravity single-handedly. The second radiation-like era might provide an explanation for the hydrogen and helium reionization at low redshift.
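One plausible reading of this combination (the coefficients $\alpha$ and $\beta$ are schematic placeholders, not values from the paper) is a gravitational Lagrangian of the form \[ f(R) = R + \alpha R^2 + \beta R^{3/2}, \] coupled to an ideal-fluid matter sector.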
We describe a new methodology to analyze the reionization process in numerical simulations: the evolution of reionization is investigated by focusing on the merger histories of individual HII regions. From the merger tree of ionized patches, one can track the individual evolution of the regions, such as their size, or investigate the properties of the percolation process by looking at the formation rate, the frequency of mergers, and the number of individual HII regions involved in the mergers. By applying this technique to cosmological simulations with radiative transfer, we show that this methodology is a good candidate for quantifying the impact of the adopted star formation model on the history of reionization. As an application, we show how different source models result in different evolutions and geometries of reionization even though they produce, e.g., similar ionized fractions or optical depths.
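As a minimal illustration of the region-linking step behind such a merger tree, the sketch below labels ionized patches in two successive (here 2D) snapshots and links them by pixel overlap; the data layout is an assumption for illustration, not the paper's pipeline.

    import numpy as np
    from scipy import ndimage

    def link_hii_regions(ion_prev, ion_next):
        """Link HII regions between two binary ionization maps by overlap.
        Returns {region_id_next: [progenitor ids in previous snapshot]};
        lists with more than one progenitor flag merger events."""
        lab_prev, _ = ndimage.label(ion_prev)
        lab_next, n_next = ndimage.label(ion_next)
        parents = {}
        for region in range(1, n_next + 1):
            overlap = lab_prev[lab_next == region]
            parents[region] = sorted(set(overlap[overlap > 0].tolist()))
        return parents

    # Two bubbles in one snapshot merging into one in the next:
    prev = np.zeros((8, 8), dtype=bool)
    prev[1:3, 1:3] = True
    prev[5:7, 5:7] = True
    nxt = np.zeros((8, 8), dtype=bool)
    nxt[1:7, 1:7] = True
    print(link_hii_regions(prev, nxt))  # {1: [1, 2]}: a merger event

Iterating this linkage over all snapshot pairs yields the merger tree from which formation rates, merger frequencies, and region sizes can be read off.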
One way to understand the role history plays in evolutionary trajectories is by giving ancient life a second opportunity to evolve. Our ability to perform such an experiment empirically, however, is limited by current experimental designs. Combining ancestral sequence reconstruction with synthetic biology allows us to resurrect the past within a modern context and has expanded our understanding of protein functionality in a historical context. Experimental evolution, on the other hand, provides the ability to study evolution in action, under controlled conditions in the laboratory. Here we describe a novel experimental setup that integrates these two disparate fields, ancestral sequence reconstruction and experimental evolution, allowing us to rewind and replay the evolutionary history of ancient biomolecules in the laboratory. We anticipate that this combination will provide a deeper understanding of the underlying roles that contingency and determinism play in shaping evolutionary processes.
We describe a new methodology to analyze the reionization process in numerical simulations: the chronology and geometry of reionization are investigated by means of the merger histories of individual HII regions. From the merger tree of ionized patches, one can track the individual evolution of region properties, such as their size, or the intensity of the percolation process, by looking at the formation rate, the frequency of mergers, and the number of individual HII regions involved in the mergers. We apply the merger tree technique to simulations of reionization with three different kinds of ionizing-source models and two resolutions. Two of the models use star particles as ionizing sources; for these we compare two emissivity evolutions chosen so that reionization completes at z ~ 6. As an alternative, we build a semi-analytical model in which dark matter halos extracted from the density fields are taken as the ionizing sources. We then show that this methodology is a good candidate for quantifying the impact of the adopted star formation model on the resulting reionization history. The semi-analytical model shows a homogeneous reionization history with 'local' hierarchical growth steps for individual HII regions. By contrast, self-consistent models of star formation tend to produce fewer regions, with a single region dominant in size that governs the merging process early in reionization at the expense of the 'local' reionizations. The differences are attenuated when the resolution of the simulation is increased.
Until is a notoriously difficult temporal operator as it is both existential and universal at the same time: A until B holds at the current time instant w iff either B holds at w or there exists a time instant w' in the future at which B holds and such that A holds in all the time instants between the current one and w'. This "ambivalent" nature poses a significant challenge when attempting to give deduction rules for until. In this paper, in contrast, we make explicit this duality of until to provide well-behaved natural deduction rules for linear-time logics by introducing a new temporal operator that allows us to formalize the "history" of until, i.e., the "internal" universal quantification over the time instants between the current one and w'. This approach provides the basis for formalizing deduction systems for temporal logics endowed with the until operator. For concreteness, we give here a labeled natural deduction system for a linear-time logic endowed with the new operator and show that, via a proper translation, such a system is also sound and complete with respect to the linear temporal logic LTL with until.
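In the usual notation, the semantics spelled out above reads \[ w \models A \,\mathcal{U}\, B \iff \exists w' \ge w \,.\; \big( w' \models B \;\wedge\; \forall w'' \,.\; w \le w'' < w' \Rightarrow w'' \models A \big), \] where the inner universal quantification over the instants $w''$ is exactly the "history" of until that the new operator makes explicit.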
We here present our first attempt to use Globular Clusters as tracers of their parent galaxy's formation history. The Globular Cluster Systems of most early-type galaxies feature two peaks in their optical colour distributions. Blue-peak Globular Clusters are generally believed to be old and metal-poor, while the ages, metallicities, and origin of the red-peak Globular Clusters are still being debated. We analyze the ages and metallicities of the red-peak Globular Clusters in the Virgo S0 galaxy NGC 4570, using deep Ks-band photometry from NTT/SOFI (ESO programme ID 079.B-0511) in combination with HST/ACS archival data to break the age-metallicity degeneracy. We analyze the combined g, z, and Ks spectral energy distribution by comparison with a large grid of GALEV evolutionary synthesis models for star clusters of different ages and metallicities. This analysis reveals a substantial population of intermediate-age (1-3 Gyr), metal-rich (solar-metallicity) Globular Clusters. We discuss their age and metallicity distributions together with information on the parent galaxy from the literature to gain insight into its formation history. Our results demonstrate the power of this approach to reveal the (violent) star formation and chemical enrichment histories of galaxies on the basis of combined optical and near-infrared photometry.
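A minimal sketch of such a grid comparison (illustrative only; the actual GALEV grids and fitting machinery are richer): pick the (age, metallicity) model whose g, z, Ks magnitudes minimize chi-squared against the observed ones.

    import numpy as np

    def best_grid_fit(obs_mags, obs_errs, grid_mags, grid_params):
        """obs_mags, obs_errs: (3,) observed g, z, Ks magnitudes and errors.
        grid_mags: (N, 3) model magnitudes; grid_params: (N, 2) (age, Z) pairs.
        Each model is shifted to the observed z band, so that only the
        colours constrain age and metallicity (mass acts as a free
        normalization)."""
        shifted = grid_mags + (obs_mags[1] - grid_mags[:, 1])[:, None]
        chi2 = np.sum(((shifted - obs_mags) / obs_errs) ** 2, axis=1)
        return grid_params[np.argmin(chi2)], chi2.min()

Adding the Ks band to the optical g and z is what breaks the age-metallicity degeneracy: old metal-poor and young metal-rich clusters can share an optical colour but separate in optical-to-near-infrared colour.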