This report describes the physics case, the resulting detector requirements, and the evolving detector concepts for the experimental program at the Electron-Ion Collider (EIC). The EIC will be a powerful new high-luminosity facility in the United States with the capability to collide high-energy electron beams with high-energy proton and ion beams, providing access to those regions in nucleons and nuclei where their structure is dominated by gluons. Moreover, polarized beams in the EIC will give unprecedented access to the spatial and spin structure of the proton, neutron, and light ions. The studies leading to this document were commissioned and organized by the EIC User Group with the objective of advancing the state and detail of the physics program and developing detector concepts that meet the emerging requirements in preparation for the realization of the EIC. The effort aims to provide the basis for further development of concepts for experimental equipment best suited for the science needs, including the importance of two complementary detectors and interaction regions. This report consists of three volumes. Volume I is an executive summary of our findings and developed concepts. In Volume II we describe studies of a wide range of physics measurements and the emerging requirements on detector acceptance and performance. Volume III discusses general-purpose detector concepts and the underlying technologies to meet the physics requirements. These considerations will form the basis for a world-class experimental program that aims to increase our understanding of the fundamental structure of all visible matter.
On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ∼1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg$^2$ at a luminosity distance of $40^{+8}_{-8}$ Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 $M_\odot$. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification AT 2017gfo) in NGC 4993 (at ∼40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ∼10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ∼9 and ∼16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
We present the physics program of the Belle II experiment, located at the intensity-frontier SuperKEKB e+e- collider. Belle II collected its first collisions in 2018 and is expected to operate for the next decade. It is anticipated to collect 50 ab$^{-1}$ of collision data over its lifetime. This book is the outcome of a joint effort of Belle II collaborators and theorists through the Belle II Theory Interface Platform (B2TiP), an effort that commenced in 2014. The aim of B2TiP was to elucidate the potential impact of the Belle II program, which covers a wide scope of physics topics: B physics, charm, tau, quarkonium, electroweak precision measurements and dark-sector searches. It is composed of nine working groups (WGs), each coordinated by a team of theorist and experimentalist conveners: semileptonic and leptonic B decays; radiative and electroweak penguins; phi_1 and phi_2 (time-dependent CP violation) measurements; phi_3 measurements; charmless hadronic B decays; charm; quarkonium(like); tau and low-multiplicity processes; and new physics and global fit analyses. This book highlights the "golden and silver channels", i.e. those that would have the highest potential impact in the field. Theorists scrutinised the role of those measurements and estimated the respective theoretical uncertainties, achievable now as well as prospects for the future. Experimentalists investigated the expected improvements with the large dataset expected from Belle II, taking into account improved performance from the upgraded detector.
Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance; in fact, their performance during learning can be extremely poor. This may be acceptable in a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages even relatively small amounts of demonstration data to massively accelerate the learning process, and that automatically assesses the necessary ratio of demonstration data while learning, thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN): it starts with better scores over the first million steps on 41 of 42 games, and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
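DQfD's core idea of combining a temporal-difference update with a supervised large-margin classification loss on the demonstrator's actions can be sketched as follows. This is an illustrative single-transition sketch, not the paper's implementation; the margin value and function names are assumptions.

```python
import numpy as np

def dqfd_losses(q_values, next_q_values, action, reward, demo_action,
                gamma=0.99, margin=0.8):
    """Sketch of the two DQfD loss terms for a single transition.

    q_values / next_q_values: arrays of Q(s, a) over all discrete actions.
    demo_action: the demonstrator's action, or None for self-generated data.
    The margin of 0.8 is an illustrative choice.
    """
    # 1-step temporal-difference (Q-learning) loss
    td_target = reward + gamma * np.max(next_q_values)
    td_loss = (q_values[action] - td_target) ** 2

    # Large-margin supervised loss: the demonstrated action's Q-value must
    # exceed every other action's Q-value by at least `margin`.
    margin_loss = 0.0
    if demo_action is not None:
        margins = np.where(np.arange(len(q_values)) == demo_action, 0.0, margin)
        margin_loss = np.max(q_values + margins) - q_values[demo_action]

    return td_loss, margin_loss
```

On self-generated (non-demonstration) transitions the margin term vanishes, so the agent smoothly transitions from imitating the demonstrator to improving on it via TD learning.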
G. Aad, B. Abbott, et al.
The standard model of particle physics [1–4] describes the known fundamental particles and forces that make up our Universe, with the exception of gravity. One of the central features of the standard model is a field that permeates all of space and interacts with fundamental particles [5–9]. This field is known as the Higgs field, and its quantum excitation manifests itself as the Higgs boson, the only fundamental particle with no spin. In 2012, a particle with properties consistent with the Higgs boson of the standard model was observed by the ATLAS and CMS experiments at the Large Hadron Collider at CERN [10,11]. Since then, more than 30 times as many Higgs bosons have been recorded by the ATLAS experiment, enabling much more precise measurements and new tests of the theory. Here, on the basis of this larger dataset, we combine an unprecedented number of production and decay processes of the Higgs boson to scrutinize its interactions with elementary particles. Interactions with gluons, photons, and W and Z bosons (the carriers of the strong, electromagnetic and weak forces) are studied in detail. Interactions with three third-generation matter particles (bottom (b) and top (t) quarks, and tau leptons (τ)) are well measured, and indications of interactions with a second-generation particle (muons, μ) are emerging. These tests reveal that the Higgs boson discovered ten years ago is remarkably consistent with the predictions of the theory and provide stringent constraints on many models of new phenomena beyond the standard model. Ten years after the discovery of the Higgs boson, the ATLAS experiment at CERN probes its kinematic properties with a significantly larger dataset from 2015–2018 and provides further insights on its interactions with other known particles.
The LIGO Scientific Collaboration, T. Abbott, et al.
The second Gravitational-Wave Transient Catalog reported on 39 compact binary coalescences observed by the Advanced LIGO and Advanced Virgo detectors between 1 April 2019 15:00 UTC and 1 October 2019 15:00 UTC. We present GWTC-2.1, which reports on a deeper list of candidate events observed over the same period. We analyze the final version of the strain data over this period with improved calibration and better subtraction of excess noise, which has been publicly released. We employ three matched-filter search pipelines for candidate identification, and estimate the astrophysical probability for each candidate event. While GWTC-2 used a false alarm rate threshold of 2 per year, we include in GWTC-2.1 a total of 1201 candidates that pass a false alarm rate threshold of 2 per day. We calculate the source properties of a subset of 44 high-significance candidates that have an astrophysical probability greater than 0.5. Of these candidates, 36 have been reported in GWTC-2. If the 8 additional high-significance candidates presented here are astrophysical, the mass range of events that are unambiguously identified as binary black holes (both objects $\geq 3M_\odot$) is increased compared to GWTC-2, with total masses from $\sim 14 M_\odot$ for GW190924_021846 to $\sim 182 M_\odot$ for GW190426_190642. The primary components of two new candidate events (GW190403_051519 and GW190426_190642) fall in the mass gap predicted by pair instability supernova theory. We also expand the population of binaries with significantly asymmetric mass ratios reported in GWTC-2 by an additional two events (the mass ratio is less than $0.65$ and $0.44$ at $90\%$ probability for GW190403_051519 and GW190917_114630 respectively), and find that 2 of the 8 new events have effective inspiral spins $\chi_\mathrm{eff}>0$ (at $90\%$ credibility), while no binary is consistent with $\chi_\mathrm{eff}<0$ at the same significance.
Conventional hadronic matter consists of baryons and mesons, made of three quarks and a quark–antiquark pair, respectively [1,2]. Here, we report the observation of a hadronic state containing four quarks in the Large Hadron Collider beauty experiment. This so-called tetraquark contains two charm quarks, a $\overline{u}$ and a $\overline{d}$ quark. This exotic state has a mass of approximately 3,875 MeV and manifests as a narrow peak in the mass spectrum of $D^0 D^0 \pi^+$ mesons just below the $D^{*+} D^0$ mass threshold. The near-threshold mass together with the narrow width reveals the resonance nature of the state. The LHCb Collaboration reports the observation of an exotic, narrow, tetraquark state that contains two charm quarks, an up antiquark and a down antiquark.
Linear spin wave theory provides the leading term in the calculation of the excitation spectra of long-range ordered magnetic systems as a function of momentum. This term is obtained using the Holstein–Primakoff approximation of the spin operators and is valid for small fluctuations δS of the ordered moment. We propose an algorithm that allows magnetic ground states with general moment directions and a single-Q incommensurate ordering wave vector, using a local coordinate transformation for every spin and a rotating coordinate transformation for the incommensurability. Finally, we show how our model can determine the spin wave spectrum of the magnetic C-site langasites with incommensurate order.
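The leading-order Holstein–Primakoff result can be illustrated with the simplest textbook case, a one-dimensional ferromagnetic Heisenberg chain (not the general single-Q incommensurate algorithm described above); the function name and default parameters are illustrative.

```python
import numpy as np

def fm_chain_dispersion(k, J=1.0, S=0.5):
    """Linear spin-wave (magnon) dispersion of a 1D ferromagnetic Heisenberg
    chain, H = -J * sum_i S_i . S_{i+1}, with lattice constant a = 1.

    Leading-order Holstein-Primakoff result: omega(k) = 2*J*S*(1 - cos k).
    The gapless mode at k = 0 is the Goldstone mode of the ordered state.
    """
    return 2.0 * J * S * (1.0 - np.cos(k))

# Evaluate the dispersion across the first Brillouin zone
k = np.linspace(-np.pi, np.pi, 201)
omega = fm_chain_dispersion(k)
```

The general algorithm of the text reduces to this closed form when the moment directions are uniform and the ordering wave vector is commensurate.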
Fotios K. Anagnostopoulos, Spyros Basilakos, and Emmanuel N. Saridakis
Department of Physics, National & Kapodistrian University of Athens, Zografou Campus, GR 157 73 Athens, Greece
National Observatory of Athens, Lofos Nymfon, 11852 Athens, Greece
Academy of Athens, Research Center for Astronomy and Applied Mathematics, Soranou Efesiou 4, 11527 Athens, Greece
CAS Key Laboratory for Researches in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026, P.R. China
School of Astronomy, School of Physical Sciences, University of Science and Technology of China, Hefei 230026, P.R. China
Quantum computing exploits quantum phenomena such as superposition and entanglement to realize a form of parallelism that is not available to traditional computing. It offers the potential of significant computational speed-ups in quantum chemistry, materials science, cryptography, and machine learning. The dominant approach to programming quantum computers is to provide an existing high-level language with libraries that allow for the expression of quantum programs. This approach can permit computations that are meaningless in a quantum context, prohibits succinct expression of interaction between classical and quantum logic, and does not provide important constructs that are required for quantum programming. We present Q#, a quantum-focused domain-specific language explicitly designed to correctly, clearly and completely express quantum algorithms. Q# provides a type system; a tightly constrained environment to safely interleave classical and quantum computations; specialized syntax; symbolic code manipulation to automatically generate correct transformations of quantum operations; and powerful functional constructs which aid composition.
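The quantum phenomena named above, superposition and entanglement, can be sketched with a plain state-vector simulation. This is an illustrative NumPy toy that prepares a Bell state, not Q# code or its API.

```python
import numpy as np

# Single-qubit Hadamard gate: creates an equal superposition
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# CNOT gate on two qubits (control = first qubit), basis |00>,|01>,|10>,|11>
CNOT = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0, 0.0]])

state = np.zeros(4)
state[0] = 1.0                          # start in |00>
state = np.kron(H, np.eye(2)) @ state   # superpose the first qubit
state = CNOT @ state                    # entangle -> (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2              # measurement probabilities
```

Measuring this state yields 00 or 11 with probability 1/2 each and never 01 or 10, which is the correlation a classical library-free simulation can display but a classical probabilistic model over independent bits cannot produce from local operations.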
In this work, we present a scalable reinforcement learning method for training multi-task policies from large offline datasets that can leverage both human demonstrations and autonomously collected data. Our method uses a Transformer to provide a scalable representation for Q-functions trained via offline temporal difference backups. We therefore refer to the method as Q-Transformer. By discretizing each action dimension and representing the Q-value of each action dimension as separate tokens, we can apply effective high-capacity sequence modeling techniques for Q-learning. We present several design decisions that enable good performance with offline RL training, and show that Q-Transformer outperforms prior offline RL algorithms and imitation learning techniques on a large diverse real-world robotic manipulation task suite. The project's website and videos can be found at https://qtransformer.github.io
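The per-dimension action discretization that Q-Transformer builds on can be sketched as follows; the bin count, helper names, and bin-center decoding are illustrative assumptions, not the paper's code.

```python
import numpy as np

def discretize_action(action, low, high, n_bins=256):
    """Map each continuous action dimension to an integer token in
    [0, n_bins - 1], so each dimension can be treated as one token
    whose Q-values a sequence model predicts. n_bins is illustrative."""
    action = np.clip(action, low, high)
    frac = (action - low) / (high - low)                 # normalize to [0, 1]
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)

def undiscretize(tokens, low, high, n_bins=256):
    """Recover continuous values from tokens using bin centers."""
    return low + (tokens + 0.5) / n_bins * (high - low)
```

Token-per-dimension discretization is what lets a Transformer score every candidate value of one action dimension at a time, instead of regressing a single continuous vector.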
Cédric Van Goethem, Parimal V. Naik, Miet Van de Velde
et al.
Mixed matrix membranes (MMMs) have shown great potential in pervaporation (PV). As for many novel membrane materials, however, lab-scale testing often involves synthetic feed solutions composed of mixed pure components, overlooking the possibly complex interactions and effects caused by the numerous other components in a real PV feed. This work studies the performance of MMMs with two different types of fillers, a core-shell material consisting of ZIF-8 coated on mesoporous silica and a hollow sphere of silicalite-1, in the PV of a real fermented wheat/hay straw hydrolysate broth for the production of bio-ethanol. All membranes, including a reference unfilled PDMS, show a declining permeability over time. Interestingly, the unfilled PDMS membrane maintains a stable separation factor, whereas the filled PDMS membranes rapidly lose selectivity to levels below that of the reference PDMS membrane. A membrane autopsy using XRD and SEM-EDX revealed an almost complete degradation of the crystalline ZIF-8 in the MMMs. Reference experiments with ZIF-8 nanoparticles in the fermentation broth demonstrated the influence of the broth on the ZIF-8 particles. However, the observed effects from the membrane autopsy could not be exactly replicated, likely due to distinct differences in conditions between the in-situ pervaporation process and the ex-situ reference experiments. These findings raise significant questions regarding the potential applicability of MOF-filled MMMs in real-feed pervaporation processes and, potentially, in harsh-condition membrane separations in general. This study clearly confirms the importance of testing membranes in realistic conditions.
In 1987, we analyzed the changes in correlation graphs between various features of the organism during stress and adaptation. After 33 years of research by many authors, with discoveries and rediscoveries, we can say with complete confidence: it is useful to analyze correlation graphs. In addition, we should add that the concept of adaptability ('adaptation energy') introduced by Selye is useful, especially if it is supplemented by 'adaptation entropy' and free energy, as well as an analysis of limiting factors. Our review of these topics, "Dynamic and Thermodynamic Adaptation Models" (Phys Life Rev, 2021, arXiv:2103.01959 [q-bio.OT]), attracted many comments from leading experts, with new ideas and new problems, from the dynamics of aging and the training of athletes to single-cell omics. Methodological backgrounds, like free energy analysis, were also discussed in depth. In this article, we provide an analytical overview of twelve commenting papers and some related publications.
An endo-functionalized cage is presented that, upon copper(I) complexation, assembles into a structurally well-defined and catalytically active biomimetic model compound.