The CMS trigger system must reduce the input event rate from the LHC bunch-crossing frequency of 40 MHz to a rate at which events can be written to permanent storage. A detailed study has recently been made of the performance of this system. This paper presents key elements of the results obtained and gives details of a draft “trigger table” for the Level-1 Trigger and the High-Level Trigger selection at a “start-up” luminosity of 2×10^{33} cm^{-2} s^{-1}. High efficiencies for most physics objects are attainable with a selection that remains inclusive and avoids detailed topological or other requirements on the event.
This work summarizes and puts in an overall perspective studies done within the Compact Muon Solenoid (CMS) collaboration concerning the discovery potential for squarks and gluinos, sleptons, charginos and neutralinos, supersymmetric (SUSY) dark matter, the lightest Higgs boson, sparticle mass determination methods, and detector design optimization in view of SUSY searches. It represents the status of our understanding of these subjects as of summer 1997. As a benchmark we used the minimal supergravity-inspired supersymmetric standard model (mSUGRA) with a stable lightest supersymmetric particle (LSP). Discovery of supersymmetry at the Large Hadron Collider should be relatively straightforward. It may occur through the observation of large excesses of events in missing ET plus jets, or with one or more isolated leptons. An excess of trilepton events, or of isolated dileptons with missing ET exhibiting a characteristic signature in the l+l− invariant mass distribution, could also be the first manifestation of SUSY production. Squarks and gluinos can be discovered for masses in excess of 2 TeV. Charginos and neutralinos can be discovered from an excess of events in dilepton or trilepton final states. Inclusive searches can give early indications from their copious production in squark and gluino cascade decays. Indirect evidence for sleptons can also be obtained from inclusive dilepton studies. Isolation requirements and a jet veto would allow detection of both direct chargino/neutralino production and directly produced sleptons. Squark and gluino production may also represent a copious source of Higgs bosons through cascade decays. The lightest SUSY Higgs boson, in the h → bb̄ channel, may be reconstructed with a signal/background ratio of order 1, thanks to hard cuts on missing ET justified by the escaping LSPs. The LSP of SUSY models with conserved R-parity represents a very good candidate for cosmological dark matter. The region of parameter space where this is true is well covered by our searches, at least for tanβ = 2. If supersymmetry exists at the electroweak scale, it could hardly escape detection in CMS, and the study of supersymmetry will form a central part of our physics program.
New sets of CMS underlying-event parameters (“tunes”) are presented for the pythia8 event generator. These tunes use the NNPDF3.1 parton distribution functions (PDFs) at leading (LO), next-to-leading (NLO), or next-to-next-to-leading (NNLO) orders in perturbative quantum chromodynamics, and the strong coupling evolution at LO or NLO. Measurements of charged-particle multiplicity and transverse momentum densities at various hadron collision energies are fit simultaneously to determine the parameters of the tunes. Comparisons of the predictions of the new tunes are provided for observables sensitive to event shapes at LEP, the global underlying event, soft multiparton interactions, and double-parton scattering contributions. In addition, comparisons are made for observables measured in various specific processes, such as multijet, Drell–Yan, and top quark-antiquark pair production, including jet substructure observables. The simulation of the underlying event provided by the new tunes is interfaced to a higher-order matrix-element calculation. For the first time, predictions from pythia8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data, with a level of agreement similar to that of predictions from tunes using LO PDF sets.
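As a rough illustration of how such a tune is applied in practice, the sketch below configures a minimum-bias pythia8 run through its Python bindings (available in PYTHIA 8.3), starting from the built-in Monash baseline and overriding a few multiparton-interaction parameters. The PDF set name follows LHAPDF conventions, but the numeric values shown are placeholders rather than the published CMS tune values.

```python
# Illustrative minimum-bias run with an NNPDF3.1-based tune.
# Parameter values are placeholders, not the published CMS tunes.
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")        # pp at 13 TeV
pythia.readString("SoftQCD:inelastic = on")    # minimum-bias processes
pythia.readString("Tune:pp = 14")              # Monash 2013 baseline
pythia.readString("PDF:pSet = LHAPDF6:NNPDF31_nnlo_as_0118")
# Multiparton-interaction parameters that UE tunes typically refit:
pythia.readString("MultipartonInteractions:pT0Ref = 1.41")   # placeholder
pythia.readString("MultipartonInteractions:ecmPow = 0.03")   # placeholder
pythia.readString("ColourReconnection:range = 5.2")          # placeholder
pythia.init()

for _ in range(100):
    if not pythia.next():
        continue
    # Charged-particle multiplicity, one of the fitted observables:
    n_ch = sum(1 for p in pythia.event if p.isFinal() and p.isCharged())
```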
Precision measurements by the Alpha Magnetic Spectrometer on the International Space Station of the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV are presented. The electron flux and the positron flux each require a description beyond a single power-law spectrum. Both the electron flux and the positron flux change their behavior at ∼30 GeV, but the fluxes are significantly different in their magnitude and energy dependence. Between 20 and 200 GeV the positron spectral index is significantly harder than the electron spectral index. The determination of the differing behavior of the spectral indices versus energy is a new observation and provides important information on the origins of cosmic-ray electrons and positrons.
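For reference, the spectral index quoted above follows the standard convention: it is the local logarithmic slope of the flux,

```latex
\Phi(E) \propto E^{\gamma}, \qquad
\gamma(E) \equiv \frac{\mathrm{d}\log\Phi}{\mathrm{d}\log E},
```

so a “harder” positron spectrum between 20 and 200 GeV means the positron flux falls off more slowly with energy than the electron flux.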
In this work, we present a scalable reinforcement learning method for training multi-task policies from large offline datasets that can leverage both human demonstrations and autonomously collected data. Our method uses a Transformer to provide a scalable representation for Q-functions trained via offline temporal-difference backups. We therefore refer to the method as Q-Transformer. By discretizing each action dimension and representing the Q-value of each action dimension as separate tokens, we can apply effective high-capacity sequence modeling techniques for Q-learning. We present several design decisions that enable good performance with offline RL training, and show that Q-Transformer outperforms prior offline RL algorithms and imitation learning techniques on a large, diverse real-world robotic manipulation task suite. The project's website and videos can be found at https://qtransformer.github.io
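A minimal sketch of the per-dimension action discretization this describes, in plain numpy; the 256-bin resolution, the [-1, 1] action range, and all names are illustrative assumptions of this sketch, not the authors' implementation:

```python
# Sketch: map each continuous action dimension to its own discrete
# token, so a sequence model can emit one row of Q-values per token.
import numpy as np

NUM_BINS = 256                    # discretization resolution per dimension
ACTION_LOW, ACTION_HIGH = -1.0, 1.0

def action_to_tokens(action):
    """Discretize each action dimension into its own integer token."""
    frac = (action - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
    return np.clip((frac * NUM_BINS).astype(int), 0, NUM_BINS - 1)

def tokens_to_action(tokens):
    """Invert the discretization (bin centers)."""
    return ACTION_LOW + (tokens + 0.5) / NUM_BINS * (ACTION_HIGH - ACTION_LOW)

def greedy_action(q_per_dim):
    """Argmax token per action dimension, given q_per_dim of shape
    (action_dims, NUM_BINS): one row of Q-values per action-dimension
    token, as produced by the sequence model."""
    return q_per_dim.argmax(axis=-1)
```

Note that the independent per-dimension argmax above stands in for the autoregressive, token-by-token maximization used in practice, where the model conditions each dimension's Q-values on the tokens already chosen.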
A precision measurement by AMS of the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays in the absolute rigidity range from 1 to 450 GV is presented, based on 3.49×10^{5} antiproton events and 2.42×10^{9} proton events. The fluxes and flux ratios of charged elementary particles in cosmic rays are also presented. In the absolute rigidity range ∼60 to ∼500 GV, the antiproton p̄, proton p, and positron e^{+} fluxes are found to have nearly identical rigidity dependence, while the electron e^{-} flux exhibits a different rigidity dependence. Below 60 GV, the (p̄/p), (p̄/e^{+}), and (p/e^{+}) flux ratios each reach a maximum. From ∼60 to ∼500 GV, the (p̄/p), (p̄/e^{+}), and (p/e^{+}) flux ratios show no rigidity dependence. These are new observations of the properties of elementary particles in the cosmos.
Heavy metals are considered a serious environmental threat with adverse impacts on human health. To reduce this risk, an integrated phytoremediation-bioenergy approach could be a viable solution. Aromatic and bioenergy crops offer the double advantage of phytoremediation and the production of valuable by-products, such as essential oils, and this approach contributes to the circular bioeconomy. Growing aromatic and bioenergy plants keeps heavy metals out of the food chain and allows for the long-term use of contaminated land, opening new approaches to addressing pollution problems. This review article highlights how phytoremediation can be coupled with bioenergy and essential-oil production, along with the management of post-harvest biomass. The review also offers a thorough summary of how these plants have been utilized in recent years to address pollution issues, and of their potential to produce essential oil and bioenergy to meet future energy needs.
An angular analysis of the B^{0}→K^{*0}(→K^{+}π^{-})μ^{+}μ^{-} decay is presented using a dataset corresponding to an integrated luminosity of 4.7 fb^{-1} of pp collision data collected with the LHCb experiment. The full set of CP-averaged observables is determined in bins of the invariant mass squared of the dimuon system. Contamination from decays with the K^{+}π^{-} system in an S-wave configuration is taken into account. The tension seen between the previous LHCb results and the standard model predictions persists with the new data. The precise value of the significance of this tension depends on the choice of theory nuisance parameters.
Photonic crystal nanobeam cavities are versatile platforms of interest for optical communications, optomechanics, optofluidics, and cavity QED, among other applications. In a previous work [Appl. Phys. Lett. 96, 203102 (2010)], we proposed a deterministic method to achieve ultrahigh-Q cavities. This follow-up work provides a systematic analysis and verification of the deterministic design recipe and extends the discussion to air-mode cavities. We demonstrate designs of dielectric-mode and air-mode cavities with Q > 10⁹, as well as dielectric-mode nanobeam cavities with both ultrahigh Q (> 10⁷) and ultrahigh on-resonance transmission (T > 95%).
In this work, we consider an extension of symmetric teleparallel gravity, namely f(Q) gravity, where the fundamental block used to describe spacetime is the nonmetricity, Q. Within this formulation of gravitation, we perform an observational analysis of several modified f(Q) models using the redshift approach, where the f(Q) Lagrangian is reformulated as an explicit function of the redshift, f(z). Several polynomial parametrizations of f(z) are proposed, including new terms that allow for deviations from the Λ Cold Dark Matter (ΛCDM) model. Using a variety of observational probes, such as expansion-rate data from early-type galaxies, type Ia supernovae, quasars, gamma-ray bursts, baryon acoustic oscillation data, and cosmic microwave background distance priors, we check the validity of these models at the background level in order to verify whether this new formalism provides plausible alternative models to explain the late-time acceleration of the Universe. Indeed, this novel approach provides a different perspective on the formulation of observationally reliable alternative models of gravity.
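As an illustration of the redshift approach (the coefficients below are generic placeholders, not the specific models of this analysis), one such polynomial parametrization takes the form

```latex
f(z) = \sum_{n=0}^{N} a_{n}\, z^{n},
```

where the data constrain the coefficients a_n and the terms beyond a ΛCDM-like baseline carry the sought-after deviations.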
Advances in machine learning (ML) have enabled the development of interatomic potentials that promise the accuracy of first-principles methods and the low-cost, parallel efficiency of empirical potentials. However, ML-based potentials struggle to achieve transferability, i.e., to provide consistent accuracy across configurations that differ from those used during training. In order to realize the promise of ML-based potentials, systematic and scalable approaches to generating diverse training sets need to be developed. This work creates a diverse training set for tungsten in an automated manner using an entropy-optimization approach. Subsequently, multiple polynomial and neural network potentials are trained on the entropy-optimized dataset. A corresponding set of potentials is trained on an expert-curated dataset for tungsten for comparison. The models trained on the entropy-optimized data exhibited superior transferability compared to the expert-curated models. Furthermore, the models trained on the expert-curated set exhibited a significant decrease in performance when evaluated on out-of-sample configurations.
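A rough sketch of what entropy-driven training-set selection can look like in practice, assuming precomputed per-configuration descriptor vectors; the Gaussian log-determinant surrogate and the greedy loop are illustrative assumptions of this sketch, not necessarily the authors' exact objective:

```python
# Sketch: greedily grow a training set by adding the configuration
# that most increases the entropy of the descriptor distribution.
import numpy as np

def gaussian_entropy(X):
    """Differential entropy (up to constants) of a Gaussian fit to rows of X."""
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(X.shape[1])
    return 0.5 * np.linalg.slogdet(cov)[1]

def greedy_entropy_selection(descriptors, n_select, n_seed=10, rng=None):
    """Start from a random seed set, then add whichever remaining
    configuration yields the largest entropy gain."""
    rng = np.random.default_rng(rng)
    pool = list(range(len(descriptors)))
    chosen = [int(i) for i in rng.choice(pool, size=n_seed, replace=False)]
    pool = [i for i in pool if i not in chosen]
    while len(chosen) < n_select and pool:
        gains = [gaussian_entropy(descriptors[chosen + [i]]) for i in pool]
        chosen.append(pool.pop(int(np.argmax(gains))))
    return chosen
```

The greedy pass is quadratic in the pool size, which is acceptable for a sketch; a production pipeline would batch or approximate the entropy updates.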
In living systems, collective molecular behavior is driven by thermodynamic forces in the form of chemical gradients. Leveraging recent advances in the field of nonequilibrium physics, I show that increasing the thermodynamic force alone can induce qualitatively new behavior. To demonstrate this principle, general equations governing kinetic proofreading and microtubule assembly are derived. These equations show that new capabilities, including catalytic regulation of steady-state behavior and exponential enhancement of molecular discrimination, are only possible if the system is driven sufficiently far from equilibrium, and can emerge sharply at a threshold force. Regardless of design parameters, these results reveal that the thermodynamic force sets fundamental performance limits on tuning sensitivity, error, and waste. Experimental data show that these biomolecular processes operate at the limits allowed by theory.
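For orientation, the classic benchmark that such force-dependent limits generalize is Hopfield-style proofreading: with a binding free-energy difference Δ between right and wrong substrates, discrimination at equilibrium is bounded by a single Boltzmann factor, while one proofreading cycle driven by a sufficiently large chemical force can square it,

```latex
\eta_{\mathrm{eq}} \ \ge\ e^{-\Delta/k_{B}T},
\qquad
\eta_{\mathrm{driven}} \ \sim\ e^{-2\Delta/k_{B}T},
```

consistent with the sharp threshold behavior described above. (This is a textbook statement included for orientation, not the paper's derivation.)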
We extract the e^{+}e^{-} → π^{+}π^{-} cross section in the energy range between 600 and 900 MeV, exploiting the method of initial-state radiation. A data set with an integrated luminosity of 2.93 fb^{-1}, taken at a center-of-mass energy of 3.773 GeV with the BESIII detector at the BEPCII collider, is used. The cross section is measured with a systematic uncertainty of 0.9%. We extract the pion form factor |F_π| as well as the contribution of the measured cross section to the leading-order hadronic vacuum polarization contribution to (g−2)_μ. We find this value to be a_μ(600–900 MeV) = (368.2 ± 2.5_stat ± 3.3_sys) × 10^{-10}, which lies between the corresponding values obtained using the BaBar or KLOE data.
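For reference, this contribution is defined by the standard leading-order dispersion integral, restricted here to the two-pion channel and to the measured energy window; σ^{0} denotes the bare e^{+}e^{-} → π^{+}π^{-} cross section and K(s) the known QED kernel function:

```latex
a_\mu^{\pi\pi,\mathrm{LO}}(600\text{--}900\ \mathrm{MeV})
  = \frac{1}{4\pi^{3}}
    \int_{(0.6\ \mathrm{GeV})^{2}}^{(0.9\ \mathrm{GeV})^{2}}
    \mathrm{d}s \; \sigma^{0}_{e^{+}e^{-}\to\pi^{+}\pi^{-}}(s)\, K(s).
```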
It is conceivable that an RNA virus could use a polysome, that is, a string of ribosomes covering the RNA strand, to protect the genetic material from degradation inside a host cell. This paper discusses how such a virus might operate, and how its presence might be detected by ribosome profiling. There are two possible forms for such a polysomally protected virus, depending upon whether just the forward strand or both the forward and complementary strands can be encased by ribosomes (these will be termed type 1 and type 2, respectively). It is argued that in the type 2 case the viral RNA would evolve an ambigrammatic property, whereby the viral genes are free of stop codons in a reverse reading frame (with forward and reverse codons aligned). Recent observations of ribosome profiles of ambigrammatic narnavirus sequences are consistent with our predictions for the type 2 case.
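The ambigrammatic property predicted for the type 2 case is easy to state computationally: with forward and reverse codons aligned, the reverse-complement reading frame of every gene must be free of stop codons. A minimal sketch (the sequence handling below is a generic illustration, not the paper's analysis pipeline):

```python
# Check whether an RNA coding sequence is "ambigrammatic": no stop
# codons in the aligned reverse-complement reading frame.
STOP_CODONS = {"UAA", "UAG", "UGA"}
COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_frame_codons(seq):
    """Codons of the complementary strand aligned with the forward
    codons: the reverse complement of each forward codon in place."""
    assert len(seq) % 3 == 0
    return [seq[i:i+3][::-1].translate(COMPLEMENT)
            for i in range(0, len(seq), 3)]

def is_ambigrammatic(seq):
    """True if the aligned reverse reading frame has no stop codons."""
    return not any(c in STOP_CODONS for c in reverse_frame_codons(seq))

# Example: forward "AAA" (Lys) reads back as "UUU" (Phe) -- no stop;
# forward "UUA" (Leu) reads back as "UAA" -- a stop codon, which would
# break the ambigrammatic property.
```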
Backtracking of RNA polymerase (RNAP) is an important pausing mechanism during DNA transcription and is part of the error-correction process that enhances transcription fidelity. We model the backtracking mechanism of RNA polymerase, which typically occurs when the polymerase attempts to incorporate a mismatched nucleotide triphosphate. Previous models have made simplifying assumptions, such as neglecting the trailing polymerase behind the backtracking polymerase or assuming that the trailing polymerase is stationary. We derive exact analytic solutions of a stochastic model that includes locally interacting RNAPs, explicitly showing how a trailing RNAP influences the probability that an error is corrected or incorporated by the leading, backtracking RNAP. We also provide two related methods for computing the mean times to error correction or incorporation given an initial local RNAP configuration.
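The flavor of such a splitting-probability calculation can be captured in a few lines. The sketch below is a deliberately simplified caricature: the backtracked RNAP performs a biased random walk on its backtrack depth k, is absorbed at k = 0 (error incorporated, elongation resumes), and can cleave the backtracked RNA at a constant rate from any depth (error corrected); the trailing RNAP enters only as a wall at depth K that blocks deeper backtracking. The rate names p, q, rc and the wall treatment are assumptions of this sketch, not the paper's exact interacting-RNAP model.

```python
# Splitting probability for a caricature of RNAP backtracking.
import numpy as np

def correction_probability(K, p, q, rc):
    """P(cleave before resuming elongation | start at depth 1), from the
    backward equations (p + q + rc) u[k] = rc + p u[k+1] + q u[k-1],
    with u[0] = 0 (absorption) and a reflecting wall at k = K."""
    A = np.zeros((K, K))      # unknowns u[1..K]
    b = np.full(K, rc)        # cleavage from any depth corrects the error
    for row, k in enumerate(range(1, K + 1)):
        if k < K:
            A[row, row] = p + q + rc
            A[row, row + 1] = -p          # hop deeper
        else:
            A[row, row] = q + rc          # wall: no deeper hop available
        if k > 1:
            A[row, row - 1] = -q          # hop back toward elongation
    return np.linalg.solve(A, b)[0]

# A closer trailing RNAP (smaller K) suppresses error correction:
print(correction_probability(K=10, p=1.0, q=1.2, rc=0.1))
print(correction_probability(K=2,  p=1.0, q=1.2, rc=0.1))
```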