Asymmetric Metasurfaces with High-Q Resonances Governed by Bound States in the Continuum
K. Koshelev, S. Lepeshov, Mingkai Liu
et al.
We reveal that metasurfaces created by seemingly different lattices of (dielectric or metallic) meta-atoms with broken in-plane symmetry can support sharp high-Q resonances arising from a distortion of symmetry-protected bound states in the continuum. We develop a rigorous theory of such asymmetric periodic structures and demonstrate a link between the bound states in the continuum and Fano resonances. Our results suggest a route toward the smart engineering of resonances in metasurfaces for many applications in nanophotonics and meta-optics.
1092 citations
en
Physics, Medicine
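As a reader's note on the mechanism described above: the key quantitative relation reported for such symmetry-broken meta-atoms is an inverse-square dependence of the radiative quality factor on the degree of asymmetry (symbols follow the paper's convention for the asymmetry parameter):

```latex
Q = Q_0 \, \alpha^{-2},
```

where \(\alpha\) is the dimensionless in-plane asymmetry parameter of the meta-atom and \(Q_0\) is fixed by the symmetric, BIC-supporting design; as \(\alpha \to 0\) the resonance collapses back to a true bound state in the continuum with \(Q \to \infty\).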
The Belle II Physics Book
E. Kou, P. Urquijo, W. Altmannshofer
et al.
We present the physics program of the Belle II experiment, located on the intensity frontier SuperKEKB e+e- collider. Belle II collected its first collisions in 2018, and is expected to operate for the next decade. It is anticipated to collect 50/ab of collision data over its lifetime. This book is the outcome of a joint effort of Belle II collaborators and theorists through the Belle II theory interface platform (B2TiP), an effort that commenced in 2014. The aim of B2TiP was to elucidate the potential impacts of the Belle II program, which includes a wide scope of physics topics: B physics, charm, tau, quarkonium, electroweak precision measurements and dark sector searches. It is composed of nine working groups (WGs), which are coordinated by teams of theorist and experimentalist conveners: Semileptonic and leptonic B decays, Radiative and Electroweak penguins, phi_1 and phi_2 (time-dependent CP violation) measurements, phi_3 measurements, Charmless hadronic B decay, Charm, Quarkonium(like), tau and low-multiplicity processes, new physics and global fit analyses. This book highlights "golden and silver channels", i.e. those that would have the highest potential impact in the field. Theorists scrutinised the role of those measurements and estimated the theoretical uncertainties achievable now, as well as prospects for the future. Experimentalists investigated the expected improvements with the large dataset expected from Belle II, taking into account improved performance from the upgraded detector.
Space-time-coding digital metasurfaces
Lei Zhang, X. Q. Chen, Shuo Liu
et al.
The recently proposed digital coding metasurfaces make it possible to control electromagnetic (EM) waves in real time, and allow the implementation of many different functionalities in a programmable way. However, current configurations are only space-encoded, and do not exploit the temporal dimension. Here, we propose a general theory of space-time modulated digital coding metasurfaces to obtain simultaneous manipulations of EM waves in both space and frequency domains, i.e., to control the propagation direction and harmonic power distribution simultaneously. As proof-of-principle application examples, we consider harmonic beam steering, beam shaping, and scattering-signature control. For validation, we realize a prototype controlled by a field-programmable gate array, which implements the harmonic beam steering via an optimized space-time coding sequence. Numerical and experimental results, in good agreement, demonstrate good performance of the proposed approach, with potential applications to diverse fields such as wireless communications, cognitive radars, adaptive beamforming, holographic imaging. Current digital coding metasurfaces are only space-encoded. Here, the authors propose space-time modulated digital coding metasurfaces to obtain simultaneous manipulations of electromagnetic waves and present harmonic beam steering, beam shaping, and scattering-signature control as application examples.
1088 citations
en
Medicine, Computer Science
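The harmonic manipulation underlying this scheme can be sketched numerically (an idealized, lossless 1-bit model with a toy coding sequence of our own, not the optimized sequence of the paper): the complex amplitude of the k-th harmonic radiated by a time-modulated element is the k-th Fourier coefficient of its periodic reflection-coefficient sequence.

```python
import numpy as np

# 1-bit time-coding sequence: the reflection phase toggles between 0 and pi
# over L time slots per modulation period (idealized, lossless element).
L = 8
phases = np.array([0, 0, 0, 0, np.pi, np.pi, np.pi, np.pi])
gamma = np.exp(1j * phases)          # reflection coefficient per time slot

# Complex amplitude of the k-th harmonic:
# a_k = (1/L) * sum_n gamma_n * exp(-i 2 pi k n / L)
def harmonic_amplitude(gamma, k):
    n = np.arange(len(gamma))
    return np.mean(gamma * np.exp(-2j * np.pi * k * n / len(gamma)))

powers = {k: abs(harmonic_amplitude(gamma, k)) ** 2 for k in range(-3, 4)}
# For this 50% duty-cycle 0/pi sequence the carrier (k = 0) is suppressed
# and the power is redistributed into the odd harmonics.
```

Choosing different sequences per element (space-time coding) then steers and shapes each harmonic beam independently, which is the degree of freedom the paper exploits.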
Continuous Deep Q-Learning with Model-based Acceleration
S. Gu, T. Lillicrap, I. Sutskever
et al.
Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. The NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.
1064 citations
en
Computer Science
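The NAF construction named above is usually written as a quadratic advantage around a state-dependent mean action, which makes the Q-maximizing action available in closed form. A minimal numpy sketch with toy dimensions (the scalar v, mean action mu, and lower-triangular matrix L stand in for network outputs):

```python
import numpy as np

# NAF-style action value: Q(s, a) = V(s) - 0.5 * (a - mu)^T P (a - mu),
# with P = L L^T built from a lower-triangular L so that P is positive
# semi-definite and argmax_a Q(s, a) = mu(s) holds in closed form.
def naf_q(v, mu, L_tril, a):
    P = L_tril @ L_tril.T            # positive semi-definite precision matrix
    d = a - mu
    return v - 0.5 * d @ P @ d

v, mu = 3.0, np.array([0.5, -0.2])   # toy "network outputs" for one state
L_tril = np.array([[1.0, 0.0],
                   [0.3, 0.8]])
q_at_mu = naf_q(v, mu, L_tril, mu)   # maximized exactly at a = mu
q_off = naf_q(v, mu, L_tril, mu + 0.1)
```

Because the greedy action is simply mu(s), experience replay and the standard Q-learning target carry over to continuous actions without an inner-loop maximization.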
Analysis of D-Q Small-Signal Impedance of Grid-Tied Inverters
B. Wen, D. Boroyevich, R. Burgos
et al.
1074 citations
en
Engineering
Deep Reinforcement Learning with Double Q-Learning
H. V. Hasselt, A. Guez, David Silver
The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
8861 citations
en
Computer Science
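The decoupling the abstract describes is commonly written as: select the next action with the online network, evaluate it with the target network. A toy numpy sketch (the arrays stand in for network outputs; the numbers are hypothetical):

```python
import numpy as np

# Double DQN target: the online network selects the argmax action, the
# target network evaluates it. Decoupling selection from evaluation
# reduces the overestimation bias of the plain max-based target.
def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    a_star = int(np.argmax(q_online_next))   # selection: online network
    bootstrap = q_target_next[a_star]        # evaluation: target network
    return reward + gamma * (1.0 - done) * bootstrap

q_online_next = np.array([1.0, 2.5, 0.3])    # hypothetical Q estimates
q_target_next = np.array([3.0, 2.0, 0.2])
y = double_dqn_target(reward=1.0, gamma=0.99,
                      q_online_next=q_online_next,
                      q_target_next=q_target_next, done=0.0)
# y = 1.0 + 0.99 * 2.0 = 2.98, below the max-based target 1.0 + 0.99 * 3.0
```

When the two networks disagree about the best action, the max-based target chases the largest (possibly spurious) estimate, while the double estimator does not.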
Deep Recurrent Q-Learning for Partially Observable MDPs
Matthew J. Hausknecht, P. Stone
Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer, and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.
1881 citations
en
Mathematics, Computer Science
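A toy stand-in (ours, not the paper's LSTM-based DRQN) for the mechanism the abstract describes: under flickering observations, even a trivial recurrent state that holds a decaying copy of the last visible frame retains information that a single-frame observer loses.

```python
import numpy as np

# Toy illustration (not the paper's architecture) of why recurrence helps
# under flickering observations: the "screen" is blanked with probability
# 0.5 each step; a recurrent state keeps a decaying copy of the last
# visible frame, while a feed-forward observer sees only the current one.
rng = np.random.default_rng(0)
true_frame = np.ones(16)                 # constant underlying screen
steps, p_visible, decay = 200, 0.5, 0.9

h = np.zeros(16)                         # recurrent state
err_recurrent, err_feedforward = [], []
for _ in range(steps):
    visible = rng.random() < p_visible
    obs = true_frame if visible else np.zeros(16)
    h = obs if visible else decay * h    # hold-and-decay recurrent update
    err_recurrent.append(float(np.abs(h - true_frame).mean()))
    err_feedforward.append(float(np.abs(obs - true_frame).mean()))
# The recurrent observer's mean error stays well below the feed-forward one.
```

An LSTM learns a far richer update than this hold-and-decay rule, but the comparison captures why a single-frame network cannot cope with blanked screens while a recurrent one can.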
Intangible Capital and the Investment-q Relation
R. Peters, Lucian A. Taylor
Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV
V. Khachatryan, A. Sirunyan, A. Tumasyan
et al.
Improved jet energy scale corrections, based on a data sample corresponding to an integrated luminosity of 19.7 inverse-femtobarns collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 8 TeV, are presented. The corrections as a function of pseudorapidity eta and transverse momentum pT are extracted from data and simulated events combining several channels and methods. They account successively for the effects of pileup, uniformity of the detector response, and residual data-simulation jet energy scale differences. Further corrections, depending on the jet flavor and distance parameter (jet size) R, are also presented. The jet energy resolution is measured in data and simulated events and is studied as a function of pileup, jet size, and jet flavor. Typical jet energy resolutions at the central rapidities are 15-20% at 30 GeV, about 10% at 100 GeV, and 5% at 1 TeV. The studies exploit events with dijet topology, as well as photon+jet, Z+jet and multijet events. Several new techniques are used to account for the various sources of jet energy scale corrections, and a full set of uncertainties, and their correlations, are provided. The final uncertainties on the jet energy scale are below 3% across the phase space considered by most analyses (pT > 30 GeV and abs(eta) < 5.0). In the barrel region, an uncertainty below 1% for pT > 30 GeV is reached, when excluding the jet flavor uncertainties, which are provided separately for different jet flavors. A new benchmark for jet energy scale determination at hadron colliders is achieved with 0.32% uncertainty for jets with pT of the order of 165-330 GeV, and abs(eta) < 0.8.
Some q‐Rung Orthopair Fuzzy Aggregation Operators and their Applications to Multiple‐Attribute Decision Making
Peide Liu, Peng Wang
The q‐rung orthopair fuzzy sets (q‐ROFs) are an important way to express uncertain information, and they are superior to the intuitionistic fuzzy sets and the Pythagorean fuzzy sets. Their eminent characteristic is that the sum of the qth power of the membership degree and the qth power of the degree of non‐membership is equal to or less than 1, so the space of uncertain information they can describe is broader. In this setting, we propose the q‐rung orthopair fuzzy weighted averaging operator and the q‐rung orthopair fuzzy weighted geometric operator to deal with the decision information, and some of their properties are proved. Further, based on these operators, we present two new methods to deal with multi‐attribute decision making problems under the fuzzy environment. Finally, we use some practical examples to illustrate the validity and superiority of the proposed methods by comparing them with other existing methods.
819 citations
en
Mathematics, Computer Science
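The weighted averaging operator proposed here has a standard closed form; a small sketch (variable names ours) aggregating q-ROF pairs (mu_j, nu_j) with mu_j^q + nu_j^q <= 1 under weights summing to 1:

```python
# q-ROFWA operator sketch: aggregate q-rung orthopair fuzzy numbers
# (mu_j, nu_j), each satisfying mu_j^q + nu_j^q <= 1, with weights w_j.
# The aggregate membership is (1 - prod_j (1 - mu_j^q)^{w_j})^{1/q} and
# the aggregate non-membership is prod_j nu_j^{w_j}.
def q_rofwa(pairs, weights, q):
    prod_mu = 1.0
    prod_nu = 1.0
    for (mu, nu), w in zip(pairs, weights):
        assert mu ** q + nu ** q <= 1 + 1e-12, "not a valid q-ROF number"
        prod_mu *= (1.0 - mu ** q) ** w
        prod_nu *= nu ** w
    return (1.0 - prod_mu) ** (1.0 / q), prod_nu

pairs = [(0.8, 0.6), (0.5, 0.7)]   # both valid for q = 3
weights = [0.6, 0.4]               # weights sum to 1
mu_agg, nu_agg = q_rofwa(pairs, weights, q=3)
# The aggregate is again a valid q-ROF number: mu_agg^3 + nu_agg^3 <= 1.
```

Note that the example pairs are invalid as intuitionistic fuzzy numbers (0.8 + 0.6 > 1) and as Pythagorean ones (0.64 + 0.36 + ... > 1 for the second pair), which is exactly the broader expressiveness the abstract claims for larger q.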
From Q Fever to Coxiella burnetii Infection: a Paradigm Change
C. Eldin, C. Melenotte, O. Mediannikov
et al.
794 citations
en
Medicine, Biology
Quantization
Yun Q. Shi, Huifang Sun
Let {f_i}_{i=1}^N be a set of equi-contractive similitudes on R^1 satisfying the finite-type condition. We study the asymptotic quantization error for the self-similar measures μ associated with {f_i}_{i=1}^N and a positive probability vector. With a verifiable assumption, we prove that the upper and lower quantization coefficients for μ are both bounded away from zero and infinity. This can be regarded as an extension of Graf and Luschgy's result on self-similar measures with the open set condition. Our result is applicable to a significant class of self-similar measures with overlaps, including the Erdős measure, the 3-fold convolution of the classical Cantor measure, and the self-similar measures on some λ-Cantor sets.
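For readers outside this subfield, the quantities named in the abstract follow the standard Graf–Luschgy definitions (notation ours, not the abstract's): the n-th quantization error of order r for a probability measure μ, and the upper and lower quantization coefficients of order r and dimension s,

```latex
e_{n,r}(\mu) = \inf_{\substack{\alpha \subset \mathbb{R}^d \\ |\alpha| \le n}}
  \left( \int \min_{a \in \alpha} |x - a|^r \, d\mu(x) \right)^{1/r},
\qquad
\overline{Q}_r^s(\mu) = \limsup_{n \to \infty} n^{r/s}\, e_{n,r}^r(\mu),
\quad
\underline{Q}_r^s(\mu) = \liminf_{n \to \infty} n^{r/s}\, e_{n,r}^r(\mu).
```

The abstract's claim is that both coefficients are bounded away from 0 and ∞ for the stated class of self-similar measures with overlaps.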
Monolithic ultra-high-Q lithium niobate microring resonator
Mian Zhang, Cheng Wang, Rebecca Cheng
et al.
We demonstrate an ultralow loss monolithic integrated lithium niobate photonic platform consisting of dry-etched subwavelength waveguides with extracted propagation losses as low as 2.7 dB/m and microring resonators with quality factors up to 10^7.
666 citations
en
Materials Science, Physics
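The two reported figures can be related by a back-of-the-envelope check (ours, assuming a group index near 2.2 at 1550 nm, which the abstract does not state): the intrinsic quality factor implied by a propagation loss alpha is Q = 2*pi*n_g / (lambda * alpha).

```python
import math

# Consistency check: does 2.7 dB/m propagation loss imply Q ~ 10^7?
# Assumptions (not from the abstract): group index n_g ~ 2.2, wavelength
# 1550 nm. Q_intrinsic = 2*pi*n_g / (lambda * alpha).
loss_db_per_m = 2.7
alpha = loss_db_per_m * math.log(10) / 10        # power loss in 1/m
n_g, lam = 2.2, 1.55e-6                           # assumed values
q_intrinsic = 2 * math.pi * n_g / (lam * alpha)   # on the order of 1e7
```

Under these assumptions the loss figure and the quoted quality factor are mutually consistent.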
A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery
G. Aad, B. Abbott
et al.
The standard model of particle physics [1-4] describes the known fundamental particles and forces that make up our Universe, with the exception of gravity. One of the central features of the standard model is a field that permeates all of space and interacts with fundamental particles [5-9]. The quantum excitation of this field, known as the Higgs field, manifests itself as the Higgs boson, the only fundamental particle with no spin. In 2012, a particle with properties consistent with the Higgs boson of the standard model was observed by the ATLAS and CMS experiments at the Large Hadron Collider at CERN [10,11]. Since then, more than 30 times as many Higgs bosons have been recorded by the ATLAS experiment, enabling much more precise measurements and new tests of the theory. Here, on the basis of this larger dataset, we combine an unprecedented number of production and decay processes of the Higgs boson to scrutinize its interactions with elementary particles. Interactions with gluons, photons, and W and Z bosons—the carriers of the strong, electromagnetic and weak forces—are studied in detail. Interactions with three third-generation matter particles (bottom (b) and top (t) quarks, and tau leptons (τ)) are well measured and indications of interactions with a second-generation particle (muons, μ) are emerging. These tests reveal that the Higgs boson discovered ten years ago is remarkably consistent with the predictions of the theory and provide stringent constraints on many models of new phenomena beyond the standard model. Ten years after the discovery of the Higgs boson, the ATLAS experiment at CERN probes its kinematic properties with a significantly larger dataset from 2015–2018 and provides further insights on its interaction with other known particles.
491 citations
en
Physics, Medicine
GWTC-2.1: Deep extended catalog of compact binary coalescences observed by LIGO and Virgo during the first half of the third observing run
The LIGO Scientific Collaboration, R. Abbott, T. Abbott
et al.
The second Gravitational-Wave Transient Catalog reported on 39 compact binary coalescences observed by the Advanced LIGO and Advanced Virgo detectors between 1 April 2019 15:00 UTC and 1 October 2019 15:00 UTC. We present GWTC-2.1, which reports on a deeper list of candidate events observed over the same period. We analyze the final version of the strain data over this period with improved calibration and better subtraction of excess noise, which has been publicly released. We employ three matched-filter search pipelines for candidate identification, and estimate the astrophysical probability for each candidate event. While GWTC-2 used a false alarm rate threshold of 2 per year, we include in GWTC-2.1, 1201 candidates that pass a false alarm rate threshold of 2 per day. We calculate the source properties of a subset of 44 high-significance candidates that have an astrophysical probability greater than 0.5. Of these candidates, 36 have been reported in GWTC-2. If the 8 additional high-significance candidates presented here are astrophysical, the mass range of events that are unambiguously identified as binary black holes (both objects $\geq 3M_\odot$) is increased compared to GWTC-2, with total masses from $\sim 14 M_\odot$ for GW190924_021846 to $\sim 182 M_\odot$ for GW190426_190642. The primary components of two new candidate events (GW190403_051519 and GW190426_190642) fall in the mass gap predicted by pair instability supernova theory. We also expand the population of binaries with significantly asymmetric mass ratios reported in GWTC-2 by an additional two events (the mass ratio is less than $0.65$ and $0.44$ at $90\%$ probability for GW190403_051519 and GW190917_114630 respectively), and find that 2 of the 8 new events have effective inspiral spins $\chi_\mathrm{eff}>0$ (at $90\%$ credibility), while no binary is consistent with $\chi_\mathrm{eff}<0$ at the same significance.
Test of lepton universality in beauty-quark decays
R. Aaij, C. Beteta, T. Ackernley
et al.
The standard model of particle physics currently provides our best description of fundamental particles and their interactions. The theory predicts that the different charged leptons, the electron, muon and tau, have identical electroweak interaction strengths. Previous measurements have shown that a wide range of particle decays are consistent with this principle of lepton universality. This article presents evidence for the breaking of lepton universality in beauty-quark decays, with a significance of 3.1 standard deviations, based on proton–proton collision data collected with the LHCb detector at CERN’s Large Hadron Collider. The measurements are of processes in which a beauty meson transforms into a strange meson with the emission of either an electron and a positron, or a muon and an antimuon. If confirmed by future measurements, this violation of lepton universality would imply physics beyond the standard model, such as a new fundamental interaction between quarks and leptons. The Large Hadron Collider beauty collaboration reports a test of lepton flavour universality in decays of bottom mesons into strange mesons and a charged lepton pair, finding evidence of a violation of this principle postulated in the standard model.
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Gaon An, Seungyong Moon, Jang-Hyun Kim
et al.
Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.
367 citations
en
Computer Science
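The core mechanism, a min-over-ensemble (clipped) Q-target that penalizes high-uncertainty estimates, can be sketched with toy numbers (the arrays stand in for N independent Q-networks evaluated at the next state-action):

```python
import numpy as np

# Clipped Q-learning with an ensemble: the target takes the minimum over
# the ensemble, so actions whose Q-estimates disagree (high uncertainty,
# as for OOD actions) receive a more pessimistic target, roughly in
# proportion to the ensemble's spread.
def ensemble_min_target(reward, gamma, q_ensemble_next):
    return reward + gamma * float(np.min(q_ensemble_next))

q_in_dist = np.array([5.1, 4.9, 5.0, 5.05, 4.95])  # ensemble agrees
q_ood = np.array([8.0, 2.0, 6.5, 3.0, 5.5])        # ensemble disagrees
t_in = ensemble_min_target(1.0, 0.99, q_in_dist)
t_ood = ensemble_min_target(1.0, 0.99, q_ood)
# Both ensembles have a mean Q near 5, yet the min operator produces a
# much lower target for the high-variance (OOD-like) estimates.
```

Increasing the number of Q-networks strengthens this implicit penalty, which is the observation behind the paper's simple-ensemble baseline.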
Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
Xinyue Chen, Che Wang, Zijian Zhou
et al.
Using a high Update-To-Data (UTD) ratio, model-based methods have recently achieved much higher sample efficiency than previous model-free methods for continuous-action DRL benchmarks. In this paper, we introduce a simple model-free algorithm, Randomized Ensembled Double Q-Learning (REDQ), and show that its performance is just as good as, if not better than, a state-of-the-art model-based algorithm for the MuJoCo benchmark. Moreover, REDQ can achieve this performance using fewer parameters than the model-based method, and with less wall-clock run time. REDQ has three carefully integrated ingredients which allow it to achieve its high performance: (i) a UTD ratio>>1; (ii) an ensemble of Q functions; (iii) in-target minimization across a random subset of Q functions from the ensemble. Through carefully designed experiments, we provide a detailed analysis of REDQ and related model-free algorithms. To our knowledge, REDQ is the first successful model-free DRL algorithm for continuous-action spaces using a UTD ratio>>1.
360 citations
en
Computer Science
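Ingredient (iii), in-target minimization over a random subset of the ensemble, can be sketched as follows (toy Q-values of our own; M = 2 subset size as in the paper, all other numbers illustrative):

```python
import numpy as np

# REDQ-style target: instead of the min over the whole ensemble of N
# Q-estimates, take the min over a randomly sampled subset of size M.
# The bias of the target is then tunable via M.
def redq_target(reward, gamma, q_ensemble_next, m, rng):
    idx = rng.choice(len(q_ensemble_next), size=m, replace=False)
    return reward + gamma * float(np.min(q_ensemble_next[idx]))

rng = np.random.default_rng(0)
q_next = np.array([4.8, 5.2, 5.0, 4.9, 5.1, 5.3, 4.7, 5.05, 4.95, 5.15])
targets = [redq_target(1.0, 0.99, q_next, m=2, rng=rng) for _ in range(1000)]
# Each subset-of-2 target is mildly pessimistic: never below the full
# ensemble min (1 + 0.99 * 4.7) and on average below the mean-based target.
```

Randomizing the subset at every update also decorrelates the targets across gradient steps, which helps sustain the very high update-to-data ratios the abstract describes.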
IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies
Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner
et al.
Effective offline RL methods require properly handling out-of-distribution actions. Implicit Q-learning (IQL) addresses this by training a Q-function using only dataset actions through a modified Bellman backup. However, it is unclear which policy actually attains the values represented by this implicitly trained Q-function. In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor. This generalization shows how the induced actor balances reward maximization and divergence from the behavior policy, with the specific loss choice determining the nature of this tradeoff. Notably, this actor can exhibit complex and multimodal characteristics, suggesting issues with the conditional Gaussian actor fit with advantage weighted regression (AWR) used in prior methods. Instead, we propose using samples from a diffusion-parameterized behavior policy and weights computed from the critic to then importance-sample our intended policy. We introduce Implicit Diffusion Q-learning (IDQL), combining our generalized IQL critic with this policy extraction method. IDQL maintains the ease of implementation of IQL while outperforming prior offline RL methods and demonstrating robustness to hyperparameters. Code is available at https://github.com/philippe-eecs/IDQL.
253 citations
en
Computer Science
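The extraction step can be sketched as critic-weighted resampling. This is a simplification of ours: we assume candidate actions have already been drawn from the behavior policy (the paper uses a diffusion model for this), and we use a generic softmax weighting; the critic and temperature below are hypothetical.

```python
import numpy as np

# Critic-weighted policy extraction sketch: score behavior-policy samples
# with the critic, then resample in proportion to softmax weights, so the
# extracted policy concentrates on high-value in-support actions.
def extract_action(candidates, critic, rng, temperature=1.0):
    q = np.array([critic(a) for a in candidates])
    w = np.exp((q - q.max()) / temperature)   # numerically stable weights
    w /= w.sum()
    return candidates[rng.choice(len(candidates), p=w)]

def critic(a):                 # hypothetical critic, peaked at a = 0.7
    return -(a - 0.7) ** 2

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=256)     # stand-in behavior samples
picked = [extract_action(candidates, critic, rng, 0.05) for _ in range(200)]
# With a low temperature the picked actions concentrate near the critic's
# peak while never leaving the support of the behavior samples.
```

Because every returned action is literally one of the behavior samples, the extracted policy cannot query out-of-distribution actions, which is the property the abstract emphasizes.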
STCF conceptual design report (Volume 1): Physics & detector
M. Achasov, X. Ai, R. Aliberti
et al.
The super τ-charm facility (STCF) is an electron–positron collider proposed by the Chinese particle physics community. It is designed to operate in a center-of-mass energy range from 2 to 7 GeV with a peak luminosity of 0.5 × 10^35 cm^-2·s^-1 or higher. The STCF will produce a data sample about a factor of 100 larger than that of the present τ-charm factory, the BEPCII, providing a unique platform for exploring the asymmetry of matter-antimatter (charge-parity violation), in-depth studies of the internal structure of hadrons and the nature of non-perturbative strong interactions, as well as searching for exotic hadrons and physics beyond the Standard Model. The STCF project in China is under development with an extensive R&D program. This document presents the physics opportunities at the STCF, describes conceptual designs of the STCF detector system, and discusses future plans for detector R&D and physics case studies.