Results for "q-fin.PM"
Showing 20 of ~1,530,376 results · from Semantic Scholar, CrossRef
D. Declercq, M. Fossorier
M. Aguilar, D. Aisa, A. Alvino et al.
Precision measurements by the Alpha Magnetic Spectrometer on the International Space Station of the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV are presented. The electron flux and the positron flux each require a description beyond a single power-law spectrum. Both the electron flux and the positron flux change their behavior at ∼30 GeV but the fluxes are significantly different in their magnitude and energy dependence. Between 20 and 200 GeV the positron spectral index is significantly harder than the electron spectral index. The determination of the differing behavior of the spectral indices versus energy is a new observation and provides important information on the origins of cosmic-ray electrons and positrons.
Garvesh Raskutti, M. Wainwright, Bin Yu
Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ ℝ^n is an observation vector, X ∈ ℝ^{n×d} is a design matrix with d > n, β* ∈ ℝ^d is an unknown regression vector, and w ~ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0,1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in ℓ2-loss and ℓ2-prediction loss scales as Θ(R_q (log d / n)^{1−q/2}). The analysis in this paper reveals that conditions on the design matrix X enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls B_q(R_q), whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix X than optimal algorithms involving least squares over the ℓ0-ball.
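The scaling of the minimax rate can be evaluated directly; a minimal sketch of R_q (log d / n)^{1−q/2} for a few sparsity levels q, with hypothetical values for n, d, and R_q (not taken from the paper):

```python
import math

def minimax_rate(R_q: float, n: int, d: int, q: float) -> float:
    """l2-loss minimax rate R_q * (log(d)/n)^(1 - q/2), up to constants."""
    return R_q * (math.log(d) / n) ** (1.0 - q / 2.0)

# Hypothetical regime: n = 200 samples, d = 10_000 features, radius R_q = 5.
for q in (0.0, 0.5, 1.0):
    print(f"q = {q}: rate ~ {minimax_rate(5.0, 200, 10_000, q):.4f}")
```

Since log(d)/n < 1 in this regime, the rate grows with q: weaker sparsity (larger q) makes estimation harder.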
Jérémie F. Cohen, M. Chalumeau, R. Cohen et al.
Z. Hassan, Sanjay Mandal, P. Sahoo
The current interests in the universe motivate us to go beyond Einstein's General theory of relativity. One of the interesting proposals comes from a new class of teleparallel gravity named symmetric teleparallel gravity, i.e., f(Q) gravity, where the non‐metricity term Q is accountable for the fundamental interaction. The vital role of these alternative modified theories of gravity is to address these recent interests and to present a realistic cosmological model. This manuscript's main objective is to study traversable wormhole geometries in f(Q) gravity. We construct the wormhole geometries for three cases: (i) by assuming a relation between the radial and lateral pressure, (ii) considering a phantom energy equation of state (EoS), and (iii) for a specific shape function in the fundamental interaction of gravity (i.e. for a linear form of f(Q)). Besides, we discuss two wormhole geometries for a general case of f(Q) with two specific shape functions. Then, we discuss the viability of the shape functions and the stability analysis of the wormhole solutions for each case. We have found that the null energy condition (NEC) is violated by each wormhole model, which leads us to conclude that our outcomes are realistic and stable. Finally, we discuss the embedding diagrams and the volume integral quantifier to have a complete view of the wormhole geometries.
B. C. M. Ablikim, M. Achasov, P. Adlarson et al.
We report a study of the processes of e^{+}e^{-}→K^{+}D_{s}^{-}D^{*0} and K^{+}D_{s}^{*-}D^{0} based on e^{+}e^{-} annihilation samples collected with the BESIII detector operating at BEPCII at five center-of-mass energies ranging from 4.628 to 4.698 GeV with a total integrated luminosity of 3.7 fb^{-1}. An excess of events over the known contributions of the conventional charmed mesons is observed near the D_{s}^{-}D^{*0} and D_{s}^{*-}D^{0} mass thresholds in the K^{+} recoil-mass spectrum for events collected at √s = 4.681 GeV. The structure matches a mass-dependent-width Breit-Wigner line shape, whose pole mass and width are determined as (3982.5_{-2.6}^{+1.8}±2.1) MeV/c^{2} and (12.8_{-4.4}^{+5.3}±3.0) MeV, respectively. The first uncertainties are statistical and the second are systematic. The significance of the resonance hypothesis is estimated to be 5.3 σ over the contributions only from the conventional charmed mesons. This is the first candidate for a charged hidden-charm tetraquark with strangeness, decaying into D_{s}^{-}D^{*0} and D_{s}^{*-}D^{0}. However, the properties of the excess need further exploration with more statistics.
N. Dimakis, A. Paliathanasis, T. Christodoulakis
We use Dirac’s method for the quantization of constrained systems in order to quantize a spatially flat Friedmann–Lemaître–Robertson–Walker spacetime in the context of f(Q) cosmology. When the coincident gauge is considered, the resulting minisuperspace system possesses second class constraints. This distinguishes the quantization process from the typical Wheeler–DeWitt quantization, which is applied for cosmological models where only first class constraints are present (e.g. for models in general relativity or in f(R) gravity). We introduce the Dirac brackets, find appropriate canonical coordinates and then apply the canonical quantization procedure. We perform this method both in vacuum and in the presence of matter: a minimally coupled scalar field and a perfect fluid with a linear equation of state. We demonstrate that the matter content significantly changes the quantization procedure, with the perfect fluid even requiring the use of fractional quantum mechanics, in which the power of the momentum in the Hamiltonian is associated with the fractal dimension of a Lévy flight. The results of this analysis can be applied in f(T) teleparallel cosmology, since f(Q) and f(T) theories have the same degrees of freedom and the same dynamical constraints in cosmological studies.
D. Adey, F. An, A. Balantekin et al.
We report a measurement of electron antineutrino oscillation from the Daya Bay Reactor Neutrino Experiment with nearly 4 million reactor ν̄_e inverse β decay candidates observed over 1958 days of data collection. The installation of a flash analog-to-digital converter readout system and a special calibration campaign using different source enclosures reduce uncertainties in the absolute energy calibration to less than 0.5% for visible energies larger than 2 MeV. The uncertainty in the cosmogenic ^{9}Li and ^{8}He background is reduced from 45% to 30% in the near detectors. A detailed investigation of the spent nuclear fuel history improves its uncertainty from 100% to 30%. Analysis of the relative ν̄_e rates and energy spectra among detectors yields sin^{2}2θ_{13}=0.0856±0.0029 and Δm_{32}^{2}=(2.471_{-0.070}^{+0.068})×10^{-3} eV^{2} assuming the normal hierarchy, and Δm_{32}^{2}=-(2.575_{-0.070}^{+0.068})×10^{-3} eV^{2} assuming the inverted hierarchy.
Marlene E Noack, Florian Tietjens, U. Latacz-Lohmann
After three decades of orienting agriculture towards ecological and social sustainability goals, the Ukraine war catapulted productivity and supply goals back onto the political agenda. Against this background, the present study aimed to establish how farmers and food consumers envision the future of agriculture. Application of Q-Methodology revealed three opinion groups for both farmers and consumers. In conclusion, the Ukraine war has not significantly shifted the balance between old and new societal demands on agriculture. Old discrepancies in the views of farmers and the non-farming population persist. While among the farmers surveyed the group of those who adhere to “business as usual” predominates, there is no group among the consumers surveyed who share this view. Rather, there is a majority desire among consumers for the sector to continue to be aligned with sustainability goals. Security of supply is only an issue for a small proportion of the consumers surveyed.
K. Yang, D. Y. Oh, Seung Hoon Lee et al.
Optical microresonators are essential to a broad range of technologies and scientific disciplines. However, many of their applications rely on discrete devices to attain challenging combinations of ultra-low-loss performance (ultrahigh Q) and resonator design requirements. This prevents access to scalable fabrication methods for photonic integration and lithographic feature control. Indeed, finding a microfabrication bridge that connects ultrahigh-Q device functions with photonic circuits is a priority of the microcavity field. Here, an integrated resonator having a record Q factor over 200 million is presented. Its ultra-low loss and flexible cavity design bring performance to integrated systems that has been the exclusive domain of discrete silica and crystalline microcavity devices. Two distinctly different devices are demonstrated: soliton sources with electronic repetition rates and high-coherence/low-threshold Brillouin lasers. This multi-device capability and performance from a single integrated cavity platform represents a critical advance for future photonic circuits and systems. Using silicon nitride waveguides processed by plasma-enhanced chemical vapour deposition, full integration of ultrahigh-Q resonators with other photonic devices is now possible.
A. Rubano, F. Cardano, B. Piccirillo et al.
Since their first introduction in 2006, q-plates have found a constantly increasing number of uses in diverse contexts, ranging from fundamental research on complex structured light fields to more applicative innovations of established experimental techniques, passing through a variety of other emerging topics, such as, for instance, quantum information protocols based on the angular momentum of light. In this paper, we present a bird’s-eye view of the progress of this technology in recent years and offer some educated guesses on the most likely future developments.
Justin Fu, Aviral Kumar, Matthew Soh et al.
Q-learning methods represent a commonly used class of algorithms in reinforcement learning: they are generally efficient and simple, and can be combined readily with function approximators for deep reinforcement learning (RL). However, the behavior of Q-learning methods with function approximation is poorly understood, both theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning, by means of a "unit testing" framework where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error and nonstationarity, and where available, verify if trends found in oracle settings hold true with modern deep RL methods. We find that large neural network architectures have many benefits with regards to learning stability; offer several practical compensations for overfitting; and develop a novel sampling method based on explicitly compensating for function approximation error that yields fair improvement on high-dimensional continuous control domains.
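For reference, the Q-learning family the paper investigates reduces, in the tabular case, to a simple bootstrapped update. A minimal sketch on a hypothetical two-state chain MDP (this is not the paper's unit-testing framework, just the textbook update):

```python
import random

def q_learning(n_episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy 2-state chain: action 1 in state 0 moves
    to state 1; action 1 in state 1 reaches the goal (reward 1, episode ends);
    action 0 stays put with no reward."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    for _ in range(n_episodes):
        s = 0
        while True:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            if a == 0:            # stay, no reward
                s2, r, done = s, 0.0, False
            elif s == 0:          # advance along the chain
                s2, r, done = 1, 0.0, False
            else:                 # reach the goal
                s2, r, done = 1, 1.0, True
            # bootstrapped TD target: r + gamma * max_a' Q(s', a')
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s = s2
    return Q
```

With enough episodes the values approach the optimal Q*: Q[1][1] → 1 and Q[0][1] → γ·1 = 0.9, illustrating the off-policy bootstrapping whose interaction with function approximation the paper stress-tests.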
M. Ali
In this paper, two new approaches have been presented to view q‐rung orthopair fuzzy sets. In the first approach, these can be viewed as L‐fuzzy sets, whereas the second approach is based on the notion of orbits. The uncertainty index is the quantity H_A(x) = 1 − (A⁺(x))^q − (A⁻(x))^q, which remains constant for all points in an orbit. Certain operators can be defined on q‐ROF sets, which affect H_A(x) when applied to some q‐ROF sets. The operators I_δ, M_{δ,ν}, and K_{δ,ν} have been defined, and it is studied how these operators affect H_A(x) when applied to a q‐ROF set A.
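The uncertainty index is straightforward to compute; a minimal sketch, assuming a q-ROF pair of membership/non-membership grades (A⁺(x), A⁻(x)) = (μ, ν) satisfying μ^q + ν^q ≤ 1 (the example values are hypothetical, not from the paper):

```python
def uncertainty_index(mu: float, nu: float, q: int) -> float:
    """H_A(x) = 1 - mu^q - nu^q for a q-rung orthopair fuzzy pair (mu, nu)."""
    if not (0 <= mu <= 1 and 0 <= nu <= 1 and mu**q + nu**q <= 1):
        raise ValueError("not a valid q-ROF pair for this q")
    return 1 - mu**q - nu**q

# (0.8, 0.7) is invalid for q = 1 or q = 2 (0.8^2 + 0.7^2 = 1.13 > 1),
# but valid for q = 3: 0.512 + 0.343 = 0.855 <= 1.
print(uncertainty_index(0.8, 0.7, 3))  # ~0.145
```

The guard clause shows why larger q admits more pairs: raising grades in [0, 1] to a higher power shrinks their sum.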
K. Azizzadenesheli, E. Brunskill, Anima Anandkumar
We propose Bayesian Deep Q-Network (BDQN), a practical Thompson sampling based Reinforcement Learning (RL) Algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and its samples can be drawn efficiently through the Gaussian distribution. We apply our method to a wide range of Atari games in Arcade Learning Environments. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, double deep Q network DDQN.
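The closed-form BLR update described above can be sketched as follows, with hypothetical dimensions, noise scale, and prior variance (NumPy, not the authors' code); the posterior sample plays the role of the Thompson-sampled last-layer weights:

```python
import numpy as np

def blr_posterior(Phi, y, sigma2=1.0, prior_var=10.0):
    """Closed-form Bayesian linear regression posterior over last-layer weights.

    Phi: (n, d) feature matrix (penultimate-layer activations),
    y:   (n,)  regression targets (e.g. TD targets).
    Returns the posterior mean and covariance of the weight vector."""
    n, d = Phi.shape
    prec = Phi.T @ Phi / sigma2 + np.eye(d) / prior_var  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ Phi.T @ y / sigma2
    return mean, cov

# Synthetic regression problem standing in for one action's TD targets.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = Phi @ w_true + 0.1 * rng.normal(size=100)

mean, cov = blr_posterior(Phi, y, sigma2=0.01)
w_sample = rng.multivariate_normal(mean, cov)  # one Thompson sample
```

Drawing `w_sample` each episode, rather than acting on `mean`, is what produces the targeted exploration the abstract credits for BDQN's speedup over DDQN.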
Wenzhi Yang, Yi-bei Zhang, Wan-ying Wu et al.
Traditional Chinese medicine (TCM) has played a pivotal role in maintaining the health of Chinese people and is now gaining increasing acceptance across the globe. However, TCM is confronting more and more concerns with respect to its quality. The intrinsic “multicomponent and multitarget” feature of TCM necessitates the establishment of a unique quality and bioactivity evaluation system, which is different from that of Western medicine. However, TCM is investigated essentially as “herbal medicine” or “natural product”, and the pharmacopoeia quality monographs are actually chemical-markers-based, which can ensure consistency only in the assigned chemical markers but, to some extent, have deviated from basic TCM theory. A concept of “quality marker” (Q-marker), following the “property-effect-component” theory, is proposed. The establishment of Q-markers integrates multidisciplinary technologies like natural products chemistry, analytical chemistry, bionics, chemometrics, pharmacology, systems biology, and pharmacodynamics. Q-marker-based fingerprinting and multicomponent determination contribute to the construction of a more scientific quality control system for TCM. This review delineates the background, definition, and properties of Q-markers, and the associated technologies applied for their establishment. Strategies and approaches for establishing a Q-marker-based TCM quality control system are presented and highlighted with a few TCM examples.
Qinglai Wei, F. Lewis, Qiuye Sun et al.
M. Aguilar, L. Ali Cavasonza, G. Ambrosi et al.
AMS-02 is a wide-acceptance high-energy physics experiment installed on the International Space Station in May 2011 and operating continuously since then. AMS-02 is able to precisely separate light cosmic-ray nuclei (1 ≤ Z ≤ 8) with contamination of less than 10⁻³. The Boron-to-Carbon flux ratio of light cosmic-ray nuclei is a well-known observable sensitive to the propagation of cosmic rays in the Galaxy, Boron being a secondary product of the spallation of heavier primary elements, such as Carbon and Oxygen, on the interstellar medium. A precision measurement of the Boron-to-Carbon ratio in the rigidity range from 2 GV to 1.8 TV, based on 10 million events, is presented.
D. An, A. Balantekin, H. Band et al.
A measurement of electron antineutrino oscillation by the Daya Bay Reactor Neutrino Experiment is described in detail. Six 2.9-GWth nuclear power reactors of the Daya Bay and Ling Ao nuclear power facilities served as intense sources of ν̄e's. Comparison of the ν̄e rate and energy spectrum measured by antineutrino detectors far from the nuclear reactors (∼1500–1950 m) relative to detectors near the reactors (∼350–600 m) allowed a precise measurement of ν̄e disappearance. More than 2.5 million ν̄e inverse beta-decay interactions were observed, based on the combination of 217 days of operation of six antineutrino detectors (December 2011–July 2012) with a subsequent 1013 days using the complete configuration of eight detectors (October 2012–July 2015). The ν̄e rate observed at the far detectors relative to the near detectors showed a significant deficit, R=0.949±0.002(stat)±0.002(syst). The energy dependence of ν̄e disappearance showed the distinct variation predicted by neutrino oscillation. Analysis using an approximation for the three-flavor oscillation probability yielded the flavor-mixing angle sin^2 2θ_(13)=0.0841±0.0027(stat)±0.0019(syst) and the effective neutrino mass-squared difference of |Δm^2_(ee)|=(2.50±0.06(stat)±0.06(syst))×10^(−3) eV^2. Analysis using the exact three-flavor probability found Δm^2_(32)=(2.45±0.06(stat)±0.06(syst))×10^(−3) eV^2 assuming the normal neutrino mass hierarchy and Δm^2_(32)=(−2.56±0.06(stat)±0.06(syst))×10^(−3) eV^2 for the inverted hierarchy.
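The quoted parameters plug into the standard two-flavor approximation of the ν̄e survival probability, P ≈ 1 − sin²2θ13 · sin²(1.267 Δm²_ee L/E). A minimal sketch using the measured central values (the baseline and energy below are illustrative, roughly matching the far-detector regime described above):

```python
import math

def survival_prob(L_m: float, E_MeV: float,
                  sin2_2theta13: float = 0.0841,
                  dm2_ee: float = 2.50e-3) -> float:
    """Two-flavor nu_e-bar survival probability; the theta_12 term is
    omitted, a reasonable approximation at Daya Bay baselines.
    dm2_ee in eV^2, L in metres, E in MeV."""
    phase = 1.267 * dm2_ee * L_m / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Illustrative far vs near comparison at a 4 MeV prompt energy.
print(survival_prob(1650.0, 4.0))  # far detector: visible deficit
print(survival_prob(400.0, 4.0))   # near detector: deficit nearly vanishes
```

The far/near contrast is exactly the relative-rate deficit R quoted in the abstract: oscillation has barely developed at ∼400 m but is near its maximum at ∼1650 m for few-MeV antineutrinos.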
Page 13 of 76519