N. Barton
Results for "q-fin.PR"
Showing 20 of ~1352626 results · from arXiv, Semantic Scholar
D. Declercq, M. Fossorier
M. Aguilar, D. Aisa, A. Alvino et al.
Precision measurements by the Alpha Magnetic Spectrometer on the International Space Station of the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV are presented. The electron flux and the positron flux each require a description beyond a single power-law spectrum. Both the electron flux and the positron flux change their behavior at ∼30 GeV but the fluxes are significantly different in their magnitude and energy dependence. Between 20 and 200 GeV the positron spectral index is significantly harder than the electron spectral index. The determination of the differing behavior of the spectral indices versus energy is a new observation and provides important information on the origins of cosmic-ray electrons and positrons.
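The spectral index discussed in this abstract is the local log-log slope of the flux, Φ(E) ∝ E^γ; a "harder" spectrum means a larger (less negative) γ. A minimal sketch of how an index is read off between two energies (function name and numbers are illustrative, not AMS data):

```python
import math

def spectral_index(E1, flux1, E2, flux2):
    """Local spectral index gamma, assuming flux ~ E**gamma between two energies."""
    return math.log(flux2 / flux1) / math.log(E2 / E1)

# A pure power law with gamma = -3 between 20 and 200 GeV recovers -3 exactly
gamma = spectral_index(20.0, 20.0 ** -3, 200.0, 200.0 ** -3)
print(round(gamma, 6))  # -3.0
```

Comparing such slopes for electrons and positrons over the same energy window is what "the positron spectral index is significantly harder" quantifies.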
Garvesh Raskutti, M. Wainwright, Bin Yu
Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ ℝ^n is an observation vector, X ∈ ℝ^{n×d} is a design matrix with d > n, β* ∈ ℝ^d is an unknown regression vector, and w ~ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0,1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in ℓ2-loss and ℓ2-prediction loss scales as Θ(R_q ((log d)/n)^{1−q/2}). The analysis in this paper reveals that conditions on the design matrix X enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls B_q(R_q), whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix X than optimal algorithms involving least squares over the ℓ0-ball.
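The rate in this abstract, Θ(R_q ((log d)/n)^{1−q/2}), can be evaluated directly; the helper below is a sketch of that scaling (the function name and example sizes are mine, and constants are suppressed, as in the Θ(·) statement):

```python
import math

def minimax_rate(R_q, q, n, d):
    """Minimax l2-rate scaling R_q * ((log d)/n)**(1 - q/2), up to constants."""
    return R_q * ((math.log(d) / n) ** (1.0 - q / 2.0))

# q = 0 (exact sparsity, R_0 = number of nonzeros s) recovers the familiar
# s * log(d) / n scaling; larger q gives a slower rate.
s, n, d = 10, 1000, 5000
print(minimax_rate(s, 0.0, n, d))    # = 10 * log(5000) / 1000
print(minimax_rate(1.0, 1.0, n, d))  # slower decay at q = 1
```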
M. Aguilar, D. Aisa, B. Alpat et al.
Knowledge of the precise rigidity dependence of the helium flux is important in understanding the origin, acceleration, and propagation of cosmic rays. A precise measurement of the helium flux in primary cosmic rays with rigidity (momentum/charge) from 1.9 GV to 3 TV based on 50 million events is presented and compared to the proton flux. The detailed variation with rigidity of the helium flux spectral index is presented for the first time. The spectral index progressively hardens at rigidities larger than 100 GV. The rigidity dependence of the helium flux spectral index is similar to that of the proton spectral index though the magnitudes are different. Remarkably, the spectral index of the proton to helium flux ratio increases with rigidity up to 45 GV and then becomes constant; the flux ratio above 45 GV is well described by a single power law.
C. Kelstrup, Dorte B. Bekker-Jensen, T. Arrey et al.
Jérémie F. Cohen, M. Chalumeau, R. Cohen et al.
M. Ablikim, M. Achasov, P. Adlarson et al.
We report a study of the processes of e^{+}e^{-}→K^{+}D_{s}^{-}D^{*0} and K^{+}D_{s}^{*-}D^{0} based on e^{+}e^{-} annihilation samples collected with the BESIII detector operating at BEPCII at five center-of-mass energies ranging from 4.628 to 4.698 GeV with a total integrated luminosity of 3.7 fb^{-1}. An excess of events over the known contributions of the conventional charmed mesons is observed near the D_{s}^{-}D^{*0} and D_{s}^{*-}D^{0} mass thresholds in the K^{+} recoil-mass spectrum for events collected at √s = 4.681 GeV. The structure matches a mass-dependent-width Breit-Wigner line shape, whose pole mass and width are determined as (3982.5_{-2.6}^{+1.8}±2.1) MeV/c^{2} and (12.8_{-4.4}^{+5.3}±3.0) MeV, respectively. The first uncertainties are statistical and the second are systematic. The significance of the resonance hypothesis is estimated to be 5.3 σ over the contributions only from the conventional charmed mesons. This is the first candidate for a charged hidden-charm tetraquark with strangeness, decaying into D_{s}^{-}D^{*0} and D_{s}^{*-}D^{0}. However, the properties of the excess need further exploration with more statistics.
N. Dimakis, A. Paliathanasis, T. Christodoulakis
We use Dirac's method for the quantization of constrained systems in order to quantize a spatially flat Friedmann–Lemaître–Robertson–Walker spacetime in the context of f(Q) cosmology. When the coincident gauge is considered, the resulting minisuperspace system possesses second class constraints. This distinguishes the quantization process from the typical Wheeler–DeWitt quantization, which is applied for cosmological models where only first class constraints are present (e.g. for models in general relativity or in f(R) gravity). We introduce the Dirac brackets, find appropriate canonical coordinates and then apply the canonical quantization procedure. We perform this method both in vacuum and in the presence of matter: a minimally coupled scalar field and a perfect fluid with a linear equation of state. We demonstrate that the matter content significantly changes the quantization procedure, with the perfect fluid even requiring the use of fractional quantum mechanics, in which the power of the momentum in the Hamiltonian is associated with the fractal dimension of a Lévy flight. The results of this analysis can be applied in f(T) teleparallel cosmology, since f(Q) and f(T) theories have the same degrees of freedom and the same dynamical constraints in cosmological studies.
D. Adey, F. An, A. Balantekin et al.
We report a measurement of electron antineutrino oscillation from the Daya Bay Reactor Neutrino Experiment with nearly 4 million reactor ν̄_e inverse β decay candidates observed over 1958 days of data collection. The installation of a flash analog-to-digital converter readout system and a special calibration campaign using different source enclosures reduce uncertainties in the absolute energy calibration to less than 0.5% for visible energies larger than 2 MeV. The uncertainty in the cosmogenic ^{9}Li and ^{8}He background is reduced from 45% to 30% in the near detectors. A detailed investigation of the spent nuclear fuel history improves its uncertainty from 100% to 30%. Analysis of the relative ν̄_e rates and energy spectra among detectors yields sin^{2}2θ_{13}=0.0856±0.0029 and Δm_{32}^{2}=(2.471_{-0.070}^{+0.068})×10^{-3} eV^{2} assuming the normal hierarchy, and Δm_{32}^{2}=-(2.575_{-0.070}^{+0.068})×10^{-3} eV^{2} assuming the inverted hierarchy.
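For context, the reported parameters enter the standard two-flavor ν̄_e survival probability P ≈ 1 − sin²2θ13 · sin²(1.267 · Δm² · L/E), with L in km and E in GeV. The sketch below evaluates it with the best-fit values above, using the reported Δm²32 as a proxy for the effective splitting and neglecting the small Δm²21 term; the baseline and energy are illustrative choices of mine, not the experiment's configuration:

```python
import math

def survival_prob(sin2_2theta13, dm2_eV2, L_km, E_GeV):
    """Two-flavor nu_e-bar survival probability (Delta m^2_21 term neglected).
    The factor 1.267 converts (eV^2 * km / GeV) into the oscillation phase."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Best-fit normal-hierarchy values, at an illustrative ~1.6 km baseline, 4 MeV
p = survival_prob(0.0856, 2.471e-3, 1.6, 0.004)
print(p)
```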
Marlene E Noack, Florian Tietjens, U. Latacz-Lohmann
After three decades of orienting agriculture towards ecological and social sustainability goals, the Ukraine war catapulted productivity and supply goals back onto the political agenda. Against this background, the present study aimed to establish how farmers and food consumers envision the future of agriculture. Application of Q-Methodology revealed three opinion groups for both farmers and consumers. In conclusion, the Ukraine war has not significantly shifted the balance between old and new societal demands on agriculture. Old discrepancies in the views of farmers and the non-farming population persist. While among the farmers surveyed the group of those who adhere to “business as usual” predominates, there is no group among the consumers surveyed who share this view. Rather, there is a majority desire among consumers for the sector to continue to be aligned with sustainability goals. Security of supply is only an issue for a small proportion of the consumers surveyed.
K. Yang, D. Y. Oh, Seung Hoon Lee et al.
Optical microresonators are essential to a broad range of technologies and scientific disciplines. However, many of their applications rely on discrete devices to attain challenging combinations of ultra-low-loss performance (ultrahigh Q) and resonator design requirements. This prevents access to scalable fabrication methods for photonic integration and lithographic feature control. Indeed, finding a microfabrication bridge that connects ultrahigh-Q device functions with photonic circuits is a priority of the microcavity field. Here, an integrated resonator having a record Q factor over 200 million is presented. Its ultra-low-loss and flexible cavity design brings performance to integrated systems that has been the exclusive domain of discrete silica and crystalline microcavity devices. Two distinctly different devices are demonstrated: soliton sources with electronic repetition rates and high-coherence/low-threshold Brillouin lasers. This multi-device capability and performance from a single integrated cavity platform, achieved using silicon nitride waveguides processed by plasma-enhanced chemical vapour deposition, represents a critical advance for future photonic circuits and systems.
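A Q factor translates directly into a resonance linewidth Δν = ν/Q and a photon lifetime τ = Q/ω. A quick check of what Q = 200 million implies at an assumed telecom wavelength of 1550 nm (the wavelength is my assumption, not stated in the abstract):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def linewidth_hz(Q, wavelength_m):
    """Resonance linewidth (FWHM) implied by quality factor Q at a wavelength."""
    nu = C / wavelength_m  # optical carrier frequency
    return nu / Q

def photon_lifetime_s(Q, wavelength_m):
    """Cavity photon lifetime tau = Q / omega."""
    omega = 2.0 * math.pi * C / wavelength_m
    return Q / omega

lw = linewidth_hz(2e8, 1550e-9)       # ~1 MHz-scale linewidth
tau = photon_lifetime_s(2e8, 1550e-9) # ~165 ns photon lifetime
print(lw, tau)
```

Sub-MHz-scale linewidths of this kind are what enable the low-threshold Brillouin lasing and soliton generation the abstract mentions.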
A. Rubano, F. Cardano, B. Piccirillo et al.
Since their first introduction in 2006, q-plates have found a constantly increasing number of uses in diverse contexts, ranging from fundamental research on complex structured light fields to more applicative innovations of established experimental techniques, passing through a variety of other emerging topics, such as, for instance, quantum information protocols based on the angular momentum of light. In this paper, we present a bird’s-eye view of the progress of this technology in recent years and offer some educated guesses on the most likely future developments.
C. Schulze, B. Matzdorf
Agri-environmental climate measures (AECM) are considered a promising tool to achieve environmental policy goals. Not only farmers but also policy administrators and intermediaries are important actors whose attitudes and actions drive the success of these measures. To follow the idea of better stakeholder participation in the design of policy instruments, we analyse stakeholder viewpoints on the contract design of AECM. We apply Q methodology with 25 individuals from Brandenburg, Germany, who are from the farmer, policy administrator and intermediary domains. We identify three distinct attitudinal profiles, the “planners”, the “cooperators” and the “individualists”, which do not correspond to the three individual stakeholder groups. The results provide evidence that general differences in the viewpoints of policy designers and implementers on the one hand and farmers on the other hand are not a source of potential institutional mismatch. We further use the attitudinal profiles to develop three types of policy programmes with slightly different underlying rationalities. Policymakers could use such an approach to better develop target group-specific (sub)programmes in parallel. Our research strengthens the argument that multiple stakeholders should be involved in co-designing conservation measures. Moreover, behavioural factors should be considered in policy making processes.
Justin Fu, Aviral Kumar, Matthew Soh et al.
Q-learning methods represent a commonly used class of algorithms in reinforcement learning: they are generally efficient and simple, and can be combined readily with function approximators for deep reinforcement learning (RL). However, the behavior of Q-learning methods with function approximation is poorly understood, both theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning, by means of a "unit testing" framework where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error and nonstationarity, and where available, verify whether trends found in oracle settings hold true with modern deep RL methods. We find that large neural network architectures have many benefits with regard to learning stability; we offer several practical compensations for overfitting; and we develop a novel sampling method, based on explicitly compensating for function approximation error, that yields fair improvement on high-dimensional continuous control domains.
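For reference, the update these diagnostics probe is the basic Q-learning rule Q(s,a) ← Q(s,a) + α(r + γ·max_b Q(s′,b) − Q(s,a)). The sketch below is the tabular version only, not the paper's unit-testing framework or its function-approximation setting:

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update; unseen state-action pairs default to 0."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    td_target = r + gamma * best_next            # bootstrapped target
    td_error = td_target - Q.get((s, a), 0.0)    # temporal-difference error
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q[(s, a)]

# Single transition with reward 1.0; all next-state values are still 0
Q = {}
print(q_learning_step(Q, 0, 0, 1.0, 1, actions=[0, 1]))  # 0.1 = alpha * reward
```

With function approximation, the table lookup is replaced by a network, and the max in the target is exactly where the errors studied in the paper enter.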
M. Ali
In this paper, two new approaches are presented to view q-rung orthopair fuzzy sets. In the first approach, these can be viewed as L-fuzzy sets, whereas the second approach is based on the notion of orbits. The uncertainty index is the quantity H_A(x) = 1 − (A⁺(x))^q − (A⁻(x))^q, which remains constant for all points in an orbit. Certain operators can be defined on q-ROF sets, which affect H_A(x) when applied to some q-ROF sets. The operators I_δ, M_{δ,ν}, and K_{δ,ν} have been defined, and it is studied how these operators affect H_A(x) when applied to some q-ROF set A.
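The uncertainty index above is straightforward to compute; a minimal sketch (variable names are mine), enforcing the q-ROF admissibility condition μ^q + ν^q ≤ 1 on the membership/non-membership pair:

```python
def uncertainty_index(mu, nu, q):
    """H_A(x) = 1 - mu**q - nu**q for a q-rung orthopair fuzzy pair
    (mu = membership A+, nu = non-membership A-); requires mu**q + nu**q <= 1."""
    assert mu ** q + nu ** q <= 1.0 + 1e-12, "pair not admissible for this q"
    return 1.0 - mu ** q - nu ** q

# q = 3: (0.9, 0.6) is admissible since 0.9**3 + 0.6**3 = 0.945 <= 1
h = uncertainty_index(0.9, 0.6, 3)
print(h)
```

Note that (0.9, 0.6) would be inadmissible as an ordinary intuitionistic pair (q = 1), which is the motivation for larger q.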
K. Azizzadenesheli, E. Brunskill, Anima Anandkumar
We propose Bayesian Deep Q-Network (BDQN), a practical Thompson-sampling-based reinforcement learning (RL) algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and whose samples can be drawn efficiently from the Gaussian distribution. We apply our method to a wide range of Atari games in the Arcade Learning Environment. Since BDQN carries out more efficient exploration, it reaches higher rewards substantially faster than a key baseline, the double deep Q-network (DDQN).
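The core mechanism, closed-form Bayesian linear regression on last-layer features followed by posterior (Thompson) sampling, can be sketched as below. This is a toy illustration with made-up features and hyperparameters, not the BDQN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def blr_posterior(Phi, y, sigma2=1.0, prior_var=10.0):
    """Closed-form Gaussian posterior N(mean, cov) over linear weights,
    given an (n x d) feature matrix Phi and targets y of length n."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / sigma2 + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / sigma2
    return mean, cov

def thompson_action(phi_per_action, mean, cov):
    """Sample one weight vector from the posterior, then act greedily on it."""
    w = rng.multivariate_normal(mean, cov)
    return int(np.argmax(phi_per_action @ w))

# Toy data: targets reward the first feature direction
Phi = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]])
y = np.array([1.0, 1.0, 0.0])
mean, cov = blr_posterior(Phi, y)
a = thompson_action(np.array([[1.0, 0.0], [0.0, 1.0]]), mean, cov)
print(mean, a)
```

Because the posterior has a closed form, exploration comes almost for free: each action selection just draws one Gaussian sample instead of retraining anything.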
Wenzhi Yang, Yi-bei Zhang, Wan-ying Wu et al.
Traditional Chinese medicine (TCM) has played a pivotal role in maintaining the health of Chinese people and is now gaining increasing acceptance around the globe. However, TCM faces growing concerns about its quality. The intrinsic “multicomponent and multitarget” feature of TCM necessitates the establishment of a unique quality and bioactivity evaluation system, different from that of Western medicine. However, TCM is investigated essentially as “herbal medicine” or “natural product”, and the pharmacopoeia quality monographs are actually chemical-marker-based; they can ensure consistency only in the assigned chemical markers and, to some extent, have deviated from basic TCM theory. A concept of “quality marker” (Q-marker), following the “property-effect-component” theory, is proposed. The establishment of Q-markers integrates multidisciplinary technologies such as natural products chemistry, analytical chemistry, bionics, chemometrics, pharmacology, systems biology, and pharmacodynamics. Q-marker-based fingerprinting and multicomponent determination contribute to the construction of a more scientific quality control system for TCM. This review delineates the background, definition, and properties of Q-markers and the associated technologies applied for their establishment. Strategies and approaches for establishing a Q-marker-based TCM quality control system are presented and highlighted with a few TCM examples.
Colin Turfus, Aurelio Romero-Bermúdez
We extend the short rate model of Turfus and Romero-Bermúdez [2021] to facilitate accurate arbitrage-free analytic pricing of SOFR, SONIA or ESTR caplets, i.e. options on backward-looking compounded rates payments, in a manner consistent with the smile and skew levels observed in the market. These caplet pricing formulae and corresponding LIBOR or term-rate caplet results are translated into effective variance (implied volatility) formulae, which are seen to be of a particularly simple form. They show that the model is essentially equivalent to imposing on a Hull-White model an effective variance which is a quadratic function of the moneyness parameter (rather than a constant) for any given maturity. Results are also illustrated graphically.
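The abstract's key observation, an effective variance that is quadratic in moneyness for each maturity, can be written as a one-liner. The parameterization and coefficient values below are illustrative placeholders of mine, not the paper's formulae:

```python
def effective_variance(m, sigma0_sq, a, b):
    """Effective caplet variance as a quadratic in moneyness m:
    sigma0_sq * (1 + a*m + b*m**2). In practice a (skew) and b (smile
    curvature) would be calibrated to market quotes; here they are placeholders."""
    return sigma0_sq * (1.0 + a * m + b * m * m)

# At the money (m = 0) the base variance is recovered; away from it,
# the linear term tilts the smile and the quadratic term bends it.
print(effective_variance(0.0, 0.04, a=-0.1, b=1.0))  # 0.04
print(effective_variance(0.2, 0.04, a=-0.1, b=1.0))
```

In this reading, the model is a Hull-White model whose flat variance is replaced by this moneyness-dependent quadratic for each maturity, which is what makes the closed-form caplet prices consistent with observed smile and skew.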
Page 14 of 67632