Results for "q-bio.TO"
Showing 20 of ~1,623,934 results · from CrossRef, Semantic Scholar
D. Declercq, M. Fossorier
M. Aguilar, D. Aisa, A. Alvino et al.
Precision measurements by the Alpha Magnetic Spectrometer on the International Space Station of the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV are presented. The electron flux and the positron flux each require a description beyond a single power-law spectrum. Both the electron flux and the positron flux change their behavior at ∼30 GeV, but the fluxes differ significantly in magnitude and energy dependence. Between 20 and 200 GeV the positron spectral index is significantly harder than the electron spectral index. The determination of the differing behavior of the spectral indices versus energy is a new observation and provides important information on the origins of cosmic-ray electrons and positrons.
Garvesh Raskutti, M. Wainwright, Bin Yu
Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix with d > n, β* ∈ R^d is an unknown regression vector, and w ~ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0,1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in ℓ2-loss and ℓ2-prediction loss scales as Θ(R_q ((log d)/n)^{1−q/2}). The analysis in this paper reveals that conditions on the design matrix X enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls B_q(R_q), whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix X than optimal algorithms involving least squares over the ℓ0-ball.
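The central rate in this abstract survives extraction poorly; as a plain transcription of its claim (no new analysis), it can be typeset as:

```latex
% Minimax rate for estimating \beta^* over the \ell_q-ball \mathbb{B}_q(R_q)
\min_{\hat\beta}\; \max_{\beta^* \in \mathbb{B}_q(R_q)}
  \mathbb{E}\,\bigl\|\hat\beta - \beta^*\bigr\|_2^2
  \;\asymp\; R_q \left( \frac{\log d}{n} \right)^{1 - q/2},
  \qquad q \in [0,1].
```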
Jérémie F. Cohen, M. Chalumeau, R. Cohen et al.
N. Frusciante
We investigate the impact on cosmological observables of f(Q)-gravity, a specific class of modified gravity models in which gravity is described by the nonmetricity scalar, Q. In particular we focus on a specific model which is indistinguishable from the Λ-cold-dark-matter (ΛCDM) model at the background level, while showing peculiar and measurable signatures at the linear perturbation level. These are attributed to a time-dependent Planck mass and are regulated by a single dimensionless parameter, α. In comparison to the ΛCDM model, we find for positive values of α a suppressed matter power spectrum and lensing effect on the cosmic microwave background (CMB) angular power spectrum and an enhanced integrated-Sachs-Wolfe tail of the CMB temperature anisotropies. The opposite behaviors are present when the α parameter is negative. We also investigate the modified gravitational-wave (GW) propagation and show the prediction of the GW luminosity distance compared to the standard electromagnetic one. Finally, we infer the accuracy on the free parameter of the model with standard sirens at future GW detectors.
Z. Hassan, Sanjay Mandal, P. Sahoo
The current interests in the universe motivate us to go beyond Einstein's general theory of relativity. One interesting proposal comes from a new class of teleparallel gravity named symmetric teleparallel gravity, i.e., f(Q) gravity, where the non-metricity term Q is accountable for the fundamental interaction. The vital role of these alternative modified theories of gravity is to address these recent interests and to present a realistic cosmological model. This manuscript's main objective is to study traversable wormhole geometries in f(Q) gravity. We construct the wormhole geometries for three cases: (i) by assuming a relation between the radial and lateral pressure, (ii) by considering a phantom energy equation of state (EoS), and (iii) for a specific shape function in the fundamental interaction of gravity (i.e., for a linear form of f(Q)). Besides, we discuss two wormhole geometries for a general case of f(Q) with two specific shape functions. Then, we discuss the viability of the shape functions and the stability analysis of the wormhole solutions for each case. We find that the null energy condition (NEC) is violated in each wormhole model, from which we conclude that our outcomes are realistic and stable. Finally, we discuss the embedding diagrams and the volume integral quantifier to give a complete view of the wormhole geometries.
Q. Liu, Q. Ma, Gaoqiang Chen et al.
Abstract Friction stir processing (FSP) is applied to modify the surface microstructure of cast AZ91 magnesium alloy. Electrochemical and hydrogen evolution measurements reveal that the corrosion rate of the processed alloy in 3.5 wt% NaCl solution is significantly decreased. This is mainly attributed to the alteration of the corrosion process induced by the modification of the morphology and distribution of the β-Mg17Al12 phase via FSP. We report for the first time that the formation of a compact and continuous β-phase layer on the FSPed surface, owing to the segregation of fine β phase, effectively enhances the stability and passivity of the corrosion product film.
D. Adey, F. An, A. Balantekin et al.
We report a measurement of electron antineutrino oscillation from the Daya Bay Reactor Neutrino Experiment with nearly 4 million reactor ν̄e inverse β-decay candidates observed over 1958 days of data collection. The installation of a flash analog-to-digital converter readout system and a special calibration campaign using different source enclosures reduce uncertainties in the absolute energy calibration to less than 0.5% for visible energies larger than 2 MeV. The uncertainty in the cosmogenic ⁹Li and ⁸He background is reduced from 45% to 30% in the near detectors. A detailed investigation of the spent nuclear fuel history improves its uncertainty from 100% to 30%. Analysis of the relative ν̄e rates and energy spectra among detectors yields sin²2θ₁₃ = 0.0856 ± 0.0029 and Δm²₃₂ = (2.471 +0.068/−0.070) × 10⁻³ eV² assuming the normal hierarchy, and Δm²₃₂ = −(2.575 +0.068/−0.070) × 10⁻³ eV² assuming the inverted hierarchy.
N. Dimakis, A. Paliathanasis, T. Christodoulakis
We use Dirac’s method for the quantization of constrained systems in order to quantize a spatially flat Friedmann–Lemaître–Robertson–Walker spacetime in the context of f(Q) cosmology. When the coincident gauge is considered, the resulting minisuperspace system possesses second class constraints. This distinguishes the quantization process from the typical Wheeler–DeWitt quantization, which is applied for cosmological models where only first class constraints are present (e.g. for models in general relativity or in f(R) gravity). We introduce the Dirac brackets, find appropriate canonical coordinates and then apply the canonical quantization procedure. We perform this method both in vacuum and in the presence of matter: a minimally coupled scalar field and a perfect fluid with a linear equation of state. We demonstrate that the matter content significantly changes the quantization procedure, with the perfect fluid even requiring the use of fractional quantum mechanics, in which the power of the momentum in the Hamiltonian is associated with the fractal dimension of a Lévy flight. The results of this analysis can be applied in f(T) teleparallel cosmology, since f(Q) and f(T) theories have the same degrees of freedom and the same dynamical constraints in cosmological studies.
P. Kofinas, A. Dounis, G. Vouros
Abstract This study proposes a cooperative multi-agent system for managing the energy of a stand-alone microgrid. The multi-agent system learns to control the components of the microgrid so that it achieves its purposes and operates effectively, by means of a distributed, collaborative reinforcement learning method in a continuous action-state space. Stand-alone microgrids present challenges in guaranteeing electricity supply and increasing the reliability of the system under the uncertainties introduced by renewable power sources and the stochastic demand of consumers. In this article we consider a microgrid that consists of power production, power consumption and power storage units: the power production group includes a photovoltaic source, a fuel cell and a diesel generator; the power consumption group includes an electrolyzer unit, a desalination plant and a variable electrical load that represents the power consumption of a building; the power storage group includes only the battery bank. We conjecture that a distributed multi-agent system presents specific advantages for controlling microgrid components that operate in a continuous state and action space. For this purpose we propose the use of fuzzy Q-learning methods for agents representing microgrid components to act as independent learners, while sharing state variables to coordinate their behavior. Experimental results highlight both the effectiveness of individual agents in controlling system components and the effectiveness of the multi-agent system in guaranteeing electricity supply and increasing the reliability of the microgrid.
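The coordination scheme this abstract describes (independent learners that share state variables) can be illustrated with a minimal tabular Q-learning sketch. This is not the paper's method, which is fuzzy Q-learning over continuous spaces; the component names, learning rate, and reward below are invented for illustration:

```python
# Minimal sketch (hypothetical values): two independent tabular Q-learners
# observe the same shared state but each updates only its own Q-table.
ALPHA, GAMMA = 0.5, 0.9  # illustrative learning rate and discount factor

def q_update(q, state, action, reward, next_state, actions):
    """Standard Q-learning update for one independent learner."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Agents for two hypothetical microgrid components.
agents = {"battery": {}, "diesel": {}}
actions = ["charge", "idle", "discharge"]

shared_state = "low_sun_high_load"  # the shared state variables coordinate behavior
for name, q in agents.items():
    q_update(q, shared_state, "discharge", reward=1.0,
             next_state="balanced", actions=actions)
```

Each learner's table starts empty, so one update moves Q(s, a) from 0 toward the reward by a factor of ALPHA.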
K. Yang, D. Y. Oh, Seung Hoon Lee et al.
Optical microresonators are essential to a broad range of technologies and scientific disciplines. However, many of their applications rely on discrete devices to attain challenging combinations of ultra-low-loss performance (ultrahigh Q) and resonator design requirements. This prevents access to scalable fabrication methods for photonic integration and lithographic feature control. Indeed, finding a microfabrication bridge that connects ultrahigh-Q device functions with photonic circuits is a priority of the microcavity field. Here, an integrated resonator having a record Q factor over 200 million is presented. Its ultra-low loss and flexible cavity design bring performance to integrated systems that has been the exclusive domain of discrete silica and crystalline microcavity devices. Two distinctly different devices are demonstrated: soliton sources with electronic repetition rates and high-coherence/low-threshold Brillouin lasers. This multi-device capability and performance from a single integrated cavity platform represents a critical advance for future photonic circuits and systems. Using silicon nitride waveguides processed by plasma-enhanced chemical vapour deposition, full integration of ultrahigh-Q resonators with other photonic devices is now possible.
A. Rubano, F. Cardano, B. Piccirillo et al.
Since their first introduction in 2006, q-plates have found a constantly increasing number of uses in diverse contexts, ranging from fundamental research on complex structured light fields to more applicative innovations of established experimental techniques, passing through a variety of other emerging topics, such as, for instance, quantum information protocols based on the angular momentum of light. In this paper, we present a bird’s-eye view of the progress of this technology in recent years and offer some educated guesses on the most likely future developments.
M. Ali
In this paper, two new approaches have been presented to view q-rung orthopair fuzzy sets. In the first approach, these can be viewed as L-fuzzy sets, whereas the second approach is based on the notion of orbits. The uncertainty index is the quantity H_A(x) = 1 − (A⁺(x))^q − (A⁻(x))^q, which remains constant for all points in an orbit. Certain operators can be defined on q-ROF sets which affect H_A(x) when applied to some q-ROF set. The operators I_δ, M_{δ,ν}, and K_{δ,ν} have been defined, and it is studied how these operators affect H_A(x) when applied to some q-ROF set A.
K. Azizzadenesheli, E. Brunskill, Anima Anandkumar
We propose Bayesian Deep Q-Network (BDQN), a practical Thompson-sampling-based reinforcement learning (RL) algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian linear regression (BLR) model, which can be trained with fast closed-form updates and whose samples can be drawn efficiently from the Gaussian distribution. We apply our method to a wide range of Atari games in the Arcade Learning Environment. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, the double deep Q-network (DDQN).
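The last-layer BLR idea can be sketched in a toy one-dimensional form (this is not the BDQN code; the feature, noise, and prior values are hypothetical): the Gaussian posterior over the output weight has a closed form, and Thompson sampling draws a weight from it to score actions:

```python
import random

random.seed(0)

# Toy sketch: Bayesian linear regression for y = w*x + noise with a
# Gaussian prior w ~ N(0, prior_var), giving a closed-form posterior.
def blr_posterior(xs, ys, noise_var=1.0, prior_var=10.0):
    """Return (mean, var) of the Gaussian posterior over the weight w."""
    precision = sum(x * x for x in xs) / noise_var + 1.0 / prior_var
    var = 1.0 / precision
    mean = var * sum(x * y for x, y in zip(xs, ys)) / noise_var
    return mean, var

def thompson_value(x, mean, var):
    """Thompson sampling: draw a weight from the posterior, score feature x."""
    w = random.gauss(mean, var ** 0.5)
    return w * x

# Data generated by a true weight of 2.0 with almost no noise,
# so the posterior mean should land close to 2.0.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]
mean, var = blr_posterior(xs, ys, noise_var=0.01)
```

In BDQN this regression is run per action on the deep network's last-layer features; the scalar version above only shows why the updates are cheap and the samples are Gaussian.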
Wenzhi Yang, Yi-bei Zhang, Wan-ying Wu et al.
Traditional Chinese medicine (TCM) has played a pivotal role in maintaining the health of Chinese people and is now gaining increasing acceptance around the globe. However, TCM faces more and more concerns with respect to its quality. The intrinsic “multicomponent and multitarget” feature of TCM necessitates the establishment of a unique quality and bioactivity evaluation system, different from that of Western medicine. However, TCM is investigated essentially as “herbal medicine” or “natural product”, and the pharmacopoeia quality monographs are actually chemical-marker-based, which can ensure consistency only in the assigned chemical markers and, to some extent, have deviated from basic TCM theory. A concept of “quality marker” (Q-marker), following the “property-effect-component” theory, is proposed. The establishment of Q-markers integrates multidisciplinary technologies such as natural products chemistry, analytical chemistry, bionics, chemometrics, pharmacology, systems biology, and pharmacodynamics. Q-marker-based fingerprinting and multicomponent determination contribute to the construction of a more scientific quality control system for TCM. This review delineates the background, definition, and properties of the Q-marker, and the associated technologies applied for its establishment. Strategies and approaches for establishing a Q-marker-based TCM quality control system are presented and highlighted with a few TCM examples.
Qinglai Wei, F. Lewis, Qiuye Sun et al.
M. Aguilar, L. Ali Cavasonza, G. Ambrosi et al.
AMS-02 is a wide-acceptance high-energy physics experiment installed on the International Space Station in May 2011 and operating continuously since then. AMS-02 is able to precisely separate light cosmic-ray nuclei (1 ≤ Z ≤ 8) with contamination of less than 10⁻³. The boron-to-carbon flux ratio of light cosmic-ray nuclei is a well-known, sensitive observable for understanding the propagation of cosmic rays in the Galaxy, since boron is a secondary product of spallation of heavier primary elements, such as carbon and oxygen, on the interstellar medium. A precision measurement of the boron-to-carbon ratio in the rigidity range from 2 GV to 1.8 TV, based on 10 million events, is presented.
D. An, A. Balantekin, H. Band et al.
A measurement of electron antineutrino oscillation by the Daya Bay Reactor Neutrino Experiment is described in detail. Six 2.9-GWth nuclear power reactors of the Daya Bay and Ling Ao nuclear power facilities served as intense sources of ν̄e's. Comparison of the ν̄e rate and energy spectrum measured by antineutrino detectors far from the nuclear reactors (∼1500–1950 m) relative to detectors near the reactors (∼350–600 m) allowed a precise measurement of ν̄e disappearance. More than 2.5 million ν̄e inverse beta-decay interactions were observed, based on the combination of 217 days of operation of six antineutrino detectors (December 2011–July 2012) with a subsequent 1013 days using the complete configuration of eight detectors (October 2012–July 2015). The ν̄e rate observed at the far detectors relative to the near detectors showed a significant deficit, R = 0.949 ± 0.002(stat) ± 0.002(syst). The energy dependence of ν̄e disappearance showed the distinct variation predicted by neutrino oscillation. Analysis using an approximation for the three-flavor oscillation probability yielded the flavor-mixing angle sin²2θ₁₃ = 0.0841 ± 0.0027(stat) ± 0.0019(syst) and the effective neutrino mass-squared difference |Δm²ee| = (2.50 ± 0.06(stat) ± 0.06(syst)) × 10⁻³ eV². Analysis using the exact three-flavor probability found Δm²₃₂ = (2.45 ± 0.06(stat) ± 0.06(syst)) × 10⁻³ eV² assuming the normal neutrino mass hierarchy and Δm²₃₂ = (−2.56 ± 0.06(stat) ± 0.06(syst)) × 10⁻³ eV² for the inverted hierarchy.
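The quoted best-fit parameters can be plugged into the standard two-flavor approximation of the ν̄e survival probability, P = 1 − sin²2θ₁₃ · sin²(1.267 Δm²[eV²] L[m] / E[MeV]). This is a textbook formula, not this paper's exact three-flavor analysis, and the baseline and energy below are illustrative:

```python
import math

# Two-flavor survival probability for reactor antineutrinos (textbook
# approximation): L in metres, E in MeV, dm2 in eV^2.
def survival_probability(sin2_2theta13, dm2_ev2, baseline_m, energy_mev):
    phase = 1.267 * dm2_ev2 * baseline_m / energy_mev
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Illustrative far-detector baseline (~1600 m) and a typical 4 MeV antineutrino,
# using the abstract's sin^2(2 theta_13) = 0.0841 and |dm^2_ee| = 2.50e-3 eV^2.
p = survival_probability(0.0841, 2.50e-3, 1600.0, 4.0)
```

Near the far-detector baseline the oscillation phase is close to its maximum, which is why the far/near rate ratio R ≈ 0.95 reported above corresponds to a few-percent deficit.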
Page 13 of 81197