Jon C. R. Bennett, Hui Zhang
Results for "q-fin.PR"
Showing 20 of ~1,352,725 results · from Semantic Scholar
D. Fasshauer, R B Sutton, A. Brunger et al.
B. Song, S. Noda, T. Asano et al.
C. Breuil, B. Conrad, Fred Diamond et al.
We complete the proof that every elliptic curve over the rational numbers is modular.
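One standard formulation of the modularity statement proved here (a textbook phrasing, not necessarily the paper's own) is via matching L-functions: for every elliptic curve E over the rationals of conductor N there exists a weight-2 newform f such that

    \[
      f \in S_2(\Gamma_0(N)), \qquad L(E, s) = L(f, s),
      \qquad\text{equivalently}\qquad a_p(E) = a_p(f) \text{ for all primes } p \nmid N.
    \]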
A. Sirunyan, A. Tumasyan, W. Adam et al.
This work summarizes and puts in an overall perspective studies done within the Compact Muon Solenoid (CMS) collaboration concerning the discovery potential for squarks and gluinos, sleptons, charginos and neutralinos, supersymmetric (SUSY) dark matter, the lightest Higgs, sparticle mass determination methods, and detector design optimization in view of SUSY searches. It represents the status of our understanding of these subjects as of summer 1997. As a benchmark we used the minimal supergravity-inspired supersymmetric standard model (mSUGRA) with a stable lightest supersymmetric particle (LSP). Discovery of supersymmetry at the Large Hadron Collider should be relatively straightforward. It may occur through the observation of large excesses of events in missing E_T plus jets, or with one or more isolated leptons. An excess of trilepton events or of isolated dileptons with missing E_T, exhibiting a characteristic signature in the l+l− invariant mass distribution, could also be the first manifestation of SUSY production. Squarks and gluinos can be discovered for masses in excess of 2 TeV. Charginos and neutralinos can be discovered from an excess of events in dilepton or trilepton final states. Inclusive searches can give early indications from their copious production in squark and gluino cascade decays. Indirect evidence for sleptons can also be obtained from inclusive dilepton studies. Isolation requirements and a jet veto would allow detection of both direct chargino/neutralino production and directly produced sleptons. Squark and gluino production may also represent a copious source of Higgs bosons through cascade decays. The lightest SUSY Higgs decay h → bb̄ may be reconstructed with a signal/background ratio of order 1, thanks to hard cuts on missing E_T justified by the escaping LSPs. The LSP of SUSY models with conserved R-parity is a very good candidate for cosmological dark matter. The region of parameter space where this is true is well covered by our searches, at least for tanβ = 2. If supersymmetry exists at the electroweak scale, it could hardly escape detection in CMS, and the study of supersymmetry will form a central part of our physics program.
Kristine H. Luce, J. Crowther
M. Morris, J. Teevan, Katrina Panovich
P. Bolton, Hui Chen, Neng Wang
F. Yeh, V. Wedeen, W. Tseng
Lena Mamykina, Bella Manoim, Manas Mittal et al.
Timothy Erickson, Toni M. Whited
X. Yi, Qifan Yang, K. Yang et al.
Frequency combs are having a broad impact on science and technology because they provide a way to coherently link radio/microwave-rate electrical signals with optical-rate signals derived from lasers and atomic transitions. Integrating these systems on a photonic chip would revolutionize instrumentation, time keeping, spectroscopy, navigation, and potentially create new mass-market applications. A key element of such a system-on-a-chip will be a mode-locked comb that can be self-referenced. The recent demonstration of soliton mode locking in crystalline and silicon nitride microresonators has provided a way to both mode lock and generate femtosecond time-scale pulses. Here, soliton mode locking is demonstrated in high-Q silica resonators. The resonators produce low-phase-noise soliton pulse trains at readily detectable pulse rates—two essential properties for the operation of frequency combs. A method for the long-term stabilization of the solitons is also demonstrated, and is used to test the theoretical dependence of the comb power, efficiency, and soliton existence power on the pulse width. The influence of the Raman process on the soliton existence power and efficiency is also observed. The resonators are microfabricated on silicon chips and feature reproducible modal properties required for soliton formation. A low-noise and detectable pulse rate soliton frequency comb on a chip is a significant step towards a fully integrated frequency comb system.
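As background to the pulse-width dependence tested above: dissipative Kerr solitons of this kind are conventionally modeled with a hyperbolic-secant envelope (a standard assumption, not a detail taken from this abstract), so the time-domain field and the comb's spectral envelope are

    \[
      E(t) \propto \operatorname{sech}(t/\tau_s), \qquad
      S(\nu) \propto \operatorname{sech}^2\!\bigl(\pi^2 \tau_s\,(\nu - \nu_0)\bigr),
    \]

and a transform-limited sech pulse obeys the time-bandwidth product Δτ·Δν ≈ 0.315, which is how a measured spectral envelope yields a femtosecond-scale pulse width.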
I. Selesnick
Zhao Tong, Hongjian Chen, Xiaomei Deng et al.
Abstract Task scheduling plays a vital role in cloud computing and is a critical factor in its performance. From the booming economy of information processing to the increasing need for quality of service (QoS) in networking, the dynamic task-scheduling problem has attracted worldwide attention. Due to its complexity, task scheduling is classified as an NP-hard problem. Additionally, dynamic online task scheduling often manages tasks in a complex environment, which makes it even more challenging to balance and satisfy the demands of each aspect of cloud computing. In this paper, we propose a novel artificial intelligence algorithm, called deep Q-learning task scheduling (DQTS), that combines the advantages of the Q-learning algorithm and a deep neural network. This new approach is aimed at scheduling directed acyclic graph (DAG) tasks in a cloud computing environment. The essential idea is to apply the popular deep Q-learning (DQL) method to task scheduling, with the learning model adapted from DQL. Building on WorkflowSim, experiments are conducted that comparatively consider the variance of makespan and load balance in task scheduling. Both simulation and real-life experiments are conducted to verify the optimization efficiency and learning ability of DQTS. The results show that, compared with several standard algorithms precoded in WorkflowSim, DQTS has advantages in learning ability, containment, and scalability. Overall, we have developed a new method for task scheduling in cloud computing.
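To make the deep-Q-learning idea concrete, here is a minimal, hypothetical sketch of such a scheduler loop in Python/PyTorch. It is not the authors' WorkflowSim implementation: the state (current VM loads), the action (assign the ready task to one VM), and the reward (negative makespan increase) are simplified assumptions for illustration only.

    import random
    from collections import deque

    import numpy as np
    import torch
    import torch.nn as nn

    N_VMS = 4  # number of virtual machines (illustrative)

    # Q-network: maps the VM-load state to one Q-value per VM (action).
    q_net = nn.Sequential(nn.Linear(N_VMS, 64), nn.ReLU(), nn.Linear(64, N_VMS))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)  # experience replay buffer
    gamma, epsilon = 0.95, 0.1

    def schedule_step(vm_loads, task_len):
        """Assign one ready task to a VM with an epsilon-greedy policy."""
        if random.random() < epsilon:
            action = random.randrange(N_VMS)
        else:
            with torch.no_grad():
                state = torch.tensor(vm_loads, dtype=torch.float32)
                action = int(q_net(state).argmax())
        next_loads = vm_loads.copy()
        next_loads[action] += task_len
        # Reward: negative increase in makespan, a load-balance proxy.
        reward = -(max(next_loads) - max(vm_loads))
        replay.append((vm_loads, action, reward, next_loads))
        return next_loads

    def train_batch(batch_size=32):
        """One gradient step on the temporal-difference error."""
        if len(replay) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(replay, batch_size))
        s = torch.tensor(np.array(s), dtype=torch.float32)
        s2 = torch.tensor(np.array(s2), dtype=torch.float32)
        q = q_net(s)[torch.arange(batch_size), torch.tensor(a)]
        with torch.no_grad():
            target = torch.tensor(r, dtype=torch.float32) + gamma * q_net(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    loads = [0.0] * N_VMS
    for task_len in np.random.default_rng(0).uniform(1, 10, size=200):
        loads = schedule_step(loads, float(task_len))
        train_batch()

A real DAG scheduler would also encode task precedence constraints in the state; this sketch only shows the replay-and-TD-update loop that defines the DQL approach.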
K. Churruca, Kristiana Ludlow, W. Wu et al.
Background Q-methodology is an approach to studying complex issues of human ‘subjectivity’. Although this approach was developed in the early twentieth century, the value of Q-methodology in healthcare was not recognised until relatively recently. The aim of this review was to scope the empirical healthcare literature to examine the extent to which Q-methodology has been utilised in healthcare over time, including how it has been used and for what purposes. Methods A search of three electronic databases (Scopus, EBSCO-CINAHL Complete, Medline) was conducted. No date restriction was applied. A title and abstract review, followed by a full-text review, was conducted by a team of five reviewers. Included articles were English-language, peer-reviewed journal articles that used Q-methodology (both Q-sorting and inverted factor analysis) in healthcare settings. The following data items were extracted into a purpose-designed Excel spreadsheet: study details (e.g., setting, country, year), reasons for using Q-methodology, healthcare topic area, participants (type and number), materials (e.g., ranking anchors and Q-set), methods (e.g., development of the Q-set, analysis), study results, and study implications. Data synthesis was descriptive in nature and involved frequency counting, open coding, and organisation by data items. Results Of the 2,302 articles identified by the search, 289 studies were included in this review. We found evidence of increased use of Q-methodology in healthcare, particularly over the last 5 years. However, this research remains diffuse, spread across a large number of journals and topic areas. In a number of studies, we identified limitations in the reporting of methods, such as insufficient information on how authors derived their Q-set, what types of analyses they performed, and the amount of variance explained. Conclusions Although Q-methodology is increasingly being adopted in healthcare research, it still appears to be relatively novel. This review highlights commonalities in how the method has been used, areas of application, and the potential value of the approach. To facilitate reporting of Q-methodological studies, we present a checklist of details that should be included for publication.
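For readers unfamiliar with the "inverted factor analysis" mentioned above, the defining move of Q-methodology is to factor-analyse persons rather than variables. A minimal sketch, using entirely hypothetical Q-sort data, might look like this in Python:

    import numpy as np

    # Hypothetical Q-sort data: rows = statements, columns = participants.
    # Each column is one participant's ranking of the statements (real studies
    # use a forced quasi-normal grid, e.g. -4 ... +4; random data here).
    rng = np.random.default_rng(0)
    q_sorts = rng.integers(-4, 5, size=(40, 12)).astype(float)

    # Inverted ("by-person") factor analysis: correlate participants,
    # not variables -- the defining move of Q-methodology.
    person_corr = np.corrcoef(q_sorts, rowvar=False)  # 12 x 12

    # Extract unrotated factors via eigendecomposition (PCA on persons).
    eigvals, eigvecs = np.linalg.eigh(person_corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Keep factors with eigenvalue > 1 (a common, if debated, criterion).
    n_factors = int(np.sum(eigvals > 1.0))
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

    # Variance explained -- one of the reporting items the review found
    # to be frequently missing.
    print("factors retained:", n_factors)
    print("variance explained:", eigvals[:n_factors] / person_corr.shape[0])
    print("person loadings:\n", loadings.round(2))

Each retained factor groups participants who sorted the statements similarly, i.e. a shared "viewpoint"; published studies then rotate the factors and interpret them qualitatively.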
Florian Kittelmann, Pavel Sulimov, Kurt Stockinger
Classical and learned query optimizers (LQOs) use cardinality estimates as one of the critical inputs for query planning. Thus, accurately predicting the cardinality of arbitrary queries plays a vital role in query optimization. A recent boom in deep learning methods not only stimulated the rise of LQOs but also contributed to the appearance of learned cardinality estimators (LCEs). However, the majority of LCEs are based on classical neural networks, ignoring that multivariate correlations between attributes across different tables could be naturally represented via entanglement in quantum circuits. In this paper, we introduce QardEst, a Quantum Cardinality Estimator: a novel quantum-neural-network approach to estimating the cardinality of join queries. Our experiments, conducted with a similar number of trainable parameters, suggest that quantum neural networks executed on a quantum simulator outperform classical neural networks in terms of mean squared error as well as the q-error.
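The q-error mentioned at the end is the standard multiplicative error metric for cardinality estimation. A small sketch (the function name and the clip-at-1 convention for empty results are assumptions for illustration):

    import numpy as np

    def q_error(estimated, actual):
        """Multiplicative error: max(est/true, true/est), elementwise.

        Both inputs are clipped to at least 1 so the metric stays well
        defined when a query returns zero rows (a common convention).
        """
        est = np.maximum(np.asarray(estimated, dtype=float), 1.0)
        act = np.maximum(np.asarray(actual, dtype=float), 1.0)
        return np.maximum(est / act, act / est)

    # An estimator that is 2x off in either direction has q-error 2;
    # a perfect estimate has q-error 1 (the minimum).
    print(q_error([50, 200, 100], [100, 100, 100]))  # -> [2. 2. 1.]

Unlike mean squared error, the q-error is symmetric between over- and underestimation on a multiplicative scale, which is why both metrics are reported.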
S. Hüttel, Sebastian Hess
The scientific production system is crucial to how global challenges are addressed. However, scholars have recently begun to voice concerns about structural inefficiencies within the system, as highlighted, for example, by the replication crisis, the p-value debate and various forms of publication bias. Most suggested remedies address only partial aspects of the system's inefficiencies, and there is currently no unifying agenda in favour of an overall transformation of the system. Based on a critical review of the current scientific system and an exploratory pilot study on the state of student training, we argue that a unifying agenda is urgently needed, particularly given the emergence of artificial intelligence (AI) as a tool in scientific writing and the research discovery process. Without appropriate responses from academia, this trend may even compound current credibility issues arising from limited replicability and ritual-based statistical practice, while amplifying existing biases of all forms. Naïve openness in the science system alone is unlikely to lead to major improvements. We contribute to the debate and call for system reform by identifying key elements for defining transformation pathways towards open, democratic and conscious learning, teaching, reviewing and publishing, supported by openly maintained AI tools. Roles and incentives within the review process will have to adapt and be strengthened relative to those that apply to authors. Scientists will have to write less, learn differently and review more in the future, but they need better training in and for AI even today.
C. Carlet, Sihem Mesnager, Chunming Tang et al.
Sunil Mithas, R. Rust
D. Hoaglin
Page 12 of 67637