D. Fasshauer, R. B. Sutton, A. Brunger et al.
Results for "q-bio.TO"
Showing 20 of ~1,622,042 results · from arXiv, CrossRef, Semantic Scholar
Jon C. R. Bennett, Hui Zhang
B. Song, S. Noda, T. Asano et al.
A. Macfarlane
L. Biedenharn
A. Sirunyan, A. Tumasyan, W. Adam et al.
This work summarizes and puts in an overall perspective studies done within the Compact Muon Solenoid (CMS) collaboration concerning the discovery potential for squarks and gluinos, sleptons, charginos and neutralinos, supersymmetric (SUSY) dark matter, the lightest Higgs boson, sparticle mass determination methods, and detector design optimization in view of SUSY searches. It represents the status of our understanding of these subjects as of summer 1997. As a benchmark we used the minimal supergravity-inspired supersymmetric standard model (mSUGRA) with a stable lightest supersymmetric particle (LSP). Discovery of supersymmetry at the Large Hadron Collider should be relatively straightforward. It may occur through the observation of large excesses of events in missing ET plus jets, or with one or more isolated leptons. An excess of trilepton events, or of isolated dileptons with missing ET exhibiting a characteristic signature in the l+l− invariant mass distribution, could also be the first manifestation of SUSY production. Squarks and gluinos can be discovered for masses in excess of 2 TeV. Charginos and neutralinos can be discovered from an excess of events in dilepton or trilepton final states. Inclusive searches can give early indications from their copious production in squark and gluino cascade decays. Indirect evidence for sleptons can also be obtained from inclusive dilepton studies. Isolation requirements and a jet veto would allow detection of both direct chargino/neutralino production and directly produced sleptons. Squark and gluino production may also represent a copious source of Higgs bosons through cascade decays. The lightest SUSY Higgs boson, decaying as h → bb̄, may be reconstructed with a signal-to-background ratio of order 1 thanks to hard cuts on missing ET, justified by the escaping LSPs. The LSP of SUSY models with conserved R-parity represents a very good candidate for cosmological dark matter. The region of parameter space where this is true is well covered by our searches, at least for tanβ = 2. If supersymmetry exists at the electroweak scale, it could hardly escape detection in CMS, and the study of supersymmetry will form a central part of our physics program.
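As a worked illustration of the "characteristic signature" in the l+l− invariant mass distribution mentioned above (a standard result from the SUSY literature, not taken from this abstract): when the second-lightest neutralino decays via an on-shell slepton, the dilepton spectrum ends at a sharp kinematic edge, while a three-body decay ends at the neutralino mass difference. Measuring such an endpoint is one of the sparticle mass determination methods the summary alludes to.

```latex
% Dilepton edge for the two-body chain
% \tilde\chi_2^0 -> \tilde\ell^\pm \ell^\mp -> \ell^+ \ell^- \tilde\chi_1^0
\[
  \left(m_{\ell\ell}^{\max}\right)^{2}
    = \frac{\bigl(m_{\tilde\chi_2^0}^{2}-m_{\tilde\ell}^{2}\bigr)
            \bigl(m_{\tilde\ell}^{2}-m_{\tilde\chi_1^0}^{2}\bigr)}
           {m_{\tilde\ell}^{2}},
  \qquad
  % three-body decay \tilde\chi_2^0 -> \ell^+ \ell^- \tilde\chi_1^0:
  m_{\ell\ell}^{\max} = m_{\tilde\chi_2^0} - m_{\tilde\chi_1^0}.
\]
```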
Kristine H. Luce, J. Crowther
Annette Michalski, Eugen Damoc, J. Hauschild et al.
Mass spectrometry-based proteomics has benefited greatly from enormous advances in high-resolution instrumentation in recent years. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present the coupling of a quadrupole mass filter to an Orbitrap analyzer. This "Q Exactive" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an "enhanced Fourier Transformation" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top-10 higher-energy collisional dissociation method. More than 2,500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate, a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole-Orbitrap combination enables multiplexed operation at the MS and tandem-MS levels. This is demonstrated in a multiplexed single-ion-monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher-energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format, together with the ability to perform complex multiplexed scan modes, makes the Q Exactive an exciting new instrument for the proteomics and general analytical communities.
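A minimal sketch of why processing the image-current transient matters (illustrative only; the "enhanced Fourier Transformation" itself is proprietary, and all numbers below are assumed): in magnitude-mode Fourier analysis the peak width scales inversely with transient duration, so any algorithm that effectively extends or better exploits the transient raises resolution.

```python
import numpy as np

# Illustrative only: simulate an image-current transient for one ion species
# and show that the magnitude-mode FT peak narrows as the transient lengthens.
fs = 2_000_000      # sampling rate, Hz (assumed)
f0 = 400_000        # axial oscillation frequency of the ion packet, Hz (assumed)

for T in (0.064, 0.128, 0.256):                 # transient durations, s
    t = np.arange(0, T, 1 / fs)
    transient = np.cos(2 * np.pi * f0 * t) * np.exp(-t / 0.5)  # decaying signal
    spectrum = np.abs(np.fft.rfft(transient))
    freqs = np.fft.rfftfreq(len(transient), 1 / fs)
    half_max = freqs[spectrum >= spectrum.max() / 2]           # FWHM region
    print(f"T = {T:.3f} s  ->  peak FWHM ~ {half_max[-1] - half_max[0]:.1f} Hz")
```

Doubling the usable transient roughly halves the peak width, which is the sense in which an improved FT step can double resolution at a fixed fill-and-detect cycle time.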
L. Kriston, I. Scholl, L. Hölzel et al.
M. Morris, J. Teevan, Katrina Panovich
P. Bolton, Hui Chen, Neng Wang
F. Yeh, V. Wedeen, W. Tseng
Lena Mamykina, Bella Manoim, Manas Mittal et al.
X. Yi, Qifan Yang, K. Yang et al.
Frequency combs are having a broad impact on science and technology because they provide a way to coherently link radio/microwave-rate electrical signals with optical-rate signals derived from lasers and atomic transitions. Integrating these systems on a photonic chip would revolutionize instrumentation, time keeping, spectroscopy, navigation, and potentially create new mass-market applications. A key element of such a system-on-a-chip will be a mode-locked comb that can be self-referenced. The recent demonstration of soliton mode locking in crystalline and silicon nitride microresonators has provided a way to both mode lock and generate femtosecond time-scale pulses. Here, soliton mode locking is demonstrated in high-Q silica resonators. The resonators produce low-phase-noise soliton pulse trains at readily detectable pulse rates—two essential properties for the operation of frequency combs. A method for the long-term stabilization of the solitons is also demonstrated, and is used to test the theoretical dependence of the comb power, efficiency, and soliton existence power on the pulse width. The influence of the Raman process on the soliton existence power and efficiency is also observed. The resonators are microfabricated on silicon chips and feature reproducible modal properties required for soliton formation. A low-noise and detectable pulse rate soliton frequency comb on a chip is a significant step towards a fully integrated frequency comb system.
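The pulse-width dependences tested in this abstract follow from textbook soliton relations; the sketch below (illustrative values, not the paper's data) evaluates the sech² comb envelope of a sech-shaped soliton pulse.

```python
import numpy as np

# Textbook dissipative-Kerr-soliton relations (illustrative, assumed numbers;
# not the paper's measurements): a soliton envelope sech(t/tau) has comb-line
# powers following a sech^2 envelope in frequency.
def comb_envelope(f_offset, tau):
    """Relative comb-line power at offset f_offset (Hz) from the pump."""
    return 1.0 / np.cosh(np.pi ** 2 * tau * f_offset) ** 2

tau = 200e-15                                        # pulse width, s (assumed)
offsets = np.array([0.0, 0.5e12, 1.0e12, 2.0e12])    # Hz from the pump
for f, p in zip(offsets, comb_envelope(offsets, tau)):
    print(f"offset {f / 1e12:4.1f} THz -> relative power {p:.3e}")

# For a fixed resonator, soliton theory gives peak power ~ 1/tau^2 and pulse
# energy ~ 1/tau, so the average comb power scales as 1/tau: shorter solitons
# carry more comb power, one of the dependences the stabilization method
# described in the abstract allows the authors to test.
```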
I. Selesnick
Zhao Tong, Hongjian Chen, Xiaomei Deng et al.
Task scheduling plays a vital role in cloud computing and is a critical factor in determining its performance. From the booming economy of information processing to the increasing need for quality of service (QoS) in networking, the dynamic task-scheduling problem has attracted worldwide attention. Due to its complexity, task scheduling has been classified as an NP-hard problem. Additionally, most dynamic online task scheduling manages tasks in a complex environment, which makes it even more challenging to balance and satisfy the benefits of each aspect of cloud computing. In this paper, we propose a novel artificial intelligence algorithm, called deep Q-learning task scheduling (DQTS), that combines the advantages of the Q-learning algorithm and a deep neural network. This new approach is aimed at solving the problem of handling directed acyclic graph (DAG) tasks in a cloud computing environment. The essential idea of our approach is to apply the popular deep Q-learning (DQL) method to task scheduling, with the fundamental model learning primarily inspired by DQL. Building on WorkflowSim, we conduct experiments that comparatively consider the variance of makespan and load balance in task scheduling. Both simulation and real-life experiments are conducted to verify the efficiency of the optimization and learning abilities of DQTS. The results show that, compared with several standard algorithms precoded in WorkflowSim, DQTS has advantages in terms of learning ability, containment, and scalability. In this paper, we have successfully developed a new method for task scheduling in cloud computing.
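To make the idea concrete, here is a deliberately simplified sketch of Q-learning-based task scheduling (not the authors' DQTS code; the state, reward, and the linear value function standing in for their deep network are all assumptions): the state is the current VM loads plus the incoming task length, the action picks a VM, and the reward penalizes makespan growth.

```python
import numpy as np

# Simplified sketch of the DQTS idea (not the authors' implementation):
# state  = current load on each VM + length of the incoming task,
# action = the VM the task is assigned to,
# reward = negative growth of the makespan caused by the assignment.
# A single linear value function stands in for the paper's deep network.
rng = np.random.default_rng(0)
n_vms = 4
W = np.zeros((n_vms, n_vms + 1))          # Q(s, a) = W[a] @ s
alpha, gamma, eps = 0.01, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(200):
    loads = np.zeros(n_vms)
    for _ in range(50):                               # 50 tasks per episode
        task = rng.uniform(1.0, 5.0)
        s = np.append(loads, task) / 100.0            # normalized state
        a = rng.integers(n_vms) if rng.random() < eps else int(np.argmax(W @ s))
        old_makespan = loads.max()
        loads[a] += task
        reward = -(loads.max() - old_makespan)        # penalize makespan growth
        s_next = np.append(loads, 0.0) / 100.0
        td_error = reward + gamma * (W @ s_next).max() - W[a] @ s
        W[a] += alpha * td_error * s                  # TD(0) update

print("final-episode makespan:", round(float(loads.max()), 2))
```

Replacing the linear layer with a neural network, and the toy reward with DAG-aware makespan and load-balance terms, recovers the general shape of a DQL scheduler.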
Merlin Pelz, Skirmantas Janusonis, Gregory Handy
Serotonin (5-hydroxytryptamine) is a major neurotransmitter whose release from densely distributed serotonergic varicosities shapes plasticity and network integration throughout the brain, yet its extracellular dynamics remain poorly understood due to the sub-micrometer and millisecond scales involved. We develop a mathematical framework that captures the coupled reaction-diffusion processes governing serotonin signaling in realistic tissue microenvironments. Formulating a two-dimensional compartmental reaction-diffusion system, we use strong localized perturbation theory to derive an asymptotically equivalent set of nonlinear integro-ODEs that preserve diffusive coupling while enabling efficient computation. We analyze period-averaged steady states, establish bounds using Jensen's inequality, obtain closed-form spike maxima and minima, and implement a fast marching-scheme solver based on sum-of-exponentials kernels. These mathematical results provide quantitative insight into how firing frequency, varicosity geometry, and uptake kinetics shape extracellular serotonin. The model reveals that varicosities form diffusively coupled microdomains capable of generating spatial "serotonin reservoirs," clarifies aspects of local versus volume transmission, and yields predictions relevant to interpreting high-resolution serotonin imaging and the actions of selective serotonin-reuptake inhibitors.
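The "fast marching-scheme solver based on sum-of-exponentials kernels" refers to a general technique that can be sketched compactly (the kernel weights, rates, and source term below are made up, not taken from the paper): once a memory kernel is approximated as K(t) ≈ Σ_j w_j e^(−s_j t), the history integral ∫₀ᵗ K(t−u) f(u) du can be marched with O(1) work per step by carrying one running mode per exponential.

```python
import numpy as np

# Sum-of-exponentials (SOE) fast history marching (generic sketch; weights,
# rates, and the source term are assumed, not taken from the paper).
w = np.array([0.6, 0.3, 0.1])          # SOE weights (assumed)
s = np.array([1.0, 5.0, 25.0])         # SOE decay rates (assumed)
dt, n_steps = 1e-3, 5000
f = lambda t: np.sin(2 * np.pi * t)    # arbitrary source term

decay = np.exp(-s * dt)                # per-step decay of each mode
modes = np.zeros_like(w)               # running integrals, one per exponential
for k in range(n_steps):
    # Exact recursion with f held constant over the step (first-order accurate):
    # I_j(t+dt) = e^{-s_j dt} I_j(t) + w_j f(t) (1 - e^{-s_j dt}) / s_j
    modes = decay * modes + w * f(k * dt) * (1.0 - decay) / s

print("history integral at T =", n_steps * dt, "is ~", float(modes.sum()))
```

Naively re-evaluating the convolution at each step costs O(k) work at step k; the SOE recursion reduces this to O(1) per step, which is what makes long simulated histories affordable.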
Yuhong Zhang, Chenghang Li, Boya Wang et al.
Tumor-immune interactions are central to cancer progression and treatment outcomes. In this study, we present a stochastic agent-based model that integrates cellular heterogeneity, spatial cell-cell interactions, and drug-resistance evolution to simulate tumor growth and immune response in a two-dimensional microenvironment. The model captures the dynamic behaviors of four major cell types (tumor cells, cytotoxic T lymphocytes, helper T cells, and regulatory T cells) and incorporates key biological processes such as proliferation, apoptosis, migration, and immune regulation. Using this framework, we simulate tumor progression under different therapeutic interventions, including radiotherapy, targeted therapy, and immune checkpoint blockade. Our simulations reproduce emergent phenomena such as immune privilege and spatial immune exclusion. Quantitative analyses show that all therapies suppress tumor growth to varying degrees and reshape the tumor microenvironment. Notably, combination therapies, especially targeted therapy with immunotherapy, achieve the most effective tumor control and delay the emergence of resistance. Additionally, sensitivity analyses reveal a nonlinear relationship between treatment intensity and therapeutic efficacy, highlighting the existence of optimal dosing thresholds. This work demonstrates the utility of agent-based modeling in capturing complex tumor-immune dynamics and provides a computational platform for optimizing cancer treatment strategies. The model is extensible, biologically interpretable, and well-suited for future integration with experimental or clinical data.
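A heavily reduced sketch of the agent-based setup (two cell types instead of the paper's four, and all rates assumed) illustrates the mechanics: cells occupy a 2-D grid, tumor cells divide into empty neighboring sites, and cytotoxic T cells kill adjacent tumor cells or migrate.

```python
import numpy as np

# Reduced two-cell-type sketch of the stochastic agent-based model (the paper
# also includes helper/regulatory T cells, resistance evolution, and therapy);
# all rates here are assumed for illustration.
rng = np.random.default_rng(1)
EMPTY, TUMOR, CTL = 0, 1, 2
L = 50
grid = np.zeros((L, L), dtype=int)
grid[L // 2, L // 2] = TUMOR                   # seed one tumor cell
for x, y in rng.integers(0, L, size=(30, 2)):  # scatter 30 cytotoxic T cells
    grid[x, y] = CTL
p_div, p_kill, p_move = 0.3, 0.5, 0.8          # assumed per-step probabilities

def neighbors(x, y):
    """4-neighborhood with periodic boundaries."""
    return [((x + dx) % L, (y + dy) % L)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for step in range(200):
    xs, ys = np.nonzero(grid)
    for i in rng.permutation(len(xs)):         # random update order
        x, y = int(xs[i]), int(ys[i])
        nbrs = neighbors(x, y)
        rng.shuffle(nbrs)
        if grid[x, y] == TUMOR and rng.random() < p_div:
            for nx, ny in nbrs:                # divide into an empty site
                if grid[nx, ny] == EMPTY:
                    grid[nx, ny] = TUMOR
                    break
        elif grid[x, y] == CTL:
            for nx, ny in nbrs:                # kill one adjacent tumor cell
                if grid[nx, ny] == TUMOR and rng.random() < p_kill:
                    grid[nx, ny] = EMPTY
                    break
            else:                              # otherwise migrate if possible
                if rng.random() < p_move and grid[nbrs[0]] == EMPTY:
                    grid[nbrs[0]], grid[x, y] = CTL, EMPTY

print("tumor cells after 200 steps:", int((grid == TUMOR).sum()))
```

Even this stripped-down version restricts killing to contact neighborhoods, the ingredient behind spatial effects such as immune exclusion in the full model.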
K. Churruca, Kristiana Ludlow, W. Wu et al.
Background: Q-methodology is an approach to studying complex issues of human 'subjectivity'. Although this approach was developed in the early twentieth century, the value of Q-methodology in healthcare was not recognised until relatively recently. The aim of this review was to scope the empirical healthcare literature to examine the extent to which Q-methodology has been utilised in healthcare over time, including how it has been used and for what purposes.
Methods: A search of three electronic databases (Scopus, EBSCO-CINAHL Complete, Medline) was conducted, with no date restriction applied. A title-and-abstract review, followed by a full-text review, was conducted by a team of five reviewers. Included articles were English-language, peer-reviewed journal articles that used Q-methodology (both Q-sorting and inverted factor analysis) in healthcare settings. The following data items were extracted into a purpose-designed Excel spreadsheet: study details (e.g., setting, country, year), reasons for using Q-methodology, healthcare topic area, participants (type and number), materials (e.g., ranking anchors and Q-set), methods (e.g., development of the Q-set, analysis), study results, and study implications. Data synthesis was descriptive in nature and involved frequency counting, open coding and organisation by data item.
Results: Of the 2,302 articles identified by the search, 289 studies were included in this review. We found evidence of increased use of Q-methodology in healthcare, particularly over the last 5 years. However, this research remains diffuse, spread across a large number of journals and topic areas. In a number of studies, we identified limitations in the reporting of methods, such as insufficient information on how authors derived their Q-set, what types of analyses they performed, and the amount of variance explained.
Conclusions: Although Q-methodology is increasingly being adopted in healthcare research, it still appears to be relatively novel. This review highlights commonalities in how the method has been used, areas of application, and the potential value of the approach. To facilitate reporting of Q-methodological studies, we present a checklist of details that should be included for publication.
S. Hüttel, Sebastian Hess
The scientific production system is crucial to how global challenges are addressed. However, scholars have recently begun to voice concerns about structural inefficiencies within the system, as highlighted, for example, by the replication crisis, the p-value debate and various forms of publication bias. Most suggested remedies address only partial aspects of the system's inefficiencies, and there is currently no unifying agenda for an overall transformation of the system. Based on a critical review of the current scientific system and an exploratory pilot study on the state of student training, we argue that a unifying agenda is urgently needed, particularly given the emergence of artificial intelligence (AI) as a tool in scientific writing and the research discovery process. Without appropriate responses from academia, this trend may even compound current issues around credibility arising from limited replicability and ritual-based statistical practice, while amplifying all existing forms of bias. Naïve openness in the science system alone is unlikely to lead to major improvements. We contribute to the debate and call for system reform by identifying key elements for defining transformation pathways towards open, democratic and conscious learning, teaching, reviewing and publishing supported by openly maintained AI tools. Roles and incentives within the review process will have to adapt and be strengthened relative to those that apply to authors. Scientists will have to write less, learn differently and review more in the future, but they need better training in and for AI even today.
Page 11 of 81103