Solar water splitting cells.
M. Walter, E. Warren, J. McKone
et al.
Energy harvested directly from sunlight offers a desirable approach toward fulfilling, with minimal environmental impact, the need for clean energy. Solar energy is a decentralized and inexhaustible natural resource; the solar power striking the earth's surface at any one instant is equivalent to the output of 130 million 500 MW power plants [1]. However, several important goals need to be met to fully utilize solar energy for the global energy demand. First, the means for solar energy conversion, storage, and distribution should be environmentally benign, i.e., protecting ecosystems rather than steadily weakening them. The next important goal is to provide a stable, constant energy flux. Because of the daily and seasonal variability of renewable sources such as sunlight, energy harvested from the sun needs to be efficiently converted into chemical fuel that can be stored, transported, and used on demand. The biggest challenge is whether these goals can be met cost-effectively on the terawatt scale [2].
8390 citations
en
Chemistry, Medicine
Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric data
I. Ulbrich, M. Canagaratna, Q. Zhang
et al.
Abstract. The organic aerosol (OA) dataset from an Aerodyne Aerosol Mass Spectrometer (Q-AMS) collected at the Pittsburgh Air Quality Study (PAQS) in September 2002 was analyzed with Positive Matrix Factorization (PMF). Three components – hydrocarbon-like organic aerosol (HOA), a highly-oxygenated OA (OOA-1) that correlates well with sulfate, and a less-oxygenated, semi-volatile OA (OOA-2) that correlates well with nitrate and chloride – are identified and interpreted as primary combustion emissions, aged SOA, and semivolatile, less aged SOA, respectively. The complexity of interpreting the PMF solutions of unit mass resolution (UMR) AMS data is illustrated by a detailed analysis of the solutions as a function of the number of components and rotational forcing. A public web-based database of AMS spectra has been created to aid this type of analysis. Realistic synthetic data are also used to characterize the behavior of PMF for choosing the best number of factors and for evaluating the rotations of non-unique solutions. The ambient and synthetic data indicate that the variation of the PMF quality-of-fit parameter (Q, a normalized chi-squared metric) vs. the number of factors in the solution is useful for identifying the minimum number of factors, but more detailed analysis and interpretation are needed to choose the best number of factors. The maximum value of the rotational matrix is not useful for determining the best number of factors. In synthetic datasets, factors are "split" into two or more components when solving for more factors than were used in the input. Elements of this "splitting" behavior are observed in solutions of real datasets with several factors. Significant structure remains in the residual of the real dataset after physically meaningful factors have been assigned, and an unrealistic number of factors would be required to explain the remaining variance. This residual structure appears to be due to variability in the spectra of the components (especially OOA-2 in this case), which is likely to be a key limit on the retrievability of components from AMS datasets using PMF and similar methods that must assume constant component mass spectra. Methods for characterizing and dealing with this variability are needed. Interpretation of PMF factors must be done carefully. Synthetic data indicate that PMF internal diagnostics and similarity to available source component spectra together are not sufficient for identifying factors. It is critical to use correlations between factor and external measurement time series, together with other criteria, to support factor interpretations. True components with […]
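Since the abstract above turns on scanning the normalized chi-squared Q against the number of factors, a minimal sketch may help. It uses scikit-learn's NMF as a simplified stand-in for PMF (real PMF weights the fit by per-point measurement uncertainties and constrains rotations); `X` as the time-by-m/z data matrix and `sigma` as the matching uncertainty array are assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import NMF

def q_per_dof(X, sigma, n_factors):
    """Scan diagnostic for the number of factors, in the spirit of the
    Q analysis above. NMF is a stand-in for PMF: PMF additionally weights
    the least-squares fit by the measurement uncertainties."""
    model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
    G = model.fit_transform(X)           # time series of factor contributions
    F = model.components_                # factor mass spectra
    scaled_resid = (X - G @ F) / sigma   # uncertainty-scaled residuals
    q = np.sum(scaled_resid ** 2)        # normalized chi-squared, "Q"
    dof = X.size - n_factors * (X.shape[0] + X.shape[1])
    return q / dof                       # ~1 when the fit is within errors

# e.g. evaluate q_per_dof(X, sigma, p) for p = 1..8 and look for the elbow
```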
The Language Experience and Proficiency Questionnaire (LEAP-Q): assessing language profiles in bilinguals and multilinguals.
Viorica Marian, Henrike K. Blumenfeld, Margarita Kaushanskaya
1886 citations
en
Medicine, Psychology
Q‐ball imaging
D. Tuch
2085 citations
en
Mathematics, Medicine
Belle Collaboration
N. Satoyama, K. Abe, I. Adachi
et al.
Tobin's Marginal q and Average q: A Neoclassical Interpretation
F. Hayashi
The positive false discovery rate: a Bayesian interpretation and the q-value
John D. Storey
2374 citations
en
Mathematics
Centroidal Voronoi Tessellations: Applications and Algorithms
Q. Du, V. Faber, M. Gunzburger
2391 citations
en
Mathematics, Computer Science
Technical Note: Q-Learning
C. Watkins, P. Dayan
2370 citations
en
Computer Science
Tobin's q, Corporate Diversification, and Firm Performance
R. Stulz
A Simple Approximation of Tobin's Q
Kee H. Chung, Stephen W. Pruitt
2543 citations
en
Mathematics
Linear Models of Dissipation whose Q is almost Frequency Independent-II
M. Caputo
Validation of the NPI-Q, a brief clinical form of the Neuropsychiatric Inventory.
Daniel I. Kaufer, Jeffrey L. Cummings
et al.
1837 citations
en
Psychology, Medicine
Solving the optimal path planning of a mobile robot using improved Q-learning
Ee Soong Low, P. Ong, K. Cheah
Abstract Q-learning, a type of reinforcement learning, has recently gained increasing popularity in autonomous mobile robot path planning, owing to its ability to learn without requiring an a priori model of the environment. Yet, despite this advantage, Q-learning exhibits slow convergence to the optimal solution. To address this limitation, the concept of partially guided Q-learning is introduced, wherein the flower pollination algorithm (FPA) is utilized to improve the initialization of Q-learning. Experimental evaluation of the proposed improved Q-learning in challenging environments with different obstacle layouts shows that the convergence of Q-learning can be accelerated when Q-values are initialized appropriately using the FPA. Additionally, the effectiveness of the proposed algorithm is validated in a real-world experiment using a three-wheeled mobile robot.
298 citations
en
Computer Science
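The core idea in the Low et al. entry above, seeding Q-values with informed estimates so that convergence accelerates, can be sketched in a few lines. The grid size, action count, and the Manhattan-distance potential below are illustrative assumptions; the paper derives the initialization from the flower pollination algorithm rather than a fixed heuristic.

```python
import numpy as np

# Grid-world sketch of informed Q-table initialization. The paper tunes the
# initial Q-values with the flower pollination algorithm; this fixed
# goal-distance potential is a simplified stand-in (assumption).
H, W, N_ACTIONS = 10, 10, 4
GOAL = (9, 9)

ys, xs = np.mgrid[0:H, 0:W]
potential = -(np.abs(ys - GOAL[0]) + np.abs(xs - GOAL[1]))  # Manhattan distance

# Every action at a cell starts from that cell's potential instead of zero,
# so early greedy actions already drift toward the goal.
Q = np.repeat(potential[:, :, None], N_ACTIONS, axis=2).astype(float)

# ...then run ordinary Q-learning updates on Q; a well-shaped initial table
# shortens the blind-exploration phase and speeds convergence.
```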
When and how to use Q methodology to understand perspectives in conservation research
Aiora Zabala, C. Sandbrook, Nibedita Mukherjee
Understanding human perspectives is critical in a range of conservation contexts, for example, in overcoming conflicts or developing projects that are acceptable to relevant stakeholders. The Q methodology is a unique semiquantitative technique used to explore human perspectives. It has been applied for decades in other disciplines and has recently gained traction in conservation. This paper helps researchers assess when Q is useful for a given conservation question and what its use involves. To do so, we explained the steps necessary to conduct a Q study, from the research design to the interpretation of results. We provided recommendations to minimize biases in conducting a Q study, which arise mostly when designing the study and collecting the data. We conducted a structured literature review of 52 studies to examine in what empirical conservation contexts Q has been used. Most studies were subnational or national cases, but some also addressed multinational or global questions. We found that Q has been applied to 4 broad types of conservation goals: addressing conflict, devising management alternatives, understanding policy acceptability, and critically reflecting on the values that implicitly influence research and practice. Through these applications, researchers uncovered hidden views, understood opinions in depth, and discovered points of consensus that helped unlock difficult disagreements. The Q methodology has a clear procedure but is also flexible, allowing researchers to explore long-term views, or views about items other than statements, such as landscape images. We also found some inconsistencies in applying and, mainly, in reporting Q studies, whereby it was not possible to fully understand how the research was conducted or why some atypical research decisions had been taken. Accordingly, we suggest a reporting checklist.
330 citations
en
Sociology, Medicine
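For readers unfamiliar with the quantitative core of a Q study, the sketch below shows the basic factor-extraction step: correlate participants' Q-sorts and take the leading factors of that correlation matrix. This is only a PCA-style caricature under assumed array shapes; published Q analyses typically add factor rotation (e.g. varimax) and flagging of defining sorts.

```python
import numpy as np

def qsort_loadings(sorts, n_factors=2):
    """Factor extraction from Q-sorts (participants x statements array of
    rankings). Participants who share a perspective load on the same factor."""
    corr = np.corrcoef(sorts)                 # person-by-person correlations
    vals, vecs = np.linalg.eigh(corr)         # eigendecomposition
    top = np.argsort(vals)[::-1][:n_factors]  # leading factors
    loadings = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
    return loadings
```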
An Augmented q-Factor Model with Expected Growth*
Kewei Hou, Haitao Mo, Chen Xue
et al.
In investment theory, firms with high expected investment growth earn higher expected returns than firms with low expected investment growth, holding investment and expected profitability constant. Building on cross-sectional growth forecasts with Tobin's q, operating cash flows, and the change in return on equity as predictors, an expected growth factor earns an average premium of 0.84% per month (t = 10.27) in the 1967–2018 sample. The q5 model, which augments the Hou–Xue–Zhang (2015, Rev. Finan. Stud., 28, 650–705) q-factor model with the expected growth factor, shows strong explanatory power in the cross-section and outperforms the Fama–French (2018, J. Finan. Econom., 128, 234–252) six-factor model.
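The head-to-head model comparison described above is typically run as a time-series spanning test: regress a test asset (or a competing model's factor) on a model's factor returns and examine the intercept. The generic sketch below shows those mechanics only; factor construction and test assets follow the authors' methodology, not this code.

```python
import numpy as np

def spanning_alpha(test_excess_returns, factor_returns):
    """Time-series spanning test: regress a test asset's excess returns on a
    model's factor returns; the intercept is the alpha the model fails to
    explain (near zero if the model spans the asset)."""
    T = len(test_excess_returns)
    X = np.column_stack([np.ones(T), factor_returns])  # intercept + factors
    coef, *_ = np.linalg.lstsq(X, test_excess_returns, rcond=None)
    return coef[0]   # monthly alpha
```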
Smart Manufacturing Scheduling With Edge Computing Using Multiclass Deep Q Network
Chun-Cheng Lin, Der-Jiunn Deng, Yen-Ling Chih
et al.
Manufacturing involves complex job shop scheduling problems (JSPs). In smart factories, edge computing supports computing resources at the edge of production in a distributed way to reduce the response time of production decisions. However, most works on the JSP have not considered edge computing. Therefore, this paper proposes a smart manufacturing factory framework based on edge computing and investigates the JSP under that framework. Following the recent success of AI applications, the deep Q network (DQN), which combines deep learning and reinforcement learning, has shown great power on complex problems. We therefore adapt the DQN to the edge computing framework to solve the JSP. Unlike the classical DQN, which makes only one decision, this paper extends the DQN to address the decisions of multiple edge devices. Simulation results show that the proposed method performs better than methods using only one dispatching rule.
273 citations
en
Computer Science
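A tabular caricature of the scheduling agent may clarify the setup: at each decision point the agent picks a dispatching rule via epsilon-greedy Q-learning. The rule set, state discretization, and hyperparameters below are assumptions; the paper replaces the table with a deep Q network and extends the decision to multiple edge devices.

```python
import numpy as np

# Tabular stand-in for the scheduling agent: choose a dispatching rule per
# decision point (rule set, state count, and hyperparameters are assumptions).
RULES = ["FIFO", "SPT", "LPT", "EDD"]
N_STATES = 50
Q = np.zeros((N_STATES, len(RULES)))
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def pick_rule(state):
    if rng.random() < EPS:                        # epsilon-greedy exploration
        return int(rng.integers(len(RULES)))
    return int(Q[state].argmax())

def update(s, a, reward, s_next):
    td_target = reward + GAMMA * Q[s_next].max()  # standard Q-learning target
    Q[s, a] += ALPHA * (td_target - Q[s, a])
```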
Quantum agents in the Gym: a variational quantum algorithm for deep Q-learning
Andrea Skolik, S. Jerbi, V. Dunjko
Quantum machine learning (QML) has been identified as one of the key fields that could reap advantages from near-term quantum devices, next to optimization and quantum chemistry. Research in this area has focused primarily on variational quantum algorithms (VQAs), and several proposals to enhance supervised, unsupervised and reinforcement learning (RL) algorithms with VQAs have been put forward. Of the three, RL is the least studied, and it is still an open question whether VQAs can be competitive with state-of-the-art classical algorithms based on neural networks (NNs) even on simple benchmark tasks. In this work, we introduce a training method for parametrized quantum circuits (PQCs) that can be used to solve RL tasks for discrete and continuous state spaces based on the deep Q-learning algorithm. We investigate which architectural choices for quantum Q-learning agents are most important for successfully solving certain types of environments by performing ablation studies for a number of different data encoding and readout strategies. We provide insight into why the performance of a VQA-based Q-learning algorithm crucially depends on the observables of the quantum model and show how to choose suitable observables based on the learning task at hand. To compare our model against the classical DQN algorithm, we perform an extensive hyperparameter search of PQCs and NNs with varying numbers of parameters. We confirm that, similar to results in the classical literature, the architectural choices and hyperparameters contribute more to the agents' success in an RL setting than the number of parameters used in the model. Finally, we show when recent separation results between classical and quantum agents for policy gradient RL can be extended to inferring optimal Q-values in restricted families of environments. This work paves the way towards new ideas on how a quantum advantage may be obtained for real-world problems in the future.
205 citations
en
Computer Science, Physics
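The paper's point that performance hinges on the observables can be illustrated with a toy model: Q-values read out as expectation values live in [-1, 1], so a trainable scaling is needed to match the magnitude of optimal Q-values. The single-qubit numpy sketch below is a simplified, assumption-laden stand-in, not the paper's actual circuits.

```python
import numpy as np

# Toy single-qubit "PQC" simulated directly in numpy: encode a scalar state,
# apply one trainable rotation, and read out a Q-value as a trainable scaling
# of the Pauli-Z expectation. Circuit depth, encoding, observable, and
# scaling are all simplified assumptions for illustration.

def ry(angle):
    c, s = np.cos(angle / 2.0), np.sin(angle / 2.0)
    return np.array([[c, -s], [s, c]])

def q_value(state_scalar, w_enc, theta, w_out):
    psi = np.array([1.0, 0.0])             # |0>
    psi = ry(w_enc * state_scalar) @ psi   # data encoding rotation
    psi = ry(theta) @ psi                  # trainable variational layer
    z_exp = psi[0] ** 2 - psi[1] ** 2      # <Z> for a real-amplitude state
    return w_out * z_exp                   # trainable observable scaling
```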
Measurement of Atom Resolvability in CryoEM Maps with Q-scores
G. Pintilie, Kaiming Zhang, Z. Su
et al.
Cryogenic electron microscopy (cryo-EM) maps are now at the point where resolvability of individual atoms can be achieved. However, resolvability is not necessarily uniform throughout the map. We introduce a quantitative parameter to characterize the resolvability of individual atoms in cryo-EM maps, the map Q-score. Q-scores can be calculated for atoms in proteins, nucleic acids, water, ligands and other solvent atoms, using models fitted to or derived from cryo-EM maps. Q-scores can also be averaged to represent larger features such as entire residues and nucleotides. Averaged over entire models, Q-scores correlate very well with the estimated resolution of cryo-EM maps for both protein and RNA. Assuming the models they are calculated from are well fitted to the map, Q-scores can be used as a measure of resolvability in cryo-EM maps at various scales, from entire macromolecules down to individual atoms. Q-score analysis of multiple cryo-EM maps of the same proteins derived from different laboratories confirms the reproducibility of structural features from side chains down to water and ion atoms. Q-scores provide a quantitative metric for resolvability in cryo-EM maps, and can be used at the atom, residue or macromolecule scale.
263 citations
en
Biology, Physics
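In spirit, a Q-score compares the map values sampled around an atom with the profile expected for a perfectly resolved atom. The sketch below computes a normalized cross-correlation against a generic Gaussian reference; the published Q-score calibrates that reference so a score of 1.0 means perfectly resolved, and the `sigma` here is an illustrative assumption.

```python
import numpy as np

def q_score_like(map_values, distances, sigma=0.6):
    """Normalized cross-correlation between map values sampled around an atom
    and the Gaussian falloff expected for a well-resolved atom. `sigma` and
    the profile are illustrative stand-ins for the calibrated reference."""
    ref = np.exp(-0.5 * (distances / sigma) ** 2)  # reference profile
    u = map_values - map_values.mean()
    v = ref - ref.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```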
Mildly Conservative Q-Learning for Offline Reinforcement Learning
Jiafei Lyu, Xiaoteng Ma, Xiu Li
et al.
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative, so that out-of-distribution (OOD) actions are not severely overestimated. However, existing approaches, which penalize unseen actions or regularize toward the behavior policy, are too pessimistic: they suppress the generalization of the value function and hinder performance improvement. This paper explores conservatism that is mild yet sufficient for offline learning without harming generalization. We propose Mildly Conservative Q-learning (MCQ), in which OOD actions are actively trained by assigning them proper pseudo Q-values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization when transferring from offline to online learning and significantly outperforms the baselines. Our code is publicly available at https://github.com/dmksjfl/MCQ.
147 citations
en
Computer Science
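The central trick, training OOD actions toward pseudo targets derived from in-support behavior rather than penalizing them, can be caricatured in a tabular setting. The paper works with continuous actions sampled from a learned behavior policy; the discrete state and action sets below are simplifying assumptions.

```python
import numpy as np

def mcq_pseudo_targets(Q, state, support_actions, ood_actions):
    """Tabular caricature of mild conservatism: instead of heavily penalizing
    OOD actions, train them toward the best in-support value at the same
    state, so they are neither overestimated nor crushed."""
    best_in_support = Q[state, support_actions].max()
    return {a: best_in_support for a in ood_actions}  # pseudo Bellman targets
```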