Results for "Electronic computers. Computer science"

Showing 20 of ~18,051,394 results · from CrossRef, DOAJ, Semantic Scholar

S2 Open Access 2019
Prediction of higher-selectivity catalysts by computer-driven workflow and machine learning

Andrew F. Zahrt, J. Henle, Brennan T. Rose et al.

Predicting catalyst selectivity
Asymmetric catalysis is widely used in chemical research and manufacturing to access just one of two possible mirror-image products. Nonetheless, the process of tuning catalyst structure to optimize selectivity is still largely empirical. Zahrt et al. present a framework for more efficient, predictive optimization. As a proof of principle, they focused on a known coupling reaction of imines and thiols catalyzed by chiral phosphoric acid compounds. By modeling multiple conformations of more than 800 prospective catalysts, and then training machine-learning algorithms on a subset of experimental results, they achieved highly accurate predictions of enantioselectivities. Science, this issue p. eaau5631

A model encompassing multiple conformations of chiral phosphoric acid catalysts accurately predicts enantioselectivities.

INTRODUCTION
The development of new synthetic methods in organic chemistry is traditionally accomplished through empirical optimization. Catalyst design, wherein experimentalists attempt to qualitatively identify correlations between catalyst structure and catalyst efficiency, is no exception. However, this approach is plagued by numerous deficiencies, including the lack of mechanistic understanding of a new transformation, the inherent limitations of human cognitive abilities to find patterns in large collections of data, and the lack of quantitative guidelines to aid catalyst identification. Chemoinformatics provides an attractive alternative to empiricism for several reasons: Mechanistic information is not a prerequisite, catalyst structures can be characterized by three-dimensional (3D) descriptors (numerical representations of molecular properties derived from the 3D molecular structure) that quantify the steric and electronic properties of thousands of candidate molecules, and the suitability of a given catalyst candidate can be quantified by comparing its properties with a computationally derived model trained on experimental data. The ability to accurately predict a selective catalyst by using a set of less than optimal data remains a major goal for machine learning with respect to asymmetric catalysis. We report a method to achieve this goal and propose a more efficient alternative to traditional catalyst design.

RATIONALE
The workflow we have created consists of the following components: (i) construction of an in silico library comprising a large collection of conceivable, synthetically accessible catalysts derived from a particular scaffold; (ii) calculation of relevant chemical descriptors for each scaffold; (iii) selection of a representative subset of the catalysts [this subset is termed the universal training set (UTS) because it is agnostic to reaction or mechanism and thus can be used to optimize any reaction catalyzed by that scaffold]; (iv) collection of the training data; and (v) application of machine learning methods to generate models that predict the enantioselectivity of each member of the in silico library. These models are evaluated with an external test set of catalysts (predicting selectivities of catalysts outside of the training data). The validated models can then be used to select the optimal catalyst for a given reaction.

RESULTS
To demonstrate the viability of our method, we predicted reaction outcomes with substrate combinations and catalysts different from the training data and simulated a situation in which highly selective reactions had not been achieved. In the first demonstration, a model was constructed by using support vector machines and validated with three different external test sets. The first test set evaluated the ability of the model to predict the selectivity of only reactions forming new products with catalysts from the training set. The model performed well, with a mean absolute deviation (MAD) of 0.161 kcal/mol. Next, the same model was used to predict the selectivity of an external test set of catalysts with substrate combinations from the training set. The performance of the model was still highly accurate, with a MAD of 0.211 kcal/mol. Lastly, reactions forming new products with the external test catalysts were predicted with a MAD of 0.236 kcal/mol. In the second study, no reactions with selectivity above 80% enantiomeric excess were used as training data. Deep feed-forward neural networks accurately reproduced the experimental selectivity data, successfully predicting the most selective reactions. More notably, the general trends in selectivity, on the basis of average catalyst selectivity, were correctly identified. Despite omitting about half of the experimental free energy range from the training data, we could still make accurate predictions in this region of selectivity space.

CONCLUSION
The capability to predict selective catalysts has the potential to change the way chemists select and optimize chiral catalysts from an empirically guided to a mathematically guided approach.

Figure: Chemoinformatics-guided optimization protocol. (A) Generation of a large in silico library of catalyst candidates. (B) Calculation of robust chemical descriptors. (C) Selection of a UTS. (D) Acquisition of experimental selectivity data. (E) Application of machine learning to use moderate- to low-selectivity reactions to predict high-selectivity reactions. R, any group; X, O or S; Y, OH, SH, or NHTf; PC, principal component; ΔΔG, mean selectivity.

Catalyst design in asymmetric reaction development has traditionally been driven by empiricism, wherein experimentalists attempt to qualitatively recognize structural patterns to improve selectivity. Machine learning algorithms and chemoinformatics can potentially accelerate this process by recognizing otherwise inscrutable patterns in large datasets. Herein we report a computationally guided workflow for chiral catalyst selection using chemoinformatics at every stage of development. Robust molecular descriptors that are agnostic to the catalyst scaffold allow for selection of a universal training set on the basis of steric and electronic properties. This set can be used to train machine learning methods to make highly accurate predictive models over a broad range of selectivity space. Using support vector machines and deep feed-forward neural networks, we demonstrate accurate predictive modeling in the chiral phosphoric acid–catalyzed thiol addition to N-acylimines.
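The validation protocol above fits a support vector machine to catalyst descriptors and scores held-out predictions by mean absolute deviation (MAD). A minimal sketch of that evaluation step, using scikit-learn's SVR on entirely synthetic descriptors and selectivities (the descriptor count, kernel, and data here are illustrative assumptions, not the authors' actual pipeline):

```python
# Hypothetical sketch: regress selectivity (as a free-energy difference) on
# catalyst descriptors with an SVM, then report MAD on a held-out test set.
# All data below are synthetic stand-ins for computed 3D descriptors.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))  # 12 steric/electronic descriptors per catalyst
ddG = X @ rng.normal(size=12) * 0.1 + rng.normal(scale=0.05, size=200)  # toy ΔΔG (kcal/mol)

X_tr, X_te, y_tr, y_te = train_test_split(X, ddG, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)

mad = np.mean(np.abs(model.predict(X_te) - y_te))  # mean absolute deviation
print(f"MAD = {mad:.3f} kcal/mol")
```

The same train/test split would correspond to the paper's external test sets: catalysts (or products) held entirely outside the training data.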

468 citations · en · Medicine, Computer Science
DOAJ Open Access 2026
A cascaded classification approach using transfer learning and feature engineering for improved breast cancer classification

Chokri Ferkous, Ouissal Fadel, Abderrahmane Kefali et al.

The primary objective of this study is to design a cascaded classification framework that integrates deep-learning representations with handcrafted and clinical features to enhance the reliability and accuracy of breast cancer detection in mammographic screening. A multi-source mammography dataset comprising four databases was used to ensure diversity and reduce bias. The proposed system operates in two stages. In the first stage, transfer learning models (VGG16, ResNet50, and EfficientNet_B0) were evaluated using ROC-AUC, PR-AUC, calibration curves, and bootstrap confidence intervals. EfficientNet_B0, which achieved the best balance between discrimination and calibration, was selected as the feature extractor. In the second stage, the malignancy probability was combined with Haralick texture features, patient age, and breast density, and classified using SVM, Random Forest, MLP, Decision Tree, and Logistic Regression. Model robustness was verified through multi-run experiments (five random seeds) and subgroup analyses by age and density. Among the CNN models, EfficientNet_B0 yielded the best performance (accuracy = 0.9438, ROC-AUC = 0.944, PR-AUC = 0.960). In the second stage, although Random Forest achieved the highest accuracy (0.9556 ± 0.002), SVM obtained the highest mean ROC-AUC (0.980 ± 0.001) with stable accuracy (0.9539 ± 0.001) and the most significant p-values, indicating superior robustness and generalization. The proposed cascaded framework effectively combines deep, handcrafted, and clinical features to improve mammogram classification performance. The SVM-based model demonstrates strong calibration, stability, and subgroup consistency, highlighting its potential for deployment in computer-aided mammography screening systems that assist radiologists in early breast cancer detection.
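The second stage described above concatenates the stage-1 malignancy probability with texture and clinical features before classical classification. A hedged sketch of that fusion step, with synthetic values standing in for the EfficientNet_B0 probability, Haralick statistics, and clinical fields (feature sizes and labels are illustrative assumptions):

```python
# Illustrative second-stage fusion: [CNN probability | Haralick texture |
# age | breast density] -> classical classifiers, as in the cascade above.
# All feature values are synthetic; labels are derived from the toy data.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
cnn_prob = rng.uniform(size=(n, 1))        # stage-1 malignancy probability
haralick = rng.normal(size=(n, 13))        # 13 Haralick texture statistics
age = rng.integers(30, 90, size=(n, 1))    # patient age
density = rng.integers(1, 5, size=(n, 1))  # breast density category
X = np.hstack([cnn_prob, haralick, age, density])
y = (cnn_prob[:, 0] + 0.1 * rng.normal(size=n) > 0.5).astype(int)  # toy labels

for clf in (SVC(probability=True), RandomForestClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(type(clf).__name__, f"accuracy = {acc:.3f}")
```

The paper's multi-seed runs would wrap this cross-validation in a loop over random seeds and report mean ± standard deviation.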

Electronic computers. Computer science
DOAJ Open Access 2026
Measuring perceived physical fidelity in virtual reality and virtual environments

Bree McEwan, Clarice Wu, Harris Yang et al.

As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures related to perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. Often in computer science and other fields interested in VR, this variable is measured as reaction time within the system. However, for social scientific VR scholars, it can be important to understand how much the user perceives the environment to have physical fidelity. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceptual physical fidelity across three separate samples, as well as delineating the conceptual and operational differences between physical fidelity, immersion, and spatial presence.

Electronic computers. Computer science
S2 Open Access 2017
Current understanding and future research directions at the onset of the next century of sintering science and technology

R. Bordia, Suk-joong L. Kang, E. Olevsky

Sintering and the accompanying microstructural evolution are inarguably the most important steps in the processing of ceramics and hard metals. In this process, an ensemble of particles is converted into a coherent object of controlled density and microstructure at an elevated temperature (but below the melting point) due to the thermodynamic tendency of the particle system to decrease its total surface and interfacial energy. Building on a long development history as a major technological process, sintering remains among the most viable methods of fabricating novel ceramics, including high surface area structures, nanopowder-based systems, and tailored structural and functional materials. Developing new and perfecting existing sintering techniques is crucial to meet the ever-growing demand for a broad range of technologically significant systems including, for example, fuel and solar cell components, electronic packages and elements for computers and wireless devices, ceramic and metal-based bioimplants, thermoelectric materials, materials for thermal management, and materials for extreme environments. In this study, the current state of the science and technology of sintering is presented. This study is, however, not a comprehensive review of this extremely broad field, and it focuses only on the sintering of ceramics. The fundamentals of sintering, including the thermodynamics and kinetics of solid-state- and liquid-phase-sintered systems, are described. This study concludes that the sintering of amorphous ceramics (glasses) is well understood and that there is excellent agreement between theory and experiment. For crystalline materials, attention is drawn to the effect of the grain boundary and interface structure on sintering and microstructural evolution, areas that are expected to be significant for future studies.
Considerable emphasis is placed on the topics of current research, including the sintering of composites, multilayered systems, microstructure-based models, multiscale models, sintering under external stresses, and innovative and novel sintering approaches, such as field-assisted sintering. This study includes the status of these subfields, the outstanding challenges and opportunities, and the outlook of progress in sintering research. Throughout the manuscript, we highlight the important lessons learned from sintering fundamentals and their implementation in practice.

289 citations · en · Materials Science
S2 Open Access 2024
Machine learning for human emotion recognition: a comprehensive review

Eman M. G. Younis, Someya Mohsen, Essam H. Houssein et al.

Emotion is an interdisciplinary research field investigated by many research areas such as psychology, philosophy, and computing. Emotions influence how we make decisions, plan, reason, and deal with various aspects of life. Automated human emotion recognition (AHER) is a critical research topic in computer science. It can be applied in many applications such as marketing, human–robot interaction, electronic games, E-learning, and more, and it is essential for any application that needs to know the emotional state of a person and act accordingly. Automated methods for recognizing emotions use many modalities, such as facial expressions, written text, speech, and various biosignals, including the electroencephalograph, blood volume pulse, and electrocardiogram. The signals can be used individually (uni-modal) or as a combination of more than one modality (multi-modal). Most of the work presented involves laboratory experiments and personalized models; recent research focuses on in-the-wild experiments and generic models. This study presents a comprehensive review and evaluation of the state-of-the-art methods for AHER employing machine learning from a computer science perspective, along with directions for future research work.

48 citations · en · Computer Science
DOAJ Open Access 2025
Quasiperiodicity Protects Quantized Transport in Disordered Systems Without Gaps

Emmanuel Gottlob, Dan S. Borgnia, Robert-Jan Slager et al.

The robustness of topological properties, such as quantized currents, generally depends on the existence of gaps surrounding the relevant energy levels or on symmetry-forbidden transitions. Here, we observe quantized currents that survive the addition of bounded local disorder beyond the closing of the relevant instantaneous energy gaps in a driven Aubry-André-Harper chain, a prototypical model of quasiperiodic systems. We explain the robustness using a local picture in configuration space based on Landau-Zener transitions, which rests on the Anderson localization of the eigenstates. Moreover, we propose a protocol, directly realizable in, for instance, cold atoms or photonic experiments, that leverages this stability to prepare topological many-body states with high Chern numbers and opens new experimental avenues for the study of both the integer and fractional quantum Hall effects.

Physics, Computer software
DOAJ Open Access 2025
Quantum causal inference with extremely light touch

Xiangjing Liu, Yixian Qiu, Oscar Dahlsten et al.

Abstract We give a causal inference scheme using quantum observations alone for a case with both temporal and spatial correlations: a bipartite quantum system with measurements at two times. The protocol determines compatibility with five causal structures distinguished by the direction of causal influence and whether there are initial correlations. We derive and exploit a closed-form expression for the spacetime pseudo-density matrix (PDM) for many times and qubits. This PDM can be determined by light-touch coarse-grained measurements alone. We prove that if there is no signalling between two subsystems, the reduced state of the PDM cannot have negativity, regardless of initial spatial correlations. In addition, the protocol exploits the time asymmetry of the PDM to determine the temporal order. The protocol succeeds for a state with coherence undergoing a fully decohering channel. Thus coherence in the channel is not necessary for the quantum advantage of causal inference from observations alone.

Physics, Electronic computers. Computer science
DOAJ Open Access 2025
Empirical Evaluation of Invariances in Deep Vision Models

Konstantinos Keremis, Eleni Vrochidou, George A. Papakostas

The ability of deep learning models to maintain consistent performance under image transformations (termed invariance) is critical for reliable deployment across diverse computer vision applications. This study presents a comprehensive empirical evaluation of modern convolutional neural networks (CNNs) and vision transformers (ViTs) with respect to four fundamental types of image invariance: blur, noise, rotation, and scale. We analyze a curated selection of thirty models across three common vision tasks (object localization, recognition, and semantic segmentation), using benchmark datasets including COCO, ImageNet, and a custom segmentation dataset. Our experimental protocol introduces controlled perturbations to test model robustness and employs task-specific metrics, such as mean Intersection over Union (mIoU) and classification accuracy (Acc), to quantify performance degradation. Results indicate that while ViTs generally outperform CNNs under blur and noise corruption in recognition tasks, both model families exhibit significant vulnerabilities to rotation and extreme scale transformations. Notably, segmentation models demonstrate higher resilience to geometric variations, with SegFormer and Mask2Former emerging as the most robust architectures. These findings challenge prevailing assumptions regarding model robustness and provide actionable insights for designing vision systems capable of withstanding real-world input variability.
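The protocol of applying controlled perturbations at graded severity and recording the accuracy drop can be sketched as follows. The model here is a trivial stub and the severity scales are illustrative assumptions; in the study the models are CNNs/ViTs evaluated on COCO and ImageNet images:

```python
# Hedged sketch of an invariance-evaluation loop: perturb each image with
# blur, noise, or rotation at increasing severity and measure agreement
# with the clean-image predictions. The "model" is a placeholder stub.
import numpy as np
from scipy import ndimage

def perturb(img, kind, severity):
    if kind == "blur":
        return ndimage.gaussian_filter(img, sigma=severity)
    if kind == "noise":
        return img + np.random.default_rng(0).normal(scale=0.03 * severity, size=img.shape)
    if kind == "rotation":
        return ndimage.rotate(img, angle=15 * severity, reshape=False)
    raise ValueError(kind)

def stub_model(img):
    # placeholder classifier: thresholds mean intensity
    return int(img.mean() > 0.5)

images = [np.full((32, 32), v) for v in (0.2, 0.8, 0.3, 0.9)]
labels = [stub_model(im) for im in images]  # clean predictions as reference

for kind in ("blur", "noise", "rotation"):
    for severity in (1, 2, 3):
        preds = [stub_model(perturb(im, kind, severity)) for im in images]
        acc = np.mean([p == l for p, l in zip(preds, labels)])
        print(f"{kind} severity={severity}: accuracy={acc:.2f}")
```

For segmentation, the inner metric would be mIoU between clean and perturbed predictions rather than label agreement.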

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2024
Using LLMs for Augmenting Hierarchical Agents with Common Sense Priors

Bharat Prakash, Tim Oates, Tinoosh Mohsenin

Solving long-horizon, temporally-extended tasks using Reinforcement Learning (RL) is challenging, compounded by the common practice of learning without prior knowledge (or tabula rasa learning). Humans can generate and execute plans with temporally-extended actions and quickly learn to perform new tasks because we almost never solve problems from scratch. We want autonomous agents to have this same ability. Recently, LLMs have been shown to encode a tremendous amount of knowledge about the world and to perform impressive in-context learning and reasoning. However, using LLMs to solve real world problems is hard because they are not grounded in the current task. In this paper we exploit the planning capabilities of LLMs while using RL to provide learning from the environment, resulting in a hierarchical agent that uses LLMs to solve long-horizon tasks. Instead of completely relying on LLMs, they guide a high-level policy, making learning significantly more sample efficient. This approach is evaluated in simulation environments such as MiniGrid, SkillHack, and Crafter, and on a real robot arm in block manipulation tasks. We show that agents trained using our approach outperform other baselines methods and, once trained, don't need access to LLMs during deployment.

Technology, Electronic computers. Computer science
DOAJ Open Access 2024
Learning and Evolution: Factors Influencing an Effective Combination

Paolo Pagliuca

(1) Background: The mutual relationship between evolution and learning is a controversial topic among the artificial intelligence and neuro-evolution communities. After more than three decades, there is still no common agreement on the matter. (2) Methods: In this paper, the author investigates whether combining learning and evolution permits finding better solutions than those discovered by evolution alone. In further detail, the author presents a series of empirical studies that highlight specific conditions determining the success of such a combination. Results are obtained in five qualitatively different domains: (i) the 5-bit parity task, (ii) the double-pole balancing problem, (iii) the Rastrigin, Rosenbrock and Sphere optimization functions, (iv) a robot foraging task and (v) a social foraging problem. The first three tasks represent benchmark problems in the field of evolutionary computation. (3) Results and discussion: The outcomes indicate that the effect of learning on evolution depends on the nature of the problem. Specifically, when the problem implies limited or absent agent–environment interactions, learning is beneficial for evolution, especially with the introduction of noise during the learning and selection processes. Conversely, when agents are embodied and actively interact with the environment, learning does not provide advantages, and the addition of noise is detrimental. Finally, the absence of stochasticity in the experienced conditions is paramount for the effectiveness of the combination, and the length of the learning process must be fine-tuned based on the considered task.
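The combination studied above can be illustrated with a toy evolutionary loop in which each genotype undergoes a short "lifetime learning" phase (noisy hill-climbing) before selection. The Sphere function stands in for the benchmark tasks, and every hyperparameter here is an illustrative assumption, not a value from the paper:

```python
# Toy sketch: evolution with lifetime learning. Each individual is refined
# by a few noisy hill-climbing steps, selection then acts on the learned
# (post-learning) fitness. Minimizes the Sphere function sum(x^2).
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))  # fitness to minimize

def learn(x, steps=5, noise=0.05):
    # lifetime learning: keep random perturbations that improve fitness
    for _ in range(steps):
        cand = x + rng.normal(scale=noise, size=x.shape)
        if sphere(cand) < sphere(x):
            x = cand
    return x

pop = [rng.normal(size=5) for _ in range(20)]
for gen in range(50):
    learned = [learn(ind) for ind in pop]      # evaluate after learning
    ranked = sorted(learned, key=sphere)[:10]  # truncation selection
    # each survivor produces two mutated offspring (population size 20)
    pop = [p + rng.normal(scale=0.1, size=5) for p in ranked for _ in range(2)]

best = min(pop, key=sphere)
print(f"best fitness after 50 generations: {sphere(best):.4f}")
```

Passing the learned phenotype back into the next generation, as here, is a Lamarckian variant; a Darwinian variant would select on learned fitness but mutate the unlearned genotypes.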

Electronic computers. Computer science
S2 Open Access 2019
MetaPred: Meta-Learning for Clinical Risk Prediction with Limited Patient Electronic Health Records

Xi Sheryl Zhang, Fengyi Tang, H. Dodge et al.

In recent years, large amounts of health data, such as patient Electronic Health Records (EHR), have become readily available. This provides an unprecedented opportunity for knowledge discovery and data mining algorithms to extract insights from them, which can later help improve the quality of care delivery. Predictive modeling of clinical risks from patient EHR, including in-hospital mortality, hospital readmission, chronic disease onset, and condition exacerbation, is one of the health data analytics problems that has attracted wide interest. The problem is not only important in clinical settings but also challenging because of EHR characteristics such as sparsity, irregularity, and temporality. Unlike applications in other domains such as computer vision and natural language processing, the data samples in medicine (patients) are relatively limited, which makes it difficult to build effective predictive models, especially complicated ones such as deep learning models. In this paper, we propose MetaPred, a meta-learning framework for clinical risk prediction from longitudinal patient EHR. In particular, in order to predict the target risk with limited data samples, we train a meta-learner from a set of related risk prediction tasks, which learns how a good predictor is trained. The meta-learner can then be directly used for target risk prediction, and the limited available samples in the target domain can be used to further fine-tune model performance. The effectiveness of MetaPred is tested on a real patient EHR repository from Oregon Health & Science University. We demonstrate that, with Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) as base predictors, MetaPred achieves much better performance for predicting a target risk with low resources compared with a predictor trained on the limited samples available for that risk alone.
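The idea of meta-training an initialization on source risk-prediction tasks and then fine-tuning on a low-resource target task can be sketched with a Reptile-style update on toy logistic-regression tasks. This is a hypothetical illustration of the general scheme, not the authors' MetaPred implementation (which uses CNN/RNN base predictors); task construction, dimensions, and learning rates are all assumptions:

```python
# Minimal Reptile-style meta-learning sketch: learn initial weights across
# several source tasks so that a few gradient steps adapt them to a small
# target task. Tasks are synthetic logistic-regression problems.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # number of (synthetic) EHR-derived features

def make_task():
    w_true = rng.normal(size=d)
    X = rng.normal(size=(64, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

def sgd_steps(w, X, y, lr=0.1, steps=20):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))       # sigmoid prediction
        w = w - lr * X.T @ (p - y) / len(y)  # logistic-loss gradient step
    return w

meta_w = np.zeros(d)
for _ in range(100):                  # meta-training over source tasks
    X, y = make_task()
    adapted = sgd_steps(meta_w.copy(), X, y)
    meta_w += 0.1 * (adapted - meta_w)  # Reptile meta-update

# Fine-tune on a small target task, mimicking low-resource risk prediction
X_t, y_t = make_task()
w_target = sgd_steps(meta_w.copy(), X_t[:16], y_t[:16])
acc = np.mean(((X_t[16:] @ w_target) > 0) == y_t[16:])
print(f"target-task accuracy after fine-tuning: {acc:.2f}")
```

The baseline in the abstract corresponds to running `sgd_steps` from scratch on the 16 target samples, without the meta-learned initialization.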

116 citations · en · Computer Science, Medicine
S2 Open Access 2021
Quantum Kernels for Real-World Predictions Based on Electronic Health Records

Z. Krunic, Frederik F. Flöther, G. Seegan et al.

Research on near-term quantum machine learning has explored how classical machine learning algorithms endowed with access to quantum kernels (similarity measures) can outperform their purely classical counterparts. Although theoretical work has shown a provable advantage on synthetic data sets, no work done to date has studied empirically whether the quantum advantage is attainable and with what data. In this article, we report the first systematic investigation of empirical quantum advantage (EQA) in healthcare and life sciences and propose an end-to-end framework to study EQA. We selected electronic health records data subsets and created a configuration space of 5–20 features and 200–300 training samples. For each configuration coordinate, we trained classical support vector machine models based on radial basis function kernels and quantum models with custom kernels using an IBM quantum computer, making this one of the largest quantum machine learning experiments to date. We empirically identified regimes where quantum kernels could provide an advantage and introduced a terrain ruggedness index, a metric to help quantitatively estimate how well a given model will perform. The generalizable framework introduced here represents a key step toward a priori identification of data sets where quantum advantage could exist.
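On the classical side, the workflow above amounts to feeding a precomputed kernel (Gram) matrix to an SVM, exactly the plumbing one needs when the kernel entries are estimated on quantum hardware. A hedged sketch where a classical RBF kernel stands in for the quantum kernel, on synthetic data sized to the configuration space described (the labels and features are assumptions):

```python
# Sketch: SVM with a precomputed kernel matrix, as used when kernel values
# come from an external (e.g., quantum) estimator. An RBF kernel on
# synthetic data stands in for the quantum kernel here.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))       # ~200 samples, 10 features
y_train = (X_train[:, 0] > 0).astype(int)  # toy labels
X_test = rng.normal(size=(50, 10))
y_test = (X_test[:, 0] > 0).astype(int)

K_train = rbf_kernel(X_train, X_train)  # stand-in for a quantum Gram matrix
K_test = rbf_kernel(X_test, X_train)    # test-vs-train similarities

clf = SVC(kernel="precomputed").fit(K_train, y_train)
acc = clf.score(K_test, y_test)
print(f"accuracy = {acc:.3f}")
```

Swapping the kernel source, classical `rbf_kernel` versus hardware-estimated quantum fidelities, while holding the SVM fixed is what makes the classical/quantum comparison in such studies well-controlled.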

48 citations · en · Computer Science, Physics

Page 10 of 902,570