Jing Pei, Lei Deng, Sen Song et al.
Results for "Electronic computers. Computer science"
Showing 20 of ~18052384 results · from DOAJ, CrossRef, arXiv, Semantic Scholar
Arvind Narayanan, Joseph Bonneau, E. Felten et al.
S. Laschat, Angelika Baro, Nelli Steinke et al.
Dirk P. Kroese, T. Brereton, T. Taimre et al.
Feng-Qin Chen, Yu-Fei Leng, Jian-Feng Ge et al.
Background: Virtual reality (VR) is the use of computer technology to create an interactive three-dimensional (3D) world, which gives users a sense of spatial presence. In nursing education, VR has been used to help optimize teaching and learning processes. Objective: The purpose of this study was to evaluate the effectiveness of VR in nursing education in the areas of knowledge, skills, satisfaction, confidence, and performance time. Methods: We conducted a meta-analysis of the effectiveness of VR in nursing education based on the Cochrane methodology. An electronic literature search of the Cochrane Library, Web of Science, PubMed, Embase, and CINAHL (Cumulative Index to Nursing and Allied Health Literature), covering records up to December 2019, was conducted to identify studies that reported the effectiveness of VR on knowledge, skills, satisfaction, confidence, and performance time. Study selection and data extraction were carried out by two independent reviewers. The methodological quality of the selected studies was determined using the Cochrane criteria for risk-of-bias assessment. Results: A total of 12 studies, including 821 participants, were selected for the final analysis. We found that VR was more effective than the control conditions in improving knowledge (standardized mean difference [SMD]=0.58, 95% CI 0.41-0.75, P<.001, I²=47%). However, there was no difference between VR and the control conditions in skills (SMD=0.01, 95% CI –0.24 to 0.26, P=.93, I²=37%), satisfaction (SMD=0.01, 95% CI –0.79 to 0.80, P=.99, I²=86%), confidence (SMD=0.00, 95% CI –0.28 to 0.27, P=.99, I²=0%), and performance time (SMD=–0.55, 95% CI –2.04 to 0.94, P=.47, I²=97%). Conclusions: The results of this study suggest that VR can effectively improve knowledge in nursing education, but it was not more effective than other education methods in the areas of skills, satisfaction, confidence, and performance time. Further rigorous studies with larger sample sizes are warranted to confirm these results.
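As a pointer to how pooled figures like these are produced, below is a minimal Python sketch of random-effects pooling of standardized mean differences under the DerSimonian-Laird model. The per-study values are made up for illustration, not data from the review.

```python
import numpy as np

# Hypothetical per-study SMDs and their variances (NOT data from the review).
smd = np.array([0.45, 0.70, 0.52, 0.61])
var = np.array([0.04, 0.06, 0.05, 0.03])

# Fixed-effect weights and Cochran's Q statistic.
w = 1.0 / var
q = np.sum(w * (smd - np.sum(w * smd) / np.sum(w)) ** 2)
df = len(smd) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled SMD, and 95% CI.
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * smd) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

# I^2 quantifies the share of variability due to heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"SMD={pooled:.2f}, 95% CI {lo:.2f} to {hi:.2f}, I2={i2:.0f}%")
```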
Stephen K. Reed
The information sciences provide tools for deductive reasoning to supplement the classifications made by the data sciences and the explanations made by explanatory models. Formal ontologies provide a unifying framework for organizing definitions, research findings, and theories. One of the primary purposes of a formal ontology is to use deductive reasoning to answer questions submitted to a computer. A general or upper ontology is required to integrate more specialized domain ontologies. The Suggested Upper Merged Ontology is particularly helpful because it consists of 20,000 concepts with connections to both WordNet and FrameNet. WordNet is an electronic dictionary, while FrameNet captures co-occurrences of words to provide a thematic context in which words occur. Together, WordNet, FrameNet, and the Suggested Upper Merged Ontology provide an integration of three major information science tools.
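For readers who want to try the lookup side of this stack, here is a minimal sketch of querying WordNet with NLTK, assuming `nltk` is installed and the WordNet corpus has been downloaded; FrameNet and SUMO require their own resources and are not shown.

```python
# Minimal WordNet lookup via NLTK; requires: pip install nltk
# plus a one-time download: import nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

# Each synset groups words that share one sense of a concept.
for synset in wn.synsets("computer"):
    print(synset.name(), "-", synset.definition())

# Hypernyms walk up the is-a taxonomy, the kind of links an upper
# ontology such as SUMO is used to unify across domains.
machine_sense = wn.synset("computer.n.01")
print([h.name() for h in machine_sense.hypernyms()])
```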
Vera von Burg, G. Low, Thomas Häner et al.
The quantum computation of electronic energies can break the curse of dimensionality that plagues many-particle quantum mechanics. It is for this reason that a universal quantum computer has the potential to fundamentally change computational chemistry and materials science, areas in which strong electron correlations present severe hurdles for traditional electronic structure methods. Here, we present a state-of-the-art analysis of accurate energy measurements on a quantum computer for computational catalysis, using improved quantum algorithms with more than an order of magnitude improvement over the best previous algorithms. As a prototypical example of local catalytic chemical reactivity we consider the case of a ruthenium catalyst that can bind, activate, and transform carbon dioxide to the high-value chemical methanol. We aim at accurate resource estimates for the quantum computing steps required for assessing the electronic energy of key intermediates and transition states of its catalytic cycle. In particular, we present new quantum algorithms for double-factorized representations of the four-index integrals that can significantly reduce the computational cost over previous algorithms, and we discuss the challenges of increasing active space sizes to accurately deal with dynamical correlations. We address the requirements for future quantum hardware in order to make a universal quantum computer a successful and reliable tool for quantum computing enhanced computational materials science and chemistry, and identify open questions for further research.
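To make the double-factorization idea concrete, here is a schematic numpy illustration on a toy tensor: the (pq|rs) matrix is eigendecomposed into leaves, and each leaf is diagonalized in turn. The random matrix, sizes, and cutoffs are placeholders for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of orbitals (toy size)

# Symmetric positive semidefinite stand-in for the ERI tensor (pq|rs),
# reshaped to an n^2 x n^2 matrix V.
a = rng.standard_normal((n * n, n * n))
v = a @ a.T

# First factorization: V = sum_t L_t L_t^T, keeping eigenpairs above a cutoff.
eigval, eigvec = np.linalg.eigh(v)
keep = eigval > 1e-8 * eigval.max()
leaves = [np.sqrt(lam) * vec.reshape(n, n)
          for lam, vec in zip(eigval[keep], eigvec[:, keep].T)]

# Second factorization: diagonalize each leaf, L_t = U_t diag(f_t) U_t^T.
# The doubly factorized form is what quantum algorithms then block-encode.
for t, leaf in enumerate(leaves):
    sym = 0.5 * (leaf + leaf.T)  # symmetrize the toy leaf
    f, u = np.linalg.eigh(sym)
    print(f"leaf {t}: {np.sum(np.abs(f) > 1e-8)} retained eigenvalues")
```

In practice the savings come from truncating both eigendecompositions, which shrinks the number of terms the quantum algorithm must block-encode.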
Saurabh Jain, M. Sayed, Mallika S. Shetty et al.
Newly introduced provisional crown and fixed dental prosthesis (FDP) materials should exhibit the good physical and mechanical properties necessary to serve the purpose of their fabrication. The aim of this systematic literature review and meta-analysis is to evaluate the articles comparing the physical and mechanical properties of 3D-printed provisional crown and FDP resin materials with CAD/CAM (Computer-Aided Designing/Computer-Aided Manufacturing) milled and conventional provisional resins. Indexed English literature up to April 2022 was systematically searched for articles using the following electronic databases: MEDLINE-PubMed, Web of Science (core collection), Scopus, and the Cochrane Library. This systematic review was structured based on the guidelines given by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The focused PICO/PECO (Participant, Intervention/exposure, Comparison, Outcome) question was: ‘Do 3D-printed (P) provisional crowns and FDPs (I) have similar physical and mechanical properties (O) when compared to CAD/CAM milled and other conventionally fabricated ones (C)’. Out of eight hundred and ninety-six titles, which were recognized after a primary search, twenty-five articles were included in the qualitative analysis, and their quality analysis was performed using the modified CONSORT scale. Due to the heterogeneity of the studies, only twelve articles were included for quantitative analysis. Within the limitations of this study, it can be concluded that 3D-printed provisional crown and FDP resin materials have superior mechanical properties but inferior physical properties compared to CAD/CAM milled and other conventionally fabricated ones. Three-dimensionally printed provisional crowns and FDP materials can be used as an alternative to conventional and CAD/CAM milled long-term provisional materials.
Chenyu Wu, Nathaniel Corrigan, Chern-Hooi Lim et al.
Over the past decade, the use of photocatalysts (PCs) in controlled polymerization has brought new opportunities in sophisticated macromolecular synthesis. However, the selection of PCs in these systems has typically been based on laborious trial-and-error strategies. To tackle this limitation, computer-guided rational design of PCs based on knowledge of structure-property-performance relationships has emerged. These rational strategies provide rapid and economical methodologies for tuning the performance and functionality of a polymerization system, thus providing further opportunities for polymer science. This review provides an overview of PCs employed in photocontrolled polymerization systems and summarizes their progression from early systems to the current state of the art. Background theories on electronic transitions are also introduced to establish the structure-property-performance relationships from the perspective of quantum chemistry. Typical examples of each type of structure-property relationship are then presented to guide the future design of PCs for photocontrolled polymerization.
Ahmed Métwalli, Fares Fathy, Esraa Khatab et al.
Ant Colony Optimization (ACO) is a widely adopted metaheuristic for solving complex combinatorial problems; however, performance is often deteriorated by premature convergence and limited exploration in later iterations. Eclipse Randomness–Ant Colony Optimization (ER-ACO) is introduced as a lightweight ACO variant in which an exponentially fading randomness factor is integrated into the state-transition mechanism. Strong early-stage exploration is enabled, and a smooth transition to exploitation is induced, improving convergence behavior and solution quality. Low computational overhead is maintained while exploration and exploitation are dynamically balanced. ER-ACO is positioned within real-time healthcare logistics, with a focus on Emergency Medical Services (EMS) routing and hospital resource scheduling, where rapid and adaptive decision-making is critical for patient outcomes. These systems face dynamic constraints such as fluctuating traffic conditions, urgent patient arrivals, and limited medical resources. Experimental evaluation on benchmark instances indicates that solution cost is reduced by up to 14.3% relative to the slow-fade configuration (γ = 1) in the 20-city TSP sweep, and faster stabilization is indicated under the same iteration budget. Additional comparisons against Standard ACO on TSP/QAP benchmarks indicate consistent improvements, with unchanged asymptotic complexity and negligible measured overhead at the tested scales. TSP/QAP benchmarks are used as controlled proxies to isolate algorithmic behavior; EMS deployment is treated as a motivating application pending validation on EMS-specific datasets and formulations. These results highlight ER-ACO’s potential as a lightweight optimization engine for smart healthcare systems, enabling real-time deployment on edge devices for ambulance dispatch, patient transfer, and operating room scheduling.
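A minimal sketch of the fading-randomness transition rule described above: with an exponentially decaying probability the ant moves at random, otherwise it follows the standard pheromone/heuristic rule. The parameter names (r0, gamma) and the exact fade schedule are assumptions for illustration, not the paper's formulation.

```python
import math
import random

def er_aco_next_city(current, unvisited, pheromone, dist,
                     iteration, r0=0.9, gamma=0.05, alpha=1.0, beta=2.0):
    """Pick the next city; randomness fades as r0 * exp(-gamma * iteration).

    Illustrative sketch of the fading-randomness idea; parameter names
    and the fade schedule are assumptions, not the paper's exact spec.
    """
    if random.random() < r0 * math.exp(-gamma * iteration):
        return random.choice(list(unvisited))  # early-stage exploration

    # Standard ACO rule: weight by pheromone^alpha * (1/distance)^beta.
    weights = [pheromone[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in unvisited]
    total = sum(weights)
    return random.choices(list(unvisited), [w / total for w in weights])[0]
```

Because the random branch decays smoothly to zero, later iterations reduce to plain ACO exploitation, which is consistent with the unchanged asymptotic complexity the authors report.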
Cheng Dewen, Qiwei Wang, Yue Liu et al.
Augmented reality head-mounted displays (AR-HMDs) enable users to see real images of the outside world and visualize virtual information generated by a computer at any time and from any location, making them useful for various applications. The manufacture of AR-HMDs combines the fields of optical engineering, optical materials, optical coating, precision manufacturing, electronic science, computer science, physiology, ergonomics, etc. This paper primarily focuses on the optical engineering of AR-HMDs. Optical combiners and display devices are used to combine real-world and virtual-world objects that are visible to the human eye. In this review, existing AR-HMD optical solutions employed for optical combiners are divided into three categories: optical solutions based on macro-, micro-, and nanooptics. The physical principles, optical structure, performance parameters, and manufacturing process of different types of AR-HMD optical solutions are subsequently analyzed. Moreover, their advantages and disadvantages are investigated and evaluated. In addition, the bottlenecks and future development trends in the case of AR-HMD optical solutions are discussed.
Li-sang Liu, Jia-feng Lin, Jinxin Yao et al.
School of Electronic, Electrical Engineering and Physics, Fujian University of Technology, Fuzhou 350118, China; Fujian Key Laboratory of A.E.D, Fujian University of Technology, Fuzhou 350118, China; National Demonstration Centre for Experimental Electronic Information and Electrical Technology Education, Fujian University of Technology, Fuzhou 350118, China; School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK; School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA 5005, Australia
Ardian Kelmendi, George Pappas
The automotive industry increasingly relies on 3D modeling technologies to design and manufacture vehicle components with high precision. One critical challenge is optimizing the placement of latches that secure the dashboard side paneling, as these placements vary between models and must maintain minimal tolerance for movement to ensure durability. While generative artificial intelligence (AI) has advanced rapidly in generating text, images, and video, its application to creating accurate 3D CAD models remains limited. This paper proposes a novel framework that integrates a PointNet deep learning model with Python-based CAD automation to predict optimal clip placements and surface thickness for dashboard side panels. Unlike prior studies that focus on general-purpose CAD generation, this work specifically targets automotive interior components and demonstrates a practical method for automating part design. The approach involves generating placement data—potentially via generative AI—and importing it into the CAD environment to produce fully parameterized 3D models. Experimental results show that the prototype succeeded on six of eight test surfaces (a 75% success rate), indicating strong potential despite the limited sample size. This research highlights a clear pathway for applying generative AI to part design automation in the automotive sector and offers a foundation for scaling to broader design applications.
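A minimal PyTorch sketch of a PointNet-style regressor mapping a surface point cloud to placement coordinates; the layer sizes and the three-coordinate output are illustrative assumptions, not the authors' model.

```python
# Minimal PointNet-style regressor (PyTorch); sizes and the 3-D
# placement output are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class ClipPlacementNet(nn.Module):
    def __init__(self, out_dim: int = 3):
        super().__init__()
        # Shared per-point MLP: every point is processed identically.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        # Head maps the pooled global feature to placement coordinates.
        self.head = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, out_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, 3, num_points); max pooling over points gives
        # the permutation invariance that makes PointNet suit raw scans.
        feat = self.point_mlp(points).max(dim=2).values
        return self.head(feat)

surface = torch.randn(1, 3, 1024)          # one surface scan, 1024 points
print(ClipPlacementNet()(surface).shape)   # -> torch.Size([1, 3])
```

The predicted coordinates would then be handed to the CAD automation layer to instantiate a parameterized clip at that location.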
Vijayamanikandan Vijayarangan, Harshavardhana A. Uranakara, Francisco E. Hernández–Pérez et al.
Using information theory, this study provides insights into how the construction of the latent space of an autoencoder (AE) through deep neural network (DNN) training finds a smooth (non-stiff) low-dimensional manifold in a stiff dynamical system. Our recent study (Vijayarangan et al. 2023) reported that an AE combined with a neural ODE (NODE) as a surrogate reduced-order model (ROM) for the integration of stiff chemically reacting systems led to a significant reduction in temporal stiffness, and the behavior was attributed to the identification of a slow invariant manifold by the nonlinear projection performed by the AE. The present work offers a fundamental understanding of the mechanism of formation of a non-stiff latent space and of stiffness reduction by employing concepts from information theory and improved mixing. The learning mechanisms of both the encoder and the decoder are explained by plotting the evolution of mutual information and identifying two distinct phases. Subsequently, the density distribution is plotted for the physical and latent variables, which shows the transformation of a rare event in the physical space into a highly likely (more probable) event in the latent space provided by the nonlinear autoencoder. Finally, the nonlinear transformation leading to density redistribution is explained using concepts from information theory and probability.
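A minimal sketch of the nonlinear projection at the heart of this setup: an autoencoder compressing a physical state into a low-dimensional latent vector. The dimensions are placeholders, and the neural-ODE latent integrator is omitted.

```python
# Minimal nonlinear autoencoder (PyTorch); dimensions are placeholders
# and the neural-ODE latent-space integrator from the paper is omitted.
import torch
import torch.nn as nn

n_species, n_latent = 32, 4   # physical state size vs. latent manifold size

encoder = nn.Sequential(nn.Linear(n_species, 64), nn.Tanh(),
                        nn.Linear(64, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                        nn.Linear(64, n_species))

x = torch.randn(128, n_species)                      # batch of physical states
loss = nn.functional.mse_loss(decoder(encoder(x)), x)
loss.backward()                                      # one reconstruction step
```

It is the statistics of the encoder output z = encoder(x) over trajectories, rather than the architecture itself, that the paper analyzes with mutual information and density plots.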
Ritesh Kanchi, Miya Natsuhara, Matt X. Wang
It is critically important to make computing courses accessible for disabled students. This is particularly challenging in large computing courses, which face unique challenges due to the sheer scale of course content and staff. In this experience report, we share our attempts to scale accessibility efforts for a large university-level introductory programming course sequence, with over 3500 enrolled students and 100 teaching assistants (TAs) per year. First, we introduce our approach to auditing and remediating course materials by systematically identifying and resolving accessibility issues. However, remediating content post-hoc is purely reactive and scales poorly. We then discuss two approaches to systems that enable proactive accessibility work. We developed technical systems to manage remediation complexity at scale: redesigning other course content to be web-first and accessible by default, providing alternate accessible views for existing course content, and writing automated tests to receive instant feedback on a subset of accessibility issues. Separately, we established human systems to empower both course staff and students in accessibility best practices: developing and running various TA-targeted accessibility trainings, establishing course-wide accessibility norms, and integrating accessibility topics into core course curriculum. Preliminary qualitative feedback from both staff and students shows increased engagement in accessibility work and accessible technologies. We close by discussing limitations and lessons learned from our work, with advice for others developing similar auditing, remediation, technical, or human systems.
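The report does not name its test tooling; as one illustration of an automated check of the kind described, here is a small pytest-style test that flags images missing alternative text in a rendered course page.

```python
# One illustrative automated accessibility check (the report does not
# specify its tooling): flag <img> tags lacking alt text using
# BeautifulSoup. Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

def missing_alt_text(html: str) -> list[str]:
    """Return the src of every <img> lacking a non-empty alt attribute."""
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img")
            if not img.get("alt", "").strip()]

def test_homework_page_has_alt_text():
    # "homework1.html" is a hypothetical rendered course page.
    html = open("homework1.html").read()
    assert missing_alt_text(html) == []
```

A real audit would need to allow intentionally empty alt attributes on decorative images, which is one reason automated tests can only cover a subset of accessibility issues.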
E. Iraola, M. García-Lorenzo, F. Lordan-Gomis et al.
Digital twins are transforming the way we monitor, analyze, and control physical systems, but designing architectures that balance real-time responsiveness with heavy computational demands remains a challenge. Cloud-based solutions often struggle with latency and resource constraints, while edge-based approaches lack the processing power for complex simulations and data-driven optimizations. To address this problem, we propose the High-Precision High-Performance Computer-enabled Digital Twin (HP2C-DT) reference architecture, which integrates High-Performance Computing (HPC) into the computing continuum. Unlike traditional setups that use HPC only for offline simulations, HP2C-DT makes it an active part of digital twin workflows, dynamically assigning tasks to edge, cloud, or HPC resources based on urgency and computational needs. Furthermore, to bridge the gap between theory and practice, we introduce the HP2C-DT framework, a working implementation that uses COMPSs for seamless workload distribution across diverse infrastructures. We test it in a power grid use case, showing how it reduces communication bandwidth by an order of magnitude through edge-side data aggregation, improves response times by up to 2x via dynamic offloading, and maintains near-ideal strong scaling for compute-intensive workflows across a practical range of resources. These results demonstrate how an HPC-driven approach can push digital twins beyond their current limitations, making them smarter, faster, and more capable of handling real-world complexity.
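A toy Python sketch of the urgency-and-demand routing policy the architecture implies; the thresholds and tier names are assumptions for illustration, and the actual framework delegates scheduling to COMPSs across the continuum.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_s: float    # how soon a result is needed
    core_hours: float    # estimated computational demand

def assign_tier(task: Task) -> str:
    """Toy routing policy: urgency first, then computational weight.

    Thresholds are illustrative assumptions; HP2C-DT delegates the
    actual scheduling to COMPSs across edge, cloud, and HPC resources.
    """
    if task.deadline_s < 1.0:
        return "edge"    # real-time control must stay close to the asset
    if task.core_hours > 100.0:
        return "hpc"     # heavy simulation or data-driven optimization
    return "cloud"       # everything in between

for t in (Task("grid-protection", 0.1, 0.01),
          Task("state-estimation", 30.0, 5.0),
          Task("contingency-sim", 3600.0, 5000.0)):
    print(t.name, "->", assign_tier(t))
```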
Philip M. Johnson, Carleton Moore, Peter Leong et al.
RadGrad is a curriculum initiative implemented via an application that combines features of social networks, degree planners, individual learning plans, and serious games. RadGrad redefines traditional meanings of "progress" and "success" in the undergraduate computer science degree program in an attempt to improve engagement, retention, and diversity. In this paper, we describe the RadGrad Project and report on an evaluation study designed to assess the impact of RadGrad on student engagement, diversity, and retention. We also present opportunities and challenges that result from the use of the system.
Anders Giovanni Møller, Luca Maria Aiello
Large Language Models are expressive tools that enable complex tasks of text understanding within Computational Social Science. Their versatility, while beneficial, poses a barrier to establishing standardized best practices within the field. To bring clarity to the value of different strategies, we present an overview of the performance of modern LLM-based classification methods on a benchmark of 23 social knowledge tasks. Our results point to three best practices: select models with larger vocabularies and pre-training corpora; avoid simple zero-shot prompting in favor of AI-enhanced prompting; and fine-tune on task-specific data, considering more complex forms of instruction-tuning on multiple datasets only when training data is abundant.
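For context, the zero-shot baseline the authors advise moving beyond can be reproduced in a few lines with the Hugging Face transformers pipeline; the model choice and the toy label set below are illustrative, not the benchmark's.

```python
# Zero-shot classification baseline via Hugging Face transformers;
# the model choice is illustrative. Requires: pip install transformers
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "This policy will hurt working families the most.",
    candidate_labels=["political", "not political"],  # toy social task
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

The paper's recommendation is to prefer richer prompting strategies or fine-tuning on task-specific data over this simple baseline.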
Tomoyuki Yamakami
Quantum computing has been studied over the past four decades based on two computational models: quantum circuits and quantum Turing machines. To capture quantum polynomial-time computability, a recursion-theoretic approach was recently taken by Yamakami [J. Symb. Logic 80, pp. 1546-1587, 2020] by way of recursion schematic definitions, which constitute six initial quantum functions and three construction schemes of composition, branching, and multi-qubit quantum recursion. Taking a similar approach, we look into quantum polylogarithmic-time computability and further explore the expressive power of elementary schemes designed for such quantum computation. In particular, we introduce an elementary form of the quantum recursion, called the fast quantum recursion, and formulate EQS (elementary quantum schemes) of "elementary" quantum functions. This class EQS captures exactly quantum polylogarithmic-time computability, which forms the complexity class BQPOLYLOGTIME. We also demonstrate the separation of BQPOLYLOGTIME from NLOGTIME and PPOLYLOGTIME. As a natural extension of EQS, we further consider an algorithmic procedural scheme that implements the well-known divide-and-conquer strategy. This divide-and-conquer scheme helps compute the parity function, but the scheme cannot be realized within our system EQS.
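A classical Python illustration of the divide-and-conquer strategy applied to parity, the function the abstract singles out; this is of course not the quantum recursion scheme itself.

```python
def parity(bits: list[int]) -> int:
    """Parity via divide and conquer: split, solve halves, XOR-merge.

    A classical illustration of the strategy the abstract discusses;
    the quantum divide-and-conquer scheme itself is not reproduced here.
    """
    if len(bits) == 1:
        return bits[0]          # base case: a single bit is its own parity
    mid = len(bits) // 2
    return parity(bits[:mid]) ^ parity(bits[mid:])

print(parity([1, 0, 1, 1]))     # -> 1 (odd number of 1s)
```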
Luis Morales-Navarro, Deborah A. Fields, Michael Giang et al.
Debugging, finding and fixing bugs in code, is a heterogeneous process that shapes novice learners' self-beliefs and motivation in computing. Our Debugging by Design (DbD) intervention provocatively puts students in control over bugs by having them collaborate on designing creative buggy projects during an electronic textiles unit in an introductory computing course. We implemented DbD virtually in eight classrooms with two teachers in public schools with historically marginalized populations, using a quasi-experimental design. Data from this study included post-activity results from a validated survey instrument (N=144). For all students, project completion correlated with increased computer science creative expression and e-textiles coding self-efficacy. In the comparison classes, project completion correlated with reduced programming anxiety, problem-solving competency beliefs, and programming self-concept. In DbD classes, project completion was uniquely correlated with increased fascination with design and programming growth mindset. In the discussion, we consider the relative benefits of DbD versus other open-ended projects.
Page 14 of 902620