Results for "Electronic computers. Computer science"
Showing 20 of ~18,072,012 results · from DOAJ, CrossRef, arXiv, Semantic Scholar
Jonas Dhom, Eric Cordes, Christoph Berger et al.
ABSTRACT High energy densities are vital to satisfy the increasing demand for battery storage systems for electric vehicles. One innovative next-generation battery type is the solid-state battery, which is characterized by its high expected energy density. The polymer-based solid-state battery is notable for its good machinability in production and therefore offers great potential for industrial-scale manufacturing. One component of the polymer-based solid-state battery is the composite cathode, which poses particular challenges in the individual production processes. The calendering process is essential, as it can increase the ionic conductivity by reducing the porosity of the composite cathode. For this reason, this work analyzes the calendering process in depth for polymer-based composite cathodes with different compositions of active material and solid electrolyte, providing a profound understanding of the underlying cause-and-effect relationships.
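As a rough illustration of the quantity the calendering step targets, the sketch below uses the standard definition of coating porosity (1 minus the ratio of bulk density to the density of the solid mix). All densities, loadings, and thicknesses are placeholder assumptions, not values from the paper.

```python
# Illustrative estimate of composite-cathode porosity before and after calendering.
# All numbers and component densities are placeholder assumptions, not values from the paper.

def porosity(areal_mass_g_cm2, thickness_cm, composite_density_g_cm3):
    """Porosity = 1 - (coating bulk density / density of the fully dense solid mix)."""
    bulk_density = areal_mass_g_cm2 / thickness_cm
    return 1.0 - bulk_density / composite_density_g_cm3

# Hypothetical composite: volume-weighted solid density (assumed 70 vol% active material,
# 30 vol% polymer electrolyte).
composite_density = 0.7 * 4.7 + 0.3 * 1.2   # g/cm^3

before = porosity(areal_mass_g_cm2=0.020, thickness_cm=0.0080, composite_density_g_cm3=composite_density)
after = porosity(areal_mass_g_cm2=0.020, thickness_cm=0.0062, composite_density_g_cm3=composite_density)
print(f"porosity before calendering: {before:.1%}, after: {after:.1%}")
```

Compacting the coating from 80 µm to 62 µm at constant areal loading drops the estimated porosity from roughly 31% to 12% in this toy calculation, which is the mechanism by which calendering can raise effective ionic conductivity.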
Shenda Jiang, Israel Greenfeld, Lin Yang et al.
Abstract Two-dimensional materials (2DMs), possessing atomic-scale thickness, are prone to brittle fracture under loading conditions, which can lead to catastrophic failure. As their structural dimensions approach the nanoscale, conventional linear elastic fracture mechanics (LEFM) based on continuum assumptions is deficient in capturing the underlying failure mechanisms and accurately predicting potential crack instability. This limitation emphasizes the critical need for a new theoretical approach suited to the fracture behavior of 2DM systems. We propose a unified fracture mechanics (UFM) criterion that systematically incorporates two key physical mechanisms governing brittle fracture in 2DMs at the nanoscale, namely nonlinear elasticity and atomic-scale discreteness. By introducing two corrective parameters, for nonlinearity and quantization, the UFM model successfully resolves the limitations of LEFM in predicting failure. This is particularly important in the short crack regime, as small defects are frequent in 2DMs. The theoretical predictions show excellent agreement with molecular dynamics simulations of five different types of 2DMs and accurately capture the fracture strength of both cracked and defect-free structures. In addition, we present an empirical method that allows the fracture behavior of 2DMs to be estimated directly from their intrinsic structural and elastic properties. The unified theoretical framework applies not only to the materials simulated in this study but may also extend to a broader class of atomically thin brittle systems.
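The abstract does not give the UFM corrective parameters, so they are not reproduced here; as a point of reference, the sketch below evaluates only the classical LEFM (Griffith-type) strength estimate for a center crack, whose divergence at short crack lengths is the behavior the UFM corrections are meant to fix. The toughness value is a placeholder assumption.

```python
# Baseline LEFM strength estimate sigma_f = K_Ic / sqrt(pi * a) for a center crack of
# half-length a. K_Ic is an assumed placeholder; note how the estimate blows up as a -> 0,
# the short-crack regime where the abstract says LEFM breaks down.
import math

def lefm_strength(K_Ic_MPa_sqrt_m, half_crack_length_m):
    return K_Ic_MPa_sqrt_m / math.sqrt(math.pi * half_crack_length_m)

K_Ic = 4.0  # MPa*sqrt(m), assumed toughness of a hypothetical atomically thin material
for a_nm in (1, 5, 20, 100):
    sigma_MPa = lefm_strength(K_Ic, a_nm * 1e-9)
    print(f"a = {a_nm:4d} nm -> LEFM strength ~ {sigma_MPa / 1e3:.1f} GPa")
```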
LSST Dark Energy Science Collaboration, Eric Aubourg, Camille Avestruz et al.
The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will produce unprecedented volumes of heterogeneous astronomical data (images, catalogs, and alerts) that challenge traditional analysis pipelines. The LSST Dark Energy Science Collaboration (DESC) aims to derive robust constraints on dark energy and dark matter from these data, requiring methods that are statistically powerful, scalable, and operationally reliable. Artificial intelligence and machine learning (AI/ML) are already embedded across DESC science workflows, from photometric redshifts and transient classification to weak lensing inference and cosmological simulations. Yet their utility for precision cosmology hinges on trustworthy uncertainty quantification, robustness to covariate shift and model misspecification, and reproducible integration within scientific pipelines. This white paper surveys the current landscape of AI/ML across DESC's primary cosmological probes and cross-cutting analyses, revealing that the same core methodologies and fundamental challenges recur across disparate science cases. Since progress on these cross-cutting challenges would benefit multiple probes simultaneously, we identify key methodological research priorities, including Bayesian inference at scale, physics-informed methods, validation frameworks, and active learning for discovery. With an eye on emerging techniques, we also explore the potential of the latest foundation model methodologies and LLM-driven agentic AI systems to reshape DESC workflows, provided their deployment is coupled with rigorous evaluation and governance. Finally, we discuss critical software, computing, data infrastructure, and human capital requirements for the successful deployment of these new methodologies, and consider associated risks and opportunities for broader coordination with external actors.
Mohamed Rowaizak, Ahmad Farhat, Reem Khalil
Neuroscience education must convey 3D structure with clarity and accuracy. Traditional 2D renderings are limited because they lose depth information and hinder spatial understanding. High-resolution resources now exist, yet many are difficult to use in the classroom. Therefore, we developed an educational brain video that moves from gross to microanatomy using MRI-based models and the published literature. The pipeline used Fiji for preprocessing, MeshLab for mesh cleanup, Rhino 6 for targeted fixes, Houdini FX for materials, lighting, and renders, and Cinema4D for final refinement of the video. Two neuroscientists validated our brain models for educational fidelity. We tested the video in a class of 96 undergraduates randomized to video plus lecture or lecture only. Students completed the same pretest and posttest questions. Student feedback revealed that comprehension and motivation to learn increased significantly in the group that watched the video, suggesting its potential as a useful supplement to traditional lectures. A short, well-produced 3D video can supplement lectures and improve learning in this setting. We share software versions and key parameters to support reuse.
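A minimal sketch of the kind of two-group pretest/posttest comparison the study describes (video plus lecture vs lecture only), here as an independent-samples t-test on gain scores. The scores below are synthetic placeholders, not the study's data, and the specific test is an assumption about one reasonable analysis, not necessarily the authors' method.

```python
# Illustrative pretest/posttest comparison for two groups of 48 students each
# (video + lecture vs lecture only); all scores are made-up placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_video, post_video = rng.normal(55, 10, 48), rng.normal(72, 10, 48)
pre_ctrl, post_ctrl = rng.normal(55, 10, 48), rng.normal(63, 10, 48)

gain_video = post_video - pre_video
gain_ctrl = post_ctrl - pre_ctrl

t, p = stats.ttest_ind(gain_video, gain_ctrl)
print(f"mean gain (video) = {gain_video.mean():.1f}, mean gain (control) = {gain_ctrl.mean():.1f}")
print(f"independent-samples t = {t:.2f}, p = {p:.4f}")
```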
Guoqiang Liang, Jingqian Gong, Mengxuan Li et al.
Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks, charting a course towards AGI and emerging as a central issue in the global technological race. This manuscript conducts a comprehensive review of the core technologies that support LLMs from a user standpoint, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning. Additionally, it traces the historical development of the Science of Science (SciSci) and presents a forward-looking perspective on the potential applications of LLMs within the scientometric domain. Furthermore, it discusses the prospect of an AI-agent-based model for scientific evaluation and presents new methods for research-front detection and knowledge-graph construction with LLMs.
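Of the user-facing techniques the review covers, retrieval-augmented generation has the most mechanical core loop: embed the corpus, retrieve the top-k passages for a question, and prompt the model with them. The sketch below is a minimal, library-agnostic illustration of that pattern; the `embed` and `generate` functions are toy stand-ins, not any particular model's API.

```python
# Minimal retrieval-augmented generation (RAG) pattern: embed, retrieve top-k, prompt.
# `embed` and `generate` are toy stand-ins for an embedding model and an LLM backend.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model: hashed bag-of-words over 256 buckets.
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    # Stub standing in for any LLM backend; a real system would call a model here.
    return f"[LLM completion for a prompt of {len(prompt)} characters]"

def rag_answer(question: str, corpus: list[str], k: int = 3) -> str:
    doc_vecs = np.stack([embed(d) for d in corpus])
    q_vec = embed(question)
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    top_k = [corpus[i] for i in np.argsort(scores)[::-1][:k]]
    context = "\n---\n".join(top_k)
    return generate(f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}")

docs = ["LLMs can be fine-tuned on domain data.",
        "Retrieval-augmented generation grounds answers in retrieved passages.",
        "Prompt engineering shapes model behavior without training."]
print(rag_answer("How does RAG ground its answers?", docs, k=2))
```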
Jorn K. Teutloff
We present a comparative docking experiment that aligns human-subject interview data with large language model (LLM)-driven synthetic personas to evaluate fidelity, divergence, and blind spots in AI-enabled simulation. Fifteen early-stage startup founders were interviewed about their hopes and concerns regarding AI-powered validation, and the same protocol was replicated with AI-generated founder and investor personas. A structured thematic synthesis revealed four categories of outcomes: (1) Convergent themes - commitment-based demand signals, black-box trust barriers, and efficiency gains were consistently emphasized across both datasets; (2) Partial overlaps - founders worried about outliers being averaged away and the stress of real customer validation, while synthetic personas highlighted irrational blind spots and framed AI as a psychological buffer; (3) Human-only themes - relational and advocacy value from early customer engagement and skepticism toward moonshot markets; and (4) Synthetic-only themes - amplified false positives and trauma blind spots, where AI may overstate adoption potential by missing negative historical experiences. We interpret this comparative framework as evidence that LLM-driven personas constitute a form of hybrid social simulation: more linguistically expressive and adaptable than traditional rule-based agents, yet bounded by the absence of lived history and relational consequence. Rather than replacing empirical studies, we argue they function as a complementary simulation category - capable of extending hypothesis space, accelerating exploratory validation, and clarifying the boundaries of cognitive realism in computational social science.
Douglas C. Schmidt, Dan Runfola
Mastering one or more programming languages has historically been the gateway to implementing ideas on a computer. Today, that gateway is widening with advances in large language models (LLMs) and artificial intelligence (AI)-powered coding assistants. What matters is no longer just fluency in traditional programming languages but the ability to think computationally by translating problems into forms that can be solved with computing tools. The capabilities enabled by these AI-augmented tools are rapidly leading to the commoditization of computational thinking, such that anyone who can articulate a problem in natural language can potentially harness computing power via AI. This shift is poised to radically influence how we teach computer science and data science in the United States and around the world. Educators and industry leaders are grappling with how to adapt: What should students learn when the hottest new programming language is English? How do we prepare a generation of computational thinkers who need not code every algorithm manually, but must still think critically, design solutions, and verify AI-augmented results? This paper explores these questions, examining the impact of natural language programming on software development, the emerging distinction between programmers and prompt-crafting problem solvers, the reforms needed in computer science and data science curricula, and the importance of maintaining our fundamental computational science principles in an AI-augmented future. Along the way, we compare approaches and share best practices for embracing this new paradigm in computing education.
N. Metropolis
Uma Narayanan, Pavan Prajith, Rijo Thomas Mathew et al.
Researchers are concentrating on developing technologies to identify and caution drivers against distracted driving because it is a major cause of traffic accidents. According to the National Highway Traffic Safety Administration's report, distracted driving is to blame for roughly one in every five car accidents. Our goal is to create an accurate and dependable method for identifying distracted drivers and alerting them to their lack of focus. To do this, we take inspiration from the success of convolutional neural networks in computer vision. Our strategy entails implementing a CNN-based system that can recognize when a driver is distracted and pinpoint the precise cause of the distraction. Real-time detection, however, imposes three apparently mutually exclusive requirements on an optimal network: a small number of parameters, high accuracy, and fast speed.
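A hedged sketch of the kind of compact CNN classifier the parameter/speed/accuracy trade-off points at, assuming the commonly used ten-class distracted-driver setup; the architecture and sizes are illustrative and not the paper's model.

```python
# Compact CNN sketch for distracted-driver classification (10 classes assumed);
# architecture and input size are illustrative, not the paper's network.
import torch
import torch.nn as nn

class SmallDriverCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallDriverCNN()
n_params = sum(p.numel() for p in model.parameters())
logits = model(torch.randn(1, 3, 224, 224))
print(f"parameters: {n_params}, output shape: {tuple(logits.shape)}")  # ~24k parameters
```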
Jacques Matthee, Kenneth Uren, George van Schoor et al.
Simultaneous Localization and Mapping (SLAM) is a crucial component of the push towards full autonomy of robotic systems, yet it is computationally expensive and can rarely achieve real-time execution speeds on embedded platforms. Therefore, a need exists to evaluate the performance of SLAM algorithms in practical embedded environments – this paper addresses this need by creating prediction models to estimate the performance that ORB-SLAM3 can achieve on embedded platforms. The paper uses three embedded platforms, the Nvidia Jetson TX2, Raspberry Pi 3B+, and Raspberry Pi 4B, to generate a dataset that is used in training and testing performance prediction models. Profiling ORB-SLAM3 aids in the selection of inputs to the prediction model as well as in benchmarking the embedded platforms' performance using PassMark. The EuRoC micro aerial vehicle (MAV) dataset is used to generate the average tracking time that the embedded platforms can achieve when executing ORB-SLAM3, which is the target of the prediction model. The best-performing model achieves an MAE of 2.84%, an RMSE of 3.93%, and an R² score of 0.95. The results show the feasibility of predicting the performance that SLAM applications can achieve on embedded platforms.
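A minimal sketch of the workflow described: fit a regressor from platform features to average tracking time and report MAE, RMSE, and R². The features (PassMark score, clock speed, core count) and the synthetic data are assumptions for illustration, and the regressor choice is arbitrary rather than the paper's model.

```python
# Sketch of a performance-prediction model: platform features -> average ORB-SLAM3 tracking time.
# Features and data are synthetic placeholders; the paper derives its inputs from profiling
# and PassMark benchmarks.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform([500, 1.0, 1], [10000, 2.5, 8], size=(200, 3))      # PassMark, GHz, cores (assumed)
y = 2000.0 / X[:, 0] * 50 + 10 / X[:, 1] + rng.normal(0, 0.5, 200)  # synthetic tracking time (ms)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R2  :", r2_score(y_te, pred))
```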
Rahul Thakur, S.C. Malik, Masum Raj
The Laplace distribution, also known as the double exponential distribution, is a continuous probability distribution that is often used for modelling data with heavy tails. In this paper, we propose the neutrosophic Laplace distribution, an extension of the classical Laplace distribution. We derive various statistical properties of the neutrosophic Laplace distribution, such as the mean, variance, skewness, r-th moment, quartiles, and moment-generating function. Expressions for the estimation of the parameters are also derived using the maximum likelihood function of the distribution. A simulation study has been carried out to evaluate the performance of the estimates. An application of the neutrosophic Laplace distribution is discussed, studying the daily returns of the NIFTY50 index from the Indian stock market. The analysis shows that the neutrosophic Laplace model is acceptable, effective, and adequate for dealing with uncertainty in an unpredictable context.
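For the classical Laplace distribution the maximum likelihood estimators have a known closed form: the location is the sample median and the scale is the mean absolute deviation from that median. The sketch below applies this to the lower and upper ends of interval-valued (neutrosophic) data; treating the two bounds separately is an illustrative reading of the neutrosophic extension, not necessarily the paper's exact estimator, and the return data are synthetic.

```python
# Classical Laplace MLE: location = sample median, scale = mean |x - median|.
# For interval-valued (neutrosophic) data we apply it to the lower and upper bounds
# separately; this is an illustration, not necessarily the paper's exact construction.
import numpy as np

def laplace_mle(x):
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

rng = np.random.default_rng(2)
returns = rng.laplace(loc=0.0005, scale=0.01, size=500)   # synthetic daily returns
lower, upper = returns - 0.001, returns + 0.001            # made-up indeterminacy band

print("lower bound  mu, b:", laplace_mle(lower))
print("upper bound  mu, b:", laplace_mle(upper))
```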
Milin Zhang, Mohammad Abdi, Jonathan Ashdown et al.
Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease the end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge, the resilience of distributed DNNs to adversarial action remains an open problem. In this paper, we fill this research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast this problem in the context of information theory and rigorously prove that (i) compressing the latent dimension improves robustness but also affects task-oriented performance; and (ii) a deeper splitting point enhances robustness but also increases the computational burden. These two trade-offs provide a novel perspective for designing robust distributed DNNs. To test our theoretical findings, we perform extensive experimental analysis considering 6 different DNN architectures, 6 different approaches for distributing DNNs, and 10 different adversarial attacks using the ImageNet-1K dataset.
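A hedged sketch of the setting the two trade-offs refer to: a network split into a device-side head that emits a compressed latent and an edge-side tail that finishes the prediction. The splitting depth and the bottleneck width are the two knobs in question; all layer sizes below are illustrative, not the paper's architectures.

```python
# Sketch of a split DNN: a mobile-side head ending in a compressed latent of width
# `bottleneck_dim`, and an edge-side tail. Splitting depth and bottleneck width are the
# two quantities whose trade-offs the abstract states; sizes are illustrative only.
import torch
import torch.nn as nn

class SplitDNN(nn.Module):
    def __init__(self, bottleneck_dim: int = 32, num_classes: int = 1000):
        super().__init__()
        self.head = nn.Sequential(             # runs on the mobile device
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, bottleneck_dim),     # compressed latent sent over the network
        )
        self.tail = nn.Sequential(             # runs on the edge server
            nn.Linear(bottleneck_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        z = self.head(x)   # transmitted representation; adversarial action targets this link
        return self.tail(z)

model = SplitDNN(bottleneck_dim=32)
print(model(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 1000])
```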
Gérard Fleury, Philippe Lacomme
This paper provides a short introduction to the mathematical foundations of quantum computation for researchers in computer science, addressing first the representation of a qubit using the Bloch sphere and second the special relationship between SU(2) and SO(3). The properties of SU(2) are introduced, focusing especially on the double covering of SO(3) and explaining how to map rotations of SO(3) into matrices of SU(2). Quantum physics operators are based on SU(2), which has a direct relationship to SO(3), namely a two-to-one covering homomorphism. We start from basic representations of a qubit in R^3 and representations of operators in SU(2), and we then discuss operators that map one element of SU(2) to another according to a specific operator of SU(2) that corresponds to a rotation in R^3.
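The mapping the abstract describes has a standard concrete form: the SU(2) element U = exp(-i θ/2 n·σ) corresponds to the SO(3) rotation by angle θ about the unit axis n, recoverable via R_ij = ½ Tr(σ_i U σ_j U†), and U and -U give the same rotation (the double cover). A small numeric illustration:

```python
# Numeric illustration of the SU(2) -> SO(3) double cover: build U = exp(-i*theta/2 * n.sigma),
# map it to a 3x3 rotation via R_ij = 1/2 * Tr(sigma_i U sigma_j U^dagger), and check that
# U and -U yield the same rotation matrix.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2(theta, n):
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    n_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_sigma

def so3_from_su2(U):
    R = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.real(np.trace(paulis[i] @ U @ paulis[j] @ U.conj().T))
    return R

U = su2(np.pi / 3, [0, 0, 1])                           # rotation by 60 degrees about z
print(np.round(so3_from_su2(U), 3))
print(np.allclose(so3_from_su2(U), so3_from_su2(-U)))   # True: U and -U cover the same rotation
```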
Ryan E. Dougherty
The design of any technical Computer Science course must consider its context within the institution's CS program, but also incorporate any new material that is relevant and appropriately accessible to students. In many institutions, theory of computing (ToC) courses within undergraduate CS programs are placed near the end of the program and follow a very common structure in which each part builds on previous sections of the course. The central question behind any such course is ``What are the limits of computers?'' for various types of computational models. However, the model that is most intuitive to students as a ``computer''--the Turing machine--is taught at the end of the course, which necessitates motivating the earlier models. This poster describes our experience in designing a ToC course that teaches the material effectively ``backwards,'' with the pedagogic motivation of instead asking ``What suitable restrictions can we place on computers to make their problems tractable?'' We also give recommendations for future course design.
Ammar Jamshed
Quantum computing is an advancing area of the computing sciences and provides a new foundation for many future technologies. Discussing how it can help developing economies will also help developed economies with technology transfer and with economic development initiatives related to research and development within developing countries, thus providing a new avenue for foreign direct investment (FDI) and business innovation for the majority of the globe that lacks the economic resources, technology-landscape infrastructure, and cyberinfrastructure required for growth in computing applications. Discussing which areas quantum computing can support will further assist developing economies in implementing it to create growth opportunities for local systems and businesses.
Thierry Coquand, Simon Huber, Christian Sattler
Cubical type theory provides a constructive justification of homotopy type theory. A crucial ingredient of cubical type theory is a path lifting operation, which is explained computationally by induction on the type and involves several non-canonical choices. We present in this article two canonicity results, both proved by a sconing argument: a homotopy canonicity result, stating that every natural number is path equal to a numeral even if we take away the equations defining the lifting operation on the type structure, and a canonicity result, which uses these equations in a crucial way. Both proofs are carried out internally in a presheaf model.
Thomas J Misa
Gender bias in computing is a hard problem that has resisted decades of research. One obstacle has been the absence of systematic data that might indicate when gender bias emerged in computing and how it has changed. This article presents a new dataset (N=50,000) focusing on the formative years of computing as a profession (1950-1980), when U.S. government workforce statistics are thin or non-existent. This longitudinal dataset, based on archival records from six computer user groups (SHARE, USE, and others) and ACM conference attendees and membership rosters, revises commonly held conjectures that gender bias in computing emerged during the professionalization of computer science in the 1960s or 1970s and that there was a 'linear' one-time onset of gender bias persisting to the present. Such a linear view also lent support to the "pipeline" model of computing's "losing" women at successive career stages. Instead, this dataset reveals three distinct periods of gender bias in computing and so invites temporally distinct explanations for these changing dynamics. It significantly revises both scholarly assessment and popular understanding about gender bias in computing. It also draws attention to diversity within computing. One consequence of this research for CS reform efforts today is the data-driven recognition that legacies of gender bias beginning in the mid-1980s (not in earlier decades) are the problem. A second consequence is correcting the public image of computer science: this research shows that gender bias is a contingent aspect of professional computing, not an intrinsic or permanent one.
Yavuz Inal, J. Wake, Frode Guribye et al.
Background Many mobile health (mHealth) apps for mental health have been made available in recent years. Although there is reason to be optimistic about their effect on improving health and increasing access to care, there is a call for more knowledge concerning how mHealth apps are used in practice. Objective This study aimed to review the literature on how usability is being addressed and measured in mHealth interventions for mental health problems. Methods We conducted a systematic literature review through a search for peer-reviewed studies published between 2001 and 2018 in the following electronic databases: EMBASE, CINAHL, PsycINFO, PubMed, and Web of Science. Two reviewers independently assessed all abstracts against the inclusion and exclusion criteria, following the Preferred Reporting Items for Systematic Review and Meta-Analysis guidelines. Results A total of 299 studies were initially identified based on the inclusion keywords. Following a review of the title, abstract, and full text, 42 studies were found that fulfilled the criteria, most of which evaluated usability with patients (n=29) and health care providers (n=11) as opposed to healthy users (n=8) and were directed at a wide variety of mental health problems (n=24). Half of the studies set out to evaluate usability (n=21), and the remainder focused on feasibility (n=10) or acceptability (n=10). Regarding the maturity of the evaluated systems, most were either prototypes or previously tested versions of the technology, and the studies included few accounts of sketching and participatory design processes. The most common reasons given for developing mobile mental health apps were the availability of mobile devices to users, their popularity, and the fact that people in general have become accustomed to using them for various purposes. Conclusions This study provides a detailed account of how evidence of usability of mHealth apps is gathered in the form of usability evaluations from the perspective of computer science and human-computer interaction, including how users feature in the evaluation, how the study objectives and outcomes are stated, which research methods and techniques are used, and what role the notion of mobility plays for mHealth apps. Most studies described their methods as trials, gathered data from a small sample size, and carried out a summative evaluation using a single questionnaire, which indicates that usability evaluation was not the main focus. As many studies described using an adapted version of a standard usability questionnaire, there may be a need for developing a standardized mHealth usability questionnaire.
Paul Leger, Hiroaki Fukuda, Ismael Figueroa
JavaScript is one of the main programming languages used to develop rich, responsive, and interactive Web applications. In these kinds of applications, the use of asynchronous operations that execute callbacks is crucial. However, the dependency among nested callbacks, known as callback hell, can make them difficult to understand and maintain, and eventually mixes concerns. Unfortunately, current solutions for JavaScript do not fully address this issue. This paper presents Sync/cc, a JavaScript package that works on modern browsers. This package is a proof of concept that uses continuations and aspects to allow developers to write event handlers that need nested callbacks in a synchronous style, preventing callback hell. Unlike current solutions, Sync/cc is modular, succinct, and customizable because it does not require ad-hoc and scattered constructs, code refactoring, or ad-hoc implementations such as state machines. In practice, our proposal uses a) continuations to suspend only the current handler execution until the asynchronous operation is resolved, and b) aspects to apply continuations in a non-intrusive way. We test Sync/cc with a management information system that administers courses at a university in Chile.
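Sync/cc itself is a JavaScript package; as a language-neutral analogy for the underlying idea (suspend the handler until the asynchronous operation resolves, so dependent steps read sequentially instead of as nested callbacks), here is a minimal Python asyncio sketch. It is an illustration of the concept only, not the Sync/cc API.

```python
# Analogy only: the coroutine suspends at each `await` until the asynchronous operation
# resolves, so dependent steps read top-to-bottom instead of as nested callbacks.
# This is not the Sync/cc API; it is a conceptual illustration in Python asyncio.
import asyncio

async def fetch(resource: str) -> str:
    await asyncio.sleep(0.1)          # stands in for a network request
    return f"data from {resource}"

async def handler():
    user = await fetch("user")                      # callback #1 in callback-hell style
    courses = await fetch(f"courses of {user}")     # callback #2, nested one level deeper
    grades = await fetch(f"grades for {courses}")   # callback #3, nested further
    print(grades)

asyncio.run(handler())
```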
Page 27 of 903,601