Results for "Electronic computers. Computer science"

Showing 20 of ~18,073,846 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2020
The Fast Health Interoperability Resources (FHIR) Standard: Systematic Literature Review of Implementations, Applications, Challenges and Opportunities

Muhammad Ayaz, M. F. Pasha, M. Y. Alzahrani et al.

Background: Information technology has shifted paper-based documentation in the health care sector into a digital form, in which patient information is transferred electronically from one place to another. However, there remain challenges and issues to resolve in this domain owing to the lack of proper standards, the growth of new technologies (mobile devices, tablets, ubiquitous computing), and health care providers who are reluctant to share patient information. Therefore, a solid systematic literature review was performed to understand the use of this new technology in the health care sector. To the best of our knowledge, there is a lack of comprehensive systematic literature reviews that focus on Fast Health Interoperability Resources (FHIR)-based electronic health records (EHRs). In addition, FHIR is the latest standard, which is still in its infancy. Therefore, this is a hot research topic with great potential for further research in this domain. Objective: The main aim of this study was to explore and perform a systematic review of the literature related to FHIR, including the challenges, implementation, opportunities, and future FHIR applications. Methods: In January 2020, we searched articles published from January 2012 to December 2019 via all major digital databases in the field of computer science and health care, including ACM, IEEE Xplore, Springer, Google Scholar, PubMed, and ScienceDirect. We identified 8181 scientific articles published in this field, 80 of which met our inclusion criteria for further consideration. Results: The selected 80 scientific articles were reviewed systematically, and we identified open questions, challenges, implementation models, used resources, beneficiary applications, data migration approaches, and goals of FHIR. Conclusions: The literature analysis performed in this systematic review highlights the important role of FHIR in the health care domain in the near future.
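For readers unfamiliar with the standard, a FHIR resource is structured JSON exchanged over a REST API. A minimal sketch of an R4 Patient resource in Python follows; the field names come from the published FHIR Patient specification, while the values are hypothetical:

```python
import json

# Minimal FHIR R4 Patient resource as a plain dict. Field names follow
# the published FHIR Patient specification; the values are hypothetical.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1987-04-12",
}

# FHIR servers exchange resources as JSON (or XML); serialization is a
# plain JSON round-trip.
payload = json.dumps(patient)
restored = json.loads(payload)
print(restored["resourceType"])   # Patient
```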

212 citations · en · Medicine, Computer Science
arXiv Open Access 2026
The Imperative for Grand Challenges in Computing

William Regli, Rajmohan Rajaraman, Daniel Lopresti et al.

Computing is an indispensable component of nearly all technologies and is ubiquitous for vast segments of society. It is also essential to discoveries and innovations in most disciplines. However, while past grand challenges in science have involved computing as one of the tools to address the challenge, these challenges have not been principally about computing. Why has the computing community not yet produced challenges at the scale of grandeur that we see in disciplines such as physics, astronomy, or engineering? How might we go about identifying similarly grand challenges? What are the grand challenges of computing that transcend our discipline's traditional boundaries and have the potential to dramatically improve our understanding of the world and positively shape the future of our society? There is a significant benefit in us, as a field, taking a more intentional approach to "grand challenges." We are seeking challenge problems that are sufficiently compelling as to both ignite the imagination of computer scientists and draw researchers from other disciplines to computational challenges. This paper emphasizes the importance, now more than ever, of defining and pursuing grand challenges in computing as a field, and being intentional about translation and realizing its impacts on science and society. Building on lessons from prior grand challenges, the paper explores the nature of a grand challenge today emphasizing both scale and impact, and how the community may tackle such a grand challenge, given a rapidly changing innovation ecosystem in computing. The paper concludes with a call to action for our community to come together to define grand challenges in computing for the next decade and beyond.

en cs.CY
DOAJ Open Access 2025
The Evolution of Software Usability in Developer Communities: An Empirical Study on Stack Overflow

Hans Djalali, Wajdi Aljedaani, Stephanie Ludi

This study investigates how software developers discuss usability on Stack Overflow through an analysis of posts from 2008 to 2024. Despite recognizing the importance of usability for software success, there is a limited amount of research on developer engagement with usability topics. Using mixed methods that combine quantitative metric analysis and qualitative content review, we examine temporal trends, comparative engagement patterns across eight non-functional requirements, and programming context-specific usability issues. Our findings show a significant decrease in usability posts since 2010, contrasting with other non-functional requirements, such as performance and security. Despite this decline, usability posts exhibit high resolution efficiency, achieving the highest answer and acceptance rates among all topics, suggesting that the community is highly effective at resolving these specialized questions. We identify distinctive platform-specific usability concerns: web development prioritizes responsive layouts and form design; desktop applications emphasize keyboard navigation and complex controls; and mobile development focuses on touch interactions and screen constraints. These patterns indicate a transformation in the sharing of usability knowledge, reflecting the maturation of the field, its integration into frameworks, and the migration to specialized communities. This first longitudinal analysis of usability discussions on Stack Overflow provides insights into developer engagement with usability and highlights opportunities for integrating usability guidance into technical contexts.

Computer software
DOAJ Open Access 2025
PyPOD-GP: Using PyTorch for accelerated chip-level thermal simulation of the GPU

Neil He, Ming-Cheng Cheng, Yu Liu

The rising demand for high-performance computing (HPC) has made full-chip dynamic thermal simulation in many-core GPUs critical for optimizing performance and extending device lifespans. Proper orthogonal decomposition (POD) with Galerkin projection (GP) has been shown to offer high accuracy and massive runtime improvements over direct numerical simulation (DNS). However, previous implementations of POD-GP use MPI-based libraries like PETSc and FEniCS and face significant runtime bottlenecks. We propose PyPOD-GP, a PyTorch-based, GPU-optimized library for chip-level thermal simulation. PyPOD-GP achieves over 23.4× speedup in training and over 10× speedup in inference on a GPU with over 13,000 cores, with just 1.2% error over the device layer.
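The paper's library is PyTorch-based; the core POD step it accelerates can be illustrated in NumPy: compute POD modes as the left singular vectors of a snapshot matrix, then project onto the reduced basis and reconstruct. The synthetic rank-2 "temperature field" below is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "temperature snapshots": a field built from two spatial modes
# with time-varying coefficients, so it is exactly rank 2.
n_points, n_snapshots = 200, 50
spatial_modes = rng.standard_normal((n_points, 2))
coefficients = rng.standard_normal((2, n_snapshots))
snapshots = spatial_modes @ coefficients

# POD step: the left singular vectors of the snapshot matrix are the POD
# modes, ordered by captured energy (squared singular values).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                       # keep the two dominant modes

# Projection onto the reduced basis and reconstruction.
reduced = basis.T @ snapshots          # (2, n_snapshots) reduced coordinates
reconstructed = basis @ reduced
rel_err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
print(rel_err < 1e-8)                  # True: two modes capture the field
```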

Computer software
DOAJ Open Access 2025
Differential Cryptanalysis Based on Transformer Model and Attention Mechanism

XIAO Chaoen, LI Zifan, ZHANG Lei, WANG Jianxin, QIAN Siyuan

In differential analysis-based cryptographic attacks, Bayesian optimization is typically used to verify whether the partially decrypted data exhibit differential characteristics. Currently, the primary approach involves training a differential distinguisher using deep learning techniques. However, this method has a notable limitation: as the number of encryption rounds increases, the accuracy of the differential characteristics decreases linearly. Therefore, a new differential characteristic discrimination method is proposed based on the attention mechanism and side-channel analysis. Using the difference relationship between multiple rounds of the ciphertext, a difference partition for the SPECK32/64 algorithm is trained based on the Transformer model. In a key recovery attack, a novel scheme is designed based on the previous ciphertext treatment to distinguish the most influential features of the ciphertext. In the key recovery attack on the SPECK32/64 algorithm, 2^6 selected ciphertext pairs are used. Using the 20th-round ciphertext pairs, the 65,536 candidate keys of the 22nd round can be screened within 17 on average, and the key recovery attack on the last two rounds can be completed. The experimental results show that this method achieves a success rate of 90%, effectively addressing the challenge of recognizing ciphertext differential features caused by an increase in the number of encryption rounds.
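The abstract targets the SPECK32/64 lightweight cipher. As context, here is a minimal pure-Python sketch of SPECK32/64 from its public specification (16-bit words, 22 rounds, rotation amounts 7 and 2), plus the generation of a chosen-plaintext ciphertext pair from a fixed input difference, the raw material a learned distinguisher consumes. The (0x0040, 0x0000) difference and the sample plaintext are illustrative choices, not taken from this paper.

```python
# SPECK32/64 from the public specification: 16-bit words, 22 rounds.
MASK = 0xFFFF

def ror(v, r): return ((v >> r) | (v << (16 - r))) & MASK
def rol(v, r): return ((v << r) | (v >> (16 - r))) & MASK

def round_fn(x, y, k):
    # One SPECK round: modular add, rotate, XOR.
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(k0, l, rounds=22):
    # The key schedule reuses the round function with the round index as key.
    ks, l = [k0], list(l)
    for i in range(rounds - 1):
        nl, nk = round_fn(l[i], ks[i], i)
        l.append(nl)
        ks.append(nk)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = round_fn(x, y, k)
    return x, y

# Published test vector: key 1918 1110 0908 0100, plaintext 6574 694c.
ks = expand_key(0x0100, [0x0908, 0x1110, 0x1918])
ct = encrypt(0x6574, 0x694c, ks)
print(hex(ct[0]), hex(ct[1]))   # 0xa868 0x42f2

# A chosen-plaintext pair with input difference (0x0040, 0x0000); both the
# difference and the plaintext are illustrative values.
pt = (0x1234, 0x5678)
pair = (encrypt(*pt, ks), encrypt(pt[0] ^ 0x0040, pt[1], ks))
```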

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2025
User activity to enhance customer lifetime value modeling in contractual streaming industry

Eudes Adiba, Maurice Comlan, Eugéne C. Ezin et al.

This article presents a model for Customer Lifetime Value (CLV) tailored to the subscription-based streaming industry, incorporating both contractual dynamics and user activity. Unlike traditional CLV models that overlook contracts, this semi-Markov model captures the time users remain in specific subscription plans and the transitions between these subscription plans. Using empirical data from the MTN TV platform for a step-by-step implementation, the study identifies key factors influencing subscription cancellations, such as expiration dates and viewing behavior. The results show that longer subscriptions yield higher CLV, with more predictable churn cycles. These findings can guide marketing strategies and resource management to maximize CLV in the streaming sector.
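The paper's semi-Markov model is data-driven; as a simplified illustration of the underlying idea only, the sketch below computes discounted CLV for a two-state (active/churned) discrete-time Markov chain in closed form, v = (I - dP)^-1 r. All transition probabilities, revenues, and the discount factor are invented toy values.

```python
import numpy as np

# Toy two-state subscription chain: state 0 = active, state 1 = churned.
P = np.array([[0.8, 0.2],       # active stays active w.p. 0.8
              [0.0, 1.0]])      # churned is absorbing
r = np.array([10.0, 0.0])       # expected revenue per period in each state
d = 0.9                         # per-period discount factor

# CLV(s) = sum_t d^t (P^t r)(s), so the CLV vector v solves (I - d P) v = r.
v = np.linalg.solve(np.eye(2) - d * P, r)
print(v[0])   # active-subscriber CLV: 10 / (1 - 0.9 * 0.8) ≈ 35.71
```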

Electronic computers. Computer science, Economics as a science
DOAJ Open Access 2025
On the Execution and Runtime Verification of UML Activity Diagrams

François Siewe, Guy Merlin Ngounou

The unified modelling language (UML) is an industrial de facto standard for system modelling. It consists of a set of graphical notations (also known as diagrams) and has been used widely in many industrial applications. Although the graphical nature of UML is appealing to system developers, the official documentation of UML does not provide formal semantics for UML diagrams. This makes UML unsuitable for formal verification and, therefore, limited when it comes to the development of safety/security-critical systems where faults can cause damage to people, properties, or the environment. The UML activity diagram is an important UML graphical notation, which is effective in modelling the dynamic aspects of a system. This paper proposes a formal semantics for UML activity diagrams based on the calculus of context-aware ambients (CCA). An algorithm (semantic function) is proposed that maps any activity diagram onto a process in CCA, which describes the behaviours of the UML activity diagram. This process can then be executed and formally verified using the CCA simulation tool ccaPL and the CCA runtime verification tool ccaRV. Hence, design flaws can be detected and fixed early during the system development lifecycle. The pragmatics of the proposed approach are demonstrated using a case study in e-commerce.

Computer software
arXiv Open Access 2025
A Survey on Memory-Efficient Transformer-Based Model Training in AI for Science

Kaiyuan Tian, Linbo Qiao, Baihui Liu et al.

Scientific research faces high costs and inefficiencies with traditional methods, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews transformer-based LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous expansion of model size has led to significant memory demands, hindering further development and application of LLMs for science. This survey systematically reviews and categorizes memory-efficient pre-training techniques for large-scale transformers, including algorithm-level, system-level, and hardware-software co-optimization. Using AlphaFold 2 as an example, we demonstrate how tailored memory optimization methods can reduce storage needs while preserving prediction accuracy. By bridging model efficiency and scientific application needs, we hope to provide insights for scalable and cost-effective LLM training in AI for science.

en cs.LG, cs.AI
DOAJ Open Access 2024
MCPA: multi-scale cross perceptron attention network for 2D medical image segmentation

Liang Xu, Mingxiao Chen, Yi Cheng et al.

Abstract The UNet architecture, based on convolutional neural networks (CNN), has demonstrated its remarkable performance in medical image analysis. However, it faces challenges in capturing long-range dependencies due to the limited receptive fields and inherent bias of convolutional operations. Recently, numerous transformer-based techniques have been incorporated into the UNet architecture to overcome this limitation by effectively capturing global feature correlations. However, the integration of the Transformer modules may result in the loss of local contextual information during the global feature fusion process. In this work, we propose a 2D medical image segmentation model called multi-scale cross perceptron attention network (MCPA). The MCPA consists of three main components: an encoder, a decoder, and a Cross Perceptron. The Cross Perceptron first captures the local correlations using multiple Multi-scale Cross Perceptron modules, facilitating the fusion of features across scales. The resulting multi-scale feature vectors are then spatially unfolded, concatenated, and fed through a Global Perceptron module to model global dependencies. Considering the high computational cost of using 3D neural network models, and the fact that many important clinical data can only be obtained in two dimensions, our MCPA focuses on 2D medical image segmentation. Furthermore, we introduce a progressive dual-branch structure (PDBS) to address the semantic segmentation of the image involving finer tissue structures. This structure gradually shifts the segmentation focus of MCPA network training from large-scale structural features to more sophisticated pixel-level features. 
We evaluate our proposed MCPA model on several publicly available medical image datasets from different tasks and devices, including the open large-scale datasets of CT (Synapse) and MRI (ACDC), and widely used 2D medical imaging datasets captured by fundus camera (DRIVE, CHASE_DB1, HRF) and OCTA (ROSE). The experimental results show that our MCPA model achieves state-of-the-art performance.

Electronic computers. Computer science, Information technology
arXiv Open Access 2023
Computer Science Framework to Teach Community-Based Environmental Literacy and Data Literacy to Diverse Students

Clare Baek, Dana Saito-Stehberger, Sharin Jacob et al.

This study introduces an integrated curriculum designed to empower underrepresented students by combining environmental literacy, data literacy, and computer science. The framework promotes environmental awareness, data literacy, and civic engagement using a culturally sustaining approach. This integrated curriculum is embedded with resources to support language development, technology skills, and coding skills to accommodate the diverse needs of students. To evaluate the effectiveness of this curriculum, we conducted a pilot study in a 5th-grade special education classroom with multilingual Latinx students. During the pilot, students utilized Scratch, a block-based coding language, to create interactive projects that showcased locally collected data, which they used to communicate environmental challenges and propose solutions to community leaders. This approach allowed students to engage with environmental literacy at a deeper level, harnessing their creativity and community knowledge in the digital learning environment. Moreover, this curriculum equipped students with the skills to critically analyze political and socio-cultural factors impacting environmental sustainability. Students not only gained knowledge within the classroom but also applied their learning to address real environmental issues within their community. The results of the pilot study underscore the efficacy of this integrated approach.

en cs.CY
DOAJ Open Access 2022
Reaching for upper bound ROUGE score of extractive summarization methods

Iskander Akhmetov, Rustam Mussabayev, Alexander Gelbukh

The extractive text summarization (ETS) method for finding the salient information from a text automatically uses exact sentences from the source text. In this article, we address the question of what summary quality can be achieved with ETS methods. To maximize the ROUGE-1 score, we used five approaches: (1) adapted reduced variable neighborhood search (RVNS), (2) Greedy algorithm, (3) VNS initialized by the Greedy algorithm results, (4) genetic algorithm, and (5) genetic algorithm initialized by the Greedy algorithm results. Furthermore, we ran experiments on articles from the arXiv dataset. As a result, we found scores of 0.59 and 0.25 for ROUGE-1 and ROUGE-2, respectively, achieved by the genetic algorithm initialized with the Greedy algorithm results, which yields the best results among the tested approaches. Moreover, those scores are higher than those obtained by current state-of-the-art text summarization models: the best ROUGE-1 score in the literature on the same dataset is 0.46. Therefore, there is room for the development of ETS methods, which are now undeservedly forgotten.
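The Greedy approach the authors use as an initializer can be sketched as follows: repeatedly add the source sentence that most increases ROUGE-1 recall against the reference summary. This is a minimal illustration on a toy corpus, not the authors' implementation.

```python
from collections import Counter

def rouge1_recall(tokens, ref_counts):
    # Clipped unigram overlap divided by reference length (ROUGE-1 recall).
    overlap = sum(min(c, ref_counts[w]) for w, c in Counter(tokens).items())
    return overlap / sum(ref_counts.values())

def greedy_extract(sentences, reference, budget=2):
    # Greedily add the sentence that most improves the combined score.
    ref_counts = Counter(reference.lower().split())
    chosen, pool, score = [], list(sentences), 0.0
    for _ in range(budget):
        best, best_score = None, -1.0
        for s in pool:
            cand = " ".join(chosen + [s]).lower().split()
            r = rouge1_recall(cand, ref_counts)
            if r > best_score:
                best, best_score = s, r
        chosen.append(best)
        pool.remove(best)
        score = best_score
    return chosen, score

sentences = ["the cat sat", "dogs bark loudly", "the cat ran home"]
chosen, score = greedy_extract(sentences, "the cat ran home quickly")
print(chosen, score)   # the highest-overlap sentence is picked first
```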

Electronic computers. Computer science
DOAJ Open Access 2022
Gamifying rehabilitation: MILORD platform as an upper limb motion rehabilitation service

Dimitris Fotopoulos, Ioannis Ladakis, Vassilis Kilintzis et al.

Motor learning is based on the correct repetition of specific movements for their permanent storage in the central nervous system (CNS). Rehabilitation relies heavily on the repetition of specific movements, and game scenarios are ideal environments to build routines of repetitive exercises that have entertaining characteristics. In this respect, the gamification of rehabilitation programs, through the introduction of game-specific techniques and design concepts, has gained attention as a complement or alternative to routine rehabilitation programs. A gamified rehabilitation program promises to gain the patient's attention, reduce the monotony of the process, preserve motivation to attend, and create virtual incentives through the game, toward maintaining compliance with the “prescribed” program. This is often achieved through goal-oriented tasks and real-time feedback in the form of points and other in-game rewards. This paper describes the MILORD rehabilitation platform, an affordable technological solution that aims to support health professionals and enable remote rehabilitation while maintaining health service characteristics and monitoring. MILORD is an end-to-end platform that consists of an interactive computer game utilizing a Leap Motion sensor, a centralized user management system, an analysis platform that processes the data generated by the game, and an analysis dashboard presenting a set of meaningful features that describe upper limb movement. Our solution facilitates the monitoring of patients' progress and provides an alternative way to analyze hand movement. The system was tested with healthy subjects, patients, and experts to record user experience, receive feedback, identify problems, and understand the system's value in monitoring and supporting motion deficits and progress. This small-scale study indicated the capacity of the analysis to quantify movement in a meaningful way and express the differences between normal and pathological movement; user experience was positive for both patients and healthy subjects.

Electronic computers. Computer science
DOAJ Open Access 2022
Identification of Different Types of High-Frequency Defects in Superconducting Qubits

Leonid V. Abdurakhimov, Imran Mahboob, Hiraku Toida et al.

Parasitic two-level-system (TLS) defects are one of the major factors limiting the coherence times of superconducting qubits. Although there has been significant progress in characterizing basic parameters of TLS defects, exact mechanisms of interactions between a qubit and various types of TLS defects remained largely unexplored due to the lack of experimental techniques able to probe the form of qubit-defect couplings. Here we present an experimental method of TLS defect spectroscopy using a strong qubit drive that allowed us to distinguish between various types of qubit-defect interactions. By applying this method to a capacitively shunted flux qubit, we detected a rare type of TLS defect with a nonlinear qubit-defect coupling due to critical-current fluctuations, as well as conventional TLS defects with a linear coupling to the qubit caused by charge fluctuations. The presented approach could become the routine method for high-frequency defect inspection and quality control in superconducting qubit fabrication, providing essential feedback for fabrication process optimization. The reported method is a powerful tool to uniquely identify the type of noise fluctuations caused by TLS defects, enabling the development of realistic noise models relevant to noisy intermediate-scale quantum computing and fault-tolerant quantum control.

Physics, Computer software
DOAJ Open Access 2021
A Perspective View of Cotton Leaf Image Classification Using Machine Learning Algorithms Using WEKA

Bhagya M. Patil, Vishwanath Burkpalli

Cotton is one of the major crops in India, and 23% of Indian cotton is exported to other countries. Cotton yield depends on crop growth and is affected by diseases. In this paper, cotton disease classification is performed using different machine learning algorithms. For this research, the cotton leaf image database was used, and the images were segmented from the natural background using a modified factorization-based active contour method. First, color and texture features are extracted from the segmented images. These features are then fed to machine learning algorithms such as multilayer perceptron, support vector machine, Naïve Bayes, Random Forest, AdaBoost, and K-nearest neighbor. Four color features and eight texture features were extracted, and experimentation was done using three cases: (1) only color features, (2) only texture features, and (3) both color and texture features. Classifier performance was better with color features than with texture features; the color features are enough to classify healthy and unhealthy cotton leaf images. The performance of the classifiers was evaluated using parameters such as precision, recall, F-measure, and Matthews correlation coefficient. The accuracies of the support vector machine, Naïve Bayes, Random Forest, AdaBoost, and K-nearest neighbor classifiers are 93.38%, 90.91%, 95.86%, 92.56%, and 94.21%, respectively, whereas that of the multilayer perceptron classifier is 96.69%.
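As a minimal illustration of the classification step (not the authors' WEKA pipeline), a k-nearest-neighbour classifier over hypothetical mean-color features might look like this; the training values are invented for the sketch:

```python
import math

# Hypothetical training set: mean (R, G, B) of a segmented leaf region
# with a class label. Values are illustrative, not from the paper's data.
train = [
    ((0.20, 0.70, 0.20), "healthy"),
    ((0.25, 0.65, 0.18), "healthy"),
    ((0.55, 0.40, 0.10), "diseased"),
    ((0.60, 0.35, 0.12), "diseased"),
]

def knn_predict(x, train, k=3):
    # Classic k-nearest-neighbour majority vote on Euclidean distance.
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

label = knn_predict((0.22, 0.68, 0.19), train)
print(label)   # healthy
```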

Electronic computers. Computer science
arXiv Open Access 2021
Clustering Introductory Computer Science Exercises Using Topic Modeling Methods

Laura O. Moraes, Carlos Eduardo Pedreira

Manually determining the concepts present in a group of questions is a challenging and time-consuming process. However, it is an essential step when modeling a virtual learning environment, since mastery-level assessment and recommendation engines require a mapping between concepts and questions. We investigated unsupervised semantic models (known as topic modeling techniques) to assist computer science teachers in this task and propose a method to transform Computer Science 1 teacher-provided code solutions into representative text documents that include the code structure information. By applying non-negative matrix factorization and latent Dirichlet allocation techniques, we extract the underlying relationships between questions and validate the results using an external dataset. We assess the interpretability of the learned concepts using data from 14 university professors, and the results confirm six semantically coherent clusters in the current dataset. Moreover, the six topics comprise the main concepts present in the test dataset, achieving 0.75 on the normalized pointwise mutual information metric. This metric correlates with human ratings, making the proposed method useful and providing semantics for large amounts of unannotated code.
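One of the two techniques named above, non-negative matrix factorization, can be sketched without any ML library using Lee-Seung multiplicative updates on a toy document-term matrix; the corpus and the factor rank are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny document-term count matrix V (documents x vocabulary); in the
# paper's setting rows would be the code-derived text documents.
V = np.array([
    [3, 2, 0, 0],   # docs 1-2 share the first two "concept" terms
    [2, 3, 1, 0],
    [0, 0, 3, 2],   # docs 3-4 share the last two
    [0, 1, 2, 3],
], dtype=float)

# NMF: V ~ W H with W, H >= 0, via Lee-Seung multiplicative updates,
# which keep both factors non-negative and do not increase the
# Frobenius reconstruction error.
k = 2
W = rng.random((4, k)) + 0.1
H = rng.random((k, 4)) + 0.1
err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
err = np.linalg.norm(V - W @ H)
print(err < err0)   # True: reconstruction error shrinks
```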

en cs.LG, cs.CL
arXiv Open Access 2021
VFSIE -- Development and Testing Framework for Federated Science Instruments

Anees Al-Najjar, Nageswara S. V. Rao, Neena Imam et al.

Recent developments in softwarization of networked infrastructures combined with containerization of computing workflows promise unprecedented compute anywhere and everywhere capabilities for federations of edge and remote computing systems and science instruments. The development and testing of software stacks that implement these capabilities over physical production federations, however, is neither practical nor cost-effective. In response, we develop a digital twin of the physical infrastructure, called the Virtual Federated Science Instrument Environment (VFSIE). This framework emulates the federation using containers and hosts connected over an emulated network, and supports the development and testing of federation stacks and workflows. We illustrate its use in a case study involving Jupyter Notebook computations and instrument control.

en cs.NI
arXiv Open Access 2021
Performing Creativity With Computational Tools

Daniel Lopes, Jéssica Parente, Pedro Silva et al.

The introduction of new tools into people's workflows has always opened new creative paths. This paper discusses the impact of using computational tools in the performance of creative tasks, especially focusing on graphic design. The study was driven by a grounded theory methodology, applied to a set of semi-structured interviews conducted with twelve people working in the areas of graphic design, data science, computer art, music and data visualisation. Among other questions, the results suggest some scenarios in which it is or is not worth investing in the development of new intelligent creativity-aiding tools.

en cs.CY

Page 30 of 903,693