Results for "Electronic computers. Computer science"

Showing 20 of ~18,084,678 results · from DOAJ, CrossRef, Semantic Scholar

DOAJ Open Access 2026
AI-Assisted Screening of Oral Reading in Primary School: Using Short Recordings to Flag Reading Difficulty in Greek Pupils

Maria Tsolia, Nikolaos C. Zygouris, Spyros Kamnis et al.

Early identification of reading difficulties enables timely classroom intervention; however, teachers often have limited time and restricted access to specialist assessment. This study explores a brief, teacher-friendly screening approach based on short oral reading recordings to support classroom decision-making. Oral reading samples were collected from 77 Greek primary school pupils (Grades 3–6) during a standardized reading task. Recordings were segmented into 7 s excerpts, converted into spectrogram images, and analyzed using a deep learning model to classify each excerpt as indicative of reading difficulties or not. To reflect realistic school implementation, model development followed an 80/20 participant-level split, with validation conducted on pupils not included in the training set. At the selected operating threshold, the model achieved approximately 84% overall accuracy and a balanced accuracy of 0.85. For practical applicability, a pupil-level indicator—representing the proportion of excerpts flagged as difficult—showed a strong association with expert judgments (r ≈ 0.74). These findings suggest that brief oral reading recordings can provide teachers with an interpretable screening signal to inform monitoring, prioritization, and early classroom support while underscoring the need for further validation under routine school conditions.
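
The participant-level split and the pupil-level flag proportion described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the `pupil_id` field and the function names are hypothetical.

```python
import random

def participant_level_split(excerpts, train_frac=0.8, seed=0):
    """Split excerpt records by pupil, so no pupil appears in both sets."""
    pupils = sorted({e["pupil_id"] for e in excerpts})
    rng = random.Random(seed)
    rng.shuffle(pupils)
    cut = int(len(pupils) * train_frac)
    train_ids = set(pupils[:cut])
    train = [e for e in excerpts if e["pupil_id"] in train_ids]
    held_out = [e for e in excerpts if e["pupil_id"] not in train_ids]
    return train, held_out

def pupil_level_indicator(excerpt_flags):
    """Pupil-level screening signal: proportion of excerpts flagged as difficult."""
    return sum(excerpt_flags) / len(excerpt_flags)
```

Splitting at the pupil level (rather than the excerpt level) is what makes the validation reflect performance on unseen pupils.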

Electronic computers. Computer science
DOAJ Open Access 2026
Improved quantum computation using operator backpropagation

Bryce Fuller, Minh C. Tran, Danylo Lykov et al.

Decoherence of quantum hardware is currently limiting its practical applications. At the same time, classical algorithms for simulating quantum circuits have progressed substantially. Here, we demonstrate a hybrid framework that integrates classical simulations with quantum hardware to improve the computation of an observable’s expectation value by reducing the quantum circuit depth. In this framework, a quantum circuit is partitioned into two subcircuits: one that describes the backpropagated Heisenberg evolution of an observable, executed on a classical computer, while the other is a Schrödinger evolution run on quantum processors. The overall effect is to reduce the depths of the circuits executed on quantum devices and enable the recovery of expectation values at intermediate times throughout the classically backpropagated circuit, trading this with classical overhead and an increased number of circuit executions. We demonstrate the effectiveness of this method on a Hamiltonian simulation problem, achieving more accurate expectation value estimates compared to using quantum hardware alone.
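
The identity behind this hybrid scheme — backpropagating the observable through the tail of the circuit in the Heisenberg picture, then measuring the transformed observable after only the shallower head — can be checked on a toy single-qubit example. This sketch uses plain Python matrices and only illustrates the partition idea, not the paper's framework; the specific circuit (an RY "head" V and a Hadamard "tail" U) is an arbitrary choice.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def expval(O, psi):
    """<psi| O |psi> (real part)."""
    Opsi = mat_vec(O, psi)
    return sum(p.conjugate() * o for p, o in zip(psi, Opsi)).real

Z = [[1.0, 0.0], [0.0, -1.0]]                     # observable
s = 1.0 / math.sqrt(2.0)
U = [[s, s], [s, -s]]                             # "deep" circuit tail (Hadamard)
t = 0.7
V = [[math.cos(t / 2), -math.sin(t / 2)],
     [math.sin(t / 2), math.cos(t / 2)]]          # "shallow" circuit head (RY)
psi0 = [1.0, 0.0]

# All-quantum reference: run the full circuit U V, then measure Z.
full = expval(Z, mat_vec(U, mat_vec(V, psi0)))

# Hybrid: classically backpropagate Z through U (Heisenberg picture),
# then measure the transformed observable after only the shallower head V.
O_back = matmul(dagger(U), matmul(Z, U))
hybrid = expval(O_back, mat_vec(V, psi0))

assert abs(full - hybrid) < 1e-12
```

The equality ⟨ψ|(UV)†Z(UV)|ψ⟩ = ⟨ψ|V†(U†ZU)V|ψ⟩ is exact; the trade-off in practice is that representing U†ZU classically grows with circuit depth.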

Physics, Electronic computers. Computer science
DOAJ Open Access 2025
Evaluation of deep learning models for flood forecasting in Bangladesh

Asif Rahman Rumee

Flooding is a recurrent and devastating issue in Bangladesh, largely due to its geographical and climatic conditions. This study examined the performance of four deep learning architectures, namely the Feed-forward Neural Network (FNN), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM), in predicting floods in Bangladesh. Utilizing a binary classification dataset of historical meteorological and hydrological data, the findings revealed that GRU outperformed the other models, achieving an accuracy of 98%, a precision of 99%, a recall of 98%, and an F1-score of 99%. In contrast, LSTM attained an accuracy of 96%, a precision of 99%, a recall of 95%, and an F1-score of 97%. These results underscore the effectiveness of GRU for operational flood forecasting, which is critical for enhancing disaster preparedness in the region.
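
The reported accuracy, precision, recall, and F1 scores for a binary flood/no-flood classifier follow directly from the confusion matrix. A minimal sketch of how these metrics are computed (the function name is ours, not from the paper):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 for binary labels (1 = flood)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

For rare-event problems like flooding, recall (missed floods) and precision (false alarms) matter more than raw accuracy, which is why the paper reports all four.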

Information technology, Electronic computers. Computer science
DOAJ Open Access 2025
Bat optimization of hybrid neural network-FOPID controllers for robust robot manipulator control

Bashra Kadhim Oleiwi, Mohamed Jasim, Ahmad Taher Azar et al.

The position and trajectory tracking control of rigid-link robot manipulators suffers from poor accuracy, unstable performance, and degraded response caused by unknown loads and external disturbances. In this paper, three hybrid control structures are proposed to control a multi-input, multi-output coupled nonlinear three-link rigid robot manipulator (3-LRRM) system and to effectively suppress chattering in the control signal. Each structure combines the benefits of fractional-order proportional-integral-derivative (FOPID) operations with the benefits of neural networks. The first hybrid control scheme is a neural-network- (NN) like fractional-order proportional-integral controller plus an NN-like fractional-order proportional-derivative controller (NN-FOPIPD), the second is an NN plus FOPID controller (NN + FOPID), and the third is an Elman NN-like FOPID controller (ELNN-FOPID). The bat optimization algorithm (BOA) is applied to find the best parameter values of each proposed control scheme by minimizing the integral time square error (ITSE) performance index. The simulations are carried out in MATLAB. Using the simulation tests, the performance of the suggested controllers is compared without retraining the controller parameters. The robustness of the designed control schemes is assessed under uncertainties in system parameters, external disturbances, and changes in initial position. The results show that the NN-FOPIPD structure demonstrates the best performance among the suggested controllers.
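
The ITSE performance index minimized by the bat optimization algorithm is the time-weighted integral of the squared error. As a hedged illustration, the sketch below computes a discrete ITSE for a plain PID loop on a toy first-order plant, with simple random search standing in for BOA; the plant, gain ranges, and function names are our assumptions, not the paper's 3-LRRM model or controllers.

```python
import random

def itse(errors, dt):
    """Integral of Time-weighted Squared Error: sum of t * e(t)^2 * dt."""
    return sum((k * dt) * e * e * dt for k, e in enumerate(errors))

def simulate_pid(kp, ki, kd, steps=200, dt=0.05, ref=1.0):
    """Discrete PID loop on a toy first-order plant x' = -x + u (not the 3-LRRM)."""
    x = integ = prev_e = 0.0
    errors = []
    for _ in range(steps):
        e = ref - x
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        x += dt * (-x + u)
        prev_e = e
        errors.append(e)
    return errors

def tune_gains(n_trials=100, seed=7):
    """Random search standing in for the bat algorithm: keep the lowest-ITSE gains."""
    rng = random.Random(seed)
    best_gains, best_cost = None, float("inf")
    for _ in range(n_trials):
        gains = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 5.0), rng.uniform(0.0, 1.0))
        cost = itse(simulate_pid(*gains), dt=0.05)
        if cost < best_cost:
            best_gains, best_cost = gains, cost
    return best_gains, best_cost
```

The time weight in ITSE penalizes errors that persist late in the response, which favors controllers that settle quickly.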

Mechanical engineering and machinery, Electronic computers. Computer science
DOAJ Open Access 2025
Information coevolution spreading model and simulation based on self-organizing multi-agents

Guoxin Ma, Kang Tian, Hongbo Sun et al.

Coevolutionary spreading, the interdependent propagation of multiple types of information (or epidemics, or social behaviors), has attracted both scientific and industrial attention due to its complex dynamics. While agent-based models (ABMs) are well-suited for modeling single-type contagion dynamics, they struggle to represent the microscopic interdependencies of co-evolving information types within different network topologies. This paper proposes a multi-information co-evolution propagation model based on self-organizing multi-agents, breaking through the limitations of traditional threshold spreading models and agent-based models. The model, which is validated through consistency with traditional SIR models under the circumstance of well-mixed agents, can be used to uncover the spreading mechanisms on different network topologies (such as ER, BA, WS) through a series of transmitting and recovering rules that act on each agent with social contagion behaviors and attributes. Furthermore, sophisticated spreading patterns, such as active counterattack and cooperative operation, are also explored based on this model to simulate the multi-information propagation process. These complex propagation simulations reveal some interesting phenomena: (1) When counterattacking the spread of a specific source information, blindly increasing the proportion of counterattackers or the information exclusion coefficient may not necessarily be the best choice, even without considering costs. (2) In networks with long-short loop structures, compared to the situation of single information dissemination, the coevolutionary spread of two types of information is more prone to avalanche phenomena, with the S (susceptible) state of information dropping sharply from a steady state of 60% to a steady state of 20% by the 10th generation. These findings provide actionable insights for controlling misinformation in social networks and optimizing public health interventions, emphasizing that "more intervention" does not always equate to "better control" in coevolutionary systems.
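
Transmitting and recovering rules acting on each agent can be illustrated with a minimal single-information network SIR simulation, in the spirit of the validation against traditional SIR models. This is a generic sketch, not the paper's multi-information model: each generation, a susceptible agent with at least one infected neighbor becomes infected with probability `beta`, and infected agents recover with probability `gamma`.

```python
import random

def simulate_sir(edges, n_agents, beta, gamma, seeds=(0,), steps=50, rng_seed=1):
    """Agent-level SIR on an arbitrary contact network; returns S-fraction per generation."""
    rng = random.Random(rng_seed)
    neighbors = {i: [] for i in range(n_agents)}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    state = ["S"] * n_agents
    for s in seeds:
        state[s] = "I"
    s_fraction = []
    for _ in range(steps):
        nxt = state[:]
        for i in range(n_agents):
            # transmitting rule: an exposed susceptible catches it with prob beta
            if state[i] == "S" and any(state[j] == "I" for j in neighbors[i]):
                if rng.random() < beta:
                    nxt[i] = "I"
            # recovering rule: an infected agent recovers with prob gamma
            elif state[i] == "I" and rng.random() < gamma:
                nxt[i] = "R"
        state = nxt
        s_fraction.append(state.count("S") / n_agents)
    return s_fraction
```

Changing `edges` lets the same agent rules run on ER, BA, or WS topologies, which is the point of the self-organizing formulation.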

Electronic computers. Computer science, Information technology
DOAJ Open Access 2025
From data silos to insights: the PRINCE multi-agent knowledge engine for preclinical drug development

Carlos Henrique Vieira-Vieira, Sarang Sanjay Kulkarni, Adam Zalewski et al.

The pharmaceutical industry faces pressure to improve the drug development process while reducing costs in an evolving regulatory landscape. This paper presents the Preclinical Information Center (PRINCE), a cloud-hosted data integration platform developed by Bayer AG in collaboration with Thoughtworks. PRINCE integrates decades of structured and unstructured safety study reports, leveraging a multi-agent architecture based on Large Language Models (LLMs) and advanced data retrieval methodologies, such as Retrieval-Augmented Generation and Text-to-SQL. In this paper, we describe the three-step evolution of PRINCE from a data search tool based on keyword matching to a resourceful research assistant capable of answering complex questions and drafting regulatory-critical documents. We highlight the iterative development process, guided by user feedback, that ensures alignment with evolving research needs and maximizes utility. Finally, we discuss the importance of building trust-based solutions and how transparency and explainability have been integrated into PRINCE. In particular, the integration of a human-in-the-loop approach enhances the accuracy and retains human accountability. We believe that the development and deployment of the PRINCE chatbot demonstrate the transformative potential of AI in the pharmaceutical industry, significantly improving data accessibility and research efficiency, while prioritizing data governance and compliance.

Electronic computers. Computer science
DOAJ Open Access 2024
Option-Critic Algorithm Based on Mutual Information Optimization

LI Junwei, LIU Quan, XU Yapeng

As an important research area within hierarchical reinforcement learning, temporal abstraction allows hierarchical reinforcement learning agents to learn policies at different time scales, which can effectively address the sparse reward problem that is difficult to handle in deep reinforcement learning. How to learn good temporal abstraction policies end-to-end remains a research challenge in hierarchical reinforcement learning. Based on the Option framework, the Option-Critic (OC) architecture can effectively solve the above problems through policy gradient theory. However, during policy learning, the OC framework suffers from a degradation problem in which the action distributions of the internal option policies become very similar. This degradation affects the experimental performance of the OC framework and leads to poor interpretability of the options. To solve these problems, mutual information is introduced as an internal reward, and an Option-Critic algorithm with mutual information optimization (MIOOC) is proposed. The MIOOC algorithm builds on the proximal policy Option-Critic algorithm to ensure the diversity of the lower-level policies. To verify the effectiveness of the algorithm, MIOOC is compared with several common reinforcement learning methods in continuous experimental environments. Experimental results show that the MIOOC algorithm speeds up model learning, improves experimental performance, and yields more discriminative option-internal policies.
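
The mutual-information intrinsic reward rests on the idea that diverse option policies make the option identity informative about the chosen action. A hedged sketch of an empirical estimate of I(option; action) from sampled pairs — this estimator is generic, not the paper's exact formulation:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical I(O; A) in nats from a list of (option, action) samples."""
    n = len(pairs)
    p_oa = Counter(pairs)                 # joint counts
    p_o = Counter(o for o, _ in pairs)    # option marginal counts
    p_a = Counter(a for _, a in pairs)    # action marginal counts
    mi = 0.0
    for (o, a), c in p_oa.items():
        # p(o,a) * log( p(o,a) / (p(o) p(a)) ), with counts substituted
        mi += (c / n) * math.log(c * n / (p_o[o] * p_a[a]))
    return mi
```

When options degenerate into near-identical action distributions the estimate collapses toward zero, so using it as an internal reward pushes the lower-level policies apart.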

Computer software, Technology (General)
DOAJ Open Access 2024
Applying the Cheetah Algorithm to optimize resource allocation in the fog computing environment

Fatemeh Arvaneh, Faraneh Zarafshan, Abbas Karimi

This study investigates the application of heuristic and meta-heuristic algorithms to address resource allocation challenges in Internet of Things (IoT) applications within fog computing environments. The primary advantage of these algorithms lies in their ability to optimize functions without the need for stringent restrictions, allowing adaptability to various linear, nonlinear, continuous, or discrete problems. Through the implementation and comparison of the Cheetah algorithm, the Gray Wolf algorithm, the Particle Swarm-Gravitational Search algorithm, and the Gray Wolf-Cuckoo Search algorithm in a MATLAB simulation environment, the study aims to minimize a criterion function comprising total time and energy consumption for IoT applications. Preliminary results indicate that the average performance of the Cheetah algorithm surpasses that of the Gray Wolf algorithm, the combined Particle Swarm-Gravitational Search algorithm, and the Gray Wolf-Cuckoo Search algorithm, suggesting the efficacy of the Cheetah algorithm for IoT resource allocation optimization within fog computing environments. The study provides insights into the comparative performance of these algorithms, laying the foundation for further exploration into enhancing resource allocation strategies in the dynamic and resource-constrained IoT and fog computing landscapes.

Electronic computers. Computer science, Cybernetics
DOAJ Open Access 2024
A fog-edge-enabled intrusion detection system for smart grids

Noshina Tariq, Amjad Alsirhani, Mamoona Humayun et al.

The Smart Grid (SG) heavily depends on the Advanced Metering Infrastructure (AMI) technology, which has shown its vulnerability to intrusions. To effectively monitor and raise alarms in response to anomalous activities, the Intrusion Detection System (IDS) plays a crucial role. However, existing intrusion detection models are typically trained on cloud servers, which exposes user data to significant privacy risks and extends the time required for intrusion detection. Training a high-quality IDS using Artificial Intelligence (AI) technologies on a single entity becomes particularly challenging when dealing with vast amounts of distributed data across the network. To address these concerns, this paper presents a novel approach: a fog-edge-enabled Support Vector Machine (SVM)-based federated learning (FL) IDS for SGs. FL is an AI technique in which models are trained locally on edge devices; only the learned parameters are shared with the global model, ensuring data privacy while enabling collaborative learning to develop a high-quality IDS model. The test and validation results demonstrate the superiority of the proposed model over existing methods, with improvements of 4.17% in accuracy, 13.19% in recall, 9.63% in precision, and 13.19% in F1 score when evaluated on the NSL-KDD dataset. Furthermore, the model performed exceptionally well on the CICIDS2017 dataset, with accuracy, precision, recall, and F1 improvements of 6.03%, 6.03%, 7.57%, and 7.08%, respectively. This approach enhances intrusion detection accuracy and safeguards user data and privacy in SG systems, making it a significant advancement in the field.
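
The federated aggregation step — clients share only learned parameters, never raw meter data — can be sketched as a sample-weighted average of client weight vectors (the classic FedAvg rule; the paper's exact SVM aggregation may differ):

```python
def federated_average(client_updates):
    """FedAvg-style aggregation: client_updates is a list of (n_samples, weights).
    Only model parameters leave each fog/edge node, never the raw data."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total for i in range(dim)]
```

Weighting by `n_samples` keeps clients with more local traffic from being drowned out by small ones, while the raw records stay on the edge devices.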

Computer engineering. Computer hardware, Electronic computers. Computer science
S2 Open Access 2013
A survey on service quality description

K. Kritikos, Barbara Pernici, P. Plebani et al.

Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service which is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including: distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.

361 citations · English · Computer Science
DOAJ Open Access 2021
Design of Ensemble Classifier Model Based on MLP Neural Network For Breast Cancer Diagnosis

Amin Rezaeipanah, Rahmad Syah, Siswi Wulandari et al.

Nowadays, breast cancer is one of the leading causes of death among women worldwide. If breast cancer is detected at an early stage, long-term survival can be ensured. Numerous methods have been proposed for the early prediction of this cancer; however, efforts are still ongoing given the importance of the problem. Artificial Neural Networks (ANN) are among the most dominant machine learning algorithms and are very popular for prediction and classification tasks. In this paper, an Intelligent Ensemble Classification method based on a Multi-Layer Perceptron neural network (IEC-MLP) is proposed for breast cancer diagnosis. The proposed method is split into two stages: parameter optimization and ensemble classification. In the first stage, the MLP Neural Network (MLP-NN) parameters, including the optimal features, hidden layers, hidden nodes, and weights, are optimized with an Evolutionary Algorithm (EA) to maximize classification accuracy. In the second stage, an ensemble of MLP-NN classifiers with the optimized parameters is applied to classify patients. The proposed IEC-MLP method not only helps reduce the complexity of the MLP-NN and effectively selects the optimal feature subset, but also achieves the minimum misclassification cost. The classification results were evaluated using IEC-MLP on different breast cancer datasets, and the prediction results obtained were very promising (98.74% accuracy on the WBCD dataset). Moreover, the proposed method outperforms the GAANN and CAFS algorithms and other state-of-the-art classifiers. In addition, IEC-MLP could also be applied to the diagnosis of other cancers.
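
The second-stage ensemble can be sketched as a majority vote over independently parameterized members. The sketch below uses trivial threshold stubs in place of trained MLP-NNs, so only the voting mechanics are illustrative:

```python
from collections import Counter

def ensemble_predict(members, x):
    """Majority vote over the ensemble members' class predictions."""
    votes = Counter(clf(x) for clf in members)
    return votes.most_common(1)[0][0]

# Stand-ins for EA-optimized MLP-NN members: simple threshold classifiers
# that disagree on borderline inputs.
members = [lambda x: int(x > 0.4),
           lambda x: int(x > 0.5),
           lambda x: int(x > 0.6)]
```

With an odd number of members a binary vote never ties, and the ensemble can be correct even when a minority of members misclassify a borderline case.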

Electronic computers. Computer science
S2 Open Access 2018
Toward Audio Beehive Monitoring: Deep Learning vs. Standard Machine Learning in Classifying Beehive Audio Samples

V. Kulyukin, Sarbajit Mukherjee, Prakhar Amlathe

Electronic beehive monitoring extracts critical information on colony behavior and phenology without invasive beehive inspections and transportation costs. As an integral component of electronic beehive monitoring, audio beehive monitoring has the potential to automate the identification of various stressors for honeybee colonies from beehive audio samples. In this investigation, we designed several convolutional neural networks and compared their performance with four standard machine learning methods (logistic regression, k-nearest neighbors, support vector machines, and random forests) in classifying audio samples from microphones deployed above landing pads of Langstroth beehives. On a dataset of 10,260 audio samples where the training and testing samples were separated from the validation samples by beehive and location, a shallower raw audio convolutional neural network with a custom layer outperformed three deeper raw audio convolutional neural networks without custom layers and performed on par with the four machine learning methods trained to classify feature vectors extracted from raw audio samples. On a more challenging dataset of 12,914 audio samples where the training and testing samples were separated from the validation samples by beehive, location, time, and bee race, all raw audio convolutional neural networks performed better than the four machine learning methods and a convolutional neural network trained to classify spectrogram images of audio samples. A trained raw audio convolutional neural network was successfully tested in situ on a low-voltage Raspberry Pi computer, which indicates that convolutional neural networks can be added to a repertoire of in situ audio classification algorithms for electronic beehive monitoring. The main trade-off between deep learning and standard machine learning is between feature engineering and training time: while the convolutional neural networks required no feature engineering and generalized better on the second, more challenging dataset, they took considerably more time to train than the machine learning methods. To ensure the replicability of our findings and to provide performance benchmarks for interested research and citizen science communities, we have made public our source code and our curated datasets.

89 citations · English · Engineering
DOAJ Open Access 2020
Deteksi Dini Status Keanggotaan Industri Kebugaran Menggunakan Pendekatan Supervised Learning (Early Detection of Fitness-Industry Membership Status Using a Supervised Learning Approach)

Julio Narabel, Setia Budi

In the fitness industry, the number of members is a major factor in the sustainability of the business. The ability of managers and trainers to detect members who show signs of quitting their membership is critical. Four supervised learning classification methods, namely Support Vector Machine, Random Forest, K-Nearest Neighbor, and Artificial Neural Network, were used for early detection on two variants of datasets of different sizes. Classification results are separated into three zones: Green Zone, Yellow Zone, and Red Zone. The Artificial Neural Network method trained with backpropagation achieved 99.90% accuracy on the larger dataset. The evaluation was done using the confusion matrix and AUC-ROC curves.

Electronic computers. Computer science, Technology
DOAJ Open Access 2018
A UML profile for representing real-time design patterns

Hela Marouane, Claude Duvallet, Achraf Makni et al.

Systems that manipulate large volumes of data need to be managed with Real-Time (RT) databases. These systems are subject to several temporal constraints related to data and to transactions; thus, their design remains a complex task. To manage this complexity, it is necessary to adopt design methods that support the temporal constraints on data and transactions. Among design methods, those based on patterns have been widely used in several fields. However, despite their advantages, these patterns present some shortcomings: they do not manage pattern variability efficiently, and they do not identify the pattern elements when the patterns are instantiated. To overcome these limitations, we propose in this paper a new UML profile to (i) express the variability in patterns and (ii) identify the pattern elements in their instances. Besides, in order to better capture domain knowledge, the proposed profile extends UML with concepts related to real-time databases and integrates OCL (Object Constraint Language) to enforce the consistency of variation points. Finally, we give an example of an RT pattern that illustrates these UML extensions, implement the proposed profile, and validate the pattern diagrams using the proposed constraints. Keywords: UML profile, Design patterns, Object Constraint Language, Real-time database

Electronic computers. Computer science
DOAJ Open Access 2017
A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm

Sajidha Syed Azimuddin, Kalyani Desikan

Open issues with respect to the K means algorithm include identifying the number of clusters, initial seed selection, clustering tendency, handling empty clusters, and identifying outliers. In this paper we propose a novel and simple technique that considers both the density and the distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are actual concepts and not the means of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval-based technique of Fouad Khan. We see that our method outperforms the interval-based method. We have also compared our method with the original random K means and K Means++ algorithms.
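
A density-with-distance seed selection can be sketched as follows: take the densest point as the first seed, then repeatedly pick the point maximizing (distance to the nearest chosen seed) times (local density), so seeds land in different dense regions. The product scoring rule and the `radius` parameter are our assumptions for illustration, not necessarily the authors' exact criterion:

```python
import math

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def select_seeds(points, k, radius):
    """Pick k initial seeds that are both locally dense and mutually distant."""
    # density of each point: number of neighbors within `radius`
    density = [sum(1 for q in points if euclid(p, q) <= radius) for p in points]
    seeds = [max(range(len(points)), key=lambda i: density[i])]
    while len(seeds) < k:
        def score(i):
            d = min(euclid(points[i], points[s]) for s in seeds)
            return d * density[i]  # far from the chosen seeds AND locally dense
        seeds.append(max((i for i in range(len(points)) if i not in seeds), key=score))
    return [points[i] for i in seeds]
```

Multiplying by density keeps isolated outliers (high distance, density 1) from being picked as seeds, which plain farthest-point selection would do.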

Electronic computers. Computer science
DOAJ Open Access 2017
GrabCut Image Segmentation Algorithm Based on Structure Tensor

ZHANG Yong, YUAN Jiazheng, LIU Hongzhe, LI Qing

The traditional GrabCut-based image segmentation method builds a graph model mainly from image pixel values and does not take into account the rich texture information of color images. This paper presents an image segmentation algorithm based on the GrabCut model and contrasts the results of the Structure Tensor (ST) GrabCut segmentation method with those of the traditional GrabCut segmentation method. The method uses the structure tensor together with the pixel values to construct a compact structure tensor. For concise and efficient computation, this paper extends the Gaussian Mixture Model (GMM) built in the GrabCut method to tensor space and uses the Kullback-Leibler (KL) divergence instead of the commonly used Riemannian metric. Extensive experiments on synthetic texture images and natural images show that, compared with Carsten Rother's GrabCut and the GACWRF method, the algorithm achieves more accurate segmentation, not only fusing texture and color information but also improving computational efficiency.
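
Two ingredients of the method are easy to illustrate: the per-pixel structure tensor built from image gradients, and a closed-form KL divergence between Gaussians that can stand in for a Riemannian metric when comparing mixture components. A minimal sketch using 1-D Gaussians and unsmoothed central differences; the paper's tensor-space GMM formulation is richer:

```python
import math

def structure_tensor(img, x, y):
    """2x2 structure tensor at pixel (x, y) from central-difference gradients."""
    ix = (img[y][x + 1] - img[y][x - 1]) / 2.0
    iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return [[ix * ix, ix * iy], [ix * iy, iy * iy]]

def kl_gauss_1d(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ) in closed form."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

def sym_kl(mu0, s0, mu1, s1):
    """Symmetrized KL, a cheap distance between Gaussian components."""
    return kl_gauss_1d(mu0, s0, mu1, s1) + kl_gauss_1d(mu1, s1, mu0, s0)
```

The closed form is what makes KL attractive here: comparing two Gaussian components costs a few arithmetic operations, whereas Riemannian distances on tensor space generally require eigendecompositions.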

Computer engineering. Computer hardware, Computer software

Page 37 of 904234