Results for "deep learning"

Showing 20 of ~3,053,031 results · from CrossRef, DOAJ

DOAJ Open Access 2025
Fault Detection in Steel Belts of Tires Using Magnetic Sensors and Different Deep Learning Models

Sercan Yalçın

Tire failures pose significant safety risks, necessitating advanced inspection techniques. This research investigates the application of magnetic sensors and deep learning for detecting defects in the steel belts of tires. The aim was to develop a robust and accurate fault detection system by measuring magnetic field variations caused by defects. In this study, a magnetic image sensor circuit was designed, and the images obtained from it were classified into three steel-belt conditions: none, crack, and delamination. Various deep learning models and their hybrid architectures were explored and compared. Experimental results demonstrate that all models exhibit strong performance, with the Transformer model achieving the highest accuracy of 96.12%. The developed system offers a potential solution for improving tire safety and reducing maintenance costs in industry.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2025
Comprehensive framework of machine learning and deep learning architectures with metaheuristic optimization for high-fidelity prediction of nanofluid specific heat capacity

Priya Mathur, Dheeraj Kumar, Farhan Sheth et al.

Abstract Accurately predicting the specific heat capacity of nanofluids is critical for optimizing their performance in engineering and industrial applications. This study explores twelve machine learning and deep learning models using conventional and stacking ensemble techniques. In the stacking framework, a linear regression model is employed as a meta-learner to improve base model performance. Additionally, two nature-inspired metaheuristic optimization algorithms—Particle Swarm Optimization and Grey Wolf Optimization—were used to fine-tune the hyperparameters of machine learning models. This research is based on a comprehensive dataset of 1,269 experimental nanofluid samples, with key inputs including nanofluid type (hybrid and direct), temperature, and volume concentration. To improve model generalization, data augmentation strategies inspired by polynomial/Fourier expansions and autoencoder-based methods were implemented. The results demonstrate that the stacked multi-layer perceptron model, integrated with linear regression, achieved the highest predictive accuracy, recording an R² score of 0.99927, a mean squared error of 466.06, and a root mean squared error of 21.58. Among standalone machine learning models, CatBoost was the best performer (R² score: 0.99923, MSE: 487.71, RMSE: 22.08), ranking second overall. The impact of metaheuristic optimization was significant; Grey Wolf Optimization, for instance, reduced the LightGBM model’s mean squared error from 29386.43 to 6549.006. These findings underscore the efficacy of hybrid ML/DL frameworks, advanced data augmentation, and metaheuristic optimization in predictive modeling of nanofluid thermophysical properties, providing a robust foundation for future research in heat transfer applications.
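The stacking framework the abstract describes — base regressors whose out-of-fold predictions feed a linear-regression meta-learner — can be sketched as follows. The model choices and synthetic data are illustrative stand-ins, not the paper's exact pipeline or dataset.

```python
# Stacking sketch: base models' cross-validated predictions train a
# linear-regression meta-learner, as described in the abstract.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# toy stand-ins for inputs such as temperature and volume concentration
X = rng.uniform(0, 1, size=(400, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 400)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    final_estimator=LinearRegression(),  # meta-learner, per the abstract
    cv=5,
)
stack.fit(X, y)
print(round(stack.score(X, y), 3))  # in-sample R^2
```

The meta-learner sees only the base models' cross-validated predictions, which limits leakage from any single overfit base model.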

Medicine, Science
DOAJ Open Access 2025
Treatment Effect Estimation in Survival Analysis Using Copula-Based Deep Learning Models for Causal Inference

Jong-Min Kim

This paper presents the use of copula-based deep learning with Horvitz–Thompson (HT) weights and inverse probability of treatment weighting (IPTW) for estimating propensity scores in causal inference problems. The study compares the performance of several statistical methods—copula-based deep learning with HT and IPTW weights, propensity score matching (PSM), logistic regression, and causal forests—in estimating the average treatment effect (ATE) using both simulated and real-world data. Our results show that the copula-based recurrent neural network (RNN) with HT weights provides the most precise and robust treatment effect estimate, with narrow confidence intervals indicating high confidence in the results. The PSM model yields the largest treatment effect estimate, but with greater uncertainty, suggesting sensitivity to data imbalances. In contrast, logistic regression and causal forests produce substantially smaller estimates, potentially underestimating the treatment effect, particularly in structured datasets such as COMPAS scores. Overall, copula-based methods (HT and IPTW) tend to produce higher and more precise estimates, making them effective choices for treatment effect estimation in complex settings. Our findings emphasize the importance of method selection based on both the magnitude and precision of the treatment effect for accurate analysis.
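The IPTW idea underlying the abstract — estimate propensity scores, then reweight outcomes by the inverse probability of the treatment actually received — can be sketched minimally. Here plain logistic regression stands in for the copula-based deep model, and the data are synthetic with a known true ATE of 2.0.

```python
# Minimal IPTW sketch of the weighting scheme described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-x[:, 0]))               # true propensity depends on x
t = rng.binomial(1, p)                        # confounded treatment assignment
y = 2.0 * t + x[:, 0] + rng.normal(0, 1, n)   # true ATE = 2.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = t / ps + (1 - t) / (1 - ps)               # inverse-probability weights
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(round(ate, 2))
```

Without the weights, the naive difference in group means would be biased upward because x drives both treatment and outcome; the weighting removes that confounding.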

DOAJ Open Access 2025
Comparison of Deep Learning Models for LAI Simulation and Interpretable Hydrothermal Coupling in the Loess Plateau

Junpo Yu, Yajun Si, Wen Zhao et al.

As the world’s largest loess deposit region, the Loess Plateau’s vegetation dynamics are crucial for its regional water–heat balance and ecosystem functioning. Leaf Area Index (LAI) serves as a key indicator bridging canopy architecture and plant physiological activities. Existing studies have made significant advancements in simulating LAI, yet accurate LAI simulation remains challenging. To address this challenge and gain deeper insights into the environmental controls of LAI, this study aims to accurately simulate LAI in the Loess Plateau using deep learning models and to elucidate the spatiotemporal influence of soil moisture and temperature on LAI dynamics. For this purpose, we used three deep learning models, namely Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and Interpretable Multivariable (IMV)-LSTM, to simulate LAI in the Loess Plateau, using only soil moisture and temperature as inputs. Results indicated that our approach outperformed traditional models and effectively captured LAI variations across different vegetation types. The attention analysis revealed that soil moisture mainly influenced LAI in the arid northwest, while temperature was the predominant factor in the humid southeast. Seasonally, soil moisture was crucial in spring and summer, notably in grasslands and croplands, whereas temperature dominated in autumn and winter. Notably, forests had the longest temperature-sensitive periods. As LAI increased, soil moisture became more influential, and at peak LAI, both factors exerted varying controls on different vegetation types. These findings demonstrate the strength of deep learning for simulating vegetation–climate interactions and provide insights into hydrothermal regulation mechanisms in semiarid regions.

DOAJ Open Access 2025
Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video)

Jiao Li, Xiaojuan Jing, Qin Zhang et al.

Abstract Background Subepithelial lesions (SELs) present significant diagnostic challenges in gastrointestinal endoscopy, particularly in differentiating malignant types, such as gastrointestinal stromal tumors (GISTs) and neuroendocrine tumors, from benign types like leiomyomas. Misdiagnosis can lead to unnecessary interventions or delayed treatment. To address this challenge, we developed ECMAI-WME, a parallel fusion deep learning model integrating white light endoscopy (WLE) and microprobe endoscopic ultrasonography (EUS), to improve SEL classification and lesion characterization. Methods A total of 523 SELs from four hospitals were used to develop serial and parallel fusion AI models. The Parallel Model, demonstrating superior performance, was designated as ECMAI-WME. The model was tested on an external validation cohort (n = 88) and a multicenter test cohort (n = 274). Diagnostic performance, lesion characterization, and clinical decision-making support were comprehensively evaluated and compared with endoscopists’ performance. Results The ECMAI-WME model significantly outperformed endoscopists in diagnostic accuracy (96.35% vs. 63.87–86.13%, p < 0.001) and treatment decision-making accuracy (96.35% vs. 78.47–86.13%, p < 0.001). It achieved 98.72% accuracy in internal validation, 94.32% in external validation, and 96.35% in multicenter testing. For distinguishing gastric GISTs from leiomyomas, the model reached 91.49% sensitivity, 100% specificity, and 96.38% accuracy. Lesion characteristics were identified with a mean accuracy of 94.81% (range: 90.51–99.27%). The model maintained robust performance despite class imbalance, confirmed by five complementary analyses. Subgroup analyses showed consistent accuracy across lesion size, location, or type (p > 0.05), demonstrating strong generalizability. 
Conclusions The ECMAI-WME model demonstrates excellent diagnostic performance and robustness in multiclass SEL classification and characterization, supporting its potential for real-time deployment to enhance diagnostic consistency and guide clinical decision-making.

Computer applications to medicine. Medical informatics
CrossRef Open Access 2024
Perspective Chapter: Deep Learning Misconduct and How Conscious Learning Avoids It

Juyang Weng

“Deep learning” uses Post-Selection—selecting a model after training multiple models on data. The performance data of “Deep Learning” have been deceptively inflated by two misconducts: (1) cheating in the absence of a test, and (2) hiding bad-looking data. Through the same misconducts, a simple method, Pure-Guess Nearest Neighbor (PGNN), gives no errors on any validation dataset V, as long as V is in the possession of the authors and both the amount of storage space and the training time are finite but unbounded. The misconducts are fatal, because “Deep Learning” is not generalizable, since it overfits a sample set V. The charges here are applicable to all learning modes. This chapter proposes new AI metrics, called developmental errors, for all networks trained under four Learning Conditions: (1) a body including sensors and effectors, (2) an incremental learning architecture (due to the “big data” flaw), (3) a training experience, and (4) a limited amount of computational resources. Developmental Networks avoid Deep Learning misconduct because they train a sole system, which automatically discovers context rules on the fly by generating emergent Turing machines that are optimal in the sense of maximum likelihood across a lifetime, conditioned on the four Learning Conditions.

DOAJ Open Access 2024
Behavioral Motivation and Influencing Factors of Graduate Students Using AIGC Tool: An Empirical Analysis Based on Questionnaire Survey

Yijia WAN, Liping GU

[Purpose/Significance] Exploring graduate students' acceptance and usage habits of AIGC tools in academic research, and promoting their positive engagement with emerging technologies, is one of the goals of library knowledge services and information literacy education. This paper aims to reveal how internal and external factors influence graduate students' use of AIGC tools, clarify their behavioral motivation for using such tools to support learning and research, help libraries design and promote AIGC services accordingly, and advance the application of AIGC technology in knowledge services. [Method/Process] Based on the UTAUT2 model, and drawing on related theories such as perceived value together with the characteristics of AIGC tools and the graduate student population, this study constructed a model of the factors influencing graduate students' AIGC tool use and tested it empirically through a questionnaire survey and structural equation modeling. Respondents were graduate students at universities or research institutes; questionnaires were distributed through social media platforms, enterprise WeChat contacts, and email from July to August 2024. The valid responses were analyzed with statistical software such as SPSS and SmartPLS, including descriptive statistics, reliability and validity testing, and structural equation model analysis. [Results/Conclusions] Functional, use, and emotional value at the tool level, individual innovativeness at the personal level, and social influence at the environmental level all have significant positive effects on graduate students' willingness to use AIGC tools, and indirectly affect their use behavior.
Facilitating conditions, such as network equipment, also have a significant positive impact on usage. AIGC tool developers and library service designers should therefore attend to both functional advantages and convenience: on the one hand, emphasize the tool's functional value, that is, its auxiliary role in graduate study and research; on the other, ensure the tool is friendly in design, easy to operate, low in technical threshold, and sustainable in use. From a graduate education perspective, tool use should be deeply integrated with students' professional learning and research so that information literacy drives broader competence. Meanwhile, strengthening students' innovative thinking and comprehensive ability training, and guiding AIGC application skills and research thinking to reinforce each other, will help new technologies genuinely support learning and research and ultimately serve the goal of cultivating high-level innovative talent.

Bibliography. Library science. Information resources, Agriculture
DOAJ Open Access 2024
HRCTCov19-a high-resolution chest CT scan image dataset for COVID-19 diagnosis and differentiation

Iraj Abedi, Mahsa Vali, Bentolhoda Otroshi et al.

Abstract Introduction Computed tomography (CT) was a widely used diagnostic technique for COVID-19 during the pandemic. High-Resolution Computed Tomography (HRCT) is a type of computed tomography that enhances image resolution through advanced acquisition methods. Due to privacy concerns, publicly available COVID-19 CT image datasets are extremely hard to come by, making it challenging to research and develop AI-powered COVID-19 diagnostic algorithms based on CT images. Data description To address this issue, we created HRCTCov19, a new COVID-19 high-resolution chest CT scan image collection that includes not only COVID-19 cases of Ground Glass Opacity (GGO), Crazy Paving, and Air Space Consolidation but also CT images of COVID-19-negative cases. The HRCTCov19 dataset, which includes slice-level and patient-level labeling, has the potential to assist COVID-19 research, in particular diagnosis and differentiation using AI algorithms, machine learning, and deep learning methods. The dataset, accessible on the web at http://databiox.com, includes 181,106 chest HRCT images from 395 patients labeled as GGO, Crazy Paving, Air Space Consolidation, and Negative.

Medicine, Biology (General)
DOAJ Open Access 2024
Fast reconstruction of SMS bSSFP myocardial perfusion images using noise map estimation network (NoiseMapNet): a head-to-head comparison with parallel imaging and iterative reconstruction

Naledi Lenah Adam, Grzegorz Kowalik, Andrew Tyler et al.

Background Simultaneous multi-slice (SMS) bSSFP imaging enables stress myocardial perfusion imaging with high spatial resolution and increased spatial coverage. Standard parallel imaging techniques (e.g., TGRAPPA) can be used for image reconstruction but result in a high noise level. Alternatively, iterative reconstruction techniques based on temporal regularization (ITER) improve image quality but are associated with reduced temporal signal fidelity and long computation times, limiting their online use. The aim is to develop an image reconstruction technique for SMS-bSSFP myocardial perfusion imaging combining parallel imaging and image-based denoising using a novel noise map estimation network (NoiseMapNet), which preserves both sharpness and temporal signal profiles and has low computational cost. Methods The proposed reconstruction of SMS images consists of a standard temporal parallel imaging reconstruction (TGRAPPA) with motion correction (MOCO) followed by image denoising using NoiseMapNet. NoiseMapNet is a deep learning network based on a 2D U-Net architecture that predicts a noise map from an input noisy image; this map is then subtracted from the noisy image to generate the denoised image. The approach was evaluated in 17 patients who underwent stress perfusion imaging using an SMS-bSSFP sequence. Images were reconstructed with (a) TGRAPPA with MOCO (hereafter TGRAPPA), (b) iterative reconstruction with integrated motion compensation (ITER), and (c) the proposed NoiseMapNet-based reconstruction. Normalized mean squared error (NMSE) with respect to TGRAPPA, myocardial sharpness, image quality, perceived SNR (pSNR), and number of diagnostic segments were evaluated. Results NMSE of NoiseMapNet was lower than that of ITER for both myocardium (0.045 ± 0.021 vs. 0.172 ± 0.041, p < 0.001) and left ventricular blood pool (0.025 ± 0.014 vs. 0.069 ± 0.020, p < 0.001).
There were no significant differences between the methods for myocardial sharpness (p = 0.77) or number of diagnostic segments (p = 0.36). ITER led to higher image quality than NoiseMapNet/TGRAPPA (2.7 ± 0.4 vs. 1.8 ± 0.4/1.3 ± 0.6, p < 0.001) and higher pSNR than NoiseMapNet/TGRAPPA (3.0 ± 0.0 vs. 2.0 ± 0.0/1.3 ± 0.6, p < 0.001). Importantly, NoiseMapNet yielded higher pSNR (p < 0.001) and image quality (p < 0.008) than TGRAPPA. Computation time of NoiseMapNet was only 20 s for one entire dataset. Conclusion NoiseMapNet-based reconstruction enables fast SMS image reconstruction for stress myocardial perfusion imaging while preserving sharpness and temporal signal profiles.
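The key structural step of the reconstruction — predict a noise map, then subtract it from the noisy image — can be sketched with NumPy. A crude high-pass residual stands in for the 2D U-Net here; this only illustrates the subtraction structure, not the trained model.

```python
# NoiseMapNet-style denoising sketch: denoised = noisy - predicted_noise_map.
# A local-mean residual plays the role of the network's noise-map prediction.
import numpy as np

def denoise_via_noise_map(noisy, k=5):
    pad = k // 2
    padded = np.pad(noisy, pad, mode="reflect")
    smooth = np.zeros_like(noisy)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            smooth[i, j] = padded[i:i + k, j:j + k].mean()
    noise_map = noisy - smooth   # stand-in for the network's predicted noise map
    return noisy - noise_map     # the subtraction step described in the abstract

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # toy "image"
noisy = clean + rng.normal(0, 0.1, clean.shape)
out = denoise_via_noise_map(noisy)
print(np.abs(out - clean).mean() < np.abs(noisy - clean).mean())  # error reduced
```

In the actual method the noise map comes from a learned network rather than a fixed filter, which is what lets it suppress noise without blurring sharp myocardial borders.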

Diseases of the circulatory (Cardiovascular) system
DOAJ Open Access 2023
Automated deep bottleneck residual 82-layered architecture with Bayesian optimization for the classification of brain and common maternal fetal ultrasound planes

Fatima Rauf, Muhammad Attique Khan, Ali Kashif Bashir et al.

Despite a worldwide decline in maternal mortality over the past two decades, a significant gap persists between low- and high-income countries, with 94% of maternal mortality concentrated in low- and middle-income nations. Ultrasound serves as a prevalent diagnostic tool in prenatal care for monitoring fetal growth and development. Nevertheless, acquiring standard fetal ultrasound planes with accurate anatomical structures proves challenging and time-intensive, even for skilled sonographers. An automated computer-aided diagnostic (CAD) system is therefore required for identifying common maternal fetal planes from ultrasound images. A new residual bottleneck mechanism-based deep learning architecture, 82 layers deep, has been proposed. The architecture includes three residual blocks, each with two highway paths and one skip connection, and a 3 × 3 convolutional layer is added before each residual block. In the training process, several hyperparameters were initialized using Bayesian optimization (BO) rather than manual tuning. Deep features are extracted from the average pooling layer and used for classification. Because classification over all extracted features increased computational time, an improved search-based moth flame optimization algorithm is proposed for optimal feature selection. The data are then classified using neural network classifiers on the selected features. The experimental phase involved analysis of ultrasound images, specifically fetal brain and common maternal fetal images. The proposed method achieved 78.5% and 79.4% accuracy for brain fetal planes and common maternal fetal planes, respectively. Comparison with several pre-trained neural networks and state-of-the-art (SOTA) optimization algorithms shows improved accuracy.
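The residual idea the abstract relies on — a block's output is its transformation plus an identity skip connection — can be sketched in NumPy. The shapes and the toy dense "highway" transforms are illustrative; the paper's 82-layer convolutional architecture is not reproduced here.

```python
# Residual-block sketch: output = transform(x) + x, so gradients can bypass
# the transforms via the skip connection.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # two toy "highway" transforms plus the identity skip connection
    h = relu(x @ w1)
    h = h @ w2
    return relu(h + x)  # skip connection added before the final activation

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))            # batch of 4 feature vectors
w1 = rng.normal(scale=0.1, size=(16, 16))
w2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, w1, w2)
print(y.shape)  # same shape as the input, as the skip connection requires
```

The skip connection is what makes very deep stacks (such as the 82 layers described) trainable: when the transforms are near zero, the block defaults to the identity.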

Medicine (General)
DOAJ Open Access 2023
Transmission line fault detection and classification based on SA-MobileNetV3

Yanhui Xi, Weijie Zhang, Feng Zhou et al.

Accurate fault detection and classification help to analyze fault causes and quickly restore faulty phases. Deep learning can automatically extract fault features and identify fault types from the original three-phase voltage and current signals. However, challenges remain in recognition accuracy and computational complexity; more importantly, high-level fault features cannot be extracted from one-dimensional time series. This paper presents a robust fault classification method based on SA-MobileNetV3 for transmission systems. Because the SE (Squeeze-and-Excitation) attention module cannot aggregate spatial-dimension information on the channel, an SA (shuffle attention) module is introduced into MobileNetV3, which effectively fuses the importance of pixels across channels and across locations within the same channel. In addition, transforming the time-series three-phase voltage and current signals into two-dimensional images via CWT (continuous wavelet transform) turns the task into image recognition, which can mine high-level fault features and classify faults visually. To verify the effectiveness of the method, a 735 kV transmission line model was built in Simulink for data generation. Various fault conditions and factors are considered to verify adaptability and generalizability. Simulation results show that the method can quickly and accurately identify 11 types of faults, with accuracy as high as 99.90%. A comparison with other existing techniques shows the superiority of the proposed SA-MobileNetV3, and its better anti-noise performance makes it more suitable for real fault signals taken on site.
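The signal-to-image step the abstract describes — a continuous wavelet transform turning a 1-D waveform into a 2-D scalogram that an image classifier can consume — can be sketched with a minimal Morlet CWT. This is illustrative only; the paper's exact wavelet, scales, and preprocessing are not specified here.

```python
# CWT sketch: convolve the signal with scaled Morlet wavelets and stack the
# magnitudes into a 2-D scalogram "image".
import numpy as np

def morlet(t, w0=5.0):
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2)

def cwt_scalogram(signal, scales):
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1) / s
        wavelet = morlet(t) / np.sqrt(s)
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.array(rows)  # shape (n_scales, n_samples): one row per scale

fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t)          # toy stand-in for a fault transient
img = cwt_scalogram(sig, scales=np.arange(2, 34))
print(img.shape)
```

Each row of `img` is the response at one scale, so localized transients show up as bright patches in time–scale space, which is exactly what gives a 2-D CNN something to classify.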

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2023
Selection of target-binding proteins from the information of weakly enriched phage display libraries by deep sequencing and machine learning

Tomoyuki Ito, Thuy Duong Nguyen, Yutaka Saito et al.

Despite the advances in surface-display systems for directed evolution, variants with high affinity are not always enriched, due to undesirable biases that increase target-unrelated variants during biopanning. Here, our goal was to design a library containing improved variants from the information in a "weakly enriched" library where functional variants were only weakly enriched. Deep sequencing of the previous biopanning result, in which no functional antibody mimetics were experimentally identified, revealed that the weak enrichment was partly due to undesirable biases during the phage infection and amplification steps. Clustering analysis of the deep sequencing data from the appropriate steps revealed no distinct sequence patterns, but a Bayesian machine learning model trained with the selected deep sequencing data supplied nine clusters with distinct sequence patterns. Phage libraries were designed on the basis of the sequence patterns identified, and four improved variants with target-specific affinity (EC50 = 80–277 nM) were identified by biopanning. The selection and use of deep sequencing data without undesirable bias enabled us to extract information on prospective variants. In summary, using appropriate deep sequencing data and machine learning on the sequence data makes it possible to find sequence space where functional variants are enriched.

Therapeutics. Pharmacology, Immunologic diseases. Allergy
DOAJ Open Access 2023
Landslide Susceptibility Mapping Based on Deep Learning Algorithms Using Information Value Analysis Optimization

Junjie Ji, Yongzhang Zhou, Qiuming Cheng et al.

Selecting samples with non-landslide attributes significantly impacts the deep-learning modeling of landslide susceptibility mapping. This study presents a method of information value analysis in order to optimize the selection of negative samples used for machine learning. Recurrent neural network (RNN) has a memory function, so when using an RNN for landslide susceptibility mapping purposes, the input order of the landslide-influencing factors affects the resulting quality of the model. The information value analysis calculates the landslide-influencing factors, determines the input order of data based on the importance of any specific factor in determining the landslide susceptibility, and improves the prediction potential of recurrent neural networks. The simple recurrent unit (SRU), a newly proposed variant of the recurrent neural network, is characterized by possessing a faster processing speed and currently has less application history in landslide susceptibility mapping. This study used recurrent neural networks optimized by information value analysis for landslide susceptibility mapping in Xinhui District, Jiangmen City, Guangdong Province, China. Four models were constructed: the RNN model with optimized negative sample selection, the SRU model with optimized negative sample selection, the RNN model, and the SRU model. The results show that the RNN model with optimized negative sample selection has the best performance in terms of AUC value (0.9280), followed by the SRU model with optimized negative sample selection (0.9057), the RNN model (0.7277), and the SRU model (0.6355). In addition, several objective measures of accuracy (0.8598), recall (0.8302), F1 score (0.8544), Matthews correlation coefficient (0.7206), and the receiver operating characteristic also show that the RNN model performs the best. 
Therefore, information value analysis can be used to optimize negative sample selection in landslide susceptibility mapping and thereby improve model performance; however, SRU underperforms RNN in terms of model performance.
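The information value used above to weigh landslide-influencing factors is, in its common form, the log ratio of landslide density within a factor class to the overall landslide density. A minimal sketch, with illustrative cell counts rather than the study's data:

```python
# Information value per factor class: ln(class landslide share / class area share).
# Positive values mark classes where landslides are over-represented.
import numpy as np

def information_value(landslide_cells, total_cells):
    landslide_cells = np.asarray(landslide_cells, dtype=float)
    total_cells = np.asarray(total_cells, dtype=float)
    landslide_share = landslide_cells / landslide_cells.sum()
    area_share = total_cells / total_cells.sum()
    return np.log(landslide_share / area_share)

# toy slope-angle classes: landslide cell counts vs. total cell counts
iv = information_value([5, 30, 65], [500, 300, 200])
print(np.round(iv, 2))  # → [-2.3   0.    1.18]
```

Ranking factors by such scores is what lets the study order the RNN's inputs by importance before training.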

DOAJ Open Access 2023
Feasibility and effectiveness of automatic deep learning network and radiomics models for differentiating tumor stroma ratio in pancreatic ductal adenocarcinoma

Hongfan Liao, Jiang Yuan, Chunhua Liu et al.

Abstract Objective This study aims to compare the feasibility and effectiveness of automatic deep learning networks and radiomics models in differentiating low tumor stroma ratio (TSR) from high TSR in pancreatic ductal adenocarcinoma (PDAC). Methods A retrospective analysis was conducted on a total of 207 PDAC patients from three centers (training cohort: n = 160; test cohort: n = 47). TSR was assessed on hematoxylin and eosin-stained specimens by experienced pathologists and divided into low TSR and high TSR. Deep learning and radiomics models were developed, including ShuffleNetV2, Xception, MobileNetV3, ResNet18, support vector machine (SVM), k-nearest neighbor (KNN), random forest (RF), and logistic regression (LR). Additionally, clinical models were constructed through univariate and multivariate logistic regression. Kaplan–Meier survival analysis and log-rank tests were conducted to compare overall survival time between the TSR groups. Results In differentiating low TSR from high TSR, the deep learning models based on ShuffleNetV2, Xception, MobileNetV3, and ResNet18 achieved AUCs of 0.846, 0.924, 0.930, and 0.941, respectively, outperforming the radiomics models based on SVM, KNN, RF, and LR with AUCs of 0.739, 0.717, 0.763, and 0.756, respectively. ResNet18 achieved the best predictive performance. The clinical model based on T stage alone performed worse than both the deep learning and radiomics models. Survival analysis based on 142 of the 207 patients demonstrated that patients with low TSR had longer overall survival. Conclusions Deep learning models demonstrate feasibility and superiority over radiomics in differentiating TSR in PDAC. The tumor stroma ratio in the PDAC microenvironment plays a significant role in determining prognosis.
Critical relevance statement The objective was to compare the feasibility and effectiveness of automatic deep learning networks and radiomics models in identifying the tumor-stroma ratio in pancreatic ductal adenocarcinoma. Our findings demonstrate that deep learning models exhibited superior performance compared to traditional radiomics models. Key points
• Deep learning demonstrates better performance than radiomics in differentiating tumor-stroma ratio in pancreatic ductal adenocarcinoma.
• The tumor-stroma ratio in the pancreatic ductal adenocarcinoma microenvironment plays a protective role in prognosis.
• Preoperative prediction of tumor-stroma ratio contributes to clinical decision-making and guiding precise medicine.

Medical physics. Medical radiology. Nuclear medicine
DOAJ Open Access 2022
A Novel Approach for Multichannel Epileptic Seizure Classification Based on Internet of Things Framework Using Critical Spectral Verge Feature Derived from Flower Pollination Algorithm

Dhanalekshmi Prasad Yedurkar, Shilpa P. Metkar, Fadi Al-Turjman et al.

A novel approach was proposed for multichannel epileptic seizure classification that helps automatically locate seizure activity in the focal brain region. This paper suggests a smart-phone-based Internet of Things (IoT) framework utilizing a novel frequency-derived feature termed the multiresolution critical spectral verge (MCSV) for epileptic seizure classification, optimized using a flower pollination algorithm (FPA). A wireless sensor network (WSN) was utilized to record the electroencephalography (EEG) signals of epileptic patients. The EEG signal was first pre-processed with a multiresolution-based adaptive filtering (MRAF) method. Then, the maximal frequency point at which the power spectral density (PSD) of each EEG segment exceeded the average spectral power of the corresponding frequency band was computed. This point was further optimized to extract a point termed the critical spectral verge (CSV), capturing the high-frequency oscillations that represent the actual seizure activity in the EEG signal. Next, a support vector machine (SVM) classifier was used for channel-wise classification of seizure and non-seizure regions using CSV as a feature. This classification process, using the CSV feature extracted from the MRAF output, is referred to as the MCSV approach. Finally, cloud-based services were employed to analyze the EEG information from the subject's smart phone. An exhaustive analysis assessed the performance of the MCSV approach on two datasets. The presented approach showed improved performance, with 93.83% average sensitivity, 97.94% average specificity, 97.38% average accuracy with the SVM classifier, and a 95.89% average detection rate compared with other state-of-the-art studies, including deep learning approaches.
Methods in the literature have been unable to precisely localize the origin of seizure activity in the brain and have reported low seizure detection rates. This work introduced an optimized CSV feature that was effectively used for multichannel seizure classification and localization of seizure origin. The proposed MCSV approach will help diagnose epileptic behavior from multichannel EEG signals, which will be extremely useful for neuro-experts analyzing seizure details from different regions of the brain.
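The core of the CSV feature as described — the highest frequency at which a segment's PSD still exceeds the band's average spectral power — can be sketched with a plain periodogram. The MRAF filtering stage and FPA optimization of the paper are omitted; band limits and the toy signal are illustrative.

```python
# Sketch of the critical spectral verge: the maximal frequency whose PSD
# exceeds the mean spectral power of the analysis band.
import numpy as np

def critical_spectral_verge(segment, fs, band=(30.0, 100.0)):
    freqs = np.fft.rfftfreq(len(segment), d=1 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)  # periodogram
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    above = in_band & (psd > psd[in_band].mean())
    return freqs[above].max() if above.any() else band[0]

fs = 256
t = np.arange(0, 2, 1 / fs)
# toy "seizure" segment dominated by a 60 Hz oscillation plus noise
seg = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(critical_spectral_verge(seg, fs))  # → 60.0
```

Computed per channel, this single scalar summarizes where high-frequency seizure power ends, which is what makes channel-wise SVM classification cheap enough for an IoT pipeline.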

Chemical technology

Page 11 of 152,652