Deep Learning Applications in Medical Image Analysis
Justin Ker, Lipo Wang, J. Rao
et al.
The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchical relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.
1204 citations
en
Computer Science
Deep learning for neural networks
D. Fall, I. Admin, A. Coates
A Survey on Deep Learning
Samira Pouyanfar, Saad Sadiq, Yilin Yan
et al.
The field of machine learning is witnessing its golden era as deep learning slowly becomes the leader in this domain. Deep learning uses multiple layers to represent the abstractions of data to build computational models. Key enabling deep learning algorithms, such as generative adversarial networks, convolutional neural networks, and model transfer, have completely changed our perception of information processing. However, there exists a gap in understanding of this tremendously fast-paced domain, because it has never previously been surveyed from a multiscope perspective. This lack of core understanding renders these powerful methods black-box machines that inhibit development at a fundamental level. Moreover, deep learning has repeatedly been perceived as a silver bullet to all stumbling blocks in machine learning, which is far from the truth. This article presents a comprehensive review of historical and recent state-of-the-art approaches in visual, audio, and text processing; social network analysis; and natural language processing, followed by an in-depth analysis of pivotal and groundbreaking advances in deep learning applications. We also review issues faced in deep learning, such as unsupervised learning, black-box models, and online learning, and illustrate how these challenges can be transformed into prolific future research avenues.
881 citations
en
Computer Science
Multi-view learning overview: Recent progress and new challenges
Jing Zhao, Xijiong Xie, Xin Xu
et al.
906 citations
en
Computer Science
Classification using deep learning neural networks for brain tumors
Heba M. Mohsen, E. El-Dahshan, El-Sayed M. El-Horbaty
et al.
Abstract Deep Learning is a new machine learning field that has gained a lot of interest over the past few years. It has been widely applied and proven to be a powerful machine learning tool for many complex problems. In this paper we use a Deep Neural Network classifier, one of the DL architectures, to classify a dataset of 66 brain MRIs into four classes: normal, glioblastoma, sarcoma, and metastatic bronchogenic carcinoma tumors. The classifier is combined with the discrete wavelet transform (DWT), a powerful feature-extraction tool, and principal component analysis (PCA); performance was good across all evaluation measures.
905 citations
en
Computer Science
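The feature pipeline this abstract names (DWT for feature extraction, PCA for reduction, then a classifier) can be sketched without a deep-learning framework. A minimal sketch under toy assumptions: a one-level Haar wavelet stands in for the DWT, the data are random 16x16 "scans", and the DNN head is omitted for brevity.

```python
import numpy as np

def haar_dwt_2d(img):
    """One level of a 2D Haar discrete wavelet transform (DWT):
    return the low-frequency approximation sub-band used as features."""
    rows = (img[0::2] + img[1::2]) / 2.0          # average adjacent rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0  # then adjacent columns

def pca_transform(X, n_components):
    """PCA via SVD: project centered feature vectors onto the top components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy stand-in for the MRI dataset: eight random 16x16 "scans"
rng = np.random.default_rng(0)
scans = rng.normal(size=(8, 16, 16))
features = np.stack([haar_dwt_2d(s).ravel() for s in scans])  # shape (8, 64)
reduced = pca_transform(features, n_components=4)             # shape (8, 4)
# `reduced` would then be fed to the DNN classifier described in the paper
```
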
Meta-Learning: A Survey
Joaquin Vanschoren
Meta-learning, or learning to learn, is the science of systematically observing how different machine learning approaches perform on a wide range of learning tasks, and then learning from this experience, or meta-data, to learn new tasks much faster than otherwise possible. Not only does this dramatically speed up and improve the design of machine learning pipelines or neural architectures, it also allows us to replace hand-engineered algorithms with novel approaches learned in a data-driven way. In this chapter, we provide an overview of the state of the art in this fascinating and continuously evolving field.
829 citations
en
Computer Science, Mathematics
Federated Learning for Internet of Things: Recent Advances, Taxonomy, and Open Challenges
L. U. Khan, W. Saad, Zhu Han
et al.
The Internet of Things (IoT) will be ripe for the deployment of novel machine learning algorithms for both network and application management. However, given the presence of massively distributed and private datasets, it is challenging to use classical centralized learning algorithms in the IoT. To overcome this challenge, federated learning is a promising solution that enables on-device machine learning without the need to migrate private end-user data to a central cloud. In federated learning, only learning model updates are transferred between end-devices and the aggregation server. Although federated learning offers better privacy preservation than centralized machine learning, it still raises privacy concerns. In this paper, first, we present recent advances in federated learning towards enabling federated learning-powered IoT applications. A set of metrics, such as sparsification, robustness, quantization, scalability, security, and privacy, is delineated in order to rigorously evaluate these advances. Second, we devise a taxonomy for federated learning over IoT networks. Finally, we present several open research challenges with their possible solutions.
747 citations
en
Computer Science
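The core mechanism this abstract describes, clients training locally and sending only model updates to an aggregation server, is federated averaging (FedAvg). A minimal sketch, assuming a toy linear-regression task, synthetic client data, and illustrative learning-rate and round counts:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training (linear regression via gradient
    descent); only the updated weights leave the device, never the data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg_round(w_global, clients):
    """Aggregation server: average the returned model updates,
    weighted by each client's dataset size (FedAvg)."""
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)

# toy setup: three clients share the same underlying linear model
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):                    # communication rounds
    w = fed_avg_round(w, clients)
```

After enough rounds the global model recovers the shared linear relationship even though no client ever shared its raw data.
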
Online Learning: A Comprehensive Survey
S. Hoi, Doyen Sahoo, Jing Lu
et al.
Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time. The goal of online learning is to ensure that the online learner makes a sequence of accurate predictions (or correct decisions) given the knowledge of correct answers to previous prediction or learning tasks and possibly additional information. This is in contrast to many traditional batch learning or offline machine learning algorithms that are often designed to train a model in batch from a given collection of training data instances. This paper aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the learning type and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) supervised online learning, where full feedback information is always available; (ii) online learning with limited feedback; and (iii) unsupervised online learning, where no feedback is available. Due to space limitations, the survey mainly focuses on the first category, but also briefly covers some basics of the other two. Finally, we discuss some open issues and attempt to shed light on potential future research directions in this field.
784 citations
en
Computer Science, Mathematics
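The supervised-online-learning setting described above (predict on one instance, receive the true label as full feedback, update) is exactly what the classic perceptron does. A minimal sketch; the hidden separator, margin, and seed below are illustrative assumptions:

```python
import numpy as np

def online_perceptron(stream, dim):
    """Supervised online learning: receive one instance at a time,
    predict, get the true label as full feedback, update on mistakes."""
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:                  # labels y are in {-1, +1}
        y_hat = 1.0 if w @ x >= 0 else -1.0
        if y_hat != y:                   # classic perceptron update
            mistakes += 1
            w = w + y * x
    return w, mistakes

# linearly separable stream with a margin around a hidden separator u
rng = np.random.default_rng(2)
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
X = rng.normal(size=(200, 2))
X = X[np.abs(X @ u) > 0.3]               # enforce a margin of 0.3
y = np.sign(X @ u)
w, mistakes = online_perceptron(zip(X, y), dim=2)
```

On separable data with a margin, the number of mistakes is bounded regardless of stream length, which is the flavor of guarantee online learning theory studies.
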
A survey on Deep Learning based bearing fault diagnosis
Duy-Tang Hoang, Hee-Jun Kang
Abstract Nowadays, Deep Learning is the most attractive research trend in the area of Machine Learning. With its ability to learn features from raw data through deep architectures with many layers of non-linear processing units, Deep Learning has become a promising tool for intelligent bearing fault diagnosis. This survey intends to provide a systematic review of Deep Learning based bearing fault diagnosis. The three popular Deep Learning algorithms for bearing fault diagnosis, namely the Autoencoder, the Restricted Boltzmann Machine, and the Convolutional Neural Network, are briefly introduced, and their applications are reviewed through publications and research works in the area of bearing fault diagnosis. Further applications and challenges in this research area are also discussed.
722 citations
en
Computer Science
Transfer learning using VGG-16 with Deep Convolutional Neural Network for Classifying Images
Srikanth Tammina
Traditionally, data mining and machine learning algorithms are engineered to approach problems in isolation. These algorithms are employed to train a model separately on a specific feature space and the same distribution. Depending on the business case, a model is trained by applying a machine learning algorithm for a specific task. A widespread assumption in the field of machine learning is that training data and test data must share identical feature spaces and the same underlying distribution. In the real world, however, this assumption may not hold, and models then need to be rebuilt from scratch if the features or distribution change. It is an arduous process to collect related training data and rebuild models, so in such cases transferring knowledge from disparate domains, known as transfer learning, is desirable. Transfer learning is a method of reusing a pre-trained model's knowledge for another task, and it can be used for classification, regression, and clustering problems. This paper uses one of the pre-trained models, VGG-16, with a Deep Convolutional Neural Network to classify images.
674 citations
en
Computer Science
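Loading the real VGG-16 requires a deep-learning framework, so the sketch below substitutes a fixed random projection for the frozen convolutional base; what it does illustrate is the transfer-learning pattern the paper applies: freeze the pretrained extractor and train only a small classifier head. The data, dimensions, and hyperparameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a frozen pretrained base (e.g. VGG-16 minus its top
# layers): a fixed feature extractor whose weights are never updated.
W_base = rng.normal(size=(64, 32)) / np.sqrt(64)

def frozen_features(X):
    return np.maximum(X @ W_base, 0.0)        # ReLU features, frozen

def train_head(X, y, lr=0.1, steps=2000):
    """Train only the classifier head (logistic regression) on top of
    the frozen features -- the core transfer-learning pattern."""
    F = frozen_features(X)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(F @ w, -30.0, 30.0)))
        w -= lr * F.T @ (p - y) / len(y)
    return w

# toy "images": two well-separated classes in a 64-dim input space
y = (rng.random(100) < 0.5).astype(float)
X = rng.normal(size=(100, 64)) + 2.0 * (2 * y - 1)[:, None]
w_head = train_head(X, y)
acc = (((frozen_features(X) @ w_head) > 0) == y.astype(bool)).mean()
```

Because only the small head is trained, far less labeled data and compute are needed than training the full network from scratch, which is the practical appeal of transfer learning.
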
Learning from positive and unlabeled data: a survey
Jessa Bekker, Jesse Davis
Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis and knowledge base completion. This article provides a survey of the current state of the art in PU learning. It proposes seven key research questions that commonly arise in this field and provides a broad overview of how the field has tried to address them.
669 citations
en
Computer Science, Mathematics
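A standard baseline in the PU setting surveyed above is the Elkan-Noto approach: train an ordinary classifier to separate labeled positives from the unlabeled pool, estimate the label frequency c from the scores of labeled positives, and divide scores by c to recover the true posterior. A minimal sketch under the "selected completely at random" assumption, with synthetic Gaussian data and an in-sample estimate of c (the original method uses a held-out validation set):

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_logreg(X, s, lr=0.1, steps=2000):
    """Logistic regression separating labeled positives (s=1) from the
    unlabeled pool (s=0) -- the 'non-traditional' classifier."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30.0, 30.0)))
        w -= lr * Xb.T @ (p - s) / len(s)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30.0, 30.0)))

# synthetic PU data: every positive is labeled with probability c = 0.5
n = 400
y = rng.random(n) < 0.5                          # true (hidden) labels
X = rng.normal(size=(n, 2)) + np.where(y, 1.0, -1.0)[:, None]
s = (y & (rng.random(n) < 0.5)).astype(float)    # observed PU labels

w = fit_logreg(X, s)
c_hat = predict_proba(X[s == 1], w).mean()       # estimated label frequency
posterior = np.clip(predict_proba(X, w) / c_hat, 0.0, 1.0)
acc = ((posterior > 0.5) == y).mean()
```
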
Deep Learning in Alzheimer's Disease: Diagnostic Classification and Prognostic Prediction Using Neuroimaging Data
T. Jo, K. Nho, A. Saykin
Deep learning, a state-of-the-art machine learning approach, has shown outstanding performance over traditional machine learning in identifying intricate structures in complex high-dimensional data, especially in the domain of computer vision. The application of deep learning to early detection and automated classification of Alzheimer's disease (AD) has recently gained considerable attention, as rapid progress in neuroimaging techniques has generated large-scale multimodal neuroimaging data. A systematic review of publications using deep learning approaches and neuroimaging data for diagnostic classification of AD was performed. A PubMed and Google Scholar search was used to identify deep learning papers on AD published between January 2013 and July 2018. These papers were reviewed, evaluated, and classified by algorithm and neuroimaging type, and the findings were summarized. Of 16 studies meeting full inclusion criteria, 4 used a combination of deep learning and traditional machine learning approaches, and 12 used only deep learning approaches. The combination of traditional machine learning for classification and stacked auto-encoder (SAE) for feature selection produced accuracies of up to 98.8% for AD classification and 83.7% for prediction of conversion from mild cognitive impairment (MCI), a prodromal stage of AD, to AD. Deep learning approaches, such as convolutional neural network (CNN) or recurrent neural network (RNN), that use neuroimaging data without pre-processing for feature selection have yielded accuracies of up to 96.0% for AD classification and 84.2% for MCI conversion prediction. The best classification performance was obtained when multimodal neuroimaging and fluid biomarkers were combined. Deep learning approaches continue to improve in performance and appear to hold promise for diagnostic classification of AD using multimodal neuroimaging data. 
AD research that uses deep learning is still evolving, improving performance by incorporating additional hybrid data types, such as omics data, and increasing transparency with explainable approaches that add knowledge of specific disease-related features and mechanisms.
590 citations
en
Computer Science, Engineering
A Decade Survey of Transfer Learning (2010–2020)
Shuteng Niu, Yongxin Liu, Jian Wang
et al.
Transfer learning (TL) has been successfully applied to many real-world problems that traditional machine learning (ML) cannot handle, such as image processing, speech recognition, and natural language processing (NLP). Commonly, TL tends to address three main problems of traditional machine learning: (1) insufficient labeled data, (2) incompatible computation power, and (3) distribution mismatch. In general, TL can be organized into four categories: transductive learning, inductive learning, unsupervised learning, and negative learning. Furthermore, each category can be organized into four learning types: learning on instances, learning on features, learning on parameters, and learning on relations. This article presents a comprehensive survey on TL. In addition, this article presents the state of the art, current trends, applications, and open challenges.
536 citations
en
Computer Science
Text Categorization with Support Vector Machines: Learning with Many Relevant Features
T. Joachims
9659 citations
en
Computer Science
Machine-Learning Research: Four Current Directions
Thomas G. Dietterich
1017 citations
en
Computer Science
Making large scale SVM learning practical
T. Joachims
5639 citations
en
Computer Science
Introduction to machine learning for brain imaging
S. Lemm, B. Blankertz, Thorsten Dickhaus
et al.
650 citations
en
Computer Science, Medicine
Machine learning for fuel cell remaining useful life prediction: A review
Zaid Allal, Hassan N. Noura, Flavien Vernier
et al.
Accurate prediction of the Remaining Useful Life (RUL) of fuel cell (FC) systems is essential to ensure operational reliability, optimize maintenance strategies, and extend system lifetime in safety-critical hydrogen applications. As FC degradation is governed by complex, nonlinear, and stochastic mechanisms, machine learning (ML) has emerged as a powerful paradigm for data-driven prognostics. This paper presents a structured and comprehensive review of recent ML-based approaches for FC RUL estimation, encompassing supervised, unsupervised, and hybrid methodologies, including regression techniques, support vector machines, ensemble models, neural networks, and advanced deep learning architectures. Despite notable progress, our analysis reveals persistent limitations in the current literature, particularly the widespread neglect of underlying electrochemical and physical degradation laws, as well as the scarcity and ambiguity of explicit RUL and End-of-Life (EoL) labels in publicly available datasets. These challenges significantly constrain model generalization, interpretability, and real-world applicability. To address these gaps, we conduct a comparative analysis of more than 20 recent state-of-the-art studies and propose a unified and generalizable RUL estimation pipeline. This framework integrates data acquisition, preprocessing, feature engineering, model design, and validation, while explicitly accounting for physical consistency and operational constraints. In addition, the paper formulates practical, multi-level recommendations, including first-order guidelines for data modeling and learning strategies, second-order recommendations targeting validation protocols and real-world deployment, and the systematic integration of uncertainty quantification (UQ) techniques to enhance robustness, interpretability, and trustworthiness. 
By consolidating methodological insights, emerging paradigms, and deployment-oriented considerations, this review provides a comprehensive reference and a forward-looking roadmap for the development of reliable, physics-consistent, and scalable RUL prognostic frameworks for fuel cell systems.
Engineering (General). Civil engineering (General)
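As a minimal, illustrative instance of the data-driven RUL prognostics this review surveys, the sketch below fits a degradation trend to a synthetic fuel-cell voltage record and extrapolates it to an End-of-Life threshold. The decay rate, noise level, and EoL voltage are invented for the example; real FC degradation is nonlinear and stochastic, which is exactly why the review argues for richer models and uncertainty quantification.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic stack-voltage record: slow linear degradation plus sensor noise
t = np.arange(0.0, 500.0, 10.0)                # operating hours observed so far
v = 0.70 - 4e-5 * t + rng.normal(scale=1e-4, size=t.size)

eol_threshold = 0.66                           # assumed End-of-Life voltage
slope, intercept = np.polyfit(t, v, 1)         # learned degradation trend
t_eol = (eol_threshold - intercept) / slope    # projected time of EoL
rul = t_eol - t[-1]                            # Remaining Useful Life now
```

Note how the RUL "label" here is derived from a physical EoL threshold rather than given in the data, which mirrors the label-ambiguity problem the review highlights in public datasets.
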
Automated power line recognition and 3D reconstruction for intelligent grid monitoring
Shengbo Jin, Weigang Zhu, Shaohua Jiang
Monitoring and management of power line corridors are essential for ensuring the safe and reliable operation of power transmission systems. Traditional manual inspection methods are not only inefficient but also pose significant safety risks, while certain existing automated approaches suffer from limited effectiveness in complex terrains or in the presence of discontinuities in point cloud data, resulting in insufficient accuracy in power line extraction and frequent reconstruction failures. To address these challenges, this study proposes a novel power line reconstruction method termed Weighted Multi-feature & Multi-plane Projection Geometric Fusion (WM-MPGF). The proposed method comprises two sequential stages: Weighted Multi-Feature SVM (WMF-SVM) and Multi-plane Projection and Geometric Joint Reconstruction (MPG-Recon). Specifically, WMF-SVM introduces a weighted multi-feature support vector machine framework that integrates elevation data derived from the Digital Elevation Model (DEM) with spatial features extracted via Principal Component Analysis (PCA), while optimizing feature weights through the entropy weight method to enhance the accuracy of power line identification. Subsequently, MPG-Recon performs geometric analysis to construct directional projection planes and applies the Hough transform to project power line points and determine their dominant orientations on the XOY and YOZ planes. The Davies-Bouldin index is employed to determine the optimal number of clusters, thereby enabling accurate estimation of the number of power lines. By integrating the K-means clustering algorithm, the method achieves effective separation of multiple power lines and ensures high-precision fitting of individual conductors. Experimental results indicate that the proposed approach achieves average fitting errors of 5.41 cm on the XOY plane and 5.68 cm in the vertical direction, successfully capturing the three-dimensional structural characteristics of power lines. 
The method constructs a robust 3D model and provides critical technical support for advanced applications in power line corridor monitoring and maintenance.
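The cluster-count step in MPG-Recon (running K-means for several candidate counts and picking the one with the best Davies-Bouldin index) can be illustrated on a toy projected point cloud. This is a sketch, not the paper's implementation: the k-means++ seeding, candidate range, and synthetic conductors below are assumptions of the example.

```python
import numpy as np

def kmeans(X, k, iters=30, seed=0):
    """Minimal K-means (Lloyd's algorithm) with k-means++ seeding."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):                      # D^2 seeding spreads centers out
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index: lower values mean better-separated clusters."""
    k = len(centers)
    scatter = np.array([np.linalg.norm(X[labels == j] - centers[j], axis=1).mean()
                        for j in range(k)])
    worst = [max((scatter[i] + scatter[j]) / np.linalg.norm(centers[i] - centers[j])
                 for j in range(k) if j != i) for i in range(k)]
    return float(np.mean(worst))

# toy YOZ-plane projection: three parallel conductors at different heights
rng = np.random.default_rng(5)
heights = [10.0, 12.0, 14.0]
X = np.vstack([np.array([0.0, h]) + rng.normal(scale=0.1, size=(60, 2))
               for h in heights])

scores = {k: davies_bouldin(X, *kmeans(X, k)) for k in (2, 3, 4)}
n_lines = min(scores, key=scores.get)          # estimated number of power lines
```

The index penalizes both merged conductors (large within-cluster scatter) and over-split ones (nearby centers), so its minimum lands at the true number of lines.
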
Locally Linear Continual Learning for Time Series based on VC-Theoretical Generalization Bounds
Yan V. G. Ferreira, Igor B. Lima, Pedro H. G. Mapa S.
et al.
Most machine learning methods assume fixed probability distributions, limiting their applicability in nonstationary real-world scenarios. While continual learning methods address this issue, current approaches often rely on black-box models or require extensive user intervention for interpretability. We propose SyMPLER (Systems Modeling through Piecewise Linear Evolving Regression), an explainable model for time series forecasting in nonstationary environments based on dynamic piecewise-linear approximations. Unlike other locally linear models, SyMPLER uses generalization bounds from Statistical Learning Theory to automatically determine when to add new local models based on prediction errors, eliminating the need for explicit clustering of the data. Experiments show that SyMPLER can achieve comparable performance to both black-box and existing explainable models while maintaining a human-interpretable structure that reveals insights about the system's behavior. In this sense, our approach reconciles accuracy and interpretability, offering a transparent and adaptive solution for forecasting nonstationary time series.
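The error-triggered model-growth idea in the abstract can be given a loose sketch: keep a pool of interpretable local linear models and spawn a new one when the current model's prediction error exceeds a threshold. A fixed threshold stands in for SyMPLER's VC-theoretical generalization bound, and the "newest model serves the current regime" gating is a simplification of the paper's mechanism; everything else here is an illustrative assumption.

```python
import numpy as np

class EvolvingPiecewiseLinear:
    """Pool of local linear models for a nonstationary stream; a new
    model is fitted when the prediction error exceeds a threshold
    (a stand-in for a VC-theoretical generalization bound)."""

    def __init__(self, threshold=0.5, window=20):
        self.threshold = threshold
        self.window = window          # recent samples used to fit a model
        self.models = []              # list of (w, b) local linear models
        self.buffer = []              # stream history of (x, y) pairs

    def predict(self, x):
        if not self.models:
            return 0.0
        w, b = self.models[-1]        # newest model serves the current regime
        return float(w @ x + b)

    def update(self, x, y):
        self.buffer.append((x, y))
        trigger = not self.models or abs(self.predict(x) - y) > self.threshold
        if trigger and len(self.buffer) >= self.window:
            X = np.array([u for u, _ in self.buffer[-self.window:]])
            t = np.array([v for _, v in self.buffer[-self.window:]])
            A = np.hstack([X, np.ones((len(X), 1))])
            coef, *_ = np.linalg.lstsq(A, t, rcond=None)
            self.models.append((coef[:-1], float(coef[-1])))

# nonstationary stream: the linear relationship changes at step 100
learner = EvolvingPiecewiseLinear()
errors = []
for step in range(200):
    x = np.array([(step % 10) / 10.0])
    y = 2.0 * x[0] if step < 100 else -2.0 * x[0] + 5.0
    errors.append(abs(learner.predict(x) - y))
    learner.update(x, y)
```

Each local model is a plain linear map, so the pool stays human-interpretable: inspecting the fitted coefficients reveals the regimes the stream has passed through.
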