Results for "deep learning"

Showing 20 of ~11,040,814 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2019
Deep learning-based electroencephalography analysis: a systematic review

Y. Roy, Hubert J. Banville, Isabela Albuquerque et al.

Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies, to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain–computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3–10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.
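
Editor's note: the spread in training-set sizes reported above follows directly from how epochs (fixed-length windows) are cut from continuous recordings. A minimal sketch of that arithmetic, with illustrative window and stride values not taken from the review:

```python
# Number of training epochs (windows) obtainable from a continuous EEG
# recording, given a window length and stride. All values are illustrative.

def n_epochs(duration_s: float, window_s: float, stride_s: float) -> int:
    """Count of fixed-length windows extractable from one recording."""
    if duration_s < window_s:
        return 0
    return int((duration_s - window_s) // stride_s) + 1

# Ten minutes of EEG: overlapping windows multiply the sample count.
print(n_epochs(600, window_s=2.0, stride_s=2.0))  # 300 non-overlapping epochs
print(n_epochs(600, window_s=2.0, stride_s=0.5))  # 1197 overlapping epochs
```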

1240 citations · en · Computer Science, Engineering
S2 Open Access 2019
Convergence of Edge Computing and Deep Learning: A Comprehensive Survey

Yiwen Han, Xiaofei Wang, Victor C. M. Leung et al.

Ubiquitous sensors and smart devices from factories and communities are generating massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people’s lives, from face recognition to ambitious smart factories and cities, developments of artificial intelligence (especially deep learning, DL) based applications and services are thriving. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of “providing artificial intelligence for every person and every organization at everywhere”. Thus, unleashing DL services using resources at the network edge near the data sources has emerged as a desirable solution. Therefore, edge intelligence, aiming to facilitate the deployment of DL services by edge computing, has received significant attention. In addition, DL, as the representative technique of artificial intelligence, can be integrated into edge computing frameworks to build intelligent edge for dynamic, adaptive edge maintenance and management. With regard to mutually beneficial edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework; 3) challenges and future trends of more pervasive and fine-grained intelligence. We believe that by consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.

1183 citations · en · Computer Science
S2 Open Access 2018
Deep Learning on Graphs: A Survey

Ziwei Zhang, Peng Cui, Wenwu Zhu

Deep learning has been shown to be successful in a number of domains, ranging from acoustics and images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner, mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
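
Editor's note: the survey's most common category, graph convolutional networks, is easy to make concrete. Below is a minimal sketch of one GCN-style propagation step with symmetric normalization (in the style of Kipf and Welling, which falls under this category); the graph and feature sizes are illustrative:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D = np.diag(d_inv_sqrt)                  # D^{-1/2}
    return np.maximum(D @ A_hat @ D @ H @ W, 0.0)

# Toy path graph with 3 nodes, 2 input features, 4 output features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H, W = np.random.randn(3, 2), np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)  # (3, 4)
```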

1547 citations · en · Computer Science, Mathematics
S2 Open Access 2017
A Brief Survey of Deep Reinforcement Learning

K. Arulkumaran, M. Deisenroth, Miles Brundage et al.

Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.
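
Editor's note: of the value-based methods named above, the DQN update is the simplest to sketch: the online network is regressed toward a bootstrapped target computed by a periodically synced frozen copy. A minimal PyTorch sketch with illustrative sizes (the original uses a Huber loss and a replay buffer, omitted here):

```python
import torch
import torch.nn as nn

# Online network and a frozen target copy (synced every N steps in practice).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())

def dqn_loss(s, a, r, s_next, done, gamma=0.99):
    """TD error between Q(s, a) and r + gamma * max_a' Q_target(s', a')."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return nn.functional.mse_loss(q_sa, target)

# One dummy batch of transitions (state dim 4, two discrete actions).
s, s2 = torch.randn(8, 4), torch.randn(8, 4)
a, r, d = torch.randint(0, 2, (8,)), torch.randn(8), torch.zeros(8)
print(dqn_loss(s, a, r, s2, d))
```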

3492 citations · en · Computer Science, Mathematics
S2 Open Access 2017
Deep Learning for Sensor-based Activity Recognition: A Survey

Jindong Wang, Yiqiang Chen, Shuji Hao et al.

Sensor-based activity recognition seeks profound high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in recent years. However, those methods often rely heavily on heuristic hand-crafted feature extraction, which can hinder their generalization performance. Additionally, existing methods are undermined in unsupervised and incremental learning tasks. Recent advances in deep learning make it possible to perform automatic high-level feature extraction, thus achieving promising performance in many areas. Since then, deep learning-based methods have been widely adopted for sensor-based activity recognition tasks. This paper surveys recent advances in deep learning-based sensor-based activity recognition. We summarize the existing literature from three aspects: sensor modality, deep model, and application. We also present detailed insights on existing work and propose grand challenges for future research.

1786 citations · en · Computer Science
S2 Open Access 2016
Deep learning with coherent nanophotonic circuits

Yichen Shen, N. Harris, S. Skirlo et al.

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition. Programmable silicon nanophotonic processor empowers optical neural networks.
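
Editor's note: the key enabler here is that a weight matrix can be factored into unitary operations, which interferometer meshes implement directly. In our notation (a standard singular-value-decomposition argument, not quoted from the paper):

```latex
% Any real weight matrix M factors as
M = U \Sigma V^{\dagger}
% where the unitaries U and V^{\dagger} map onto meshes of Mach-Zehnder
% interferometers and the diagonal \Sigma onto optical attenuation/gain,
% so the matrix-vector products of inference become passive optical
% transformations of the input field.
```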

2826 citations · en · Physics, Computer Science
S2 Open Access 2016
Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs

Federico Monti, Davide Boscaini, Jonathan Masci et al.

Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been increasing interest in geometric deep learning, which attempts to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications in domains such as network analysis, computational social science, and computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.
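
Editor's note: the unifying construction is a patch operator that aggregates neighbor features with learnable kernels over local pseudo-coordinates u(x, y); in the mixture-model variant the kernels are Gaussians. Our paraphrase of that construction (notation ours):

```latex
% Patch operator with J learnable kernels over pseudo-coordinates u(x, y):
D_j(x)\,f = \sum_{y \in \mathcal{N}(x)} w_j\big(\mathbf{u}(x, y)\big)\, f(y),
\qquad
w_j(\mathbf{u}) = \exp\!\Big(
  -\tfrac{1}{2}\,(\mathbf{u}-\boldsymbol{\mu}_j)^{\top}
  \boldsymbol{\Sigma}_j^{-1}(\mathbf{u}-\boldsymbol{\mu}_j)\Big),
% with learnable means \mu_j and covariances \Sigma_j; a convolution is a
% weighted sum of the J patch-operator responses.
```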

1940 citations · en · Computer Science
S2 Open Access 2019
Deep Learning Recommendation Model for Personalization and Recommendation Systems

M. Naumov, Dheevatsa Mudigere, H. Shi et al.

With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features, and they are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both the PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme that uses model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale out compute in the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design.
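
Editor's note: the parallelization split mirrors the model's structure: memory-heavy embedding tables for categorical features (hence model parallelism) feed, together with an MLP over dense features, a pairwise-interaction stage (compute-heavy, hence data parallelism). A single-device sketch of that forward structure, with illustrative sizes; this is not the reference implementation:

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Minimal DLRM-style model: embeddings + bottom MLP + dot interactions."""
    def __init__(self, cardinalities=(1000, 1000), dim=16, dense_in=4):
        super().__init__()
        # One embedding table per categorical feature (the memory-heavy part).
        self.tables = nn.ModuleList(nn.Embedding(c, dim) for c in cardinalities)
        self.bottom = nn.Sequential(nn.Linear(dense_in, dim), nn.ReLU())
        n = len(cardinalities) + 1                 # embedded vectors + dense vector
        self.top = nn.Linear(n * (n - 1) // 2, 1)  # top layer over interactions

    def forward(self, dense, cats):
        vecs = [self.bottom(dense)] + [t(c) for t, c in zip(self.tables, cats.T)]
        x = torch.stack(vecs, dim=1)               # (batch, n, dim)
        inter = x @ x.transpose(1, 2)              # all pairwise dot products
        i, j = torch.triu_indices(x.shape[1], x.shape[1], offset=1)
        return torch.sigmoid(self.top(inter[:, i, j]))

model = TinyDLRM()
print(model(torch.randn(8, 4), torch.randint(0, 1000, (8, 2))).shape)  # (8, 1)
```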

888 citations · en · Computer Science
S2 Open Access 2021
Model complexity of deep learning: a survey

Xia Hu, Lingyang Chu, J. Pei et al.

Model complexity is a fundamental problem in deep learning. In this paper, we conduct a systematic overview of the latest studies on model complexity in deep learning. Model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on those two categories along four important factors, including model framework, model size, optimization process, and data complexity. We also discuss the applications of deep learning model complexity including understanding model generalization, model optimization, and model selection and design. We conclude by proposing several interesting future directions.

380 citations · en · Computer Science
S2 Open Access 2021
Geometric deep learning on molecular representations

Kenneth Atz, F. Grisoni, G. Schneider

Geometric deep learning (GDL) is based on neural network architectures that incorporate and process symmetry information. GDL bears promise for molecular modelling applications that rely on molecular representations with different symmetry properties and levels of abstraction. This Review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction and quantum chemistry. It contains an introduction to the principles of GDL, as well as relevant molecular representations, such as molecular graphs, grids, surfaces and strings, and their respective properties. The current challenges for GDL in the molecular sciences are discussed, and a forecast of future opportunities is attempted. Geometric representations are becoming more important in molecular deep learning as the spatial structure of molecules contains important information about their properties. Kenneth Atz and colleagues review current progress and challenges in this emerging field of geometric deep learning.

378 citations · en · Computer Science, Physics
S2 Open Access 2024
Loss of plasticity in deep continual learning

Shibhansh Dohare, J. F. Hernandez-Garcia, Qingfeng Lan et al.

Artificial neural networks, deep-learning methods and the backpropagation algorithm form the foundation of modern machine learning and artificial intelligence. These methods are almost always used in two phases, one in which the weights of the network are updated and one in which the weights are held constant while the network is used or evaluated. This contrasts with natural learning and many applications, which require continual learning. It has been unclear whether or not deep learning methods work in continual learning settings. Here we show that they do not—that standard deep-learning methods gradually lose plasticity in continual-learning settings until they learn no better than a shallow network. We show such loss of plasticity using the classic ImageNet dataset and reinforcement-learning problems across a wide range of variations in the network and the learning algorithm. Plasticity is maintained indefinitely only by algorithms that continually inject diversity into the network, such as our continual backpropagation algorithm, a variation of backpropagation in which a small fraction of less-used units are continually and randomly reinitialized. Our results indicate that methods based on gradient descent are not enough—that sustained deep learning requires a random, non-gradient component to maintain variability and plasticity. The pervasive problem of artificial neural networks losing plasticity in continual-learning settings is demonstrated and a simple solution called the continual backpropagation algorithm is described to prevent this issue.
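
Editor's note: a minimal sketch of the reinitialization step described above: after an update, reset a small fraction of the least-used hidden units. The utility proxy here (mean absolute activation over a batch) and the hyperparameters are our simplifications, not the paper's exact measure:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reinit_least_used(layer_in: nn.Linear, layer_out: nn.Linear,
                      activations: torch.Tensor, frac: float = 0.01):
    """Continually reinject diversity: reinitialize the lowest-utility hidden
    units, resetting incoming weights and zeroing outgoing weights."""
    utility = activations.abs().mean(dim=0)        # one score per hidden unit
    k = max(1, int(frac * utility.numel()))
    idx = utility.topk(k, largest=False).indices   # least-used units
    fresh = torch.empty(k, layer_in.in_features)
    nn.init.kaiming_uniform_(fresh)
    layer_in.weight[idx] = fresh                   # fresh incoming weights
    layer_in.bias[idx] = 0.0
    layer_out.weight[:, idx] = 0.0                 # start with no contribution

# Usage on a two-layer net: reset ~1% of hidden units after a training step.
l1, l2 = nn.Linear(10, 256), nn.Linear(256, 2)
h = torch.relu(l1(torch.randn(32, 10)))
reinit_least_used(l1, l2, h, frac=0.01)
```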

260 citations · en · Computer Science, Medicine
S2 Open Access 2021
Machine Learning and Deep Learning Applications-A Vision

Neha Sharma, Reecha Sharma, N. Jindal

Machine learning, an application of artificial intelligence, is one of the current topics in the computer field and is also relevant to the new COVID-19 pandemic. Researchers have invested substantial effort in enhancing the precision of machine learning algorithms, and work to enhance the intelligence of machines is being carried out rapidly. Learning, a natural process in human behaviour, has become a vital part of machines as well. Besides this, deep learning plays a major role: deep neural networks (deep learning) form a subgroup of machine learning. Deep learning has been analysed and implemented in various applications and has shown remarkable results; thus, this field needs wider exploration, which can be helpful for further real-world applications. The main objective of this paper is to provide an insightful survey of machine learning and deep learning applications in various domains, including some applications arising from the "new normal" of COVID-19. Applications already present and currently under development in several domains, for machine learning as well as deep neural learning, are exemplified.

330 citations · en · Computer Science
S2 Open Access 2021
A Survey on ensemble learning under the era of deep learning

Yongquan Yang, Haijun Lv, Ning Chen

Due to the dominant position of deep learning (mostly deep neural networks) in various artificial intelligence applications, ensemble learning based on deep neural networks (ensemble deep learning) has recently shown significant performance in improving the generalization of learning systems. However, since modern deep neural networks usually have millions to billions of parameters, the time and space overheads for training multiple base deep learners and testing with the ensemble deep learner are far greater than those of traditional ensemble learning, as the sketch after this abstract illustrates. Though several fast ensemble deep learning algorithms have been proposed to promote the deployment of ensemble deep learning in some applications, further advances still need to be made for many applications in specific fields, where development time and computing resources are usually restricted or the data to be processed is of large dimensionality. An urgent problem to be solved is how to retain the significant advantages of ensemble deep learning while reducing the required expense so that many more applications in specific fields can benefit from it. To alleviate this problem, it is essential to understand how ensemble learning has developed in the era of deep learning. Thus, in this article, we present discussions focusing on data analyses of published works, methodologies, recent advances, and the unattainability of traditional ensemble learning and ensemble deep learning. We hope this article will help readers recognize the intrinsic problems and technical challenges facing the future development of ensemble learning in the era of deep learning.
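
Editor's note: the overheads the authors quantify scale linearly with ensemble size, since a vanilla deep ensemble trains and evaluates M independent networks. A minimal sketch (the architecture and M are arbitrary):

```python
import torch
import torch.nn as nn

# Vanilla deep ensemble: M independently initialized networks whose predicted
# probabilities are averaged at test time. Training cost and memory are
# roughly M times those of a single base learner.
M = 5
ensemble = [nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
            for _ in range(M)]

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    probs = [net(x).softmax(dim=-1) for net in ensemble]  # M forward passes
    return torch.stack(probs).mean(dim=0)                 # average the votes

print(ensemble_predict(torch.randn(4, 20)).shape)  # (4, 3)
```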

321 citations · en · Computer Science
S2 Open Access 2021
A Tutorial on Ultrareliable and Low-Latency Communications in 6G: Integrating Domain Knowledge Into Deep Learning

Changyang She, Chengjian Sun, Zhouyou Gu et al.

As one of the key communication scenarios in the fifth-generation and also the sixth-generation (6G) mobile communication networks, ultrareliable and low-latency communications (URLLCs) will be central for the development of various emerging mission-critical applications. State-of-the-art mobile communication systems do not fulfill the end-to-end delay and overall reliability requirements of URLLCs. In particular, a holistic framework that takes into account latency, reliability, availability, scalability, and decision-making under uncertainty is lacking. Driven by recent breakthroughs in deep neural networks, deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLCs in future 6G networks. This tutorial illustrates how domain knowledge (models, analytical tools, and optimization frameworks) of communications and networking can be integrated into different kinds of deep learning algorithms for URLLCs. We first provide some background of URLLCs and review promising network architectures and deep learning frameworks for 6G. To better illustrate how to improve learning algorithms with domain knowledge, we revisit model-based analytical tools and cross-layer optimization frameworks for URLLCs. Following this, we examine the potential of applying supervised/unsupervised deep learning and deep reinforcement learning in URLLCs and summarize related open problems. Finally, we provide simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.

315 citations · en · Computer Science
S2 Open Access 2021
Deep Learning applications for COVID-19

Connor Shorten, T. Khoshgoftaar, B. Furht

This survey explores how Deep Learning has battled the COVID-19 pandemic and provides directions for future research on COVID-19. We cover Deep Learning applications in Natural Language Processing, Computer Vision, Life Sciences, and Epidemiology. We describe how each of these applications vary with the availability of big data and how learning tasks are constructed. We begin by evaluating the current state of Deep Learning and conclude with key limitations of Deep Learning for COVID-19 applications. These limitations include Interpretability, Generalization Metrics, Learning from Limited Labeled Data, and Data Privacy. Natural Language Processing applications include mining COVID-19 research for Information Retrieval and Question Answering, as well as Misinformation Detection, and Public Sentiment Analysis. Computer Vision applications cover Medical Image Analysis, Ambient Intelligence, and Vision-based Robotics. Within Life Sciences, our survey looks at how Deep Learning can be applied to Precision Diagnostics, Protein Structure Prediction, and Drug Repurposing. Deep Learning has additionally been utilized in Spread Forecasting for Epidemiology. Our literature review has found many examples of Deep Learning systems to fight COVID-19. We hope that this survey will help accelerate the use of Deep Learning for COVID-19 research.

294 citations · en · Medicine

Page 10 of 552041