Artificial Intelligence in Cardiology
Kipp W. Johnson, Jessica Torres Soto, Benjamin S. Glicksberg
et al.
Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes.
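The "improper dichotomization" pitfall mentioned in the abstract can be demonstrated in a few lines: splitting a continuous risk factor (e.g. blood pressure) into "high" vs "low" discards information and weakens its measured association with the outcome. The data below are synthetic, purely for illustration:

```python
import numpy as np

# Dichotomization demo: thresholding a continuous predictor loses information.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)            # e.g. a standardized continuous risk factor
y = 0.8 * x + rng.normal(size=2000)  # outcome driven by x plus noise

r_continuous = np.corrcoef(x, y)[0, 1]

x_binary = (x > 0).astype(float)     # "high" vs "low" -- the common pitfall
r_dichotomized = np.corrcoef(x_binary, y)[0, 1]
```

With this setup the correlation of the dichotomized predictor is noticeably lower than that of the continuous one, which is the prognostic information a clinical model throws away.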
The Case against Accuracy Estimation for Comparing Induction Algorithms
F. Provost, Tom Fawcett, Ron Kohavi
1260 citations
en
Computer Science
Kernel Methods for Relation Extraction
D. Zelenko, Chinatsu Aone, A. Richardella
We present an application of kernel methods to extracting relations from unstructured natural language sources. We introduce kernels defined over shallow parse representations of text, and design efficient algorithms for computing the kernels. We use the devised kernels in conjunction with Support Vector Machine and Voted Perceptron learning algorithms for the task of extracting person-affiliation and organization-location relations from text. We experimentally evaluate the proposed methods and compare them with feature-based learning algorithms, with promising results.
1304 citations
en
Computer Science, Mathematics
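The "kernel over parse structures" idea from this entry can be conveyed with a toy similarity function plugged into an SVM via a precomputed Gram matrix. Zelenko et al.'s kernels recurse over shallow parse trees; the flat bigram-overlap kernel, the chunk-label sequences, and the labels below are all hypothetical simplifications:

```python
import numpy as np
from sklearn.svm import SVC

def subseq_kernel(a, b):
    """Toy convolution kernel: counts shared contiguous label bigrams in two
    shallow-parse label sequences (a positive semidefinite set-intersection
    kernel, standing in for the paper's recursive tree kernels)."""
    grams_a = {(a[i], a[i + 1]) for i in range(len(a) - 1)}
    grams_b = {(b[i], b[i + 1]) for i in range(len(b) - 1)}
    return float(len(grams_a & grams_b))

# Hypothetical chunk-label sequences around candidate entity pairs,
# labelled 1 if a person-affiliation relation holds, else 0.
seqs = [("PER", "VP", "ORG"), ("PER", "NP", "VP", "ORG"),
        ("ORG", "PP", "LOC"), ("LOC", "VP", "ORG")]
labels = [1, 1, 0, 0]

gram = np.array([[subseq_kernel(s, t) for t in seqs] for s in seqs])
clf = SVC(kernel="precomputed").fit(gram, labels)

test_seqs = [("PER", "VP", "NP", "ORG")]
k_test = np.array([[subseq_kernel(s, t) for t in seqs] for s in test_seqs])
pred = clf.predict(k_test)
```

The same precomputed-kernel interface accepts the full tree kernels of the paper; only `subseq_kernel` would change.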
Network intrusion detection
Biswanath Mukherjee, Todd L. Heberlein, Karl N. Levitt
1412 citations
en
Computer Science
Prolog Programming for Artificial Intelligence
I. Bratko
1176 citations
en
Computer Science
Data mining in bioinformatics using Weka
E. Frank, M. Hall, Leonard E. Trigg
et al.
931 citations
en
Computer Science, Medicine
A System for Massively Parallel Hyperparameter Tuning
Liam Li, Kevin G. Jamieson, A. Rostamizadeh
et al.
Modern learning models are characterized by large hyperparameter spaces and long training times. These properties, coupled with the rise of parallel computing and the growing demand to productionize machine learning workloads, motivate the need to develop mature hyperparameter optimization functionality in distributed computing settings. We address this challenge by first introducing a simple and robust hyperparameter optimization algorithm called ASHA, which exploits parallelism and aggressive early-stopping to tackle large-scale hyperparameter optimization problems. Our extensive empirical results show that ASHA outperforms existing state-of-the-art hyperparameter optimization methods; scales linearly with the number of workers in distributed settings; and is suitable for massive parallelism, as demonstrated on a task with 500 workers. We then describe several design decisions we encountered, along with our associated solutions, when integrating ASHA in Determined AI's end-to-end production-quality machine learning system that offers hyperparameter tuning as a service.
492 citations
en
Computer Science, Mathematics
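The early-stopping core of ASHA can be sketched as synchronous successive halving: evaluate all configurations on a small budget, keep the top 1/eta, and train the survivors for eta times longer. The paper's asynchronous promotion and distributed workers are omitted here, and the learning-rate grid and scoring function are hypothetical:

```python
def successive_halving(configs, evaluate, rungs=3, eta=3):
    """Keep the top 1/eta configurations at each rung, training survivors
    with eta-times more budget. ASHA promotes configurations asynchronously
    across workers; this synchronous loop shows only the early-stopping core."""
    budget = 1
    for _ in range(rungs):
        scores = {c: evaluate(c, budget) for c in configs}
        keep = max(1, len(configs) // eta)
        configs = sorted(configs, key=scores.get, reverse=True)[:keep]
        budget *= eta
    return configs[0]

# Hypothetical objective: score improves with budget and peaks near lr = 0.1.
def evaluate(lr, budget):
    return -(lr - 0.1) ** 2 * (1 + 1.0 / budget)

best = successive_halving(
    [0.001, 0.01, 0.1, 0.5, 1.0, 0.03, 0.3, 0.05, 0.2], evaluate)
```

Because poor configurations are discarded after cheap low-budget evaluations, most of the total compute is spent on the few promising ones, which is what lets the method scale to hundreds of workers.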
Classification using discriminative restricted Boltzmann machines
H. Larochelle, Yoshua Bengio
820 citations
en
Computer Science
K-Nearest Neighbors
Oliver Kramer
653 citations
en
Computer Science
Use of the Zero-Norm with Linear Models and Kernel Methods
J. Weston, A. Elisseeff, B. Scholkopf
et al.
878 citations
en
Computer Science, Mathematics
A connectionist machine for genetic hillclimbing
D. Ackley
872 citations
en
Mathematics
Identifying Sarcasm in Twitter: A Closer Look
Roberto I. González-Ibáñez, S. Muresan, Nina Wacholder
692 citations
en
Computer Science
An Introduction to Restricted Boltzmann Machines
Asja Fischer, C. Igel
630 citations
en
Computer Science
Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition
C. Rudin, Joanna Radin
In 2018, a landmark challenge in artificial intelligence (AI) took place, namely, the Explainable Machine Learning Challenge. The goal of the competition was to create a complicated black box model for the dataset and explain how it worked. One team did not follow the rules. Instead of sending in a black box, they created a model that was fully interpretable. This leads to the question of whether the real world of machine learning is similar to the Explainable Machine Learning Challenge, where black box models are used even when they are not needed. We discuss this team's thought processes during the competition and their implications, which reach far beyond the competition itself. Keywords: interpretability, explainability, machine learning, finance
395 citations
en
Computer Science
Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data
Miloš Radovanović, A. Nanopoulos, M. Ivanović
682 citations
en
Mathematics, Computer Science
The responsibility gap: Ascribing responsibility for the actions of learning automata
A. Matthias
812 citations
en
Computer Science
Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis
A. Benavoli, Giorgio Corani, J. Demšar
et al.
The machine learning community adopted the use of null hypothesis significance testing (NHST) in order to ensure the statistical validity of results. Many scientific fields however realized the shortcomings of frequentist reasoning and in the most radical cases even banned its use in publications. We should do the same: just as we have embraced the Bayesian paradigm in the development of new machine learning methods, so we should also use it in the analysis of our own results. We argue for abandonment of NHST by exposing its fallacies and, more importantly, offer better - more sound and useful - alternatives for it.
478 citations
en
Mathematics, Computer Science
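The Bayesian alternative to null hypothesis significance testing that this entry argues for can be illustrated with a lightweight Bayesian bootstrap over per-dataset accuracy differences, reporting the posterior probability that one classifier is practically better, that the two are equivalent (mean difference inside a region of practical equivalence, ROPE), or that the other is better. This is a simplified stand-in for the signed-rank and correlated t-test procedures of Benavoli et al., and the accuracy differences below are hypothetical:

```python
import numpy as np

def compare_classifiers(deltas, rope=0.01, n_samples=50_000, seed=0):
    """Bayesian bootstrap over per-dataset accuracy differences (A minus B).
    Returns posterior probabilities that A is practically better, that the
    two are practically equivalent, and that B is practically better."""
    rng = np.random.default_rng(seed)
    deltas = np.asarray(deltas, dtype=float)
    # Dirichlet weights over datasets give posterior draws of the mean difference.
    w = rng.dirichlet(np.ones(len(deltas)), size=n_samples)
    means = w @ deltas
    return ((means > rope).mean(),
            (np.abs(means) <= rope).mean(),
            (means < -rope).mean())

# Ten hypothetical per-dataset accuracy differences between classifiers A and B.
p_a, p_rope, p_b = compare_classifiers(
    [0.03, 0.01, 0.04, -0.01, 0.02, 0.05, 0.00, 0.02, 0.03, 0.01])
```

Unlike a p-value, the three probabilities answer the question practitioners actually ask ("how likely is A to be better, and by a margin that matters?") and sum to one by construction.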
AutoML to Date and Beyond: Challenges and Opportunities
Shubhra (Santu) Karmaker, Md. Mahadi Hassan, Micah J. Smith
et al.
As big data becomes ubiquitous across domains, and more and more stakeholders aspire to make the most of their data, demand for machine learning tools has spurred researchers to explore the possibilities of automated machine learning (AutoML). AutoML tools aim to make machine learning accessible for non-machine learning experts (domain experts), to improve the efficiency of machine learning, and to accelerate machine learning research. But although automation and efficiency are among AutoML’s main selling points, the process still requires human involvement at a number of vital steps, including understanding the attributes of domain-specific data, defining prediction problems, creating a suitable training dataset, and selecting a promising machine learning technique. These steps often require a prolonged back-and-forth that makes this process inefficient for domain experts and data scientists alike and keeps so-called AutoML systems from being truly automatic. In this review article, we introduce a new classification system for AutoML systems, using a seven-tiered schematic to distinguish these systems based on their level of autonomy. We begin by describing what an end-to-end machine learning pipeline actually looks like, and which subtasks of the machine learning pipeline have been automated so far. We highlight those subtasks that are still done manually—generally by a data scientist—and explain how this limits domain experts’ access to machine learning. Next, we introduce our novel level-based taxonomy for AutoML systems and define each level according to the scope of automation support provided. Finally, we lay out a roadmap for the future, pinpointing the research required to further automate the end-to-end machine learning pipeline and discussing important challenges that stand in the way of this ambitious goal.
311 citations
en
Computer Science
sbi: A toolkit for simulation-based inference
Álvaro Tejero-Cantero, Jan Boelts, Michael Deistler
et al.
Affiliations: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich; School of Informatics, University of Edinburgh; Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn; Model-Driven Machine Learning, Centre for Materials and Coastal Research, Helmholtz-Zentrum Geesthacht; Machine Learning in Science, University of Tübingen; Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen. DOI: 10.21105/joss.02505
309 citations
en
Computer Science
Leveraging support vector regression, radiomics and dosiomics for outcome prediction in personalized ultra-fractionated stereotactic adaptive radiotherapy (PULSAR)
Yajun Yu, Steve Jiang, Robert Timmerman
et al.
Personalized ultra-fractionated stereotactic adaptive radiotherapy (PULSAR) is a novel treatment that delivers radiation in pulses of protracted intervals. Accurate prediction of gross tumor volume (GTV) changes through regression models has substantial prognostic value. This study aims to develop a multi-omics based support vector regression (SVR) model for predicting GTV change. A retrospective cohort of 39 patients with 69 brain metastases was analyzed, based on radiomics (magnetic resonance images) and dosiomics (dose maps) features. Delta features were computed to capture relative changes between two time points. A feature selection pipeline using the least absolute shrinkage and selection operator (Lasso) algorithm with weight- or frequency-based ranking criteria was implemented. SVR models with various kernels were evaluated using the coefficient of determination (R²) and relative root mean square error (RRMSE). Five-fold cross-validation with 10 repeats was employed to mitigate the limitation of the small data size. Multi-omics models that integrate radiomics, dosiomics, and their delta counterparts outperform individual-omics models. Delta-radiomic features play a critical role in enhancing prediction accuracy relative to features at single time points. The top-performing model achieves an R² of 0.743 and an RRMSE of 0.022. The proposed multi-omics SVR model shows promising performance in predicting continuous change of GTV. It provides a more quantitative and personalized approach to assist patient selection and treatment adjustment in PULSAR.
Computer Science, Engineering
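The modelling recipe in this entry, Lasso-based feature selection feeding an SVR, evaluated by repeated 5-fold cross-validation, can be sketched as a scikit-learn pipeline. The feature matrix below is a synthetic stand-in for the radiomics/dosiomics features, and the Lasso alpha and SVR settings are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for multi-omics features: 69 lesions, many features,
# only a couple of which actually drive the (continuous) GTV change target.
rng = np.random.default_rng(0)
X = rng.normal(size=(69, 50))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=69)

# Lasso picks a sparse feature subset; an RBF-kernel SVR fits the target.
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),
    SVR(kernel="rbf", C=10.0),
)

# Five-fold cross-validation with 10 repeats, as in the study design.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
```

Running the selection step inside the pipeline matters: it is refit within each fold, so feature selection never sees the held-out lesions and the reported R² is not optimistically biased.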