arXiv Open Access 2023

FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series

Qiqi Su Christos Kloukinas Artur d'Avila Garcez

Abstract

Multivariate time series have many applications, from healthcare and meteorology to the life sciences. Although deep learning models have shown excellent predictive performance on time series, they have been criticised as "black boxes", i.e. non-interpretable. This paper proposes a novel modular neural network model for multivariate time series prediction that is interpretable by construction. A recurrent neural network learns the temporal dependencies in the data, while an attention-based feature selection component selects the most relevant features and suppresses redundant ones. A modular deep network is then trained on each selected feature independently, showing users how individual features influence outcomes and thus making the model interpretable. Experimental results show that this approach can outperform state-of-the-art interpretable Neural Additive Models (NAM) and variations thereof on both time-series regression and classification tasks, achieving predictive performance comparable to the top non-interpretable methods for time series, LSTM and XGBoost.
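The additive, per-feature structure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the attention scores, module weights, and top-k selection below are random, hypothetical stand-ins for quantities that would be learned during training. The sketch only shows the structural idea: a softmax attention over input features selects the most relevant ones, each selected feature feeds its own small network, and the per-module outputs are summed, so each feature's contribution to the prediction can be inspected directly.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, top_k = 8, 5, 2
X = rng.normal(size=(n_samples, n_features))

# Hypothetical attention scores over features (learned in the real model):
scores = rng.normal(size=n_features)
attn = np.exp(scores) / np.exp(scores).sum()  # softmax over features

# Feature selection: keep the top-k most attended features, suppress the rest.
selected = np.argsort(attn)[-top_k:]

def feature_module(x, w1, b1, w2, b2):
    """A tiny per-feature MLP: scalar input -> hidden ReLU layer -> scalar output."""
    h = np.maximum(0.0, np.outer(x, w1) + b1)  # shape (n_samples, hidden)
    return h @ w2 + b2                          # shape (n_samples,)

# One independent module per selected feature; summing their outputs keeps
# the model additive, so each feature's contribution can be read off directly.
prediction = np.zeros(n_samples)
contributions = {}
for j in selected:
    w1, b1 = rng.normal(size=4), rng.normal(size=4)  # random stand-in weights
    w2, b2 = rng.normal(size=4), rng.normal()
    contributions[j] = feature_module(X[:, j], w1, b1, w2, b2)
    prediction += contributions[j]
```

Because the final prediction is a plain sum over per-feature module outputs, plotting each module's output against its input feature gives the kind of interpretability-by-construction the abstract claims, in the same spirit as NAM.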


Authors (3)

Qiqi Su
Christos Kloukinas
Artur d'Avila Garcez

Citation Format

Su, Q., Kloukinas, C., d'Avila Garcez, A. (2023). FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series. https://arxiv.org/abs/2311.16834

Quick Access

View at Source

Journal Information
Year Published: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓