Semantic Scholar · Open Access · 2020 · 223 citations

Evaluating saliency map explanations for convolutional neural networks: a user study

Ahmed Alqaraawi M. Schuessler Philipp Weiß Enrico Costanza N. Bianchi-Berthouze

Abstract

Convolutional neural networks (CNNs) offer great machine learning performance over a range of applications, but their operation is hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet limited research effort has been reported concerning their user evaluation. In this paper, we report on an online between-group user study designed to evaluate the performance of "saliency maps" - a popular explanation algorithm for image classification applications of CNNs. Our results indicate that saliency maps produced by the LRP algorithm helped participants to learn about some specific image features the system is sensitive to. However, the maps seem to provide very limited help for participants to anticipate the network's output for new images. Drawing on our findings, we highlight implications for design and further research on explainable AI. In particular, we argue the HCI and AI communities should look beyond instance-level explanations.

Topics & Keywords

Authors (5)

Ahmed Alqaraawi

M. Schuessler

Philipp Weiß

Enrico Costanza

N. Bianchi-Berthouze

Citation Format

Alqaraawi, A., Schuessler, M., Weiß, P., Costanza, E., & Bianchi-Berthouze, N. (2020). Evaluating saliency map explanations for convolutional neural networks: a user study. https://doi.org/10.1145/3377325.3377519

Quick Access

PDF not directly available

View at source: doi.org/10.1145/3377325.3377519
Journal Information

Publication Year: 2020
Language: en
Total Citations: 223
Source Database: Semantic Scholar
DOI: 10.1145/3377325.3377519
Access: Open Access ✓