
Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning

Hubert Baniecki, Przemyslaw Biecek

Abstract

A common belief is that intrinsically interpretable deep learning models ensure a correct, intuitive understanding of their behavior and offer greater robustness against accidental errors or intentional manipulation. However, this belief has not been comprehensively verified, and growing evidence casts doubt on it. In this paper, we highlight the risks of overreliance on, and susceptibility to adversarial manipulation of, these models that are "intrinsically" (also known as "inherently") interpretable by design. We introduce two strategies for adversarial analysis of prototype-based networks: prototype manipulation and backdoor attacks, and discuss how concept bottleneck models defend against these attacks. Fooling the model's reasoning by exploiting its use of latent prototypes reveals the inherent uninterpretability of deep neural networks, leading to a false sense of security reinforced by visual confirmation bias. The reported limitations of part-prototype networks call their trustworthiness and applicability into question, motivating further work on the robustness and alignment of (deep) interpretable models.
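The abstract describes the attacks only at a high level. As a rough illustration of the prototype-manipulation idea, the sketch below perturbs an input so that an unrelated prototype in a ProtoPNet-style similarity layer fires strongly, while the pixel change stays within a small L-infinity budget. The toy model, the prototype layer, and the attack loop are all illustrative assumptions for intuition; they are not the authors' method or code.

```python
# Hypothetical sketch (not the paper's code): a PGD-style perturbation that
# inflates an unrelated prototype's activation in a ProtoPNet-style layer.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyProtoNet(nn.Module):
    """Minimal ProtoPNet-style model: conv backbone + prototype similarity layer."""
    def __init__(self, n_prototypes=4, d=8):
        super().__init__()
        self.backbone = nn.Conv2d(3, d, kernel_size=3, padding=1)
        # Learned prototypes live in latent space, one d-dim vector each.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d, 1, 1))

    def similarities(self, x):
        z = self.backbone(x)  # (B, d, H, W) latent patches
        # Squared L2 distance of every latent patch to every prototype.
        dists = ((z.unsqueeze(1) - self.prototypes.unsqueeze(0)) ** 2).sum(dim=2)
        # ProtoPNet-style log-activation: large when a patch is near a prototype.
        sims = torch.log((dists + 1) / (dists + 1e-4))
        return sims.amax(dim=(2, 3))  # max-pool over patches -> (B, P)

model = ToyProtoNet()
image = torch.rand(1, 3, 16, 16)   # stand-in for a bird image
target_proto = 2                   # e.g., a prototype associated with cars

# Projected gradient ascent on the target prototype's activation,
# constrained to an imperceptible L_inf ball around the original image.
eps, step = 0.03, 0.01
adv = image.clone().requires_grad_(True)
for _ in range(20):
    sim = model.similarities(adv)[0, target_proto]
    (grad,) = torch.autograd.grad(sim, adv)
    with torch.no_grad():
        adv += step * grad.sign()                                  # ascend
        adv.copy_((image + (adv - image).clamp(-eps, eps)).clamp(0, 1))

before = model.similarities(image)[0, target_proto].item()
after = model.similarities(adv)[0, target_proto].item()
print(f"target prototype activation: {before:.3f} -> {after:.3f}")
```

In ProtoPNet-style architectures the class logits are a linear function of the pooled prototype activations, so inflating a foreign prototype's activation can shift both the prediction and the visual explanation — the manipulated "birds look like cars" reasoning the title alludes to.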

Citation Format

Baniecki, H., & Biecek, P. (2025). Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning. arXiv preprint arXiv:2503.08636. https://arxiv.org/abs/2503.08636

Publication Information
Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓