Semantic Scholar · Open Access · 2017 · 3457 citations

Understanding Black-box Predictions via Influence Functions

Pang Wei Koh Percy Liang

Abstract

How can we explain the predictions of a black-box model? In this paper, we use influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.
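The core quantity behind the abstract is the classic influence-function formula, I(z, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z), where H is the Hessian of the training objective. A minimal sketch follows for a tiny L2-regularized logistic regression whose Hessian is small enough to invert directly; the paper instead scales this up with Hessian-vector products. All data, names, and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Synthetic toy problem (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

lam = 0.1  # L2 regularization keeps the Hessian positive definite

# Fit theta by plain gradient descent (adequate for this toy convex problem).
theta = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ theta)
    theta -= 0.5 * (X.T @ (p - y) / n + lam * theta)

# Per-example loss gradients at the fitted parameters, shape (n, d).
p = sigmoid(X @ theta)
grads = X * (p - y)[:, None]

# Exact Hessian of the regularized mean loss:
#   H = X^T diag(p(1-p)) X / n + lam * I
W = p * (1 - p)
H = (X * W[:, None]).T @ X / n + lam * np.eye(d)

# Influence of each training point on the loss at one held-out test point:
#   I(z_i, z_test) = -grad_i^T H^{-1} grad_test
x_test = rng.normal(size=d)
y_test = 1.0
grad_test = x_test * (sigmoid(x_test @ theta) - y_test)

influence = -grads @ np.linalg.solve(H, grad_test)  # shape (n,)
```

Large positive entries of `influence` flag training points whose upweighting would most reduce the test loss; for modern networks, the explicit `solve` above is replaced by an iterative inverse-Hessian-vector-product approximation, as the abstract notes.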

Authors (2)

Pang Wei Koh

Percy Liang

Citation Format

Koh, P.W., Liang, P. (2017). Understanding Black-box Predictions via Influence Functions. https://www.semanticscholar.org/paper/08ad8fad21f6ec4cda4d56be1ca5e146b7c913a1

Quick Access

PDF not directly available; check at the original source.
Journal Information
Publication Year: 2017
Language: en
Total Citations: 3457
Source Database: Semantic Scholar
Access: Open Access ✓