arXiv Open Access 2024

Interactive Explainable Anomaly Detection for Industrial Settings

Daniel Gramelt, Timon Höfer, Ute Schmid

Abstract

Being able to recognise defects in industrial objects is a key element of quality assurance in production lines. Our research focuses on visual anomaly detection in RGB images. Although Convolutional Neural Networks (CNNs) achieve high accuracies in this task, end users in industrial environments receive the model's decisions without additional explanations. Therefore, it is of interest to enrich the model's outputs with further explanations to increase confidence in the model and speed up anomaly detection. In our work, we focus on (1) CNN-based classification models and (2) the further development of a model-agnostic explanation algorithm for black-box classifiers. Additionally, (3) we demonstrate how we can establish an interactive interface that allows users to further correct the model's output. We present our NearCAIPI Interaction Framework, which improves AI through user interaction, and show how this approach increases the system's trustworthiness. We also illustrate how NearCAIPI can integrate human feedback into an interactive process chain.

Authors (3)

Daniel Gramelt

Timon Höfer

Ute Schmid

Citation

Gramelt, D., Höfer, T., &amp; Schmid, U. (2024). Interactive Explainable Anomaly Detection for Industrial Settings. arXiv preprint. https://arxiv.org/abs/2410.12817

Journal Information

Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓