Semantic Scholar · Open Access · 2020 · 300 citations

Transparency and trust in artificial intelligence systems

Philipp Schmidt, F. Biessmann, Timm Teubner

Abstract

Assistive technology featuring artificial intelligence (AI) to support human decision-making has become ubiquitous. Assistive AI achieves accuracy comparable to, or even surpassing, that of human experts. However, the adoption of assistive AI systems is often limited by humans' lack of trust in an AI's predictions. For this reason, the AI research community has focused on rendering AI decisions more transparent by providing explanations of an AI's decisions. To what extent these explanations really help to foster trust in an AI system remains an open question. In this paper, we report the results of a behavioural experiment in which subjects were able to draw on the support of an ML-based decision support tool for text classification. We experimentally varied the information subjects received and show that transparency can actually have a negative impact on trust. We discuss implications for decision makers employing assistive AI technology.

Topics & Keywords

Authors (3)

Philipp Schmidt

F. Biessmann

Timm Teubner

Citation Format

Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. https://doi.org/10.1080/12460125.2020.1819094

Quick Access

PDF not directly available

View at source: doi.org/10.1080/12460125.2020.1819094
Journal Information
Year Published: 2020
Language: en
Total Citations: 300
Source Database: Semantic Scholar
DOI: 10.1080/12460125.2020.1819094
Access: Open Access ✓