arXiv Open Access 2024

Regulation of Language Models With Interpretability Will Likely Result In A Performance Trade-Off

Eoin M. Kenny Julie A. Shah

Abstract

Regulation is increasingly cited as the most important and pressing concern in machine learning. However, it is currently unknown how to implement it, and, perhaps more importantly, how it would affect model performance and human collaboration if actually realized. In this paper, we attempt to answer these questions by building a regulatable large language model (LLM), and then quantifying how the additional constraints involved affect (1) model performance and (2) human collaboration. Our empirical results reveal that it is possible to force an LLM to use human-defined features in a transparent way, but a "regulation performance trade-off" not previously considered reveals itself in the form of a 7.34% classification performance drop. Surprisingly, however, we show that despite this, such systems actually improve human task performance speed and appropriate confidence in a realistic deployment setting compared to no AI assistance, thus paving the way for fair, regulatable AI that benefits users.


Authors (2)

Eoin M. Kenny
Julie A. Shah

Citation Format

Kenny, E.M., Shah, J.A. (2024). Regulation of Language Models With Interpretability Will Likely Result In A Performance Trade-Off. https://arxiv.org/abs/2412.12169

Journal Information
Year Published: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓