arXiv Open Access 2023

An Evaluation on Large Language Model Outputs: Discourse and Memorization

Adrian de Wynter Xun Wang Alex Sokolov Qilong Gu Si-Qing Chen

Abstract

We present an empirical evaluation of various outputs generated by nine of the most widely-available large language models (LLMs). Our analysis is done with off-the-shelf, readily-available tools. We find a correlation between percentage of memorized text, percentage of unique text, and overall output quality, when measured with respect to output pathologies such as counterfactual and logically-flawed statements, and general failures like not staying on topic. Overall, 80.0% of the outputs evaluated contained memorized data, but outputs containing the most memorized content were also more likely to be considered of high quality. We discuss and evaluate mitigation strategies, showing that, in the models evaluated, the rate of memorized text being output is reduced. We conclude with a discussion on potential implications around what it means to learn, to memorize, and to evaluate quality text.

Topics & Keywords

Authors (5)

Adrian de Wynter
Xun Wang
Alex Sokolov
Qilong Gu
Si-Qing Chen

Citation Format

de Wynter, A., Wang, X., Sokolov, A., Gu, Q., Chen, S.-Q. (2023). An Evaluation on Large Language Model Outputs: Discourse and Memorization. https://arxiv.org/abs/2304.08637

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓