arXiv Open Access 2024

A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios

Samuel Ackerman Ella Rabinovich Eitan Farchi Ateret Anaby-Tavor

Abstract

We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of the model's answers to meaning-preserving variants of their input. Benchmark datasets are constructed by introducing naturally-occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial scenario by empirical evaluation of several models on the created datasets.
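The abstract does not specify the paper's metric, so the following is only a toy illustration of the general idea of non-adversarial robustness: ask the model the same question through several meaning-preserving paraphrases and score how consistently it answers. The function name and the majority-agreement scoring rule below are illustrative assumptions, not the authors' method.

```python
from collections import Counter

def robustness_score(answers_per_question):
    """Toy robustness measure (not the paper's metric).

    For each question, score the fraction of paraphrase variants whose
    answer agrees with the most common (majority) answer, then average
    the per-question scores over all questions.
    """
    scores = []
    for answers in answers_per_question:
        counts = Counter(answers)
        majority_count = counts.most_common(1)[0][1]
        scores.append(majority_count / len(answers))
    return sum(scores) / len(scores)

# Example: two questions, each asked via three paraphrases.
variants = [
    ["Paris", "Paris", "Paris"],  # fully consistent -> 1.0
    ["4", "4", "5"],              # one deviation    -> 2/3
]
print(round(robustness_score(variants), 3))  # 0.833
```

A score of 1.0 means the hypothetical model's answers are unchanged under every paraphrase; lower values indicate sensitivity to surface form.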



Citation

Ackerman, S., Rabinovich, E., Farchi, E., Anaby-Tavor, A. (2024). A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios. https://arxiv.org/abs/2408.01963

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓