Semantic Scholar · Open Access · 2021 · 460 citations

Measuring and Improving Consistency in Pretrained Language Models

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, E. Hovy +2 more

Abstract

Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel🤘, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for 38 relations. Using ParaRel🤘, we show that the consistency of all PLMs we experiment with is poor—though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
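The abstract measures consistency as agreement between a model's answers across paraphrases of the same factual query. A minimal sketch of that idea, assuming we already have the model's top-1 prediction for each paraphrase (the function name and scoring are illustrative, not the paper's implementation):

```python
from itertools import combinations

def consistency(predictions):
    """Fraction of paraphrase pairs on which the model gave the same top-1 answer.

    predictions: list of top-1 answers, one per paraphrase of a single query.
    Returns 1.0 when there are fewer than two paraphrases (no pairs to disagree).
    """
    pairs = list(combinations(predictions, 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Example: three paraphrases of "[X] is the capital of France" where the
# model answered "Paris", "Paris", "Lyon" -> 1 agreeing pair out of 3.
score = consistency(["Paris", "Paris", "Lyon"])  # 1/3
```

Averaging this pairwise-agreement score over all queries of a relation gives a per-relation consistency number, which is the kind of quantity the paper reports varying widely between relations.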

Authors (7)

Yanai Elazar
Nora Kassner
Shauli Ravfogel
Abhilasha Ravichander
E. Hovy
Hinrich Schütze
Yoav Goldberg

Citation Format

Elazar, Y., Kassner, N., Ravfogel, S., Ravichander, A., Hovy, E., Schütze, H., & Goldberg, Y. (2021). Measuring and Improving Consistency in Pretrained Language Models. Transactions of the Association for Computational Linguistics. https://doi.org/10.1162/tacl_a_00410

Quick Access

View at Source: doi.org/10.1162/tacl_a_00410
Journal Information
Publication Year: 2021
Language: en
Total Citations: 460
Source Database: Semantic Scholar
DOI: 10.1162/tacl_a_00410
Access: Open Access ✓