Semantic Scholar · Open Access · 2017 · 225 citations

RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems

Chongyang Tao Lili Mou Dongyan Zhao Rui Yan

Abstract

Open-domain human-computer conversation has been attracting increasing attention over the past few years. However, there does not exist a standard automatic evaluation metric for open-domain dialog systems; researchers usually resort to human annotation for model evaluation, which is time- and labor-intensive. In this paper, we propose RUBER, a Referenced metric and Unreferenced metric Blended Evaluation Routine, which evaluates a reply by taking into consideration both a groundtruth reply and a query (previous user-issued utterance). Our metric is learnable, but its training does not require labels of human satisfaction. Hence, RUBER is flexible and extensible to different datasets and languages. Experiments on both retrieval and generative dialog systems show that RUBER has a high correlation with human annotation, and that RUBER has fair transferability over different datasets.
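The blended routine the abstract describes can be sketched minimally in Python. In this illustration the referenced score is a cosine similarity between pooled word embeddings of the generated reply and the ground-truth reply, while the unreferenced score (which in the paper comes from a trained neural network scoring query-reply relatedness) is treated as a given input. All function names, the embedding table, and the pooling choices here are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def embed(tokens, table, dim=50):
    # Mean-pool pretrained word vectors; OOV tokens are skipped
    # (illustrative pooling only, not the paper's exact scheme).
    vecs = [table[t] for t in tokens if t in table]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def referenced_score(reply, groundtruth, table, dim=50):
    # Cosine similarity between pooled embeddings of the generated
    # reply and the ground-truth reply.
    r = embed(reply, table, dim)
    g = embed(groundtruth, table, dim)
    denom = np.linalg.norm(r) * np.linalg.norm(g)
    return float(r @ g / denom) if denom else 0.0

def blend(referenced, unreferenced, strategy="min"):
    # Combine the two scores; the paper reports several simple
    # pooling heuristics, a few of which are mirrored here.
    ops = {
        "min": min,
        "max": max,
        "arithmetic": lambda a, b: (a + b) / 2,
        "geometric": lambda a, b: (a * b) ** 0.5,
    }
    return ops[strategy](referenced, unreferenced)
```

For example, `blend(referenced_score(reply, gt, table), 0.8, "min")` would score a reply by the weaker of its similarity to the ground truth and its (externally supplied) relevance to the query, which is why no human satisfaction labels are needed at scoring time.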


Authors (4)

Chongyang Tao

Lili Mou

Dongyan Zhao

Rui Yan

Citation Format

Tao, C., Mou, L., Zhao, D., & Yan, R. (2017). RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems. https://doi.org/10.1609/aaai.v32i1.11321

Quick Access

View at Source doi.org/10.1609/aaai.v32i1.11321
Journal Information
Publication Year
2017
Language
en
Total Citations
225×
Source Database
Semantic Scholar
DOI
10.1609/aaai.v32i1.11321
Access
Open Access ✓