arXiv Open Access 2019

Knowledge Distillation from Internal Representations

Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo

Abstract

Knowledge distillation is typically conducted by training a small model (the student) to mimic a large and cumbersome model (the teacher). The idea is to compress the knowledge from the teacher by using its output probabilities as soft-labels to optimize the student. However, when the teacher is considerably large, there is no guarantee that the internal knowledge of the teacher will be transferred into the student; even if the student closely matches the soft-labels, its internal representations may be considerably different. This internal mismatch can undermine the generalization capabilities originally intended to be transferred from the teacher to the student. In this paper, we propose to distill the internal representations of a large model such as BERT into a simplified version of it. We formulate two ways to distill such representations and various algorithms to conduct the distillation. We experiment with datasets from the GLUE benchmark and consistently show that adding knowledge distillation from internal representations is a more powerful method than only using soft-label distillation.
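The idea described above can be sketched as a training objective that combines the usual soft-label term with a term matching the student's internal representations to the teacher's. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes matched layer pairs of equal width and uses a hypothetical `distillation_loss` with an MSE penalty on hidden states; the paper itself formulates other ways to distill the representations.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      T=2.0, alpha=0.5):
    """Illustrative combined loss (names and weighting are assumptions).

    student_hidden / teacher_hidden: lists of hidden-state tensors from
    matched layers; a learned projection would be needed if the student
    and teacher hidden sizes differ.
    """
    # Soft-label distillation: KL divergence between temperature-softened
    # output distributions, scaled by T^2 to keep gradient magnitudes stable.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Internal distillation: mean-squared error between matched
    # intermediate representations, averaged over the layer pairs.
    internal_loss = sum(
        F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden)
    ) / len(student_hidden)

    return alpha * soft_loss + (1 - alpha) * internal_loss
```

Setting `alpha = 1.0` recovers plain soft-label distillation, which the abstract reports as the weaker baseline.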

Authors (6)

Gustavo Aguilar
Yuan Ling
Yu Zhang
Benjamin Yao
Xing Fan
Chenlei Guo

Citation

Aguilar, G., Ling, Y., Zhang, Y., Yao, B., Fan, X., & Guo, C. (2019). Knowledge Distillation from Internal Representations. arXiv preprint arXiv:1910.03723. https://arxiv.org/abs/1910.03723

Journal Information

Publication Year: 2019
Language: English
Source Database: arXiv
Access: Open Access