
How to Fine-Tune BERT for Text Classification?

Chi Sun Xipeng Qiu Yige Xu Xuanjing Huang

Abstract

Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
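The abstract describes fine-tuning BERT for text classification at a high level. The sketch below is purely illustrative and is not the authors' code: it shows the generic fine-tuning recipe using the Hugging Face `transformers` library, with a hypothetical two-class toy batch, the `bert-base-uncased` checkpoint, and typical hyperparameters (learning rate 2e-5, max length 128) as assumptions rather than values taken from the paper.

```python
# Minimal, illustrative sketch of fine-tuning BERT for text classification.
# Uses the Hugging Face `transformers` library; data and hyperparameters are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical two-class toy data; the paper evaluates on eight public datasets.
texts = ["the movie was great", "the plot was dull"]
labels = torch.tensor([1, 0])

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Encode raw text into input IDs and attention masks.
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

# A small learning rate (e.g. 2e-5) is the usual choice when fine-tuning BERT.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # forward pass returns the classification loss
    outputs.loss.backward()                  # backpropagate
    optimizer.step()
```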


Authors (4)

Chi Sun

Xipeng Qiu

Yige Xu

Xuanjing Huang

Citation Format

Sun, C., Qiu, X., Xu, Y., & Huang, X. (2019). How to Fine-Tune BERT for Text Classification? https://doi.org/10.1007/978-3-030-32381-3_16

Journal Information
Year of Publication: 2019
Language: en
Total Citations: 1745
Source Database: Semantic Scholar
DOI: 10.1007/978-3-030-32381-3_16
Access: Open Access