How to Fine-Tune BERT for Text Classification?
Chi Sun
Xipeng Qiu
Yige Xu
Xuanjing Huang
Abstract
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
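For context, the following is a minimal, illustrative sketch of what fine-tuning BERT for text classification typically looks like with the Hugging Face transformers library. It is not the paper's exact recipe (the paper additionally studies further pre-training, layer selection, and layer-wise learning-rate decay); the model name, toy data, and hyperparameters below are assumptions for demonstration only.

```python
# Illustrative sketch: fine-tune a pre-trained BERT model for binary text
# classification. NOT the paper's exact method; model, data, and
# hyperparameters are assumed for demonstration.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Hypothetical toy training examples (text, label).
texts = ["a gripping, well-acted drama", "dull and far too long"]
labels = torch.tensor([1, 0])

# Tokenize with padding/truncation; BERT accepts at most 512 tokens.
batch = tokenizer(
    texts, padding=True, truncation=True, max_length=128, return_tensors="pt"
)

# A small learning rate (e.g. 2e-5) is typical when fine-tuning BERT.
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # returns loss when labels are given
    outputs.loss.backward()
    optimizer.step()
```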
Journal Information
- Year of Publication: 2019
- Language: English (en)
- Total Citations: 1,745
- Source Database: Semantic Scholar
- DOI: 10.1007/978-3-030-32381-3_16
- Access: Open Access