Semantic Scholar · Open Access · 2020 · 835 citations

Revisiting Pre-Trained Models for Chinese Natural Language Processing

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu

Abstract

Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release the Chinese pre-trained language model series to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially the masking strategy that adopts MLM as correction (Mac). We carried out extensive experiments on eight Chinese NLP tasks to revisit the existing pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate details and report several findings that may help future research. https://github.com/ymcui/MacBERT

Authors (6)

Yiming Cui
Wanxiang Che
Ting Liu
Bing Qin
Shijin Wang
Guoping Hu

Citation Format

Cui, Y., Che, W., Liu, T., Qin, B., Wang, S., & Hu, G. (2020). Revisiting Pre-Trained Models for Chinese Natural Language Processing. In Findings of the Association for Computational Linguistics: EMNLP 2020. https://doi.org/10.18653/v1/2020.findings-emnlp.58

Publication Information
Year Published: 2020
Language: English
Total Citations: 835
Database Source: Semantic Scholar
DOI: 10.18653/v1/2020.findings-emnlp.58
Access: Open Access ✓