
A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods

Zhihan Zhang W. Yu Mengxia Yu Zhichun Guo Meng Jiang

Abstract

Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences. Nevertheless, it is still not well understood how multi-task learning can be implemented based on the relatedness of training tasks. In this survey, we review recent advances in multi-task learning methods in NLP, with the aim of summarizing them into two general multi-task training methods based on their task relatedness: (i) joint training and (ii) multi-step training. We present examples in various NLP downstream applications, summarize the task relationships, and discuss future directions of this promising topic.
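To make the two paradigms concrete: in joint training, all tasks share parameters and are optimized together on a combined loss in every step, whereas multi-step training finishes training on one task before moving to the next. The PyTorch sketch below of joint training is illustrative only and is not code from the survey; the architecture, dimensions, and example tasks are assumptions.

# A minimal sketch (not from the survey) of joint multi-task training:
# a shared encoder with one head per task, trained on a summed loss.
# All module names, dimensions, and tasks here are illustrative assumptions.
import torch
import torch.nn as nn

class JointMultiTaskModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64, num_classes_a=2, num_classes_b=5):
        super().__init__()
        # Shared parameters used by both tasks (the "commonalities").
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        # Task-specific heads (the "differences").
        self.head_a = nn.Linear(hidden, num_classes_a)  # e.g. sentiment labels
        self.head_b = nn.Linear(hidden, num_classes_b)  # e.g. topic labels

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h, _) = self.encoder(emb)
        h = h[-1]  # final hidden state of the last LSTM layer
        return self.head_a(h), self.head_b(h)

model = JointMultiTaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint step: both task losses are summed and backpropagated together,
# so the shared encoder receives gradients from both tasks at once.
tokens = torch.randint(0, 1000, (8, 12))  # dummy batch of token ids
y_a = torch.randint(0, 2, (8,))
y_b = torch.randint(0, 5, (8,))
logits_a, logits_b = model(tokens)
loss = loss_fn(logits_a, y_a) + loss_fn(logits_b, y_b)
opt.zero_grad()
loss.backward()
opt.step()

A multi-step variant would instead run separate training phases, for example optimizing only the task-A loss first and then fine-tuning the shared encoder together with head B on task B.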


Authors (5)

Zhihan Zhang
W. Yu
Mengxia Yu
Zhichun Guo
Meng Jiang

Citation Format

Zhang, Z., Yu, W., Yu, M., Guo, Z., & Jiang, M. (2022). A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods. arXiv. https://doi.org/10.48550/arXiv.2204.03508

Quick Access

View at Source: doi.org/10.48550/arXiv.2204.03508
Publication Information
Publication Year
2022
Language
en
Total Citations
105
Source Database
Semantic Scholar
DOI
10.48550/arXiv.2204.03508
Access
Open Access ✓