DOAJ Open Access 2022

A Multi-Level Optimization Framework for End-to-End Text Augmentation

Sai Ashish Somayajula Linfeng Song Pengtao Xie

Abstract

Text augmentation is an effective technique for alleviating overfitting in NLP tasks. In existing methods, text augmentation and the downstream task are mostly performed separately. As a result, the augmented texts may not be optimal for training the downstream model. To address this problem, we propose a three-level optimization framework that performs text augmentation and the downstream task end-to-end, so the augmentation model is trained in a way tailored to the downstream task. Our framework consists of three learning stages. At the first stage, a text summarization model is trained to perform data augmentation; each summarization example is associated with a weight to account for its domain difference from the text classification data. At the second stage, we use the model trained at the first stage to perform text augmentation and train a text classification model on the augmented texts. At the third stage, we evaluate the text classification model trained at the second stage and update the weights of the summarization examples by minimizing the validation loss. These three stages are performed end-to-end. We evaluate our method on several text classification datasets, where the results demonstrate its effectiveness. Code is available at https://github.com/Sai-Ashish/End-to-End-Text-Augmentation.
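The three stages described above can be caricatured with a tiny numeric sketch. Everything here is a hypothetical stand-in: scalars replace the summarization model and the text classifier, and the validation-loss gradient with respect to the example weights is approximated by finite differences rather than the implicit differentiation a real multi-level optimizer would use.

```python
def stage1(w, summ_data):
    """Stage 1 stand-in: fit an 'augmenter' parameter a to weighted data.

    Here a is simply the weighted mean of the summarization targets.
    """
    num = sum(wi * y for wi, (_, y) in zip(w, summ_data))
    den = sum(w) or 1e-9
    return num / den


def stage2(a, train_xs):
    """Stage 2 stand-in: 'augment' inputs with a, then fit classifier c."""
    aug = [x + a for x in train_xs]
    return sum(aug) / len(aug)


def val_loss(c, val_xs):
    """Mean squared validation loss of the 'classifier' c."""
    return sum((c - v) ** 2 for v in val_xs) / len(val_xs)


def stage3(w, summ_data, train_xs, val_xs, lr=0.1, eps=1e-4):
    """Stage 3 stand-in: lower validation loss by adjusting example weights.

    Uses a finite-difference estimate of d(val_loss)/d(w_i); weights are
    clamped to stay non-negative.
    """
    base = val_loss(stage2(stage1(w, summ_data), train_xs), val_xs)
    new_w = []
    for i in range(len(w)):
        wp = list(w)
        wp[i] += eps
        lp = val_loss(stage2(stage1(wp, summ_data), train_xs), val_xs)
        grad = (lp - base) / eps
        new_w.append(max(0.0, w[i] - lr * grad))
    return new_w


# Demo: two summarization examples; the second is "off-domain" (target 3.0
# instead of the in-domain 1.0), so the loop should down-weight it.
summ = [(0.0, 1.0), (0.0, 3.0)]
train, val = [0.0], [1.0]
w = [1.0, 1.0]
for _ in range(200):
    w = stage3(w, summ, train, val)
final = val_loss(stage2(stage1(w, summ), train), val)
```

Running the loop drives the validation loss down while shifting weight from the off-domain example to the in-domain one, which is the qualitative behavior the framework aims for.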

Authors (3)

Sai Ashish Somayajula

Linfeng Song

Pengtao Xie

Citation

Somayajula, S. A., Song, L., & Xie, P. (2022). A Multi-Level Optimization Framework for End-to-End Text Augmentation. Transactions of the Association for Computational Linguistics. https://doi.org/10.1162/tacl_a_00464

Quick Access

PDF not directly available

View at source: doi.org/10.1162/tacl_a_00464
Journal Information
Publication Year
2022
Source Database
DOAJ
DOI
10.1162/tacl_a_00464
Access
Open Access ✓