arXiv Open Access 2021

Supporting Undotted Arabic with Pre-trained Language Models

Aviad Rom Kfir Bar

Abstract

We observe a recent behavior on social media in which users intentionally remove the consonantal dots from Arabic letters in order to bypass content-classification algorithms. Content classification is typically done by fine-tuning pre-trained language models, which have recently been employed in many natural-language-processing applications. In this work we study the effect of applying pre-trained Arabic language models to "undotted" Arabic texts. We suggest several ways of supporting undotted texts with pre-trained models, without additional training, and measure their performance on two Arabic natural-language-processing downstream tasks. The results are encouraging; in one of the tasks our method shows nearly perfect performance.
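To illustrate the phenomenon the abstract describes, here is a minimal sketch of an undotting transformation. The specific letter mapping below is an assumption for illustration (it maps each dotted Arabic letter to a dotless Unicode counterpart sharing the same skeleton shape, or rasm); it is not necessarily the mapping used in the paper.

```python
# Hypothetical undotting map: dotted Arabic letters -> dotless skeleton forms.
# Codepoints such as U+066E (dotless beh) and U+066F (dotless qaf) exist in
# Unicode precisely for dot-free Arabic script.
UNDOT = {
    "\u0628": "\u066E",  # ب -> ٮ (dotless beh)
    "\u062A": "\u066E",  # ت -> ٮ
    "\u062B": "\u066E",  # ث -> ٮ
    "\u062C": "\u062D",  # ج -> ح
    "\u062E": "\u062D",  # خ -> ح
    "\u0630": "\u062F",  # ذ -> د
    "\u0632": "\u0631",  # ز -> ر
    "\u0634": "\u0633",  # ش -> س
    "\u0636": "\u0635",  # ض -> ص
    "\u0638": "\u0637",  # ظ -> ط
    "\u063A": "\u0639",  # غ -> ع
    "\u0641": "\u06A1",  # ف -> ڡ (dotless feh)
    "\u0642": "\u066F",  # ق -> ٯ (dotless qaf)
    "\u0646": "\u06BA",  # ن -> ں (dotless noon)
    "\u064A": "\u0649",  # ي -> ى (alef maqsura, the dotless yeh shape)
}

def undot(text: str) -> str:
    """Replace each dotted letter with its dotless skeleton form."""
    return text.translate(str.maketrans(UNDOT))
```

A tokenizer trained on standard dotted Arabic will generally not recognize the dotless codepoints, which is why such text can slip past classifiers built on pre-trained models.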


Authors (2)

Aviad Rom

Kfir Bar

Citation

Rom, A., & Bar, K. (2021). Supporting Undotted Arabic with Pre-trained Language Models. https://arxiv.org/abs/2111.09791

Journal Information
Publication Year
2021
Language
English
Source Database
arXiv
Access
Open Access ✓