arXiv Open Access 2024

Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs)

Abrar Rahman Garry Bowlin Binit Mohanty Sean McGunigal

Abstract

This paper presents a comprehensive study on the tokenization techniques employed by state-of-the-art large language models (LLMs) and their implications on the cost and availability of services across different languages, especially low resource languages. The analysis considers multiple LLMs, including GPT-4 (using cl100k_base embeddings), GPT-3 (with p50k_base embeddings), and DaVinci (employing r50k_base embeddings), as well as the widely used BERT base tokenizer. The study evaluates the tokenization variability observed across these models and investigates the challenges of linguistic representation in subword tokenization. The research underscores the importance of fostering linguistically-aware development practices, especially for languages that are traditionally under-resourced. Moreover, this paper introduces case studies that highlight the real-world implications of tokenization choices, particularly in the context of electronic health record (EHR) systems. This research aims to promote generalizable Internationalization (I18N) practices in the development of AI services in this domain and beyond, with a strong emphasis on inclusivity, particularly for languages traditionally underrepresented in AI applications.
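The cost disparity described in the abstract begins even before any subword merges are learned: byte-level tokenizers (such as the BPE schemes behind cl100k_base, p50k_base, and r50k_base) operate on UTF-8 bytes, and scripts outside Basic Latin require 2 to 4 bytes per character. The sketch below is illustrative only, not taken from the paper; the sample sentences and language choices are the author's assumptions, and the byte counts are a lower-bound proxy for tokenizer fragmentation rather than actual token counts.

```python
# Illustrative sketch (not the paper's method): compare UTF-8 byte footprints
# of roughly equivalent greetings. Byte-level BPE tokenizers start from these
# byte sequences, so a larger bytes-per-character ratio tends to mean more
# tokens, and higher per-request cost, for the same semantic content.
samples = {
    "English": "Hello, how are you today?",      # ASCII: 1 byte per character
    "Bengali": "আপনি আজ কেমন আছেন?",              # Bengali block: 3 bytes per character
}

for language, text in samples.items():
    chars = len(text)
    utf8_bytes = len(text.encode("utf-8"))
    print(f"{language}: {chars} chars -> {utf8_bytes} UTF-8 bytes "
          f"({utf8_bytes / chars:.2f} bytes/char)")
```

For the English sample the byte count equals the character count, while the Bengali sample roughly triples, before any vocabulary effects such as unmerged subwords for low-resource languages compound the gap.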


Authors (4)

Abrar Rahman

Garry Bowlin

Binit Mohanty

Sean McGunigal

Citation Format

Rahman, A., Bowlin, G., Mohanty, B., & McGunigal, S. (2024). Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs). https://arxiv.org/abs/2410.03568

Journal Information

Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓