Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding
Abstract
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where domain-specific terminology, abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general-domain models in medical contexts. We conclude that continuous pre-training can match or even exceed the performance of clinical models trained from scratch. Furthermore, pre-training on clinical data or leveraging translated texts has proven to be a reliable method for domain adaptation in medical NLP tasks.
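The central technique here is continuous (domain-adaptive) pre-training: taking a general-domain checkpoint and continuing masked-language-model training on in-domain text. The sketch below shows the standard way to do this with Hugging Face Transformers; the checkpoint name, corpus file, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of continuous pre-training (MLM objective) on a
# domain corpus. Checkpoint name and data path are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from a general-domain German checkpoint (placeholder choice).
checkpoint = "bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Hypothetical corpus file: one German medical document per line.
dataset = load_dataset("text", data_files={"train": "medical_corpus_de.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard masked-language-modeling objective with 15% token masking.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="clinical-bert-de",
    per_device_train_batch_size=16,
    num_train_epochs=1,      # illustrative; real runs train much longer
    learning_rate=5e-5,
    save_steps=10_000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Because the run continues from an existing checkpoint, the tokenizer and vocabulary are preserved, which is what lets translated English medical data and native German clinical data be mixed into one pre-training corpus without retraining the vocabulary from scratch.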
Authors (20)
Ahmad Idrissi-Yaghir
Amin Dada
Henning Schäfer
Kamyar Arzideh
Giulia Baldini
Jan Trienes
Max Hasin
Jeanette Bewersdorff
Cynthia S. Schmidt
Marie Bauer
Kaleb E. Smith
Jiang Bian
Yonghui Wu
Jörg Schlötterer
Torsten Zesch
Peter A. Horn
Christin Seifert
Felix Nensa
Jens Kleesiek
Christoph M. Friedrich
Quick Access
- Publication Year: 2024
- Language: en
- Database Source: arXiv
- Access: Open Access ✓