arXiv Open Access 2023

Considerations for health care institutions training large language models on electronic health records

Weipeng Zhou, Danielle Bitterman, Majid Afshar, Timothy A. Miller

Abstract

Large language models (LLMs) like ChatGPT have excited scientists across fields; in medicine, one source of excitement is the potential application of LLMs trained on electronic health record (EHR) data. But there are tough questions we must first answer if health care institutions are interested in having LLMs trained on their own data: should they train an LLM from scratch or fine-tune it from an open-source model? For health care institutions with a predefined budget, what is the largest LLM they can afford? In this study, we take steps towards answering these questions with an analysis of dataset sizes, model sizes, and costs for LLM training using EHR data. This analysis provides a framework for thinking about these questions in terms of data scale, compute scale, and training budgets.
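The budget question in the abstract can be made concrete with the widely used rule of thumb that training compute is roughly C ≈ 6 · N · D FLOPs for N parameters and D tokens. The sketch below is not taken from the paper; the per-GPU throughput and hourly price are illustrative assumptions an institution would replace with its own figures.

```python
# Back-of-the-envelope LLM training cost, assuming the common
# C ~= 6 * N * D FLOPs approximation. The throughput and price
# defaults are hypothetical placeholders, not values from the paper.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

def training_cost_usd(n_params: float,
                      n_tokens: float,
                      flops_per_gpu_second: float = 1.5e14,  # assumed sustained throughput
                      usd_per_gpu_hour: float = 2.0) -> float:  # assumed cloud price
    """Estimated dollar cost of a training run under the assumptions above."""
    gpu_seconds = training_flops(n_params, n_tokens) / flops_per_gpu_second
    return (gpu_seconds / 3600.0) * usd_per_gpu_hour

# Example: a 1B-parameter model trained on 20B tokens of EHR text.
flops = training_flops(1e9, 20e9)        # 1.2e20 FLOPs
cost = training_cost_usd(1e9, 20e9)      # a few hundred dollars under these assumptions
```

Inverting the same arithmetic (fixing the dollar budget and solving for N) is one way to frame the paper's question of the largest model an institution can afford.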

Citation

Zhou, W., Bitterman, D., Afshar, M., Miller, T.A. (2023). Considerations for health care institutions training large language models on electronic health records. https://arxiv.org/abs/2309.12339

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓