LLM For Automated Dental EMR Quality Assessment
Abstract
Aim or purpose: High-quality Electronic Medical Records (EMRs) are crucial for data analysis and applications in digital dentistry. Manual EMR quality assessment is resource-intensive and inconsistent, limiting data utility, so automated methods are needed to maintain data integrity efficiently. This study evaluated a Large Language Model (LLM) for automating quality assessment of outpatient dental EMRs against group standards.
Materials and methods: 100 typical outpatient dental EMRs with known errors, collected at a hospital from February to December 2024 and manually de-identified, were randomly split into training (80) and testing (20) sets. Records were annotated by three senior quality-control experts, providing the reference standard. The DeepSeek-r1 model assessed record quality against the group-standard criteria, using evaluation prompts iteratively refined on the training set via expert feedback. Performance on the testing set was compared against the expert-consensus reference using metrics including Cohen's Kappa for agreement, precision, recall, and F1-score (p<0.05). Evaluation time was also compared.
Results: On the testing set, the LLM achieved 98.0% recall, 100% precision, and an F1-score of 0.990 for identifying annotated quality deficiencies, demonstrating strong alignment with expert consensus. Automated assessment also significantly reduced evaluation time per record compared with manual review.
Conclusions: LLMs show significant promise as effective, efficient tools for automated dental EMR quality assessment. This technology can enhance data quality within digital dentistry workflows, improving applications such as clinical research and practice analytics and supporting the digital transformation of dentistry.
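For readers unfamiliar with the metrics named in the abstract, here is a minimal illustrative sketch (not the study's own code) of how precision, recall, F1-score, and Cohen's Kappa are computed from paired expert/LLM labels; the binary deficiency flags used in the example are hypothetical.

```python
def precision_recall_f1(expert, llm):
    """Expert labels are the reference standard; 1 = deficiency present."""
    tp = sum(1 for e, m in zip(expert, llm) if e == 1 and m == 1)
    fp = sum(1 for e, m in zip(expert, llm) if e == 0 and m == 1)
    fn = sum(1 for e, m in zip(expert, llm) if e == 1 and m == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def cohens_kappa(expert, llm):
    """Chance-corrected agreement between two binary raters."""
    n = len(expert)
    po = sum(1 for e, m in zip(expert, llm) if e == m) / n  # observed agreement
    p_e1 = sum(expert) / n   # rater 1 "positive" rate
    p_m1 = sum(llm) / n      # rater 2 "positive" rate
    pe = p_e1 * p_m1 + (1 - p_e1) * (1 - p_m1)  # chance agreement
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Hypothetical labels for five records:
expert = [1, 1, 1, 0, 0]
llm = [1, 1, 0, 0, 0]
p, r, f = precision_recall_f1(expert, llm)
# Here precision is 1.0 (no false positives) while recall is 2/3
# (one deficiency missed), illustrating why both are reported.
```

An F1 of 0.990 with 100% precision and 98.0% recall, as reported, follows directly from the harmonic-mean formula: 2 × 1.0 × 0.98 / 1.98 ≈ 0.990.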
Authors (2)
Jiakun Fang
Wang Xiaoying
- Year Published
- 2025
- Source Database
- DOAJ
- DOI
- 10.1016/j.identj.2025.104196
- Access
- Open Access ✓