FeynTune: large language models for high-energy theory
Abstract
We present specialized large language models (LLMs) for theoretical high-energy physics, obtained as 20 fine-tuned variants of the 8-billion-parameter Llama-3.1 model. Each variant was trained on arXiv abstracts (through August 2024) from different combinations of hep-th, hep-ph, and gr-qc. For a comparative study, we also trained models on datasets containing abstracts from disparate fields such as the q-bio and cs categories. All models were trained using two distinct low-rank adaptation (LoRA) approaches and varying dataset sizes, and outperformed the base model on hep-th abstract-completion tasks. We compare performance against leading commercial LLMs (ChatGPT, Claude, Gemini, DeepSeek) and derive insights for further developing specialized language models for high-energy theoretical physics.
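The abstract describes LoRA fine-tuning of Llama-3.1-8B on arXiv abstracts for causal-language-modeling-style abstract completion. Below is a minimal sketch of that kind of setup using the Hugging Face transformers/peft/datasets stack; it is not the authors' code, and the dataset file name, adapter rank, target modules, and training hyperparameters are illustrative assumptions.

```python
# Minimal sketch of LoRA fine-tuning on arXiv abstracts (assumed setup,
# not the FeynTune authors' implementation).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-3.1-8B"  # 8B base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections; rank, alpha, and
# dropout here are placeholder values, not the paper's settings.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical JSONL file with one {"abstract": ...} record per paper,
# e.g. abstracts pulled from the hep-th category.
data = load_dataset("json", data_files="hep_th_abstracts.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["abstract"], truncation=True,
                                     max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="feyntune-lora",
                           num_train_epochs=1,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With adapters of this kind, only the low-rank matrices are updated while the 8B base weights stay frozen, which is what makes training 20 separate variants on different category mixes tractable.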
Topics & Keywords
Authors (5)
Paul Richmond
Constantinos Papageorgakis
Vasilis Niarchos
Borun Chowdhury
Prarit Agarwal
Quick Access
- Publication Year: 2026
- Source Database: DOAJ
- DOI: 10.1088/2632-2153/ae47bb
- Access: Open Access ✓