arXiv Open Access 2025

FeynTune: Large Language Models for High-Energy Theory

Paul Richmond, Prarit Agarwal, Borun Chowdhury, Vasilis Niarchos, Constantinos Papageorgakis

Abstract

We present specialized Large Language Models for theoretical High-Energy Physics, obtained as 20 fine-tuned variants of the 8-billion parameter Llama-3.1 model. Each variant was trained on arXiv abstracts (through August 2024) from different combinations of hep-th, hep-ph and gr-qc. For a comparative study, we also trained models on datasets that contained abstracts from disparate fields such as the q-bio and cs categories. All models were fine-tuned using two distinct Low-Rank Adaptation fine-tuning approaches and varying dataset sizes, and outperformed the base model on hep-th abstract completion tasks. We compare performance against leading commercial LLMs (ChatGPT, Claude, Gemini, DeepSeek) and derive insights for further developing specialized language models for High-Energy Theoretical Physics.
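The abstract describes Low-Rank Adaptation (LoRA) fine-tuning of the 8-billion-parameter Llama-3.1 model on arXiv abstracts. The paper's exact configuration (the two LoRA variants, ranks, target modules, learning rates, data preprocessing) is not given here, so the following is only a minimal sketch of how such a fine-tune could be set up with the Hugging Face transformers, peft and datasets libraries. The model identifier, all hyperparameters and the abstracts.txt input file are illustrative assumptions, not the authors' settings.

# Minimal LoRA fine-tuning sketch (illustrative only; not the authors' exact setup).
# Assumes access to the meta-llama/Llama-3.1-8B checkpoint and a plain-text file
# of arXiv abstracts, one abstract per line (hypothetical "abstracts.txt").
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-3.1-8B"  # 8B base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Hypothetical LoRA hyperparameters; the paper compares two LoRA approaches,
# but their ranks and target modules are not stated in this abstract.
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# Tokenize the abstracts for causal-language-model training.
dataset = load_dataset("text", data_files={"train": "abstracts.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="feyntune-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=50),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()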



Citation Format

Richmond, P., Agarwal, P., Chowdhury, B., Niarchos, V., & Papageorgakis, C. (2025). FeynTune: Large Language Models for High-Energy Theory. arXiv:2508.03716. https://arxiv.org/abs/2508.03716

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓