CrossRef · Open Access · 2025 · 1 citation

A curriculum learning approach to training antibody language models

Sarah M. Burbach Bryan Briney

Abstract

There is growing interest in pre-training antibody language models (AbLMs) with a mixture of unpaired and natively paired sequences, seeking to combine the proven benefits of training with natively paired sequences with the massive scale of unpaired antibody sequence datasets. However, given the novelty of this strategy, the field lacks a systematic evaluation of data processing methods and training strategies that maximize the benefits of mixed training data while accommodating the significant imbalance in the size of existing paired and unpaired datasets. Here, we introduce a method of curriculum learning for AbLMs, which facilitates a gradual transition from unpaired to paired sequences during training. We optimize this method and compare it to other data sampling strategies for AbLMs, including a constant mix and a fine-tuning approach. We observe that the curriculum and constant approaches show improved performance compared to the fine-tuning approach in large-scale models, likely due to their ability to prevent catastrophic forgetting and slow overfitting. Finally, we show that a 650M-parameter curriculum model, CurrAb, outperforms existing mixed AbLMs in downstream residue prediction and classification tasks.
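The abstract describes a curriculum that gradually shifts the training batch composition from unpaired to paired sequences. A minimal sketch of such a data-sampling schedule, assuming a linear ramp and hypothetical sequence pools (the paper's actual schedule shape, datasets, and mixing ratios are not specified here):

```python
import random

def paired_fraction(step, total_steps, start=0.0, end=1.0):
    """Fraction of each batch drawn from the paired pool at a given step.
    A linear ramp is assumed purely for illustration; the published
    curriculum may use a different schedule."""
    t = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * t

def sample_batch(step, total_steps, unpaired, paired, batch_size=8, rng=None):
    """Draw a mixed batch: each slot comes from the paired pool with
    probability paired_fraction(step, total_steps), else the unpaired pool."""
    rng = rng or random.Random(0)
    p = paired_fraction(step, total_steps)
    return [rng.choice(paired if rng.random() < p else unpaired)
            for _ in range(batch_size)]

# Hypothetical pools: heavy-chain-only vs. natively paired VH:VL sequences.
unpaired = ["VH_only_1", "VH_only_2"]
paired = ["VH:VL_1", "VH:VL_2"]

# Early batches are entirely unpaired; final batches are entirely paired.
early = sample_batch(step=0, total_steps=100, unpaired=unpaired, paired=paired)
late = sample_batch(step=100, total_steps=100, unpaired=unpaired, paired=paired)
```

By contrast, the "constant mix" baseline corresponds to holding `paired_fraction` fixed for the whole run, and the "fine-tuning" baseline to training on unpaired data first and then switching abruptly to paired data.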

Authors (2)

Sarah M. Burbach

Bryan Briney

Citation Format

Burbach, S.M., & Briney, B. (2025). A curriculum learning approach to training antibody language models. PLOS Computational Biology. https://doi.org/10.1371/journal.pcbi.1013473

Journal Information
Publication Year
2025
Language
en
Total Citations
1
Source Database
CrossRef
DOI
10.1371/journal.pcbi.1013473
Access
Open Access ✓