arXiv Open Access 2025

SCALE: Upscaled Continual Learning of Large Language Models

Jin-woo Lee, Junhwa Choi, Bongkyu Hwang, Jinho Choo, Bogun Kim, +6 others

Abstract

We revisit continual pre-training for large language models and argue that progress now depends more on scaling the right structure than on scaling parameters alone. We introduce SCALE, a width upscaling architecture that inserts lightweight expansion into linear modules while freezing all pre-trained parameters. This preserves the residual and attention topologies and increases capacity without perturbing the base model's original functionality. SCALE is guided by two principles: Persistent Preservation, which maintains the base model's behavior via preservation-oriented initialization and freezing of the pre-trained weights, and Collaborative Adaptation, which selectively trains a subset of expansion components to acquire new knowledge with minimal interference. We instantiate these ideas as SCALE-Preserve (preservation-first), SCALE-Adapt (adaptation-first), and SCALE-Route, an optional routing extension that performs token-level routing between preservation and adaptation heads. On a controlled synthetic biography benchmark, SCALE mitigates the severe forgetting observed with depth expansion while still acquiring new knowledge. In continual pre-training on a Korean corpus, SCALE variants achieve less forgetting on English evaluations and competitive gains on Korean benchmarks, with these variants offering the best overall stability-plasticity trade-off. Accompanying analysis clarifies when preservation provably holds and why the interplay between preservation and adaptation stabilizes optimization compared to standard continual learning setups.
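The abstract does not spell out the expansion mechanism, but the core idea of widening linear modules while preserving the base model's behavior can be illustrated with a minimal PyTorch sketch. The sketch below is an illustration under stated assumptions, not the authors' implementation: the class ExpandedLinear, the expansion_dim parameter, and the zero-initialized output projection are hypothetical choices that merely satisfy the two stated principles (frozen pre-trained weights, preservation of the base output at initialization).

# Illustrative sketch only (not the paper's code): a frozen pre-trained linear
# module wrapped with a trainable, zero-initialized expansion branch, so the
# wrapped module reproduces the base output exactly before continual training.
import torch
import torch.nn as nn

class ExpandedLinear(nn.Module):
    """Hypothetical width expansion around a frozen pre-trained nn.Linear."""

    def __init__(self, base: nn.Linear, expansion_dim: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # Persistent Preservation:
            p.requires_grad = False           # pre-trained weights stay frozen
        self.expand_in = nn.Linear(base.in_features, expansion_dim, bias=False)
        self.expand_out = nn.Linear(expansion_dim, base.out_features, bias=False)
        nn.init.zeros_(self.expand_out.weight)  # preservation-oriented init:
                                                # expansion contributes zero at t=0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path is untouched; the expansion path adds capacity that is
        # trained on the new corpus (Collaborative Adaptation).
        return self.base(x) + self.expand_out(self.expand_in(x))

if __name__ == "__main__":
    base = nn.Linear(16, 16)
    layer = ExpandedLinear(base, expansion_dim=4)
    x = torch.randn(2, 16)
    # At initialization the expanded layer matches the frozen base exactly.
    assert torch.allclose(layer(x), base(x))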


Authors (11)

Jin-woo Lee
Junhwa Choi
Bongkyu Hwang
Jinho Choo
Bogun Kim
JeongSeon Yi
Joonseok Lee
DongYoung Jung
Jaeseon Park
Kyoungwon Park
Suk-hoon Jung

Citation

Lee, J., Choi, J., Hwang, B., Choo, J., Kim, B., Yi, J. et al. (2025). SCALE: Upscaled Continual Learning of Large Language Models. https://arxiv.org/abs/2511.03270

Journal Information
Publication Year: 2025
Language: English (en)
Source Database: arXiv
Access: Open Access ✓