
Learning Modular Exponentiation with Transformers

David Demitri Africa, Sara M. Kapoor, Theo Simon Sorg, Challenger Mishra

Abstract

Modular exponentiation is crucial to number theory and cryptography, yet remains largely unexplored from a mechanistic interpretability standpoint. We train a 4-layer encoder-decoder Transformer model to perform this operation and investigate the emergence of numerical reasoning during training. Using principled sampling strategies, PCA-based embedding analysis, and activation patching, we examine how number-theoretic properties are encoded within the model. We find that reciprocal operand training leads to strong performance gains, with sudden generalization across related moduli. These synchronized accuracy surges reflect grokking-like dynamics, suggesting the model internalizes shared arithmetic structure. We also find a subgraph, consisting entirely of attention heads in the final layer, that is sufficient to achieve full performance on the task of regular exponentiation. These results suggest that transformer models learn modular arithmetic through specialized computational circuits, paving the way for more interpretable and efficient neural approaches to modular exponentiation.
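For context, the target operation is modular exponentiation, a^b mod m. The short Python sketch below illustrates the input-output mapping such a model must learn; the sampling ranges and the plain-text prompt format are illustrative assumptions, not the authors' actual data pipeline.

import random

def modexp(a: int, b: int, m: int) -> int:
    # Modular exponentiation via Python's built-in three-argument pow,
    # which computes a^b mod m by fast repeated squaring.
    return pow(a, b, m)

def sample_example(max_operand: int = 100, max_modulus: int = 100) -> tuple[str, str]:
    # Sample one (prompt, target) pair as plain-text sequences.
    # Operand ranges and token format are assumptions for illustration only.
    a = random.randint(1, max_operand)
    b = random.randint(1, max_operand)
    m = random.randint(2, max_modulus)
    prompt = f"{a} ^ {b} mod {m}"
    target = str(modexp(a, b, m))
    return prompt, target

if __name__ == "__main__":
    for _ in range(3):
        print(sample_example())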


Authors (4)

David Demitri Africa
Sara M. Kapoor
Theo Simon Sorg
Challenger Mishra

Citation

Africa, D. D., Kapoor, S. M., Sorg, T. S., & Mishra, C. (2025). Learning Modular Exponentiation with Transformers. arXiv. https://arxiv.org/abs/2506.23679

Journal Information
Publication Year: 2025
Language: English (en)
Source Database: arXiv
Access: Open Access ✓