arXiv Open Access 2024

Understanding Language Model Circuits through Knowledge Editing

Huaizhi Ge Frank Rudzicz Zining Zhu

Abstract

Recent advances in language model interpretability have identified circuits: critical subnetworks that replicate model behaviors. Yet how knowledge is structured within these subnetworks remains opaque. To better understand the knowledge contained in circuits, we conduct systematic knowledge editing experiments on circuits of the GPT-2 language model. Our analysis reveals intriguing patterns in how circuits respond to editing attempts, the extent to which knowledge is distributed across network components, and the architectural composition of knowledge-bearing circuits. These findings illuminate the complex relationship between model circuits and knowledge representation, deepening our understanding of how information is organized within language models. They offer novel insights into the "meanings" of circuits and suggest directions for further interpretability and safety research on language models.


Authors (3)

Huaizhi Ge
Frank Rudzicz
Zining Zhu

Citation Format

Ge, H., Rudzicz, F., Zhu, Z. (2024). Understanding Language Model Circuits through Knowledge Editing. https://arxiv.org/abs/2406.17241

Publication Information
Year Published: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓