arXiv Open Access 2025

Fine-Grained Interpretation of Political Opinions in Large Language Models

Jingyu Hu Mengyue Yang Mengnan Du Weiru Liu

Abstract

Studies of LLMs' political opinions mainly rely on evaluations of their open-ended responses. Recent work indicates a misalignment between LLMs' responses and their internal intentions, which motivates us to probe LLMs' internal mechanisms and uncover their internal political states. Additionally, we find that analyses of LLMs' political opinions often rely on single-axis concepts, which can introduce concept confounds. In this work, we extend the single-axis setting to multiple dimensions and apply interpretable representation engineering techniques for more transparent LLM political concept learning. Specifically, we design a four-dimensional political learning framework and construct a corresponding dataset for fine-grained political concept vector learning. These vectors can be used to detect and intervene in LLM internals. Experiments are conducted on eight open-source LLMs with three representation engineering techniques. Results show that these vectors can disentangle political concept confounds. Detection tasks validate the semantic meaning of the vectors and show good generalization and robustness in OOD settings. Intervention experiments show that these vectors can steer LLMs to generate responses with different political leanings.
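The "detect and intervene" pipeline described above follows the general representation-engineering pattern of concept (steering) vectors. A minimal sketch, not the paper's implementation: all names are illustrative, the activations are synthetic stand-ins for LLM hidden states, and the concept vector is computed with a simple difference-of-means, one common technique in this family.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden activations for prompts with two opposing political leanings
# (stand-ins for real LLM hidden states extracted at some layer).
acts_left = rng.normal(loc=-1.0, scale=0.1, size=(50, 8))
acts_right = rng.normal(loc=1.0, scale=0.1, size=(50, 8))

# Difference-of-means concept vector: points from one leaning to the other.
concept_vec = acts_right.mean(axis=0) - acts_left.mean(axis=0)
concept_vec /= np.linalg.norm(concept_vec)

def detect(hidden, vec):
    """Detection: project a hidden state onto the concept direction."""
    return float(hidden @ vec)

def intervene(hidden, vec, alpha=3.0):
    """Intervention: shift a hidden state along the concept direction."""
    return hidden + alpha * vec

h = acts_left[0]
score_before = detect(h, concept_vec)
score_after = detect(intervene(h, concept_vec), concept_vec)
print(score_before < score_after)  # → True: steering moves the projection
```

In a real setting, `detect` would be applied to hidden states captured via forward hooks, and `intervene` would add the vector to the residual stream during generation; the sketch only shows the linear-algebra core shared by those steps.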


Authors (4)

Jingyu Hu
Mengyue Yang
Mengnan Du
Weiru Liu

Citation Format

Hu, J., Yang, M., Du, M., & Liu, W. (2025). Fine-Grained Interpretation of Political Opinions in Large Language Models. https://arxiv.org/abs/2506.04774

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓