arXiv Open Access 2025

An Attack to Break Permutation-Based Private Third-Party Inference Schemes for LLMs

Rahul Thomas, Louai Zahran, Erica Choi, Akilesh Potti, Micah Goldblum, Arka Pal

Abstract

Recent advances in Large Language Models (LLMs) have led to the widespread adoption of third-party inference services, raising critical privacy concerns. Existing methods of performing private third-party inference, such as Secure Multiparty Computation (SMPC), often rely on cryptographic methods. However, these methods are thousands of times slower than standard unencrypted inference, and fail to scale to large modern LLMs. Therefore, recent lines of work have explored the replacement of expensive encrypted nonlinear computations in SMPC with statistical obfuscation methods - in particular, revealing permuted hidden states to the third parties, with accompanying strong claims of the difficulty of reversal into the unpermuted states. In this work, we begin by introducing a novel reconstruction technique that can recover original prompts from hidden states with nearly perfect accuracy across multiple state-of-the-art LLMs. We then show that extensions of our attack are nearly perfectly effective in reversing permuted hidden states of LLMs, demonstrating the insecurity of three recently proposed privacy schemes. We further dissect the shortcomings of prior theoretical 'proofs' of permutation security which allow our attack to succeed. Our findings highlight the importance of rigorous security analysis in privacy-preserving LLM inference.
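The abstract's core observation is that a permutation of a hidden-state vector hides only the *order* of its entries, not their values. As a toy illustration (not the paper's actual attack, which targets real LLM hidden states), the sketch below shows that an adversary who can enumerate candidate states can match a permuted vector back to its source, because sorted entries form a permutation-invariant fingerprint; all tokens and values here are hypothetical:

```python
import random

def fingerprint(vec):
    # The sorted multiset of entries is invariant under any permutation
    # of positions, so it survives the obfuscation step.
    return tuple(sorted(vec))

# Hypothetical candidate hidden states (e.g., one per candidate token).
candidates = {
    "cat": [0.12, -1.40, 0.88, 2.31],
    "dog": [0.55, 0.12, -0.97, 1.04],
    "car": [-2.10, 0.33, 0.33, 0.90],
}

def recover(permuted_state, candidates):
    # Match the observed permuted vector against each candidate's
    # permutation-invariant fingerprint.
    fp = fingerprint(permuted_state)
    for token, state in candidates.items():
        if fingerprint(state) == fp:
            return token
    return None

# Simulate the scheme: the third party observes only a permuted state.
secret = candidates["dog"][:]
random.shuffle(secret)

print(recover(secret, candidates))  # "dog"
```

Real hidden states are high-dimensional and noisy, so the paper's attack is necessarily more sophisticated, but the toy captures why permutation alone leaks substantial information.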


Authors (6)

Rahul Thomas
Louai Zahran
Erica Choi
Akilesh Potti
Micah Goldblum
Arka Pal

Citation Format

Thomas, R., Zahran, L., Choi, E., Potti, A., Goldblum, M., & Pal, A. (2025). An Attack to Break Permutation-Based Private Third-Party Inference Schemes for LLMs. https://arxiv.org/abs/2505.18332

Journal Information
Year Published: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓