
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities

Arjun Krishna, Erick Galinkin, Leon Derczynski, Jeffrey Martin

Abstract

Large Language Models (LLMs) have become an essential tool in the programmer's toolkit, but their tendency to hallucinate code can be exploited by malicious actors to introduce vulnerabilities into broad swathes of the software supply chain. In this work, we analyze package hallucination behaviour in LLMs across popular programming languages, examining both existing package references and fictional dependencies. From this analysis, we identify potential attacks and suggest defensive strategies against them. We discover that the package hallucination rate depends not only on model choice, but also on programming language, model size, and the specificity of the coding task request. The Pareto frontier between code generation performance and package hallucination is sparsely populated, suggesting that coding models are not being optimized for secure code. Additionally, we find an inverse correlation between package hallucination rate and HumanEval coding benchmark scores, offering a heuristic for evaluating a model's propensity to hallucinate packages. Our metrics, findings, and analyses provide a foundation for future models and for securing AI-assisted software development workflows against package supply chain attacks.
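To make the measurement concrete, the sketch below is a minimal, hypothetical harness (not the paper's actual implementation; the function names and the PyPI simple-index check are illustrative assumptions). It estimates a Python package hallucination rate by extracting top-level import names from generated snippets and checking whether each resolves in the registry:

import ast
import sys
import urllib.error
import urllib.request

def extract_imports(code: str) -> set[str]:
    # Collect top-level module names imported by a Python snippet.
    modules: set[str] = set()
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return modules  # skip snippets that do not parse
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

def exists_on_pypi(package: str) -> bool:
    # Treat a package as real if its page is present in the PyPI simple index.
    try:
        with urllib.request.urlopen(f"https://pypi.org/simple/{package}/") as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def hallucination_rate(snippets: list[str]) -> float:
    # Fraction of referenced third-party packages that do not resolve.
    referenced = missing = 0
    for code in snippets:
        for pkg in extract_imports(code):
            if pkg in sys.stdlib_module_names:  # standard library is not a registry package
                continue
            referenced += 1
            if not exists_on_pypi(pkg):
                missing += 1
    return missing / referenced if referenced else 0.0

Running hallucination_rate over a corpus of model-generated snippets yields the fraction of referenced packages that fail to resolve, which is the kind of quantity the abstract correlates with HumanEval scores. Note that this sketch checks import names against PyPI directly; a production harness would also map import names to distribution names (e.g., cv2 ships in opencv-python), since the two can differ.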


Authors (4)

Arjun Krishna

Erick Galinkin

Leon Derczynski

Jeffrey Martin

Citation Format

Krishna, A., Galinkin, E., Derczynski, L., & Martin, J. (2025). Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities. arXiv:2501.19012. https://arxiv.org/abs/2501.19012

Quick Access

View at Source
Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓