arXiv Open Access 2023

(Security) Assertions by Large Language Models

Rahul Kande, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Shailja Thakur, +2 others

Abstract

The security of computer systems typically relies on a hardware root of trust. As vulnerabilities in hardware can have severe implications on a system, there is a need for techniques to support security verification activities. Assertion-based verification is a popular verification technique that involves capturing design intent in a set of assertions that can be used in formal verification or testing-based checking. However, writing security-centric assertions is a challenging task. In this work, we investigate the use of emerging large language models (LLMs) for code generation in hardware assertion generation for security, where primarily natural language prompts, such as those one would see as code comments in assertion files, are used to produce SystemVerilog assertions. We focus our attention on a popular LLM and characterize its ability to write assertions out of the box, given varying levels of detail in the prompt. We design an evaluation framework that generates a variety of prompts, and we create a benchmark suite comprising real-world hardware designs and corresponding golden reference assertions that we want to generate with the LLM.
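To illustrate the workflow the abstract describes, below is a hypothetical prompt/completion pair: a natural-language comment of the kind one would see in an assertion file, followed by a SystemVerilog assertion an LLM might produce from it. The module and signal names (`lock_reg`, `wr_en`, `debug_mode`) are illustrative and are not taken from the paper's benchmark suite.

```systemverilog
// Hypothetical example; signal names are illustrative, not from the
// paper's benchmark designs or golden reference assertions.
module lock_check (
  input logic clk,
  input logic rst_n,
  input logic lock_reg,    // 1 = configuration register is locked
  input logic wr_en,       // write enable for the locked register
  input logic debug_mode   // privileged debug access
);
  // Prompt (as a code comment): "assert that when the lock is set,
  // no write may occur unless debug mode is active"
  assert property (@(posedge clk) disable iff (!rst_n)
    lock_reg && !debug_mode |-> !wr_en);
endmodule
```

A security-centric assertion like this can then be checked by a formal property checker or monitored during simulation, which is the assertion-based verification setting the paper targets.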

Topics & Keywords

Authors (7)

Rahul Kande
Hammond Pearce
Benjamin Tan
Brendan Dolan-Gavitt
Shailja Thakur
Ramesh Karri
Jeyavijayan Rajendran

Citation Format

Kande, R., Pearce, H., Tan, B., Dolan-Gavitt, B., Thakur, S., Karri, R., & Rajendran, J. (2023). (Security) Assertions by Large Language Models. arXiv. https://arxiv.org/abs/2306.14027

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓