arXiv Open Access 2026

Audit Trails for Accountability in Large Language Models

Victor Ojewale, Harini Suresh, Suresh Venkatasubramanian

Abstract

Large language models (LLMs) are increasingly embedded in consequential decisions across healthcare, finance, employment, and public services. Yet accountability remains fragile because process transparency is rarely recorded in a durable and reviewable form. We propose LLM audit trails as a sociotechnical mechanism for continuous accountability. An audit trail is a chronological, tamper-evident, context-rich ledger of lifecycle events and decisions that links technical provenance (models, data, training and evaluation runs, deployments, monitoring) with governance records (approvals, waivers, and attestations), so organizations can reconstruct what changed, when, and who authorized it. This paper contributes: (1) a lifecycle framework that specifies event types, required metadata, and governance rationales; (2) a reference architecture with lightweight emitters, append-only audit stores, and an auditor interface supporting cross-organizational traceability; and (3) a reusable, open-source Python implementation that instantiates this audit layer in LLM workflows with minimal integration effort. We conclude by discussing limitations and directions for adoption.
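The abstract's core mechanism, an append-only, tamper-evident ledger of lifecycle events, can be illustrated as a hash chain, where each entry commits to the hash of its predecessor so that any retroactive edit is detectable. The sketch below is illustrative only and is not the paper's released implementation; the class and field names (`AuditTrail`, `event_type`, `actor`, and so on) are assumptions.

```python
import hashlib
import json
import time

class AuditTrail:
    """Minimal hash-chained audit log sketch (names are illustrative)."""

    GENESIS = "0" * 64  # sentinel prev_hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event_type, metadata, actor):
        """Record a lifecycle event (e.g. a training run, deployment, or approval)."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "event_type": event_type,   # e.g. "deployment", "waiver", "attestation"
            "metadata": metadata,       # provenance: model id, data version, eval run, ...
            "actor": actor,             # who authorized the event
            "timestamp": time.time(),
            "prev_hash": prev_hash,     # link to the previous entry's hash
        }
        # Canonical JSON serialization keeps the digest deterministic.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; return True iff the chain is intact."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Tampering with any stored entry, or reordering entries, changes a recomputed digest and causes `verify()` to fail, which is the tamper-evidence property the abstract describes; a production system would additionally anchor the chain head externally so the whole log cannot be silently rewritten.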

Topics & Keywords

Authors (3)

Victor Ojewale

Harini Suresh

Suresh Venkatasubramanian

Citation Format

Ojewale, V., Suresh, H., & Venkatasubramanian, S. (2026). Audit Trails for Accountability in Large Language Models. https://arxiv.org/abs/2601.20727

Journal Information
Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓