DOAJ Open Access 2026

Integrating Adversarial Scenarios into LLM Security Labs: An Experience Report on a Hands-On Approach

Dominic A Wilson

Abstract

<p>This paper presents an exploratory case study, framed as a pedagogical experience report, on integrating adversarial Large Language Model (LLM) scenarios into a graduate cybersecurity curriculum. Beyond prompt injection, sophisticated techniques such as jailbreaking and model inversion pose emerging threats that traditional computer security curricula often fail to address. We present the design and implementation of a structured, hands-on module addressing this gap, built on a custom Retrieval-Augmented Generation (RAG) platform with local open-source LLMs. A cohort of 16 graduate students participated in this two-week pilot module, engaging in "red team" activities to actively exploit model alignment and privacy vulnerabilities. The module achieved an average post-module quiz score of 88%, and 90% of students reported increased confidence, demonstrating measurable learning outcomes. This report illustrates instructional strategies for translating complex LLM exploits into accessible educational exercises, providing an example educators may adapt to prepare future professionals for the challenges of securing real-world AI systems.</p>

Author (1)

Dominic A Wilson

Citation Format

Wilson, D.A. (2026). Integrating Adversarial Scenarios into LLM Security Labs: An Experience Report on a Hands-On Approach. https://doi.org/10.62915/2472-2707.1268

Journal Information
Publication Year
2026
Source Database
DOAJ
DOI
10.62915/2472-2707.1268
Access
Open Access ✓