
Learning the Wrong Lessons: Inserting Trojans During Knowledge Distillation

Leonard Tang Tom Shlomi Alexander Cai

Abstract

In recent years, knowledge distillation has become a cornerstone of efficiently deployed machine learning, with labs and industry using it to train models that are inexpensive and resource-optimized. Trojan attacks have contemporaneously gained significant prominence, revealing fundamental vulnerabilities in deep learning models. Given the widespread use of knowledge distillation, in this work we seek to exploit the unlabelled-data knowledge distillation process to embed Trojans in a student model without introducing conspicuous behavior in the teacher. We ultimately devise a Trojan attack that effectively reduces student accuracy, does not alter teacher performance, and is efficiently constructible in practice.
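For orientation, the sketch below illustrates the general setting the abstract describes: a student distilled from a clean, unmodified teacher over an unlabelled transfer set, where an attacker who controls only that transfer data inserts trigger-stamped samples. The trigger pattern, the PGD-style craft_poison step, and all hyperparameters are illustrative assumptions for this sketch, not the authors' actual construction.

import torch
import torch.nn.functional as F

def stamp_trigger(x, patch=3):
    # Hypothetical trigger (an assumption): a small max-intensity corner patch.
    x = x.clone()
    x[..., -patch:, -patch:] = 1.0
    return x

def craft_poison(teacher, x, target_class, steps=20, eps=8 / 255, alpha=2 / 255):
    # Perturb triggered images so the *unmodified* teacher assigns them the
    # attacker's target class; its soft labels then carry the trigger-to-target
    # association into the student. This step is an assumption, not the paper's
    # exact method.
    x_trig = stamp_trigger(x)
    x_adv = x_trig.clone()
    y = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(teacher(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()  # step toward the target class
            x_adv = torch.max(torch.min(x_adv, x_trig + eps), x_trig - eps)  # stay near the triggered image
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def distill_epoch(teacher, student, unlabeled_loader, optimizer,
                  target_class=0, poison_frac=0.1, T=4.0):
    # Standard soft-label distillation over an unlabelled transfer set, with a
    # fraction of each batch replaced by poisoned samples. The teacher itself
    # is never modified.
    teacher.eval()
    student.train()
    for x in unlabeled_loader:  # batches of unlabelled images scaled to [0, 1]
        k = int(poison_frac * x.size(0))
        if k > 0:
            x = torch.cat([craft_poison(teacher, x[:k], target_class), x[k:]])
        with torch.no_grad():
            soft = F.softmax(teacher(x) / T, dim=-1)
        loss = F.kl_div(F.log_softmax(student(x) / T, dim=-1),
                        soft, reduction="batchmean") * T * T
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Under these assumptions, a student distilled this way behaves normally on clean inputs but maps trigger-stamped inputs to the target class, while the teacher's own behavior is left untouched.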


Authors (3)

Leonard Tang
Tom Shlomi
Alexander Cai

Citation Format

Tang, L., Shlomi, T., & Cai, A. (2023). Learning the Wrong Lessons: Inserting Trojans During Knowledge Distillation. arXiv. https://arxiv.org/abs/2303.05593

Journal Information
Publication Year: 2023
Language: English
Source Database: arXiv
Access: Open Access