arXiv Open Access 2026

Selective Fine-Tuning for Targeted and Robust Concept Unlearning

Mansi Avinash Kori Francesca Toni Soteris Demetriou

Abstract

Text-guided diffusion models are used by millions of users but can easily be exploited to produce harmful content. Concept unlearning methods aim to reduce a model's likelihood of generating such content. Traditionally, this has been tackled at the level of individual concepts, with only a handful of recent works considering more realistic concept combinations. Moreover, state-of-the-art methods depend on full fine-tuning, which is computationally expensive. Concept localisation methods can enable selective fine-tuning, but existing techniques are static, resulting in suboptimal utility. To tackle these challenges, we propose TRUST (Targeted Robust Selective fine-Tuning), a novel approach that dynamically estimates target concept neurons and unlearns them through selective fine-tuning, empowered by Hessian-based regularization. We show experimentally, against a number of state-of-the-art baselines, that TRUST is robust against adversarial prompts, preserves generation quality to a significant degree, and is significantly faster than the state of the art. Our method unlearns not only individual concepts but also combinations of concepts and conditional concepts, without any specific regularization.
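The abstract's core recipe, restricting fine-tuning updates to a set of estimated concept neurons while a Hessian-weighted penalty anchors the remaining weights, can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual method: the "concept neurons" mask is hard-coded rather than dynamically estimated, the layer is a toy weight matrix rather than a diffusion model, and a fixed diagonal tensor stands in for the Hessian-based regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix standing in for one layer of a diffusion model.
W = rng.standard_normal((8, 8))
anchor = W.copy()  # original weights, kept as the anchor for the penalty

# Assumed: a binary mask marking "concept neurons" (rows) that some
# localisation step has flagged for unlearning. Here it is hard-coded.
mask = np.zeros_like(W)
mask[:2] = 1.0

# Illustrative diagonal Hessian approximation: per-weight importance
# scores that weight how strongly each weight is pulled to its anchor.
h_diag = rng.uniform(0.5, 1.5, size=W.shape)

X = rng.standard_normal((4, 8))  # stand-in activations for the concept
lr = 0.1

for _ in range(100):
    out = X @ W.T
    # Gradient of the unlearning objective mean(out**2), which pushes
    # the layer's response to the concept inputs toward zero.
    grad_unlearn = (2.0 / out.size) * out.T @ X
    # Gradient of the Hessian-weighted proximity penalty, which keeps
    # weights near their original values to preserve utility.
    grad_reg = 2.0 * h_diag * (W - anchor)
    grad = grad_unlearn + 0.5 * grad_reg
    # Selective fine-tuning: the mask zeroes updates outside the
    # flagged concept neurons, so only those rows ever change.
    W -= lr * mask * grad

# Unmasked rows are bit-for-bit untouched; masked rows have moved.
assert np.array_equal(W[2:], anchor[2:])
assert not np.array_equal(W[:2], anchor[:2])
```

The masking makes the preservation guarantee trivial to verify: rows outside the mask receive a zero update at every step, so utility loss can only come from the flagged neurons.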


Authors (4)

Mansi, Avinash Kori, Francesca Toni, Soteris Demetriou

Citation Format

Mansi, Kori, A., Toni, F., Demetriou, S. (2026). Selective Fine-Tuning for Targeted and Robust Concept Unlearning. https://arxiv.org/abs/2602.07919

Journal Information
Year Published: 2026
Language: en
Database Source: arXiv
Access: Open Access ✓