arXiv Open Access 2025

SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models

Anil Ramakrishna, Yixin Wan, Xiaomeng Jin, Kai-Wei Chang, Zhiqi Bu, Bhanukiran Vinzamuri, Volkan Cevher, Mingyi Hong, Rahul Gupta

Abstract

We introduce SemEval-2025 Task 4: unlearning sensitive content from Large Language Models (LLMs). The task features three subtasks for LLM unlearning spanning different use cases: (1) unlearning long-form synthetic creative documents spanning different genres; (2) unlearning short-form synthetic biographies containing personally identifiable information (PII), including fake names, phone numbers, SSNs, and email and home addresses; and (3) unlearning real documents sampled from the target model's training dataset. We received over 100 submissions from more than 30 institutions, and in this paper we summarize the key techniques and lessons learned.
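All three subtasks reduce to the same mechanism: make the model forget specific targets while retaining general capability. As a minimal, self-contained illustration (a common baseline in the unlearning literature, not necessarily what any participant used), the sketch below applies gradient ascent on a forget example's negative log-likelihood to a toy softmax "next-token" model; the model and tokens are hypothetical stand-ins, not the task's actual LLMs.

```python
import math

# Toy "language model": a softmax over logits for the next token.
# (Hypothetical stand-in for an LLM head.)
logits = {t: 0.0 for t in "abcd"}

def prob(token):
    """Softmax probability of `token` under the current logits."""
    z = sum(math.exp(v) for v in logits.values())
    return math.exp(logits[token]) / z

forget_token = "a"  # the "sensitive" continuation we want unlearned
lr = 0.5

before = prob(forget_token)  # uniform start over 4 tokens: 0.25
for _ in range(20):
    # Gradient of the forget example's NLL w.r.t. logit t is
    # p_t - 1[t == forget_token]; *ascending* that NLL drives the
    # forget token's probability down. Snapshot the gradients first
    # so all logits update from the same distribution.
    grads = {t: prob(t) - (1.0 if t == forget_token else 0.0)
             for t in logits}
    for t in logits:
        logits[t] += lr * grads[t]
after = prob(forget_token)

print(f"p(forget) before={before:.3f} after={after:.3f}")
```

Real submissions typically used richer objectives, for example pairing this ascent with a retain-set loss so that utility on everything else is preserved, but the core forget mechanism is the same.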


Authors (9)

Anil Ramakrishna

Yixin Wan

Xiaomeng Jin

Kai-Wei Chang

Zhiqi Bu

Bhanukiran Vinzamuri

Volkan Cevher

Mingyi Hong

Rahul Gupta

Citation

Ramakrishna, A., Wan, Y., Jin, X., Chang, K.-W., Bu, Z., Vinzamuri, B., et al. (2025). SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models. arXiv preprint arXiv:2504.02883. https://arxiv.org/abs/2504.02883

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓