Semantic Scholar · Open Access · 2025 · 41 citations

Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

Xiao Guo Xiufeng Song Yue Zhang Xiaohong Liu Xiaoming Liu

Abstract

Deepfake detection is a long-established research topic vital for mitigating the spread of malicious misinformation. Unlike prior methods that provide either binary classification results or textual explanations separately, we introduce a novel method capable of generating both simultaneously. Our method harnesses the multi-modal learning capability of the pre-trained CLIP and the unprecedented interpretability of large language models (LLMs) to enhance both the generalization and explainability of deepfake detection. Specifically, we introduce a multi-modal face forgery detector (M2F2-Det) that employs tailored face forgery prompt learning, incorporating the pre-trained CLIP to improve generalization to unseen forgeries. Also, M2F2-Det incorporates an LLM to provide detailed textual explanations of its detection decisions, enhancing interpretability by bridging the gap between natural language and subtle cues of facial forgeries. Empirically, we evaluate M2F2-Det on both detection and explanation generation tasks, where it achieves state-of-the-art performance, demonstrating its effectiveness in identifying and explaining diverse forgeries. Source code is available at link.


Authors (5)

Xiao Guo
Xiufeng Song
Yue Zhang
Xiaohong Liu
Xiaoming Liu

Citation Format

Guo, X., Song, X., Zhang, Y., Liu, X., & Liu, X. (2025). Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR52734.2025.00019

Publication Information

Year Published: 2025
Language: en
Total Citations: 41
Source Database: Semantic Scholar
DOI: 10.1109/CVPR52734.2025.00019
Access: Open Access ✓