arXiv Open Access 2025

Attacking Attention of Foundation Models Disrupts Downstream Tasks

Hondamunige Prasanna Silva, Federico Becattini, Lorenzo Seidenari

Abstract

Foundation models represent the most prominent recent paradigm shift in artificial intelligence: large models, trained on broad data, that deliver high accuracy on many downstream tasks, often without fine-tuning. For this reason, models such as CLIP, DINO, or Vision Transformers (ViTs) are becoming the bedrock of many industrial AI-powered applications. However, the reliance on pre-trained foundation models also introduces significant security concerns, as these models are vulnerable to adversarial attacks. Such attacks involve deliberately crafted inputs designed to deceive AI systems, jeopardizing their reliability. This paper studies the vulnerabilities of vision foundation models, focusing specifically on CLIP and ViTs, and explores the transferability of adversarial attacks to downstream tasks. We introduce a novel attack targeting the structure of transformer-based architectures in a task-agnostic fashion. We demonstrate the effectiveness of our attack on several downstream tasks: classification, captioning, image/text retrieval, segmentation, and depth estimation. Code available at: https://github.com/HondamunigePrasannaSilva/attack-attention
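The abstract describes a task-agnostic attack on the attention of transformer-based models. The paper defines its own objective; purely as an illustration of the general idea, the sketch below runs a PGD-style perturbation that pushes a toy self-attention module's attention map away from its clean value under an L-infinity budget. The `ToyAttention` module, the MSE objective, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ToyAttention(nn.Module):
    """Toy single-head self-attention standing in for one ViT block
    (hypothetical stand-in; the paper attacks real foundation models)."""
    def __init__(self, dim=16):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.scale = dim ** -0.5

    def forward(self, x):                       # x: (tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.T * self.scale, dim=-1)
        return attn, attn @ v

def attention_attack(model, x, eps=0.03, alpha=0.005, steps=10):
    """PGD-style perturbation that maximizes the distance between
    clean and perturbed attention maps, with no downstream-task loss."""
    with torch.no_grad():
        clean_attn, _ = model(x)
    # Random start inside the budget: at delta = 0 the loss (and its
    # gradient) would be exactly zero.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_()
    for _ in range(steps):
        attn, _ = model(x + delta)
        loss = (attn - clean_attn).pow(2).mean()   # disrupt attention
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()     # ascend the loss
            delta.clamp_(-eps, eps)                # stay in L_inf ball
            delta.grad.zero_()
    return (x + delta).detach()

torch.manual_seed(0)
model = ToyAttention()
x = torch.randn(8, 16)          # 8 tokens, 16-dim embeddings
x_adv = attention_attack(model, x)
```

Because the objective depends only on the attention maps, the same perturbation procedure applies regardless of which downstream head consumes the model's features.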


Authors (3)

Hondamunige Prasanna Silva

Federico Becattini

Lorenzo Seidenari

Citation Format

Silva, H.P., Becattini, F., Seidenari, L. (2025). Attacking Attention of Foundation Models Disrupts Downstream Tasks. https://arxiv.org/abs/2506.05394

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓