arXiv Open Access 2025

Text-driven Online Action Detection

Manuel Benavent-Lledo David Mulero-Pérez David Ortiz-Perez Jose Garcia-Rodriguez

Abstract

Detecting actions as they occur is essential for applications like video surveillance, autonomous driving, and human-robot interaction. Known as online action detection, this task requires classifying actions in streaming videos, handling background noise, and coping with incomplete actions. Transformer architectures are the current state-of-the-art, yet the potential of recent advancements in computer vision, particularly vision-language models (VLMs), remains largely untapped for this problem, partly due to high computational costs. In this paper, we introduce TOAD: a Text-driven Online Action Detection architecture that supports zero-shot and few-shot learning. TOAD leverages CLIP (Contrastive Language-Image Pretraining) textual embeddings, enabling efficient use of VLMs without significant computational overhead. Our model achieves 82.46% mAP on the THUMOS14 dataset, outperforming existing methods, and sets new baselines for zero-shot and few-shot performance on the THUMOS14 and TVSeries datasets.
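The zero-shot idea the abstract describes — scoring a visual embedding against textual class embeddings from a vision-language model — can be sketched as follows. This is a minimal illustration with random toy vectors standing in for CLIP outputs; the class names, dimensions, and softmax scoring are placeholder assumptions, not the TOAD architecture itself:

```python
import numpy as np

def zero_shot_scores(frame_emb, text_embs):
    """Cosine similarity between one frame embedding and per-class
    text embeddings, softmax-normalized into class probabilities."""
    f = frame_emb / np.linalg.norm(frame_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = t @ f                    # one cosine similarity per class
    e = np.exp(sims - sims.max())   # numerically stable softmax
    return e / e.sum()

# Toy 4-d embeddings standing in for CLIP's higher-dimensional vectors.
classes = ["Background", "HighJump", "PoleVault"]
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(len(classes), 4))
frame_emb = text_embs[1] + 0.1 * rng.normal(size=4)  # near "HighJump"
probs = zero_shot_scores(frame_emb, text_embs)
print(classes[int(np.argmax(probs))])
```

In a real CLIP-based pipeline the text embeddings would come from encoding prompts such as "a video of high jump" with CLIP's text encoder, which is what allows new action classes to be added without retraining the visual backbone.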



Citation

Benavent-Lledo, M., Mulero-Pérez, D., Ortiz-Perez, D., & Garcia-Rodriguez, J. (2025). Text-driven Online Action Detection. arXiv preprint arXiv:2501.13518. https://arxiv.org/abs/2501.13518

Publication Information
Year: 2025
Language: English
Source Database: arXiv
Access: Open Access ✓