arXiv Open Access 2024

Diving Deep into the Motion Representation of Video-Text Models

Chinmaya Devaraj Cornelia Fermuller Yiannis Aloimonos

Abstract

Videos are more informative than images because they capture the dynamics of a scene, and representing that motion lets us model dynamic activities. In this work, we introduce GPT-4-generated motion descriptions that capture fine-grained motion in activities, and we apply them to three action datasets. We evaluated several video-text models on the task of retrieving these motion descriptions and found that they fall far behind human expert performance on two of the action datasets, raising the question of whether video-text models understand motion in videos. To address this, we introduce a method for improving motion understanding in video-text models by training with motion descriptions. The method proves effective on two action datasets for the motion-description retrieval task. The results highlight the need for quality captions with fine-grained motion information in existing datasets and demonstrate the effectiveness of the proposed pipeline for understanding fine-grained motion in video-text retrieval.


Citation

Devaraj, C., Fermuller, C., & Aloimonos, Y. (2024). Diving Deep into the Motion Representation of Video-Text Models. arXiv. https://arxiv.org/abs/2406.05075

Publication Information

Year: 2024
Language: English
Source: arXiv
Access: Open Access