arXiv Open Access 2022

Combined CNN Transformer Encoder for Enhanced Fine-grained Human Action Recognition

Mei Chee Leong, Haosong Zhang, Hui Li Tan, Liyuan Li, Joo Hwee Lim

Abstract

Fine-grained action recognition is a challenging task in computer vision. Because fine-grained datasets exhibit small inter-class variations in both spatial and temporal space, fine-grained action recognition models require good temporal reasoning and discrimination of attribute action semantics. Leveraging the CNN's ability to capture high-level spatial-temporal feature representations and the Transformer's efficiency in modeling latent semantics and global dependencies, we investigate two frameworks that combine a CNN vision backbone with a Transformer encoder to enhance fine-grained action recognition: 1) a vision-based encoder to learn latent temporal semantics, and 2) a multi-modal video-text cross encoder that exploits additional text input to learn cross associations between visual and text semantics. Our experimental results show that both Transformer encoder frameworks effectively learn latent temporal semantics and cross-modality associations, improving recognition performance over the CNN vision model. With both proposed architectures, we achieve new state-of-the-art performance on the FineGym benchmark dataset.
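The first framework described in the abstract, CNN backbone features passed through a Transformer encoder for temporal reasoning, can be illustrated with a minimal sketch. All sizes, weight matrices, and the single-head attention here are illustrative toy choices, not the authors' actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the time axis,
    the core operation of a Transformer encoder layer."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (T, T) temporal weights
    return attn @ V

rng = np.random.default_rng(0)
T, D, C = 8, 16, 5                       # toy sizes: 8 frames, 16-dim, 5 classes
cnn_feats = rng.standard_normal((T, D))  # stand-in for per-frame CNN features
Wq, Wk, Wv = (0.1 * rng.standard_normal((D, D)) for _ in range(3))

ctx = self_attention(cnn_feats, Wq, Wk, Wv)  # temporally contextualised frames
clip_repr = ctx.mean(axis=0)                 # pool over time -> clip embedding
logits = clip_repr @ (0.1 * rng.standard_normal((D, C)))  # toy classifier head
print(logits.shape)  # (5,)
```

Each frame's feature vector attends to every other frame, which is how the encoder captures the latent temporal semantics and global dependencies that a CNN alone handles less efficiently.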



Citation Format

Leong, M.C., Zhang, H., Tan, H.L., Li, L., Lim, J.H. (2022). Combined CNN Transformer Encoder for Enhanced Fine-grained Human Action Recognition. https://arxiv.org/abs/2208.01897

Journal Information
Publication Year: 2022
Language: English
Source Database: arXiv
Access: Open Access ✓