
Referring Transformer: A One-step Approach to Multi-task Visual Grounding

Muchen Li L. Sigal

Abstract

As an important step towards visual reasoning, visual grounding (e.g., phrase localization, referring expression comprehension/segmentation) has been widely explored. Previous approaches to referring expression comprehension (REC) or segmentation (RES) either suffer from limited performance, due to a two-stage setup, or require the design of complex task-specific one-stage architectures. In this paper, we propose a simple one-stage multi-task framework for visual grounding tasks. Specifically, we leverage a transformer architecture, where two modalities are fused in a visual-lingual encoder. In the decoder, the model learns to generate contextualized lingual queries which are then decoded and used to directly regress the bounding box and produce a segmentation mask for the corresponding referred regions. With this simple but highly contextualized model, we outperform state-of-the-art methods by a large margin on both REC and RES tasks. We also show that a simple pre-training schedule (on an external dataset) further improves the performance. Extensive experiments and ablations illustrate that our model benefits greatly from contextualized information and multi-task training.
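The abstract describes a one-stage design: visual and lingual tokens are fused in a shared encoder, and decoded query embeddings directly predict a box and a mask. The sketch below is a minimal illustration of that idea and not the authors' implementation; the class name, layer sizes, query count, and the coarse linear mask head are all assumptions made for brevity (the actual model uses proper backbones and mask decoding).

```python
import torch
import torch.nn as nn

class ReferringTransformerSketch(nn.Module):
    """Minimal sketch: fuse visual and lingual tokens in one encoder,
    then decode learned queries into a box and a coarse mask."""
    def __init__(self, d_model=256, nhead=8, num_layers=6,
                 num_queries=1, mask_size=16):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.query_embed = nn.Embedding(num_queries, d_model)
        self.box_head = nn.Linear(d_model, 4)                        # (cx, cy, w, h), normalized
        self.mask_head = nn.Linear(d_model, mask_size * mask_size)   # coarse mask logits

    def forward(self, visual_tokens, lingual_tokens):
        # visual_tokens: (B, Nv, d), lingual_tokens: (B, Nl, d)
        fused = self.encoder(torch.cat([visual_tokens, lingual_tokens], dim=1))
        queries = self.query_embed.weight.unsqueeze(0).expand(fused.size(0), -1, -1)
        hs = self.decoder(queries, fused)       # contextualized queries, (B, num_queries, d)
        box = self.box_head(hs).sigmoid()       # direct bounding-box regression (REC)
        mask = self.mask_head(hs)               # segmentation logits (RES)
        return box, mask
```

Because both heads read the same decoded queries, the REC and RES tasks share one forward pass, which is the multi-task aspect highlighted in the abstract.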


Authors (2)

Muchen Li
L. Sigal

Citation Format

Li, M., & Sigal, L. (2021). Referring Transformer: A One-step Approach to Multi-task Visual Grounding. https://www.semanticscholar.org/paper/9dcaf5ab101ba551ac334f3ede177a444e154643

Quick Access

PDF not directly available; check the original source.
Journal Information
Year Published: 2021
Language: en
Total Citations: 251
Source Database: Semantic Scholar
Access: Open Access ✓