
Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, and 4 others

Abstract

A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset.
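To make the abstract's framing concrete, the sketch below shows one way a "visually grounded sequence-to-sequence translation" agent can be wired: an LSTM encodes the instruction, and a decoder emits one navigation action per step conditioned on the current visual observation. This is a minimal illustrative sketch, not the paper's model; all module names, feature sizes, and the action space are assumptions introduced here for illustration.

```python
# Minimal sketch of a visually grounded seq2seq agent (instruction -> actions).
# NOT the authors' architecture; all dimensions and names are illustrative.
import torch
import torch.nn as nn


class GroundedSeq2SeqAgent(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128,
                 visual_dim=2048, num_actions=6):
        super().__init__()
        # Encode the natural-language instruction token by token.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decode one action per step, conditioned on the current visual
        # observation (e.g. a pooled CNN feature of what the agent sees).
        self.decoder = nn.LSTMCell(visual_dim + num_actions, hidden_dim)
        self.policy = nn.Linear(hidden_dim, num_actions)

    def forward(self, instruction, visual_features):
        # instruction: (batch, seq_len) token ids
        # visual_features: (batch, steps, visual_dim), one frame per time step
        _, (h, c) = self.encoder(self.embed(instruction))
        h, c = h.squeeze(0), c.squeeze(0)
        prev_action = torch.zeros(instruction.size(0), self.policy.out_features)
        logits = []
        for t in range(visual_features.size(1)):
            step_in = torch.cat([visual_features[:, t], prev_action], dim=-1)
            h, c = self.decoder(step_in, (h, c))
            step_logits = self.policy(h)
            logits.append(step_logits)
            # Feed the greedily chosen action back in (one-hot) next step.
            prev_action = torch.zeros_like(prev_action).scatter_(
                1, step_logits.argmax(dim=1, keepdim=True), 1.0)
        return torch.stack(logits, dim=1)  # (batch, steps, num_actions)


if __name__ == "__main__":
    agent = GroundedSeq2SeqAgent()
    tokens = torch.randint(0, 1000, (2, 12))  # toy tokenized instructions
    frames = torch.randn(2, 5, 2048)          # toy per-step visual features
    print(agent(tokens, frames).shape)        # torch.Size([2, 5, 6])
```

The same encode-then-grounded-decode shape also covers Visual Question Answering if the decoder emits answer tokens instead of actions, which is the parallel the abstract draws between the two tasks.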


Authors (9)

Peter Anderson
Qi Wu
Damien Teney
Jake Bruce
Mark Johnson
Niko Sünderhauf
I. Reid
Stephen Gould
Anton van den Hengel

Citation Format

Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N. et al. (2017). Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. https://doi.org/10.1109/CVPR.2018.00387

Quick Access

View at source: doi.org/10.1109/CVPR.2018.00387
Journal Information

Year Published: 2017
Language: en
Total Citations: 1685
Source Database: Semantic Scholar
DOI: 10.1109/CVPR.2018.00387
Access: Open Access