Semantic Scholar · Open Access · 2021 · 33 citations

OpenNEEDS: A Dataset of Gaze, Head, Hand, and Scene Signals During Exploration in Open-Ended VR Environments

Kara J. Emery, Marina Zannoli, James Warren, Lei Xiao, S. Talathi

Abstract

We present OpenNEEDS, the first large-scale, high frame rate, comprehensive, and open-source dataset of Non-Eye (head, hand, and scene) and Eye (3D gaze vectors) data captured for 44 participants as they freely explored two virtual environments with many potential tasks (i.e., reading, drawing, shooting, object manipulation, etc.). With this dataset, we aim to enable research on the relationship between head, hand, scene, and gaze spatiotemporal statistics and its applications to gaze estimation. To demonstrate the power of OpenNEEDS, we show that gaze estimation models using individual non-eye sensors and an early fusion model combining all non-eye sensors outperform all baseline gaze estimation models considered, suggesting the possibility of considering non-eye sensors in the design of robust eye trackers. We anticipate that this dataset will support research progress in many areas and applications such as gaze estimation and prediction, sensor fusion, human-computer interaction, intent prediction, perceptuo-motor control, and machine learning.

Authors (5)

Kara J. Emery
Marina Zannoli
James Warren
Lei Xiao
S. Talathi

Citation Format

Emery, K.J., Zannoli, M., Warren, J., Xiao, L., Talathi, S. (2021). OpenNEEDS: A Dataset of Gaze, Head, Hand, and Scene Signals During Exploration in Open-Ended VR Environments. https://doi.org/10.1145/3448018.3457996

Journal Information

Year Published: 2021
Language: en
Total Citations: 33
Source Database: Semantic Scholar
DOI: 10.1145/3448018.3457996
Access: Open Access ✓