HRI-confusion: A multimodal dataset for modelling and detecting user confusion in situated human-robot interaction
Abstract
The dataset was collected from 28 participants (17 female, 9 male, and 1 non-binary) for a study aimed at modelling and detecting user social behaviours under different confusion states in task-oriented, situated human-robot interaction (HRI). The dataset consists of user facial and body video recordings synchronised with user speech across three designed experiment scenarios (Tasks 1–3). Each experiment lasted approximately one hour per participant. The videos are segmented into individual clips corresponding to specific experimental conversations under predefined conditions: general confusion and non-confusion for Tasks 1 and 3, and productive confusion, unproductive confusion, and non-confusion for Task 2. In total, the dataset contains 789 video clips (body: 392, face: 397). Each video is recorded in high-definition RGB format, capturing user facial expressions or body language along with their speech. These multimodal data provide a valuable resource for studying user cognitive and mental states in human-robot interaction and human-computer interaction. The data collected for Task 2 were used in [9]. In compliance with GDPR (General Data Protection Regulation) and DPIA (data protection impact assessment) guidelines, the dataset is freely available upon request at https://sites.google.com/view/hridatarequst/home.
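The abstract describes a clip-level organisation by task, condition, and modality (face or body). As a minimal sketch only, assuming a hypothetical task/condition/modality folder layout with .mp4 files (the actual archive structure and file names are defined by the authors on the data-request page and are not specified here), an index of the released clips could be built along these lines:

```python
from dataclasses import dataclass
from pathlib import Path

# Assumed layout (hypothetical, for illustration only):
#   <root>/task1/confusion/face/P01_face.mp4
#   <root>/task2/productive_confusion/body/P01_body.mp4
#   ...

# Confusion conditions annotated per task, as described in the abstract.
CONDITIONS = {
    "task1": ["confusion", "non_confusion"],
    "task2": ["productive_confusion", "unproductive_confusion", "non_confusion"],
    "task3": ["confusion", "non_confusion"],
}

@dataclass
class Clip:
    task: str        # "task1", "task2", or "task3"
    condition: str   # confusion label of the clip
    modality: str    # "face" or "body"
    path: Path       # RGB video file (user speech is in the audio track)

def index_clips(root: str) -> list[Clip]:
    """Walk the assumed task/condition/modality folder tree and list all clips."""
    clips: list[Clip] = []
    for task, conditions in CONDITIONS.items():
        for condition in conditions:
            for modality in ("face", "body"):
                for path in sorted(Path(root, task, condition, modality).glob("*.mp4")):
                    clips.append(Clip(task, condition, modality, path))
    return clips

if __name__ == "__main__":
    clips = index_clips("HRI-confusion")
    print(f"Indexed {len(clips)} clips")  # 789 expected for the full release
```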
Topics & Keywords
Authors (3)
Na Li
Jane Courtney
Robert Ross
Quick Access
- Publication Year
- 2025
- Source Database
- DOAJ
- DOI
- 10.1016/j.dib.2025.112047
- Access
- Open Access ✓