arXiv Open Access 2025

A tutorial note on collecting simulated data for vision-language-action models

Heran Wu Zirun Zhou Jingfeng Zhang

Abstract

Traditional robotic systems typically decompose intelligence into independent modules for computer vision, natural language processing, and motion control. Vision-Language-Action (VLA) models fundamentally transform this approach by employing a single neural network that simultaneously processes visual observations, understands human instructions, and directly outputs robot actions -- all within a unified framework. However, these systems depend heavily on high-quality training datasets that capture the complex relationships between visual observations, language instructions, and robotic actions. This tutorial reviews three representative systems: the PyBullet simulation framework for flexible, customized data generation; the LIBERO benchmark suite for standardized task definition and evaluation; and the RT-X dataset collection for large-scale multi-robot data acquisition. We demonstrate dataset generation in PyBullet simulation and customized data collection within LIBERO, and provide an overview of the characteristics and role of the RT-X dataset in large-scale multi-robot data acquisition.
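The abstract describes training data that pairs visual observations, language instructions, and robot actions. As a rough illustration of what one such episode record might contain -- a sketch only, with all class and field names hypothetical rather than taken from PyBullet, LIBERO, or RT-X -- consider:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for one VLA training step: a camera observation,
# the natural-language instruction, and the robot action taken.
@dataclass
class Step:
    image: bytes          # encoded camera frame (placeholder bytes here)
    instruction: str      # natural-language command
    action: List[float]   # e.g., end-effector deltas plus gripper state

@dataclass
class Episode:
    steps: List[Step] = field(default_factory=list)

    def add(self, image: bytes, instruction: str, action: List[float]) -> None:
        self.steps.append(Step(image, instruction, action))

# Build a toy two-step episode for a single instruction.
ep = Episode()
ep.add(b"frame0", "pick up the red block", [0.01, 0.0, -0.02, 0.0, 0.0, 0.0, 1.0])
ep.add(b"frame1", "pick up the red block", [0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0])
print(len(ep.steps))  # 2
```

Real pipelines (e.g., the RLDS format used by RT-X) store richer per-step metadata, but the core triple of observation, instruction, and action is the common denominator the tutorial's three systems all produce.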


Citation

Wu, H., Zhou, Z., & Zhang, J. (2025). A tutorial note on collecting simulated data for vision-language-action models. arXiv. https://arxiv.org/abs/2508.06547

Journal Information
Year: 2025
Language: English
Source: arXiv
Access: Open Access ✓