Vehicle Localization in IoV Environments: A Vision-LSTM Approach with Synthetic Data Simulation
Abstract
With the rapid development of the Internet of Vehicles (IoV) and autonomous driving technologies, robust and accurate visual pose perception has become critical for enabling smart connected vehicles. Traditional deep learning-based localization methods face persistent challenges in real-world vehicular environments, including occlusion, lighting variations, and the prohibitive cost of collecting diverse real-world datasets. To address these limitations, this study introduces a novel approach by combining Vision-LSTM (ViL) with synthetic image data generated from high-fidelity 3D models. Unlike traditional methods reliant on costly and labor-intensive real-world data, synthetic datasets enable controlled, scalable, and efficient training under diverse environmental conditions. Vision-LSTM enhances feature extraction and classification performance through its matrix-based mLSTM modules and advanced feature aggregation strategy, effectively capturing both global and local information. Experimental evaluations in independent target scenes with distinct features and structured indoor environments demonstrate significant performance gains, achieving matching accuracies of 91.25% and 95.87%, respectively, and outperforming state-of-the-art models. These findings underscore the innovative advantages of integrating Vision-LSTM with synthetic data, highlighting its potential to overcome real-world limitations, reduce costs, and enhance accuracy and reliability for connected vehicle applications such as autonomous navigation and environmental perception.
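The abstract's description of Vision-LSTM — image patches processed as a sequence by matrix-memory (mLSTM) blocks with alternating traversal directions, followed by aggregation of boundary tokens into a feature vector — can be illustrated with a toy NumPy sketch. Everything here is an illustrative assumption: the function names, dimensions, and the greatly simplified memory update are not the authors' implementation, only a minimal sketch of the general idea.

```python
import numpy as np

def patchify(image, patch=4):
    """Split a square image (H, W, C) into a sequence of flattened patches."""
    H, W, C = image.shape
    seq = []
    for r in range(H // patch):
        for c in range(W // patch):
            seq.append(image[r*patch:(r+1)*patch, c*patch:(c+1)*patch, :].ravel())
    return np.stack(seq)  # (num_patches, patch*patch*C)

def mlstm_block(tokens, Wq, Wk, Wv, reverse=False):
    """Toy stand-in for a matrix-memory LSTM pass over the patch sequence.

    A matrix memory M accumulates value-key outer products as the sequence
    is traversed; each query token then reads from the accumulated memory."""
    if reverse:
        tokens = tokens[::-1]   # alternating traversal direction
    d = Wq.shape[1]
    M = np.zeros((d, d))        # matrix memory
    out = []
    for x in tokens:
        q, k, v = Wq.T @ x, Wk.T @ x, Wv.T @ x
        M += np.outer(v, k)     # write: value-key outer product
        out.append(M @ q / d)   # read: query against the memory
    out = np.stack(out)
    return out[::-1] if reverse else out

def vil_features(image, dims=16, blocks=2, seed=0):
    """Hypothetical end-to-end sketch: embed patches, apply alternating
    mLSTM blocks, then aggregate first and last patch tokens."""
    rng = np.random.default_rng(seed)
    tokens = patchify(image)
    W_embed = rng.normal(size=(tokens.shape[1], dims)) * 0.1
    h = tokens @ W_embed
    for b in range(blocks):
        Wq, Wk, Wv = (rng.normal(size=(dims, dims)) * 0.1 for _ in range(3))
        h = h + mlstm_block(h, Wq, Wk, Wv, reverse=(b % 2 == 1))
    # feature aggregation: concatenate the first and last patch tokens,
    # combining information from both ends of the bidirectional scan
    return np.concatenate([h[0], h[-1]])
```

The aggregated vector would feed a classification head for scene matching; in the simplification above the weights are random, whereas a trained model learns them from the synthetic image data.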
Authors (3)
Yi Liu
Jiade Jiang
Zijian Tian
Quick Access
- Publication Year: 2025
- Source Database: DOAJ
- DOI: 10.3390/vehicles7010012
- Access: Open Access ✓