
In Defense of Pre-Trained ImageNet Architectures for Real-Time Semantic Segmentation of Road-Driving Images

Marin Orsic Ivan Kreso Petra Bevandic Sinisa Segvic

Abstract

Recent success of semantic segmentation approaches on demanding road driving datasets has spurred interest in many related application fields. Many of these applications involve real-time prediction on mobile platforms such as cars, drones and various kinds of robots. The real-time setup is challenging due to the extraordinary computational complexity involved. Many previous works address the challenge with custom lightweight architectures which decrease computational complexity by reducing depth, width and layer capacity with respect to general-purpose architectures. We propose an alternative approach which achieves significantly better performance across a wide range of computing budgets. First, we rely on a lightweight general-purpose architecture as the main recognition engine. Then, we leverage lightweight upsampling with lateral connections as the most cost-effective solution to restore the prediction resolution. Finally, we propose to enlarge the receptive field by fusing shared features at multiple resolutions in a novel fashion. Experiments on several road driving datasets show a substantial advantage of the proposed approach, either with ImageNet pre-trained parameters or when we learn from scratch. Our Cityscapes test submission entitled SwiftNetRN-18 delivers 75.5% MIoU and achieves 39.9 Hz on 1024×2048 images on a GTX 1080Ti.
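The upsampling path with lateral connections described in the abstract can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the general idea (upsample deeper low-resolution features and add a 1×1-projected skip connection from the backbone); the function names and the projection matrix are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def upsample2x_nearest(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral_fuse(deep, skip, proj):
    # Upsample the deeper, lower-resolution features and add a
    # 1x1-projected lateral (skip) connection from the backbone.
    # A 1x1 convolution over channels is just a matrix product.
    up = upsample2x_nearest(deep)                    # (C_out, 2H, 2W)
    lateral = np.einsum('oc,chw->ohw', proj, skip)   # (C_out, 2H, 2W)
    return up + lateral

# Toy example: 64-channel features at 1/32 resolution fused with
# 128-channel backbone features at 1/16 resolution (sizes are made up).
rng = np.random.default_rng(0)
deep = rng.standard_normal((64, 8, 16))
skip = rng.standard_normal((128, 16, 32))
proj = rng.standard_normal((64, 128)) * 0.01  # hypothetical 1x1 projection
fused = lateral_fuse(deep, skip, proj)
print(fused.shape)  # (64, 16, 32)
```

Repeating this fusion stage by stage restores the prediction resolution cheaply, which is why the paper can keep a pre-trained general-purpose backbone while still running in real time.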

Topics & Keywords

Authors (4)

Marin Orsic

Ivan Kreso

Petra Bevandic

Sinisa Segvic

Citation Format

Orsic, M., Kreso, I., Bevandic, P., & Segvic, S. (2019). In Defense of Pre-Trained ImageNet Architectures for Real-Time Semantic Segmentation of Road-Driving Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2019.01289

Quick Access

View at Source: doi.org/10.1109/CVPR.2019.01289
Journal Information
Publication Year
2019
Language
en
Total Citations
387×
Source Database
Semantic Scholar
DOI
10.1109/CVPR.2019.01289
Access
Open Access ✓