CNN, RNN, or ViT? An Evaluation of Different Deep Learning Architectures for Spatio-Temporal Representation of Sentinel Time Series
Abstract
Rich information in multitemporal satellite images can facilitate pixel-level land cover classification. However, it remains unclear which deep learning architecture is most suitable for high-dimensional spatio-temporal representation of remote sensing time series. In this study, we theoretically analyzed the mechanisms of different deep learning architectures, including the commonly used convolutional neural network (CNN), the high-dimensional CNN [three-dimensional (3-D) CNN], the recurrent neural network, and the recent vision transformer (ViT), with regard to learning and representing temporal information in spatio-temporal data. The performance of the different models was comprehensively evaluated on large-scale Sentinel-1 and Sentinel-2 time-series images covering the whole of Slovenia. First, the 3-D CNN, long short-term memory (LSTM), and ViT, which all have specific structures that preserve temporal information, can effectively extract the spatio-temporal information, with the 3-D CNN and ViT showing the best performance. Second, the performance of the 2-D CNN, in which the temporal information is collapsed, is lower than that of the 3-D CNN, LSTM, and ViT but outperforms the conventional methods. Third, using both optical and synthetic aperture radar (SAR) images performs almost the same as using only optical images, indicating that the information that can be extracted from optical images is sufficient for land-cover classification. However, when optical images are unavailable, SAR images can provide satisfactory classification results. Finally, the modern deep learning methods can effectively overcome disadvantageous imaging conditions in which parts of an image, or images from some periods, are missing. The testing data are available at gpcv.whu.edu.cn/data.
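The architectural distinction the abstract draws — the 2-D CNN collapsing the temporal axis, while the 3-D CNN, LSTM, and ViT keep it explicit — comes down to how the input tensor is arranged before the first layer. The sketch below illustrates this with NumPy shape manipulations only; the toy dimensions (12 time steps, 4 bands, 64×64 patch) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical toy dimensions (not from the paper): 12 acquisition
# dates, 4 spectral bands, a 64x64 pixel patch.
T, C, H, W = 12, 4, 64, 64
series = np.random.rand(T, C, H, W).astype(np.float32)

# 2-D CNN view: the temporal axis is folded into the channel axis,
# so temporal ordering is no longer explicit to the network.
x_2d = series.reshape(T * C, H, W)

# 3-D CNN view: (T, H, W) form a volume per channel, so 3-D kernels
# can learn joint spatio-temporal patterns directly.
x_3d = series.transpose(1, 0, 2, 3)  # (C, T, H, W)

# RNN/ViT view: each pixel becomes a length-T sequence of C-dim
# feature vectors, processed step by step (LSTM) or jointly via
# self-attention over the T tokens (ViT).
x_seq = series.reshape(T, C, H * W).transpose(2, 0, 1)  # (H*W, T, C)

print(x_2d.shape)   # (48, 64, 64)
print(x_3d.shape)   # (4, 12, 64, 64)
print(x_seq.shape)  # (4096, 12, 4)
```

The same pixels are present in all three views; what differs is whether the time dimension survives as an axis the model can attend to, which is the property the evaluation attributes the 3-D CNN's, LSTM's, and ViT's advantage to.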
Authors (2)
Linying Zhao
Shunping Ji
- Year Published
- 2023
- Source Database
- DOAJ
- DOI
- 10.1109/JSTARS.2022.3219816
- Access
- Open Access ✓