DOAJ Open Access 2025

Hyperspectral Image Classification With Re-Attention Agent Transformer and Multiscale Partial Convolution

Junding Sun, Hongyuan Zhang, Jianlong Wang, Haifeng Sima, Shuanggen Jin

Abstract

Convolutional neural networks (CNNs) focus solely on extracting local features and lack the ability to capture global spectral-spatial information. Transformers, by contrast, effectively learn the overall distribution and mutual relationships of spectral features but overlook the extraction of local spatial features. To fully leverage the complementary advantages of both techniques, this article proposes a re-attention agent transformer and multiscale partial convolution (RAT-MPC) for hyperspectral image classification, which effectively combines the local learning capability of CNNs with the long-range modeling ability of Transformers. Specifically, the multiscale spatial-spectral feature learning module employs a split-refactor-fuse strategy to extract shallow feature information. Subsequently, the dual-branch feature processing module handles the extracted features from both local and global perspectives: on one hand, the re-attention agent transformer branch learns complex global spectral relationships; on the other hand, multiscale partial convolutions further learn abstract spatial features. Finally, the multilevel feature fusion attention module is designed to fully exploit features from different receptive fields and depths, and it incorporates an enhanced coordinate attention mechanism to reinforce spatial detail features. To evaluate the effectiveness of the proposed RAT-MPC, 5%, 0.7%, and 0.1% of the labeled samples are selected from the Indian Pines (IP), Pavia University (PU), and WHU-Hi-LongKou (LK) datasets, respectively. The experimental results demonstrate that the proposed network achieves exceptional classification performance, with overall accuracies of 96.66%, 98.20%, and 98.44% on the IP, PU, and LK datasets, respectively. Compared with DBCTNet, the latest CNN-Transformer-based method, the proposed method improves overall accuracy by 1.36%, 0.68%, and 1.38%, respectively.
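The two branches of the dual-branch module described above can be sketched minimally in NumPy. Both functions are illustrative reconstructions from the general literature (agent attention as a token-mediated approximation of self-attention, and FasterNet-style partial convolution), not the authors' exact RAT-MPC implementation; the single-head, single-scale setup and all tensor shapes are simplifying assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def agent_attention(Q, K, V, A):
    """Global branch sketch: a small set of m agent tokens mediates between
    queries and keys, cutting attention cost from O(n^2) to O(n*m).
    Q, K, V: (n, d) token features; A: (m, d) agent tokens, m << n."""
    d = K.shape[1]
    agent_v = softmax(A @ K.T / np.sqrt(d)) @ V      # agents aggregate keys/values
    return softmax(Q @ A.T / np.sqrt(d)) @ agent_v   # queries attend to agents

def partial_conv(x, weight, n_div=4):
    """Local branch sketch (FasterNet-style PConv): convolve only the first
    1/n_div of the channels with a 3x3 kernel; pass the rest through untouched.
    x: (C, H, W); weight: (C//n_div, C//n_div, 3, 3)."""
    cp = x.shape[0] // n_div
    xp, xr = x[:cp], x[cp:]                          # split channels
    pad = np.pad(xp, ((0, 0), (1, 1), (1, 1)))       # 'same' padding
    out = np.zeros_like(xp)
    h, w = xp.shape[1:]
    for o in range(cp):                              # naive 3x3 convolution
        for i in range(cp):
            for dy in range(3):
                for dx in range(3):
                    out[o] += weight[o, i, dy, dx] * pad[i, dy:dy+h, dx:dx+w]
    return np.concatenate([out, xr], axis=0)         # recombine both parts
```

The design trade-off is visible directly: `agent_attention` keeps every token in a global interaction at reduced cost, while `partial_conv` spends compute on only a channel subset, leaving the remaining channels as an identity path.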

Authors (5)

Junding Sun

Hongyuan Zhang

Jianlong Wang

Haifeng Sima

Shuanggen Jin

Citation Format

Sun, J., Zhang, H., Wang, J., Sima, H., & Jin, S. (2025). Hyperspectral Image Classification With Re-Attention Agent Transformer and Multiscale Partial Convolution. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. https://doi.org/10.1109/JSTARS.2025.3593885

Quick Access

PDF not directly available

Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.1109/JSTARS.2025.3593885
Access
Open Access ✓