
Multi-scale fusion for RGB-D indoor semantic segmentation

Shiyi Jiang, Yang Xu, Danyang Li, Runze Fan

Abstract

In computer vision, convolution and pooling operations tend to lose high-frequency information, and contour details fade as the network deepens, which is especially harmful in image semantic segmentation. Existing RGB-D semantic segmentation methods cannot fully exploit all the useful information in the RGB and depth images, whereas the wavelet transform preserves both the low- and high-frequency information of the original image. To address this information loss, we propose an RGB-D indoor semantic segmentation network based on multi-scale fusion: a wavelet transform fusion module that retains contour details, a nonsubsampled contourlet transform that replaces the pooling operation, and a multiple pyramid module that aggregates multi-scale information and global context. The proposed method retains multi-scale information with the help of the wavelet transform and makes full use of the complementarity of high- and low-frequency information. Because the multi-frequency characteristics are preserved as the convolutional neural network deepens, the segmentation accuracy of edge and contour details also improves. We evaluated the proposed method on the commonly used indoor datasets NYUv2 and SUNRGB-D; the results show that it achieves state-of-the-art performance with real-time inference.
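The abstract's core idea is that a wavelet decomposition separates an image into a low-frequency approximation and high-frequency detail subbands, so contour information survives downsampling. The paper's actual modules are not reproduced here; as a minimal illustrative sketch (not the authors' code), the following implements a single-level 2D Haar wavelet transform, the simplest case of the decomposition the abstract relies on:

```python
# Minimal sketch, assuming a single-level 2D Haar wavelet transform as an
# illustration of frequency-subband decomposition. This is NOT the paper's
# fusion module; it only shows how an image splits into one low-frequency
# approximation (LL) and three high-frequency detail subbands (LH, HL, HH).

def haar2d(img):
    """Single-level 2D Haar DWT on a 2D list `img` with even dimensions.

    Returns (LL, LH, HL, HH), each half the input size. LL keeps the
    coarse (low-frequency) content; LH/HL/HH keep edge-like details.
    """
    h, w = len(img), len(img[0])
    # Row transform: pairwise average (low-pass) and difference (high-pass).
    lo = [[(row[2 * j] + row[2 * j + 1]) / 2 for j in range(w // 2)] for row in img]
    hi = [[(row[2 * j] - row[2 * j + 1]) / 2 for j in range(w // 2)] for row in img]

    # Column transform: apply the same pairwise op down each column.
    def cols(mat, op):
        return [[op(mat[2 * i][j], mat[2 * i + 1][j]) for j in range(len(mat[0]))]
                for i in range(len(mat) // 2)]

    avg = lambda a, b: (a + b) / 2
    dif = lambda a, b: (a - b) / 2
    LL, LH = cols(lo, avg), cols(lo, dif)
    HL, HH = cols(hi, avg), cols(hi, dif)
    return LL, LH, HL, HH


# A flat (constant) image has no high-frequency content: all detail
# subbands come out zero, and LL keeps the constant value.
LL, LH, HL, HH = haar2d([[4] * 4 for _ in range(4)])
```

Because the transform is invertible, no information is discarded at this step, which is the property the abstract contrasts with ordinary pooling.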

Authors (4)

Shiyi Jiang

Yang Xu

Danyang Li

Runze Fan

Citation Format

Jiang, S., Xu, Y., Li, D., & Fan, R. (2022). Multi-scale fusion for RGB-D indoor semantic segmentation. Scientific Reports. https://doi.org/10.1038/s41598-022-24836-9

Quick Access

View at Source doi.org/10.1038/s41598-022-24836-9
Journal Information
Year Published
2022
Language
en
Total Citations
16
Database Source
CrossRef
DOI
10.1038/s41598-022-24836-9
Access
Open Access ✓