arXiv Open Access 2024

Boosting Image Restoration via Priors from Pre-trained Models

Xiaogang Xu Shu Kong Tao Hu Zhe Liu Hujun Bao

Abstract

Pre-trained models with large-scale training data, such as CLIP and Stable Diffusion, have demonstrated remarkable performance in various high-level computer vision tasks such as image understanding and generation from language descriptions. Yet, their potential for low-level tasks such as image restoration remains relatively unexplored. In this paper, we explore such models to enhance image restoration. As off-the-shelf features (OSF) from pre-trained models do not directly serve image restoration, we propose to learn an additional lightweight module called Pre-Train-Guided Refinement Module (PTG-RM) to refine restoration results of a target restoration network with OSF. PTG-RM consists of two components, Pre-Train-Guided Spatial-Varying Enhancement (PTG-SVE), and Pre-Train-Guided Channel-Spatial Attention (PTG-CSA). PTG-SVE enables optimal short- and long-range neural operations, while PTG-CSA enhances spatial-channel attention for restoration-related learning. Extensive experiments demonstrate that PTG-RM, with its compact size ($<$1M parameters), effectively enhances restoration performance of various models across different tasks, including low-light enhancement, deraining, deblurring, and denoising.
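To make the abstract's idea concrete, the following is a minimal, hypothetical sketch of refining a restoration network's output with off-the-shelf features (OSF) through a combined channel-spatial attention step, loosely in the spirit of PTG-CSA. The shapes, the fusion-by-concatenation, and the residual folding are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def channel_spatial_attention(feat):
    """Apply simple channel and spatial attention to a (C, H, W) feature map."""
    # Channel attention: global average over spatial dims -> per-channel gates
    c_w = feat.mean(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    c_w = 1.0 / (1.0 + np.exp(-c_w))                 # sigmoid
    # Spatial attention: average over channels -> per-pixel gates
    s_w = feat.mean(axis=0, keepdims=True)           # (1, H, W)
    s_w = 1.0 / (1.0 + np.exp(-s_w))
    return feat * c_w * s_w

def refine_with_osf(restored, osf):
    """Hypothetical refinement: fuse restoration output with OSF.

    restored: (C, H, W) output of a target restoration network
    osf:      (C, H, W) pre-trained-model features, assumed resized to match
    """
    fused = np.concatenate([restored, osf], axis=0)  # (2C, H, W)
    attended = channel_spatial_attention(fused)
    c = restored.shape[0]
    # Fold attended features back to C channels and add as a residual correction
    return restored + 0.5 * (attended[:c] + attended[c:])

rng = np.random.default_rng(0)
out = refine_with_osf(rng.standard_normal((3, 8, 8)),
                      rng.standard_normal((3, 8, 8)))
print(out.shape)  # (3, 8, 8)
```

In an actual learned module, the averaging and gating here would be replaced by trainable layers; the sketch only shows why the refinement can stay lightweight: it reuses frozen pre-trained features and adds a small residual on top of the existing restoration output.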

Topics & Keywords

Authors (5)

Xiaogang Xu

Shu Kong

Tao Hu

Zhe Liu

Hujun Bao

Citation Format

Xu, X., Kong, S., Hu, T., Liu, Z., Bao, H. (2024). Boosting Image Restoration via Priors from Pre-trained Models. https://arxiv.org/abs/2403.06793

Journal Information
Year Published
2024
Language
en
Source Database
arXiv
Access
Open Access ✓