arXiv Open Access 2024

LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Learned Image Compression

Shimon Murai Heming Sun Jiro Katto

Abstract

Supported by powerful generative models, low-bitrate learned image compression (LIC) models utilizing perceptual metrics have become feasible. Some of the most advanced models achieve high compression rates and superior perceptual quality by using image captions as sub-information. This paper demonstrates that using a large multi-modal model (LMM), it is possible to generate captions and compress them within a single model. We also propose a novel semantic-perceptual-oriented fine-tuning method applicable to any LIC network, resulting in a 41.58% improvement in LPIPS BD-rate compared to existing methods. Our implementation and pre-trained weights are available at https://github.com/tokkiwa/ImageTextCoding.


Citation

Murai, S., Sun, H., Katto, J. (2024). LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Learned Image Compression. https://arxiv.org/abs/2411.13033
