arXiv Open Access 2025

ObjMST: An Object-Focused Multimodal Style Transfer Framework

Chanda Grover Kamra Indra Deep Mastan Debayan Gupta

Abstract

We propose ObjMST, an object-focused multimodal style transfer framework that provides separate style supervision for salient objects and surrounding elements while addressing alignment issues in multimodal representation learning. Existing image-text multimodal style transfer methods face the following challenges: (1) generating non-aligned and inconsistent multimodal style representations; and (2) content mismatch, where identical style patterns are applied to both salient objects and their surrounding elements. Our approach mitigates these issues by: (1) introducing a Style-Specific Masked Directional CLIP Loss, which ensures consistent and aligned style representations for both salient objects and their surroundings; and (2) incorporating a salient-to-key mapping mechanism for stylizing salient objects, followed by image harmonization to seamlessly blend the stylized objects with their environment. We validate the effectiveness of ObjMST through experiments, using both quantitative metrics and qualitative visual evaluations of the stylized outputs. Our code is available at: https://github.com/chandagrover/ObjMST.
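The abstract does not specify the form of the Style-Specific Masked Directional CLIP Loss; the full definition is in the paper and repository. As a rough, hypothetical sketch of the general idea behind masked directional CLIP-style losses (align the image-embedding direction of the edit with the text-embedding direction of the style prompt, computed separately for the masked salient object and its surroundings), assuming precomputed text embeddings and a placeholder image encoder:

```python
import torch
import torch.nn.functional as F

def directional_loss(e_src_img, e_sty_img, e_src_txt, e_sty_txt):
    """Directional CLIP-style loss: 1 - cosine similarity between the
    image-space edit direction and the text-space style direction."""
    d_img = e_sty_img - e_src_img
    d_txt = e_sty_txt - e_src_txt
    return 1.0 - F.cosine_similarity(d_img, d_txt, dim=-1).mean()

def masked_directional_loss(encode, content_img, stylized_img, mask,
                            e_src_txt, e_obj_txt, e_bg_txt):
    """Hypothetical masked variant (names and decomposition are our
    illustration, not the paper's exact formulation): supervise the
    salient object (mask == 1) and the surroundings (mask == 0) with
    separate style-text directions, then sum the two terms."""
    obj_loss = directional_loss(encode(content_img * mask),
                                encode(stylized_img * mask),
                                e_src_txt, e_obj_txt)
    bg_loss = directional_loss(encode(content_img * (1.0 - mask)),
                               encode(stylized_img * (1.0 - mask)),
                               e_src_txt, e_bg_txt)
    return obj_loss + bg_loss
```

In a real pipeline, `encode` would be a frozen CLIP image encoder and the text embeddings would come from CLIP's text encoder for the source and per-region style prompts; here they are stand-ins to keep the sketch self-contained.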

Topics & Keywords

Authors (3)

Chanda Grover Kamra

Indra Deep Mastan

Debayan Gupta

Citation Format

Kamra, C.G., Mastan, I.D., Gupta, D. (2025). ObjMST: An Object-Focused Multimodal Style Transfer Framework. https://arxiv.org/abs/2503.04353

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓