Mitigating inappropriate concepts in text-to-image generation with attention-guided image editing
Abstract
Text-to-image generative models have recently garnered significant attention due to their ability to produce diverse images based on given text prompts. However, concerns have arisen regarding the occasional generation of inappropriate, offensive, or explicit content. To address this, we propose a simple yet effective method that leverages attention maps to selectively suppress inappropriate concepts during image generation. Unlike existing approaches, which often sacrifice original image context or demand substantial computational overhead, our method preserves image integrity without requiring additional model training or extensive engineering effort. To evaluate our method, we conducted comprehensive quantitative assessments of inappropriateness reduction, text fidelity, image consistency, and computational cost, alongside an online human perceptual study involving 20 participants. Our statistical analysis demonstrated that our method effectively removes inappropriate content while preserving the integrity of the original images with high computational efficiency.
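The abstract describes suppressing inappropriate concepts via attention maps. A common way to realize this idea (a minimal sketch only — the function name, shapes, and renormalization step are assumptions for illustration, not the paper's actual implementation) is to zero out the cross-attention weights assigned to flagged prompt tokens and renormalize each query's attention distribution:

```python
import numpy as np

def suppress_tokens(attn, flagged, renormalize=True):
    """Zero cross-attention columns for flagged prompt tokens.

    attn: (num_queries, num_tokens) array; each row sums to 1.
    flagged: list of prompt-token indices to suppress.
    """
    out = attn.copy()
    out[:, flagged] = 0.0  # remove attention mass on flagged tokens
    if renormalize:
        sums = out.sum(axis=1, keepdims=True)
        # avoid division by zero when a query attended only to flagged tokens
        out = np.where(sums > 0, out / np.where(sums > 0, sums, 1.0), out)
    return out

# toy example: 2 image-patch queries attending over 4 prompt tokens;
# token index 1 is flagged as inappropriate
attn = np.array([[0.40, 0.30, 0.20, 0.10],
                 [0.25, 0.25, 0.25, 0.25]])
masked = suppress_tokens(attn, flagged=[1])
```

After masking, the flagged token receives zero attention while each row still sums to one, so the remaining prompt tokens keep guiding the image content.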
Authors (4)
Jiyeon Oh
Jae-Yeop Jeong
Yeong-Gi Hong
Jin-Woo Jeong
Quick Access
- Publication Year
- 2025
- Language
- en
- Database Source
- CrossRef
- DOI
- 10.7717/peerj-cs.3170
- Access
- Open Access ✓