DOAJ Open Access 2025

Diffusion-Based Text-To-Image Generation: Impact Analysis of Textual Features

Wan Ziyang

Abstract

With the rapid development of deep learning, text-to-image generation has demonstrated significant application value in domains including content creation, automated design, and virtual reality. However, the quality of generated images is influenced by many factors, among which the length and syntactic structure of the text input may play critical roles in generation efficiency and final image quality. This study investigates the impact of text length and grammatical structure on text-to-image generation, with the goal of optimizing the practical performance of the RAT-Diffusion model. Building on baseline text-to-image methods for small-scale datasets, the paper designs and conducts a series of targeted experiments evaluating short, medium-length, and long texts, as well as texts with varying syntactic structures, using quantitative metrics such as Fréchet Inception Distance (FID) and Inception Score (IS). The findings reveal that moderate text length and well-structured syntax enhance generation quality, while excessively long texts or overly complex grammatical structures may degrade output quality. These insights offer new approaches for textual optimization, improving the controllability and practicality of text-to-image generation and providing a useful reference for related research and applications.
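The abstract's quantitative evaluation rests on Fréchet Inception Distance (FID). As a hedged illustration (not the paper's implementation), FID is the Fréchet distance between the Gaussian statistics of two feature sets; in practice those features are InceptionV3 activations of real and generated images, but the arithmetic itself only needs two `(n_samples, dim)` arrays, as in this minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits to two feature sets.

    For FID proper, feats_a/feats_b would be InceptionV3 activations of
    real and generated images; any (n_samples, dim) float arrays work here.
    """
    # Fit a Gaussian (mean, covariance) to each feature set.
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    # Matrix square root of the covariance product; discard the tiny
    # imaginary component that numerical error can introduce.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    # ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * sqrt(C_a @ C_b))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Lower is better: identical feature distributions give a distance near zero, and shifting one set's mean drives the score up, which is why the study can compare prompt variants by their FID deltas.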

Topics & Keywords

Author (1)

Wan Ziyang

Citation Format

Ziyang, W. (2025). Diffusion-Based Text-To-Image Generation: Impact Analysis of Textual Features. https://doi.org/10.1051/itmconf/20257804010

Quick Access

View at Source doi.org/10.1051/itmconf/20257804010
Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.1051/itmconf/20257804010
Access
Open Access ✓