Text to Image Generation: A Literature Review Focus on the Diffusion Model

Bibliographic Details
Main Author: Zhou Jingxi
Format: Article
Language: English
Published: EDP Sciences, 2025-01-01
Series: ITM Web of Conferences
Online Access: https://www.itm-conferences.org/articles/itmconf/pdf/2025/04/itmconf_iwadi2024_02037.pdf
Description
Summary: This paper reviews progress in text-to-image generation, which enables the creation of images from textual descriptions. The technology holds promise across various fields, including the creative arts, gaming, and healthcare. The main approaches in this area are Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models (DMs). While GANs initially made significant advances in realistic image generation, they suffered from training instability and limited sample diversity. VAEs introduced a probabilistic approach that allows for diverse outputs, but often at the cost of image quality. Diffusion models such as Stable Diffusion, Imagen, and DALL-E 2 have addressed many of these limitations, producing high-quality, coherent images through iterative denoising. DMs stand out for their training stability and their ability to generate detailed, semantically accurate images. This review explores the strengths and limitations of each approach, with an emphasis on the advantages of DMs. It also discusses future directions, including improving efficiency, enhancing multimodal capabilities, and reducing data requirements to make these models more accessible and versatile for a range of applications.
ISSN: 2271-2097
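
Note: To make the iterative denoising mechanism mentioned in the summary concrete, the following is a minimal, illustrative sketch of a DDPM-style reverse sampling loop in PyTorch. It is a sketch under stated assumptions, not the article's method: the noise-prediction network eps_model and the text conditioning cond are hypothetical placeholders for a trained model, and the linear noise schedule is an assumption following the common DDPM convention.

import torch

def ddpm_sample(eps_model, cond, shape, T=1000, device="cpu"):
    # Generate one sample by progressively denoising pure Gaussian noise
    # over T steps. eps_model and cond are hypothetical stand-ins for a
    # trained noise-prediction network and its text conditioning.
    betas = torch.linspace(1e-4, 0.02, T, device=device)  # linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # x_T: pure Gaussian noise
    for t in reversed(range(T)):
        # Predict the noise component of x_t, conditioned on the text.
        eps = eps_model(x, t, cond)
        # Remove the predicted noise to estimate the posterior mean.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        # Inject fresh noise at every step except the last.
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # x_0: the generated image (or latent, in latent diffusion)

Each pass through the loop removes only a small amount of predicted noise, which is why diffusion sampling requires many network evaluations compared with a single GAN forward pass; the efficiency improvements the review identifies as future work largely target reducing the number of these steps.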