VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder

This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions, bypassing conventional approaches based on score distillation loss or text-to-image-to-3D pipelines. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed that efficiently acquires feature volumes from multi-view images. A diffusion model built on a 3D U-Net is then trained on these feature volumes for text-to-3D generation. The work further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, produces diverse and recognizable samples from text prompts. Notably, it enables finer control over the characteristics of object parts through textual cues and fosters model creativity by seamlessly combining multiple concepts within a single object. This research contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.
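The abstract describes a two-stage pipeline: a volumetric encoder lifts multi-view images into a 3D feature volume, and a text-conditioned diffusion model with a 3D U-Net is trained to denoise those volumes. The PyTorch sketch below illustrates only that data flow under assumed shapes; every module name (VolumetricEncoder, Denoiser3D), dimension, and the toy noise schedule are hypothetical placeholders, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VolumetricEncoder(nn.Module):
    # Hypothetical stand-in: lifts multi-view image features into a dense
    # 3D feature volume (the paper's actual encoder is more sophisticated).
    def __init__(self, feat_dim=4, grid=32):
        super().__init__()
        self.grid = grid
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.fuse = nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1)

    def forward(self, views):                      # views: (B, V, 3, H, W)
        B, V, _, H, W = views.shape
        feats = self.backbone(views.flatten(0, 1)) # (B*V, feat, H, W)
        feats = feats.view(B, V, -1, H, W).mean(1) # fuse views by averaging
        pooled = F.adaptive_avg_pool2d(feats, self.grid)
        # Broadcast pooled 2D features along depth to form a (B, feat, D, H, W) volume.
        vol = pooled.unsqueeze(2).expand(-1, -1, self.grid, -1, -1)
        return self.fuse(vol.contiguous())

class Denoiser3D(nn.Module):
    # Hypothetical stand-in for the 3D U-Net: predicts the noise added to a
    # feature volume, conditioned on a text embedding and a diffusion timestep.
    def __init__(self, feat_dim=4, text_dim=64):
        super().__init__()
        self.cond = nn.Linear(text_dim + 1, feat_dim)
        self.net = nn.Sequential(
            nn.Conv3d(feat_dim, 32, 3, padding=1), nn.SiLU(),
            nn.Conv3d(32, feat_dim, 3, padding=1),
        )

    def forward(self, x, t, text_emb):             # x: (B, feat, D, H, W)
        c = self.cond(torch.cat([text_emb, t[:, None].float()], dim=1))
        return self.net(x + c[:, :, None, None, None])

# One DDPM-style training step on encoded volumes (toy cosine schedule).
enc, eps_model = VolumetricEncoder(), Denoiser3D()
views = torch.randn(2, 4, 3, 64, 64)               # 2 objects, 4 views each
text_emb = torch.randn(2, 64)                      # placeholder text embeddings
x0 = enc(views)                                    # clean feature volumes
t = torch.randint(0, 1000, (2,))
abar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2
abar = abar[:, None, None, None, None]
noise = torch.randn_like(x0)
xt = abar.sqrt() * x0 + (1 - abar).sqrt() * noise  # forward diffusion
loss = F.mse_loss(eps_model(xt, t, text_emb), noise)
loss.backward()

In a real system the text embedding would come from a pretrained text encoder and the denoiser would be a full 3D U-Net; the sketch only mirrors the encode-then-diffuse structure stated in the abstract.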

Bibliographic Details
Main Authors: Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, Baining Guo
Format: Article
Language: English
Published: Elsevier, 2025-08-01
Series: Graphical Models
Subjects: Text-to-3D; 3D generation; Diffusion models
Online Access: http://www.sciencedirect.com/science/article/pii/S1524070325000219
_version_ 1849728863745081344
author Zhicong Tang
Shuyang Gu
Chunyu Wang
Ting Zhang
Jianmin Bao
Dong Chen
Baining Guo
author_sort Zhicong Tang
collection DOAJ
description This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions, bypassing conventional approaches based on score distillation loss or text-to-image-to-3D pipelines. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed that efficiently acquires feature volumes from multi-view images. A diffusion model built on a 3D U-Net is then trained on these feature volumes for text-to-3D generation. The work further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, produces diverse and recognizable samples from text prompts. Notably, it enables finer control over the characteristics of object parts through textual cues and fosters model creativity by seamlessly combining multiple concepts within a single object. This research contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.
format Article
id doaj-art-752fd40211754ae3aa0bfee327fe1fa9
institution DOAJ
issn 1524-0703
language English
publishDate 2025-08-01
publisher Elsevier
record_format Article
series Graphical Models
spelling doaj-art-752fd40211754ae3aa0bfee327fe1fa9
2025-08-20T03:09:25Z | eng | Elsevier | Graphical Models | ISSN 1524-0703 | 2025-08-01 | Vol. 140, Art. 101274 | doi:10.1016/j.gmod.2025.101274
VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
Zhicong Tang (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China; corresponding author)
Shuyang Gu (School of Information Science and Technology, University of Science and Technology of China, Hefei, 230026, China)
Chunyu Wang (Microsoft Research Asia, Beijing, 100080, China)
Ting Zhang (School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China)
Jianmin Bao (Microsoft Research Asia, Beijing, 100080, China)
Dong Chen (Microsoft Research Asia, Beijing, 100080, China)
Baining Guo (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China; Microsoft Research Asia, Beijing, 100080, China)
title VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
topic Text-to-3D
3D generation
Diffusion models
url http://www.sciencedirect.com/science/article/pii/S1524070325000219