LDM: Large tensorial SDF model for textured mesh generation


Bibliographic Details
Main Authors: Rengan Xie, Kai Huang, Xiaoliang Luo, Yizheng Chen, Lvchun Wang, Qi Wang, Qi Ye, Wei Chen, Wenting Zheng, Yuchi Huo
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series: Graphical Models
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S1524070325000189
Description
Summary: Previous efforts have managed to generate production-ready 3D assets from text or images. However, these methods primarily employ NeRF or 3D Gaussian representations, which are not adept at producing the smooth, high-quality geometry required by modern rendering pipelines. In this paper, we propose LDM, a Large tensorial SDF Model, which introduces a novel feed-forward framework capable of generating high-fidelity, illumination-decoupled textured meshes from a single image or text prompt. We first use a multi-view diffusion model to generate sparse multi-view inputs from single images or text prompts, and then train a transformer-based model to predict a tensorial SDF field from these sparse multi-view image inputs. Finally, we employ a gradient-based mesh-optimization layer to refine this model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method can generate diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at https://github.com/rgxie/LDM.
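To make the "tensorial SDF field" concrete: a common way to factorize a dense 3D field is the TensoRF-style vector-matrix decomposition, where the grid is approximated by plane/line factor pairs and a point query reduces to 2D and 1D interpolations. The sketch below is illustrative only and not the authors' implementation; all names, ranks, and resolutions are assumptions.

```python
# Illustrative sketch (NOT the paper's code): querying a vector-matrix
# factorized SDF field. A dense N^3 grid is approximated by three
# (plane, line) factor pairs of rank R; the SDF at a point is the sum of
# plane-feature x line-feature products, each obtained by interpolation.
import numpy as np

R, N = 4, 32  # assumed rank per axis pair and assumed grid resolution

rng = np.random.default_rng(0)
# One (plane, line) pair per axis; in LDM these would be predicted by the
# transformer, here they are random placeholders.
planes = {ax: rng.standard_normal((R, N, N)) * 0.01 for ax in "xyz"}
lines = {ax: rng.standard_normal((R, N)) * 0.01 for ax in "xyz"}

def _interp1d(vec, t):
    """Linear interpolation of a length-N vector at t in [0, 1]."""
    x = t * (N - 1)
    i0 = int(np.clip(np.floor(x), 0, N - 2))
    w = x - i0
    return (1 - w) * vec[i0] + w * vec[i0 + 1]

def _interp2d(img, u, v):
    """Bilinear interpolation of an N x N image at (u, v) in [0, 1]^2."""
    x, y = u * (N - 1), v * (N - 1)
    i0 = int(np.clip(np.floor(x), 0, N - 2))
    j0 = int(np.clip(np.floor(y), 0, N - 2))
    wx, wy = x - i0, y - j0
    return ((1 - wx) * (1 - wy) * img[i0, j0] + wx * (1 - wy) * img[i0 + 1, j0]
            + (1 - wx) * wy * img[i0, j0 + 1] + wx * wy * img[i0 + 1, j0 + 1])

def sdf(p):
    """Query the factorized SDF at p = (x, y, z) in [0, 1]^3."""
    x, y, z = p
    val = 0.0
    # Each axis contributes a 2D plane factor times its matching 1D line
    # factor: the z-pair covers (x, y) x z, and so on for y and x.
    for ax, (u, v, t) in {"z": (x, y, z), "y": (x, z, y), "x": (y, z, x)}.items():
        for r in range(R):
            val += _interp2d(planes[ax][r], u, v) * _interp1d(lines[ax][r], t)
    return val

print(sdf((0.5, 0.5, 0.5)))
```

The payoff of this factorization is memory: storing three rank-R plane/line pairs costs O(R * N^2) instead of O(N^3) for a dense grid, which is what makes feed-forward prediction of the whole field tractable. A mesh could then be extracted from the zero level set of `sdf` with a (differentiable) iso-surfacing step, as the abstract's gradient-based mesh-optimization layer describes.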
ISSN:1524-0703