Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, have achieved great success in zero-shot text-guided 3D synthesis. However, because they optimize from scratch with random initialization and no prior knowledge of 3D structure, these methods often fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as a 3D shape prior. We then use it to initialize a neural radiance field and optimize the field with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
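To make the rendering-style fine-tuning concrete, below is a minimal textual-inversion-style sketch, assuming Stable Diffusion via the `diffusers` library. The placeholder token `<render-style>`, the model ID, and the hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch: jointly optimize a learnable prompt token and fine-tune a text-to-image
# diffusion model on shape-rendering images (denoising loss). Assumes `diffusers`
# and a dataset of rendering-style images normalized to [-1, 1].
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
tokenizer, text_encoder, vae, unet = pipe.tokenizer, pipe.text_encoder, pipe.vae, pipe.unet
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Register a new placeholder token whose embedding is learned so that prompts
# like "a chair, <render-style>" produce shape-rendering-style images.
tokenizer.add_tokens(["<render-style>"])
text_encoder.resize_token_embeddings(len(tokenizer))

# Jointly optimize the text embeddings (including the new token) and the U-Net.
optimizer = torch.optim.AdamW(
    list(text_encoder.get_input_embeddings().parameters()) + list(unet.parameters()),
    lr=1e-5,
)

def training_step(images, prompts):
    """One denoising-loss step on a batch of shape-rendering images."""
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    tokens = tokenizer(prompts, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    encoder_hidden_states = text_encoder(tokens.input_ids)[0]

    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```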
Overview of our text-guided 3D synthesis framework. (a) In the first text-to-shape stage, we leverage a fine-tuned off-the-shelf text-to-image diffusion model $G_I$ to synthesize a rendering-style image $\boldsymbol{I}_r$ from the input text prompt $y$. We then generate a latent shape embedding $\boldsymbol{e}_S$ from $\boldsymbol{I}_r$ using a diffusion-model-based shape embedding generation network $G_M$. Finally, we feed $\boldsymbol{e}_S$ into a high-quality 3D shape generator $G_S$ to synthesize a 3D shape $S$, which we use as an explicit 3D shape prior. (b) In the second optimization stage, we use the 3D shape prior $S$ to initialize a neural radiance field and further optimize it with CLIP guidance to synthesize 3D content consistent with the input text prompt $y$.
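The core of the CLIP guidance in stage (b) is a similarity loss between rendered views of the radiance field and the text prompt. The snippet below is a minimal sketch of such a loss, assuming OpenAI's `clip` package; the NeRF renderer, view sampling, and image augmentations are omitted, and the exact loss formulation in the paper may differ.

```python
# Sketch of CLIP guidance: minimize the negative cosine similarity between CLIP
# embeddings of rendered views and the embedding of the text prompt.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/16", device=device)

def clip_guidance_loss(rendered_views: torch.Tensor, text_prompt: str) -> torch.Tensor:
    """rendered_views: (B, 3, 224, 224) differentiable renders, CLIP-normalized."""
    with torch.no_grad():
        text_tokens = clip.tokenize([text_prompt]).to(device)
        text_feat = clip_model.encode_text(text_tokens)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    image_feat = clip_model.encode_image(rendered_views)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)

    # Minimizing this pulls the rendered views toward the text prompt in CLIP space.
    return -(image_feat @ text_feat.T).mean()
```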
Qualitative comparison of results from CLIP-Mesh, Dream Fields, PureCLIPNeRF, and Ours.
@inproceedings{xu2023dream3d,
title = {Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models},
author = {Xu, Jiale and Wang, Xintao and Cheng, Weihao and Cao, Yan-Pei and Shan, Ying and Qie, Xiaohu and Gao, Shenghua},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages = {20908--20918},
year = {2023}
}