Our 3D denoising diffusion probabilistic model learns to synthesize diverse radiance fields that enable high-quality rendering with accurate geometry.
We see future applications of our radiance field diffusion method in scene asset generation, where the accurately synthesized geometry can enable physics-based interaction.
For more diffusion-based 3D work, please also check out:
DreamFusion: Text-to-3D using 2D Diffusion performs text-guided NeRF generation using a pretrained 2D diffusion model as a prior. The authors propose Score Distillation Sampling (SDS) to optimize samples through a diffusion model, which could potentially also be applied to modalities other than text; the SDS gradient is sketched below.
LION: Latent Point Diffusion Models for 3D Shape Generation introduces a hierarchical latent approach to high-quality point cloud synthesis that can be augmented with modern surface reconstruction techniques to generate smooth 3D meshes.

Video Diffusion Models treats time as the third dimension and proposes a natural extension of image diffusion architectures to video. The authors introduce a novel conditioning technique for long and high-resolution videos and achieve state-of-the-art results on unconditional video generation.
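As a quick reference for readers unfamiliar with it, the Score Distillation Sampling gradient from the DreamFusion paper takes the following form. The notation follows that paper: $\theta$ are the NeRF parameters, $x = g(\theta)$ is the rendered image, $y$ is the text conditioning, $\epsilon$ is the injected noise, $w(t)$ is a timestep weighting, and $\hat{\epsilon}_\phi$ is the frozen diffusion model's noise prediction:

$$\nabla_\theta \mathcal{L}_{\text{SDS}}\big(\phi, x = g(\theta)\big) \triangleq \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]$$

Intuitively, the frozen diffusion model scores how plausible the noised rendering looks under the text prompt, and that score is backpropagated through the renderer to update the 3D representation, skipping the diffusion model's own Jacobian.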
@inproceedings{muller2023diffrf,
title={{DiffRF}: Rendering-Guided {3D} Radiance Field Diffusion},
author={M{\"u}ller, Norman and Siddiqui, Yawar and Porzi, Lorenzo and Bulo, Samuel Rota and Kontschieder, Peter and Nie{\ss}ner, Matthias},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4328--4338},
year={2023}
}