A Generative Model of High Quality 3D Textured Shapes Learned from Images

Plans
  • Free
Description

We generate a 3D SDF and a texture field from two latent codes. We use DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to obtain colors. We train with adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one on RGB images and one on silhouettes, classify the inputs as real or fake. The whole model is end-to-end trainable.
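Below is a minimal PyTorch sketch of the two-branch generator described above: one latent code drives an SDF network, the other a texture field queried at (approximate) surface points. All module and variable names are illustrative assumptions, not the authors' implementation; DMTet mesh extraction, the rasterization-based differentiable renderer, and the two 2D discriminators are only indicated in comments.

```python
# Illustrative sketch only; names (ImplicitMLP, sdf_net, tex_net, ...) are assumptions.
import torch
import torch.nn as nn

class ImplicitMLP(nn.Module):
    """Small MLP mapping (3D point, latent code) -> per-point output."""
    def __init__(self, latent_dim: int, out_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, points: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # points: (N, 3); z: (latent_dim,) broadcast to every query point
        z_rep = z.unsqueeze(0).expand(points.shape[0], -1)
        return self.net(torch.cat([points, z_rep], dim=-1))

latent_dim = 64
sdf_net = ImplicitMLP(latent_dim, out_dim=1)   # geometry branch: signed distance
tex_net = ImplicitMLP(latent_dim, out_dim=3)   # texture field: RGB at surface points

z_geometry = torch.randn(latent_dim)           # latent code controlling shape
z_texture = torch.randn(latent_dim)            # latent code controlling appearance

# Query the SDF on a coarse grid; in the actual pipeline DMTet would turn these
# values into an explicit triangle mesh in a differentiable way.
grid = torch.stack(torch.meshgrid(
    *[torch.linspace(-1, 1, 16)] * 3, indexing="ij"), dim=-1).reshape(-1, 3)
sdf_values = sdf_net(grid, z_geometry)         # (16^3, 1)

# Treat near-zero-SDF grid points as stand-ins for mesh surface points and
# query the texture field there for per-point colors.
surface_points = grid[sdf_values.squeeze(-1).abs() < 0.5]
colors = torch.sigmoid(tex_net(surface_points, z_texture))  # (M, 3) in [0, 1]

# A rasterization-based differentiable renderer would turn the mesh plus colors
# into RGB images and silhouettes; two 2D discriminators then score those
# renderings as real or fake to provide the adversarial training signal.
print(sdf_values.shape, colors.shape)
```

Because both the mesh extraction and the rendering are differentiable, gradients from the image-space discriminators can flow back through the renderer into both the SDF network and the texture field, which is what makes the model end-to-end trainable.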
