Papers

3D-aware Conditional Image Synthesis

This paper describes a 3D-aware conditional generative model for controllable photorealistic image synthesis. It integrates 3D representations with conditional generative modeling, enabling controllable high-resolution 3D-aware rendering conditioned on user inputs.
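As a rough illustration of the conditioning idea, the sketch below shows a generator that takes a 2D user input (e.g. a label map) together with a camera pose and returns a rendered view. The class name, layer sizes, and the direct image decoder are hypothetical stand-ins, not the paper's architecture; as the blurb notes, the actual model decodes a 3D representation and renders it in a 3D-aware way, which the toy decoder here only gestures at in a comment.

```python
# Minimal sketch of conditioning a 3D-aware generator on a user-provided
# label map and a camera pose. All names here are hypothetical placeholders.
import torch
import torch.nn as nn

class Conditional3DGenerator(nn.Module):
    def __init__(self, cond_channels: int = 12, latent_dim: int = 64):
        super().__init__()
        # Encode the 2D conditioning input (e.g. a one-hot segmentation map).
        self.cond_encoder = nn.Sequential(
            nn.Conv2d(cond_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Map (latent code, camera pose) straight to pixels; a real 3D-aware
        # model would instead decode a 3D representation and volume-render it.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64),
        )

    def forward(self, cond_map: torch.Tensor, camera_pose: torch.Tensor) -> torch.Tensor:
        z = self.cond_encoder(cond_map)                     # (B, latent_dim)
        z = torch.cat([z, camera_pose.flatten(1)], dim=-1)  # append 4x4 pose
        img = self.decoder(z).view(-1, 3, 64, 64)
        return torch.sigmoid(img)

gen = Conditional3DGenerator()
label_map = torch.zeros(1, 12, 64, 64)   # user input: one-hot segmentation map
pose = torch.eye(4).unsqueeze(0)         # camera-to-world matrix
image = gen(label_map, pose)             # rendered view, shape (1, 3, 64, 64)
```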

SinMDM: Single Motion Diffusion

This paper presents the Single Motion Diffusion Model (SinMDM), designed to learn the internal motifs of a single motion sequence with arbitrary topology and to synthesize motions of arbitrary length that remain faithful to them. Check it out!
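To make the single-sequence idea concrete, here is a minimal training-loop sketch assuming a standard DDPM-style noising schedule: the denoiser only ever sees noised copies of the one motion clip and learns to predict the added noise. The tiny per-frame MLP denoiser, tensor shapes, and schedule are illustrative only and do not reproduce the paper's network or its handling of temporal structure.

```python
# Minimal sketch of overfitting a diffusion denoiser to a single motion clip.
import torch
import torch.nn as nn

T, J = 120, 22                              # frames, joints (illustrative)
motion = torch.randn(T, J * 3)              # stand-in for one real motion sequence
denoiser = nn.Sequential(nn.Linear(J * 3 + 1, 256), nn.ReLU(), nn.Linear(256, J * 3))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

betas = torch.linspace(1e-4, 0.02, 1000)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

for step in range(200):                     # train on the single sequence only
    t = torch.randint(0, 1000, (T,))
    noise = torch.randn_like(motion)
    a = alphas_bar[t].unsqueeze(-1)
    noisy = a.sqrt() * motion + (1 - a).sqrt() * noise            # forward diffusion
    inp = torch.cat([noisy, (t.float() / 1000).unsqueeze(-1)], dim=-1)
    loss = ((denoiser(inp) - noise) ** 2).mean()                  # predict the noise
    opt.zero_grad()
    loss.backward()
    opt.step()
```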

OmniObject3D

OmniObject3D is a large-vocabulary 3D object dataset featuring a massive collection of high-quality, real-scanned 3D objects, built to facilitate the development of 3D perception, reconstruction, and generation in the real world. Give it a try!

MEGANE: Morphable Eyeglass and Avatar Network

MEGANE is a 3D compositional morphable model of eyeglasses that captures high-fidelity geometric and photometric interaction effects. To support the variation in eyeglass topology, it employs a hybrid representation that combines surface geometry with a volumetric representation.
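As a very rough sketch of how a volumetric eyeglass layer might be composited over a surface-rendered avatar, the snippet below uses generic depth-aware alpha blending over made-up per-pixel tensors; it is not the paper's renderer or representation, only an illustration of mixing a surface render with a volume render.

```python
# Generic depth-aware compositing of a volume render over a surface render.
# All tensors are placeholders standing in for per-pixel render outputs.
import torch

H, W = 256, 256
face_rgb = torch.rand(H, W, 3)           # color from the surface (mesh) avatar
face_depth = torch.full((H, W), 2.0)     # depth of the surface hit per pixel

glasses_rgb = torch.rand(H, W, 3)        # accumulated color of the volume render
glasses_alpha = torch.rand(H, W, 1)      # accumulated opacity of the volume
glasses_depth = torch.full((H, W), 1.8)  # expected termination depth of the volume

# Where the volume terminates in front of the surface, blend it over the face;
# otherwise the surface occludes the glasses.
in_front = (glasses_depth < face_depth).unsqueeze(-1).float()
alpha = glasses_alpha * in_front
composite = alpha * glasses_rgb + (1.0 - alpha) * face_rgb   # (H, W, 3) image
```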

RoDynRF: Robust Dynamic Radiance Fields

In this work, the authors address the robustness issue of dynamic radiance field reconstruction methods by jointly estimating the static and dynamic radiance fields along with the camera parameters (poses and focal length). Learn how they do it!
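The sketch below illustrates the joint-estimation idea under heavy simplifying assumptions: per-frame camera poses and a shared focal length are registered as trainable parameters and updated by the same photometric loss that trains the static and dynamic fields. The render function is a placeholder rather than a real volume renderer, and all shapes and names are hypothetical.

```python
# Minimal sketch: camera poses and focal length optimized jointly with the
# static and dynamic radiance fields. The render step is a toy placeholder.
import torch
import torch.nn as nn

num_frames = 30
static_field = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 4))   # (x,y,z) -> (rgb, sigma)
dynamic_field = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 4))  # (x,y,z,t) -> (rgb, sigma)

poses = nn.Parameter(torch.zeros(num_frames, 6))   # per-frame se(3) camera pose
focal = nn.Parameter(torch.tensor(500.0))          # shared focal length

opt = torch.optim.Adam(
    list(static_field.parameters()) + list(dynamic_field.parameters()) + [poses, focal],
    lr=5e-4,
)

def render(frame_idx: int) -> torch.Tensor:
    # Toy render: queries both fields at random points; only the translation
    # part of the pose is used here, so gradients still reach the cameras.
    pts = torch.rand(1024, 3) * focal / 500.0
    t = torch.full((1024, 1), frame_idx / num_frames)
    rgb_static = static_field(pts + poses[frame_idx, :3])[:, :3]
    rgb_dynamic = dynamic_field(torch.cat([pts, t], dim=-1))[:, :3]
    return rgb_static + rgb_dynamic

target = torch.rand(num_frames, 1024, 3)            # stand-in for observed pixels
for step in range(100):
    i = torch.randint(0, num_frames, (1,)).item()
    loss = ((render(i) - target[i]) ** 2).mean()    # photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                       # updates fields AND cameras
```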