VR in Recreating Real-World Objects

By Yuanbo Li, Spring 2023

Background

As Virtual Reality allows users to immerse themselves in a computer-generated environment that simulates the real world, it can help users visualize 3D objects in a realistic way. By wearing a VR headset, users can interact with digital objects in a way that feels natural and intuitive, giving them a deeper understanding of an object's shape, size, and spatial relationships. This is particularly useful in fields such as engineering, architecture, and design.

Recreating real-world objects in VR also has strong applications in everyday life. Instead of sharing 2D images or videos of objects, individuals can share fully rendered 3D models that can be rotated, examined, and even manipulated in VR. This allows for a more interactive and engaging sharing experience, particularly for physical objects that are difficult to convey through 2D media.


Technology: NeRF 

Neural Radiance Fields (NeRF) is a recent breakthrough in computer vision and graphics that enables the synthesis of photo-realistic 3D models from 2D images. NeRF represents the volumetric density and color of a 3D scene with a deep neural network; once trained, the scene can be rendered from any viewpoint with high fidelity.

Read more about the NeRF research paper here.
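
To make the idea concrete, here is a minimal numpy sketch of the volume-rendering step at the heart of NeRF: sample points are taken along a camera ray, the field is queried for density and color at each sample, and the results are alpha-composited into a single pixel color. The `toy_field` function is a hypothetical stand-in for the trained neural network, and the camera and scene values are made up for illustration.

```python
import numpy as np

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Numerically integrate color along one camera ray, following NeRF's
    volume-rendering quadrature. `field(points, direction)` is assumed to
    return (density, rgb) for an array of 3D sample points."""
    t = np.linspace(near, far, n_samples)              # sample depths along the ray
    points = origin + t[:, None] * direction           # 3D sample positions, shape (N, 3)
    density, rgb = field(points, direction)            # density: (N,), rgb: (N, 3)

    delta = np.full(n_samples, (far - near) / n_samples)  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)              # opacity of each segment
    trans = np.cumprod(1.0 - alpha + 1e-10)             # accumulated transmittance
    trans = np.roll(trans, 1)
    trans[0] = 1.0                                      # T_i = prod_{j<i} (1 - alpha_j)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)         # expected color seen along the ray

# Toy stand-in for the trained MLP: an orange sphere of radius 1 at the origin.
def toy_field(points, _direction):
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 10.0, 0.0)
    rgb = np.tile([1.0, 0.5, 0.2], (len(points), 1))
    return density, rgb

color = render_ray(toy_field,
                   origin=np.array([0.0, 0.0, -4.0]),
                   direction=np.array([0.0, 0.0, 1.0]))
print(color)
```

Training a NeRF amounts to fitting the field so that rays rendered this way reproduce the pixels of the input photographs.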

Implementation: NVIDIA Instant NeRF

NVIDIA Instant NeRF is a real-time, GPU-accelerated implementation of NeRF. It is designed to make it easier and faster to generate high-quality 3D models from 2D images by leveraging the power of NVIDIA's RTX GPUs. It also provides a UI with a VR mode, so reconstructed scenes can be explored in a headset.


Read more about NVIDIA Instant NeRF here.
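
Much of Instant NeRF's speedup comes from the multiresolution hash-grid encoding introduced in the Instant NGP paper, which replaces most of NeRF's large MLP with lookups into small learned feature tables. Below is a rough, illustrative numpy sketch of that encoding for a single point; the real implementation runs as fused CUDA kernels, and the table sizes and hyperparameters here are placeholders.

```python
import numpy as np

# Spatial hash from the Instant NGP paper: XOR of coordinates times large primes.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_encode(x, tables, base_res=16, growth=1.5):
    """Simplified multiresolution hash encoding of a single 3D point x in [0, 1]^3.
    `tables` is a list of (table_size, n_features) arrays, one per resolution level."""
    features = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)              # grid resolution at this level
        pos = x * res
        cell = np.floor(pos).astype(np.uint64)
        frac = pos - cell
        feat = np.zeros(table.shape[1])
        # Trilinear interpolation over the 8 corners of the enclosing cell.
        for corner in range(8):
            offset = np.array([(corner >> i) & 1 for i in range(3)], dtype=np.uint64)
            idx = np.bitwise_xor.reduce((cell + offset) * PRIMES) % table.shape[0]
            w = np.prod(np.where(offset == 1, frac, 1.0 - frac))
            feat += w * table[idx]
        features.append(feat)
    return np.concatenate(features)   # fed into a very small MLP in Instant NeRF

# 8 levels with 2^14 entries of 2 features each (learned jointly with the MLP in practice).
rng = np.random.default_rng(0)
tables = [rng.normal(scale=1e-4, size=(2**14, 2)) for _ in range(8)]
print(hash_encode(np.array([0.3, 0.7, 0.5]), tables).shape)   # (16,)
```

Because the heavy lifting moves into these table lookups, the remaining MLP can be tiny, which is what makes training and rendering fast enough for interactive and VR use.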

Technology: Photogrammetry 

Photogrammetry is the process of creating 3D models from photographs. By taking multiple photographs of an object or scene from different angles, photogrammetry software can triangulate the position of each point in 3D space and reconstruct a 3D model.


Read more about Photogrammetry here.
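
As a concrete illustration of the triangulation step, the sketch below uses OpenCV's `cv2.triangulatePoints` to recover 3D points from matched pixel coordinates in two views. The camera matrices and pixel coordinates are made-up example values; a real photogrammetry pipeline would first recover the camera poses through feature matching and bundle adjustment.

```python
import numpy as np
import cv2

# Example camera projection matrices P = K [R | t]. The values are illustrative:
# two identical cameras 0.5 units apart, both looking down the +z axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Matched pixel coordinates of the same physical points in both images (2 x N).
pts1 = np.array([[320.0, 480.0], [240.0, 320.0]])
pts2 = np.array([[220.0, 400.0], [240.0, 320.0]])

# Triangulate: recover the 3D points whose projections best match both views.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N, homogeneous
points_3d = (points_h[:3] / points_h[3]).T             # N x 3, Euclidean
print(points_3d)   # approximately [[0, 0, 4], [1, 0.5, 5]]
```

Repeating this over thousands of matched features, then fitting surfaces and textures to the resulting point cloud, is essentially what photogrammetry software does at scale.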

Implementation: Apple Object Capture 

Apple's Object Capture is a tool that allows users to generate high-quality 3D models from photos taken on their iPhone or iPad. It uses photogrammetry techniques to analyze the images and create 3D models in the form of USDZ files.

Read more about Apple Object Capture here.

Technology: Voxel Carving 

Voxel carving converts a set of 2D images of an object into a 3D volume of voxels (3D pixels), each representing a small cubic element of the space containing the object. Each voxel is then labeled as either "inside" or "outside" the object based on whether it lies within the object's boundaries.

A carving algorithm is then applied to the 3D volume to remove the voxels that lie outside the object's boundaries. This is done by iteratively removing voxels that are deemed to be outside the object, based on criteria such as silhouette consistency (a voxel that projects outside the object's silhouette in any input view cannot belong to the object) or photo-consistency across views.

Once the voxel carving process is complete, the remaining voxels form a 3D representation of the object being modeled. This representation can then be converted into a polygonal mesh (for example, with marching cubes) or another suitable format for further processing or rendering.

Read more about the voxel carving research paper here.
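
Below is a minimal numpy sketch of the silhouette-consistency carving described above, assuming binary silhouette masks and known 3x4 camera projection matrices as inputs; the function and variable names are illustrative.

```python
import numpy as np

def carve(silhouettes, projections, grid_res=64, bound=1.0):
    """Minimal shape-from-silhouette voxel carving.
    silhouettes: list of boolean H x W masks (True where the object appears).
    projections: list of 3x4 camera projection matrices, one per silhouette."""
    # Build a cube of voxel centers spanning [-bound, bound]^3 (homogeneous coords).
    axis = np.linspace(-bound, bound, grid_res)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    centers = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(len(centers), dtype=bool)         # start with every voxel "inside"
    for mask, P in zip(silhouettes, projections):
        uvw = centers @ P.T                               # project voxel centers into this view
        in_front = uvw[:, 2] > 0
        u = np.zeros(len(centers), dtype=int)
        v = np.zeros(len(centers), dtype=int)
        u[in_front] = np.round(uvw[in_front, 0] / uvw[in_front, 2]).astype(int)
        v[in_front] = np.round(uvw[in_front, 1] / uvw[in_front, 2]).astype(int)
        in_image = in_front & (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hits = np.zeros(len(centers), dtype=bool)
        hits[in_image] = mask[v[in_image], u[in_image]]
        occupied &= hits                                  # carve voxels outside any silhouette
    return occupied.reshape(grid_res, grid_res, grid_res)
```

The boolean grid returned here is the 3D representation described above; it could then be meshed with, for example, skimage.measure.marching_cubes before rendering.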

Real World Applications

Google Maps

Google Maps' 3D imagery is a great example of recreating real-world 3D data. Google collects geographic data from a variety of sources, including satellite and aerial imagery, street-level photography, and data from local governments, and uses 3D reconstruction algorithms to stitch the 2D images together into 3D.

Google also plans to use NeRF to make its maps even more immersive. Here is a discussion of how NeRF might help.

Find out whether you should use NVIDIA Instant NeRF or Apple Object Capture here!


Related Papers:

Original NeRF paper: introduces the idea of NeRF

NVIDIA Instant NeRF: how to accelerate NeRF training

Photogrammetry: introduces the idea and applications of photogrammetry