This AI Creates Detailed 3D Renderings from Thousands of Tourist Photos

A team of researchers at Google has developed a technique that can combine thousands of tourist photos into detailed 3D renderings that take you inside a scene… even if the original photos vary wildly in lighting or include problematic elements like people and cars.

The tech is called “NeRF in the Wild” or “NeRF-W” because it takes Google Brain’s Neural Radiance Fields (NeRF) technology and applies it to “unstructured and uncontrolled photo collections” like the thousands of tourist photos used to create the demo you see below, and the samples in the video above.

At its core, it's an advanced, neural-network-driven form of view interpolation that captures the geometry of the scene while removing "transient occluders" like people and cars and smoothing out changes in lighting.

“While [standard] NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders,” reads the full research paper. “In this work, we introduce a series of extensions to NeRF to address these issues, thereby allowing for accurate reconstructions from unstructured image collections taken from the internet.”
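For readers curious what those extensions look like in practice, below is a minimal, illustrative sketch in PyTorch. It is not the authors' code; the module names, embedding sizes, and layer widths are assumptions. It only shows the high-level idea the paper describes: a shared static scene model, a learned per-image "appearance" code for lighting differences, and a per-image "transient" head with an uncertainty output for occluders.

```python
import torch
import torch.nn as nn


class TinyNeRFWStyleField(nn.Module):
    """Toy NeRF-W-style field (illustrative only, not the paper's implementation).

    A shared MLP models the permanent scene; per-image embeddings let each
    photo explain away its own lighting (appearance) and occluders (transient).
    """

    def __init__(self, num_images, pos_dim=63, appear_dim=16, transient_dim=16, hidden=128):
        super().__init__()
        # Learned per-photo codes: appearance ~ exposure/illumination,
        # transient ~ image-specific clutter such as people or cars.
        self.appearance = nn.Embedding(num_images, appear_dim)
        self.transient = nn.Embedding(num_images, transient_dim)

        # Shared trunk over the positionally encoded 3D sample location.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.static_sigma = nn.Linear(hidden, 1)            # static density
        self.static_rgb = nn.Sequential(                    # color conditioned on appearance
            nn.Linear(hidden + appear_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        # Transient head: extra density/color plus an uncertainty value that
        # down-weights occluded pixels during training.
        self.transient_head = nn.Sequential(
            nn.Linear(hidden + transient_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 5),  # [sigma_t, r, g, b, beta]
        )

    def forward(self, encoded_xyz, image_ids):
        h = self.trunk(encoded_xyz)
        sigma_static = torch.relu(self.static_sigma(h))
        rgb_static = self.static_rgb(torch.cat([h, self.appearance(image_ids)], dim=-1))

        t = self.transient_head(torch.cat([h, self.transient(image_ids)], dim=-1))
        sigma_transient = torch.relu(t[..., :1])
        rgb_transient = torch.sigmoid(t[..., 1:4])
        beta = nn.functional.softplus(t[..., 4:5])          # per-sample uncertainty
        return sigma_static, rgb_static, sigma_transient, rgb_transient, beta
```

At render time, only the static density and color would be used, which is how the transient clutter disappears from the final 3D scene; the exact losses and architecture are in the paper.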

The result is pretty mind-blowing when you consider how it was created and how many distracting elements had to be removed or smoothed out to make it happen. As the technology develops further, it could revolutionize how 3D renderings are created by allowing for far more variation in the source imagery than was previously possible.

To learn more about this technology, watch the introductory video up top, visit the NeRF-W project page on GitHub, or download the full research paper.
