Pre-NeRF 360:
Enriching Unbounded Appearances for Neural Radiance Fields

* These authors contributed equally to this work.

Rendered images and depths from our model.

Abstract

Neural radiance fields (NeRF) have recently emerged as a powerful tool for generating realistic views of objects and confined areas. Nonetheless, they face serious challenges in unbounded scenes, where the camera can move freely and content may appear at any distance. In such scenarios, current NeRF-like models often produce blurry or pixelated outputs, train slowly, and may exhibit inconsistencies, owing to the difficulty of reconstructing an extensive scene from a limited number of images. We propose a new framework that boosts the performance of NeRF-based architectures, yielding significantly better results than prior work. Our solution overcomes several obstacles that plagued earlier NeRF variants, including handling multiple video inputs, selecting keyframes, and extracting poses from real-world frames that are ambiguous or symmetrical. Furthermore, we apply our framework, dubbed "Pre-NeRF 360", to enable the use of the Nutrition5k dataset with NeRF, and we introduce an updated version of this dataset, the N5k360 dataset.
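The abstract lists keyframe selection as one of the obstacles the framework addresses, but does not specify the selection criterion. As an illustrative sketch only (not the paper's actual method), a common baseline is to subsample the video temporally and keep the frames with the highest variance-of-Laplacian sharpness, which discards motion-blurred frames before pose estimation; the function names and parameters below are hypothetical:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means sharper (less blur)."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return float(lap.var())

def select_keyframes(frames, stride=5, top_k=100):
    """Subsample every `stride`-th frame, then keep the `top_k` sharpest.

    `frames` is a list of grayscale images as 2-D float arrays.
    """
    candidates = frames[::stride]
    ranked = sorted(candidates, key=sharpness, reverse=True)
    return ranked[:top_k]
```

Such a filter is typically run before structure-from-motion, since blurred or near-duplicate frames are a common cause of the ambiguous pose estimates the abstract mentions.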

Proposed Method

Diagram of our proposed framework, outlining the entire workflow from a set of input videos to any NeRF-like application. The four input videos shown in the diagram are taken from the Nutrition5k dataset.

Rendering and Depth

Dish (50710793)

Dish (50712459)

Dish (50777256)

Rendered images and depths from our model.

Citation

Acknowledgements

The website template was borrowed from Jon Barron's public academic website.