We present a novel framework to regularize Neural Radiance Fields (NeRF) in a few-shot setting with geometry-aware consistency regularization. The proposed approach leverages a depth map rendered at an unobserved viewpoint to warp sparse input images to that viewpoint, and imposes the warped images as pseudo ground truths to facilitate the learning of NeRF. By encouraging such geometry-aware consistency at a feature level instead of using a pixel-level reconstruction loss, we regularize NeRF at the semantic and structural levels while still allowing it to model view-dependent radiance, accounting for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies to stabilize optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
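To make the mechanism concrete, below is a minimal sketch of the depth-guided warping and the masked feature-level consistency loss, written in PyTorch. The tensor shapes, the choice of frozen VGG-16 features, and all function and variable names are illustrative assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import torchvision

def warp_source_to_unobserved(src_img, depth, K, T_src_from_unobs):
    """Inverse-warp an observed source image into an unobserved viewpoint.

    src_img:          (1, 3, H, W) observed input image
    depth:            (1, 1, H, W) depth rendered by NeRF at the unobserved view
    K:                (3, 3) camera intrinsics (assumed shared across views)
    T_src_from_unobs: (4, 4) rigid transform from unobserved to source camera
    """
    _, _, H, W = depth.shape
    device = depth.device

    # Pixel grid of the unobserved view in homogeneous coordinates.
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([x, y, torch.ones_like(x)]).reshape(3, -1)   # (3, H*W)

    # Back-project pixels to 3D points using the rendered depth.
    pts = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)         # (3, H*W)

    # Move the points into the source camera frame and project them.
    pts = torch.cat([pts, torch.ones(1, H * W, device=device)])    # (4, H*W)
    pts = (T_src_from_unobs @ pts)[:3]
    proj = K @ pts
    z = proj[2].clamp(min=1e-6)

    # Normalize projected coordinates to [-1, 1] and sample the source image.
    gx = 2.0 * (proj[0] / z) / (W - 1) - 1.0
    gy = 2.0 * (proj[1] / z) / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True, padding_mode="zeros")

# Frozen ImageNet VGG-16 features stand in for the feature extractor
# (an assumption of this sketch; any pretrained backbone would do).
vgg = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def feature_consistency_loss(rendered, warped, mask):
    """L1 distance between deep features of the rendered and warped images,
    with a mask suppressing occluded or invalid warped regions.
    (ImageNet input normalization omitted for brevity.)"""
    f_r, f_w = vgg(rendered), vgg(warped)
    m = F.interpolate(mask, size=f_r.shape[-2:], mode="nearest")
    return (m * (f_r - f_w).abs()).mean()
```

Because the loss is computed on deep features rather than raw pixels, small view-dependent color shifts between the warped and rendered images are not penalized, while semantic and structural disagreements are.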
Qualitative comparisons on the NeRF-Synthetic dataset demonstrate that our model outperforms both the baseline mip-NeRF and the previous state-of-the-art model, RegNeRF, in the 3-view setting. We observe that our warping-based consistency enables GeCoNeRF to capture fine details that mip-NeRF and RegNeRF struggle to recover under the same sparse-view conditions.
Qualitative comparison on the LLFF dataset against the baseline mip-NeRF shows that our model learns coherent depth and geometry in an extremely sparse 3-view setting.
We validate the design choices of our model with an ablation study. Without the consistency modeling loss, our model suffers a sharp decrease in reconstruction fidelity, both quantitatively and qualitatively. We also validate the inclusion of our occlusion mask, progressive modeling method, and disparity regularization loss; a sketch of one possible realization of the mask follows.
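The occlusion mask can be realized in several ways; one common choice, sketched below under the same assumptions as the earlier snippet (and not necessarily the authors' exact formulation), is a depth-reprojection consistency check: a warped pixel is kept only if the depth rendered at the unobserved view, carried into the source view, agrees with the depth rendered there.

```python
import torch
import torch.nn.functional as F

def occlusion_mask(depth_unobs, depth_src, K, T_src_from_unobs, rel_thresh=0.05):
    """Depth-reprojection consistency mask: 1 = keep pixel, 0 = likely occluded.

    depth_unobs: (1, 1, H, W) depth rendered at the unobserved viewpoint
    depth_src:   (1, 1, H, W) depth rendered at the source viewpoint
    rel_thresh:  relative depth-error tolerance (a hypothetical default)
    """
    _, _, H, W = depth_unobs.shape
    device = depth_unobs.device
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([x, y, torch.ones_like(x)]).reshape(3, -1)

    # Lift unobserved-view pixels to 3D and move them into the source frame.
    pts = torch.linalg.inv(K) @ pix * depth_unobs.reshape(1, -1)
    pts = torch.cat([pts, torch.ones(1, H * W, device=device)])
    pts = (T_src_from_unobs @ pts)[:3]

    # Depth each point *should* have in the source view...
    expected = pts[2].reshape(1, 1, H, W)

    # ...versus the depth NeRF actually renders at the reprojected pixel.
    proj = K @ pts
    z = proj[2].clamp(min=1e-6)
    gx = 2.0 * (proj[0] / z) / (W - 1) - 1.0
    gy = 2.0 * (proj[1] / z) / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2)
    observed = F.grid_sample(depth_src, grid, align_corners=True)

    err = (expected - observed).abs() / expected.clamp(min=1e-6)
    return (err < rel_thresh).float()
```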
M. Kwak, J. Song, and S. Kim, "GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency," arXiv preprint.