Neural Radiance Fields for View Synthesis and Beyond
Neural Radiance Fields (NeRFs) have emerged as a new paradigm: not only for their original goal of view synthesis from input images, but as a general volumetric representation of object geometry with applications across computer vision, graphics, robotics, and beyond. This talk will first trace our journey toward the right representation for deep learning-based view synthesis, culminating in the original NeRF paper. It will then discuss exciting practical extensions, including nearly instant training and rendering, text-based editing, and real-time radiance fields from portraits, as well as practical adoption in the metaverse, for digital twins, and in streetview.
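As background for the talk, the volumetric representation at the heart of NeRF can be sketched in a few lines: a network predicts a density and color at sampled points along each camera ray, and these are alpha-composited into a pixel color. The sketch below is illustrative only, assuming precomputed per-sample `densities`, `colors`, and inter-sample spacings `deltas` that would normally come from the NeRF MLP.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite samples along one ray into a pixel color (NeRF-style).

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB color emitted at each sample
    deltas:    (N,) distances between adjacent samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    # Compositing weights, then the expected color along the ray
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example (hypothetical values): a dense red sample early on the ray
# occludes the green sample behind it, so the pixel comes out red.
densities = np.array([0.0, 10.0, 10.0])
colors = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
deltas = np.ones(3)
pixel = volume_render(densities, colors, deltas)
```

The key property this illustrates is differentiability: every step is a smooth function of the densities and colors, which is what lets the representation be trained from input images alone.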