Shading-based Refinement on Volumetric Signed Distance Functions


Published 2015

We present a novel method to obtain fine-scale detail in 3D reconstructions generated with RGB-D cameras or other commodity scanning devices. As the depth data of these sensors is noisy, truncated signed distance fields are typically used to regularize out this noise in the reconstructions, which unfortunately over-smooths results. In our approach, we leverage RGB data to refine these reconstructions through inverse shading cues, as color input is typically of much higher resolution than the depth data. As a result, we obtain reconstructions with high geometric detail — far beyond the depth resolution of the camera itself — as well as highly accurate surface albedo, at high computational efficiency. Our core contribution is shading-based refinement directly on the implicit surface representation, which is generated from globally aligned RGB-D images. We formulate the inverse shading problem on the volumetric distance field, and present a novel objective function which jointly optimizes for fine-scale surface geometry and spatially varying surface reflectance. In addition, we solve for incident illumination, allowing application in general and unconstrained environments. To enable the efficient reconstruction of sub-millimeter detail, we store and process our surface using a sparse voxel hashing scheme, which we augment with a grid hierarchy. A tailored GPU-based Gauss-Newton solver enables us to refine large shape models to previously unseen resolution within only a few seconds. Non-linear shape optimization directly on the implicit shape model allows for highly efficient parallelization and enables much higher reconstruction detail. Our method is versatile and can be combined with a wide range of scanning approaches based on implicit surfaces.
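The truncated signed distance field that the refinement operates on is built by fusing noisy depth observations into sparsely allocated voxels. A minimal CPU sketch of this standard TSDF-fusion step is below; the constants `TRUNC` and `VOXEL_SIZE`, the dictionary-based hash table, and the unit observation weight are illustrative assumptions, not the paper's actual GPU voxel-hashing implementation:

```python
# Minimal sketch of sparse TSDF fusion. A Python dict stands in for
# the GPU hash table that maps integer voxel coordinates to data;
# only voxels inside the truncation band are ever allocated.

TRUNC = 0.05        # truncation distance in meters (assumed value)
VOXEL_SIZE = 0.005  # 5 mm voxel size (assumed value)

voxels = {}  # (i, j, k) -> (tsdf in [-1, 1], accumulated weight)

def integrate(point, sdf):
    """Fuse one signed-distance observation at a 3D point using the
    standard running weighted average; observations outside the
    truncation band are discarded, which keeps the grid sparse."""
    if abs(sdf) > TRUNC:
        return
    key = tuple(int(round(c / VOXEL_SIZE)) for c in point)
    d = max(-1.0, min(1.0, sdf / TRUNC))  # normalize to [-1, 1]
    old_d, old_w = voxels.get(key, (0.0, 0.0))
    w = 1.0  # per-observation weight (assumed constant here)
    voxels[key] = ((old_d * old_w + d * w) / (old_w + w), old_w + w)

# Two noisy observations of the same surface point average out:
integrate((0.10, 0.20, 0.50), 0.010)
integrate((0.10, 0.20, 0.50), 0.020)
```

The regularizing effect the abstract mentions comes from exactly this averaging: each voxel's distance value is a weighted mean over many noisy depth samples, which suppresses sensor noise but also smooths away fine detail, motivating the shading-based refinement.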


ACM Transactions on Graphics (TOG) 34
No. 4, 2015, pp. 96:1–96:14