Shading through Defocus

old_uid: 4904
title: Shading through Defocus
start_date: 2008/05/23
schedule: 14h30
online: no
summary: Much of my previous work in computer vision has dealt with proposing alternative approaches to shape from shading (SFS), by extending processes traditionally based on multiple images to single-input estimation. My collaborators and I have already proposed mapping the multiple-image photometric-stereo and photometric-motion processes into SFS, through the use of Green's functions of matching equations. Lately, we have been trying to relate shape from shading and shape from defocus. Traditional approaches to shape from defocus, starting with Pentland's, have modeled the defocusing process through a normalized point-spread function (PSF). We have found that, in the general case, the appropriate PSF carries a dependence on the depth map of the imaged scene, which would preclude shape estimation. If the camera is focused at far distances, however, this dependence can be neglected, and an unnormalized PSF can be employed. Based on this, we have reformulated Pentland's shape-from-defocus approach using unnormalized Gaussians, and we have proven that such a model will in principle allow the estimation of a dense depth map from a single input image (a rough sketch of this model appears after the record below). Moreover, we have shown that using unnormalized Gabor functions as PSFs allows any signal to be approximated as the outcome of a series of frequency-dependent defocusing processes. This approximation proves suitable for shading images, and, based on it, we have been able to obtain good shape-from-shading estimates essentially through a shape-from-defocus approach, without resorting to the reflectance-map concept.
oncancel: Nouveau
responsibles: Mothe, Lemarié, Debats
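
As a rough illustration of the unnormalized-Gaussian defocus model mentioned in the abstract, the sketch below renders a defocused image from a texture and a depth map, layer by layer. It is not the speaker's implementation: the thin-lens blur law sigma(Z) = k|1/Z - 1/Z_f|, the constant k, and the layered compositing are assumptions made here for illustration; only the use of an unnormalized Gaussian PSF under a far-focus assumption comes from the abstract.

```python
# Minimal sketch of forward defocusing with an unnormalized Gaussian PSF.
# Hypothetical blur law (not from the talk): sigma(Z) = k * |1/Z - 1/Z_f|,
# with the camera focused at a far distance Z_f (far focus: 1/Z_f -> 0).
import numpy as np
from scipy.ndimage import gaussian_filter

def unnormalized_gaussian_blur(img, sigma):
    """Convolve img with exp(-(x^2 + y^2) / (2 sigma^2)), a Gaussian PSF
    whose integral (2*pi*sigma^2) is NOT normalized to one."""
    if sigma < 1e-8:
        return img.copy()
    # gaussian_filter uses a unit-area kernel; rescaling by the area of
    # the unnormalized Gaussian undoes that normalization.
    return (2.0 * np.pi * sigma**2) * gaussian_filter(img, sigma)

def defocus(texture, depth, k=200.0, z_focus=np.inf):
    """Render a defocused image by blurring each depth layer with its own
    PSF and summing. With z_focus -> inf, sigma varies only through 1/Z,
    the far-focus regime in which the unnormalized PSF is claimed to apply."""
    out = np.zeros_like(texture, dtype=float)
    inv_zf = 0.0 if np.isinf(z_focus) else 1.0 / z_focus
    for z in np.unique(depth):
        sigma = k * abs(1.0 / z - inv_zf)
        mask = (depth == z).astype(float)
        out += unnormalized_gaussian_blur(texture * mask, sigma)
    return out

# Example: a frontoparallel depth step under far focus; the nearer
# half-plane receives the broader (and, unnormalized, brighter) PSF.
rng = np.random.default_rng(0)
tex = rng.random((64, 64))
depth = np.full((64, 64), 100.0)
depth[:, 32:] = 50.0
img = defocus(tex, depth)  # single defocused observation
```

Because the PSF is unnormalized, its integral grows with sigma, so blur changes local brightness as well as sharpness; it is this extra, depth-dependent signal that a single-image inversion in the spirit of Pentland's frequency-domain analysis could exploit.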