Joint Rendering and Segmentation of Free-viewpoint Video

Masato ISHII    Keita TAKAHASHI    Takeshi NAEMURA

Abstract

This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video, using multi-view video as its input. The method works efficiently and online by sharing a calculation process between the synthesis and segmentation steps. The matching costs calculated during synthesis are adaptively fused with other cues, depending on their reliability, in the segmentation step. Since segmentation is performed directly for arbitrary viewpoints, the extracted object can be superimposed onto another 3-D scene with geometric consistency. The object and its new background move naturally with viewpoint changes, as if they existed together in the same space. Experimental results using a 25-camera array demonstrate the effectiveness of our method.
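The following is a minimal sketch, not the authors' implementation, of the cost-sharing idea described in the abstract: per-pixel matching (photo-consistency) costs computed during view synthesis are reused as a segmentation cue and fused with a second cue (e.g., a color model), weighted by a per-pixel reliability. All function names, the specific fusion rule, and the reliability weighting are illustrative assumptions.

```python
import numpy as np

def fuse_cues(matching_cost_fg, matching_cost_bg,
              color_cost_fg, color_cost_bg, reliability):
    """Return a foreground mask for the rendered (virtual) view.

    matching_cost_*: photo-consistency costs per pixel, reused from the
                     rendering step (hypothetical inputs for this sketch)
    color_cost_*:    costs from a foreground/background color model
    reliability:     per-pixel weight in [0, 1] for the matching cue
    """
    # Blend the two cues per pixel; where the matching cost is unreliable,
    # the color cue dominates, and vice versa.
    cost_fg = reliability * matching_cost_fg + (1.0 - reliability) * color_cost_fg
    cost_bg = reliability * matching_cost_bg + (1.0 - reliability) * color_cost_bg
    # Label a pixel as foreground if its fused foreground cost is lower.
    return cost_fg < cost_bg

# Usage on random data, just to show the shapes involved.
h, w = 4, 4
rng = np.random.default_rng(0)
mask = fuse_cues(rng.random((h, w)), rng.random((h, w)),
                 rng.random((h, w)), rng.random((h, w)),
                 reliability=rng.random((h, w)))
print(mask)
```

In practice the fused costs would feed a labeling step (e.g., a per-pixel decision or a graph-based optimization) at the virtual viewpoint; this sketch only illustrates how one shared quantity can serve both rendering and segmentation.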

Videos (MPEG1)

Publications