Figure: Our camera array system. Output: synthesized images and depth maps, computed at various viewpoints from the same input frames.
We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras connected to a single PC through Gigabit Ethernet. To render a high-quality novel view, we estimate a view-dependent per-pixel depth map in real time using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to use the CPU and GPU independently and in parallel. With QVGA input video, our system renders free-viewpoint video at up to 30 fps, depending on the output resolution and the number of depth layers. Experimental results show high-quality images synthesized from a variety of scenes.
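As a rough illustration of the layered, view-dependent depth estimation described above (not the paper's actual GPU implementation), the sketch below assumes the camera images have already been warped to the virtual viewpoint for each candidate depth layer; it then selects, per pixel, the layer with the best photo-consistency, measured here as color variance across cameras. The function name and the variance-based cost are illustrative assumptions.

```python
import numpy as np

def layered_depth_map(warped):
    """Per-pixel depth layer selection from pre-warped images.

    warped: array of shape (n_layers, n_cams, H, W) holding the input
    images warped to the virtual view for each candidate depth layer.
    Returns an (H, W) map of winning layer indices.
    """
    # Photo-consistency cost: variance across cameras at each layer.
    # A pixel whose true depth matches the layer sees consistent
    # colors from all cameras, hence low variance.
    cost = warped.var(axis=1)        # (n_layers, H, W)
    return cost.argmin(axis=0)       # per-pixel best layer
```

In the real system this cost evaluation and the final view synthesis run on the GPU, with one warp per camera per layer.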
TransCAIP: Live Transmission of Light Field from a Camera Array to an Integral Photography Display
Videos rendered at a fixed viewpoint show the effect of the temporal depth smoothing most clearly, while videos rendered from a moving viewpoint confirm that the smoothing remains effective even as the rendering viewpoint moves smoothly.
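The exact form of the temporal depth smoothing is not given in this excerpt; one common choice for suppressing frame-to-frame depth flicker is a per-pixel exponential moving average, sketched below. The function name and the blend weight `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smooth_depth(prev_depth, cur_depth, alpha=0.8):
    """Blend the previous frame's depth map into the current estimate.

    alpha close to 1 favors the history, suppressing temporal depth
    flicker at the cost of slower response to scene motion.
    (alpha is an assumed parameter for illustration.)
    """
    return alpha * prev_depth + (1.0 - alpha) * cur_depth
```

Because the smoothing operates in the depth domain rather than on rendered pixels, it can be applied even while the rendering viewpoint moves.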