Image-based methods for novel view synthesis offer an effective way of producing realistic images. However, unconstrained viewing is not possible unless a large number of reference images or explicit geometry information is provided. Computer vision research attempts to reconstruct explicit geometry models of real-world objects automatically from images; however, the occlusion problem and the correspondence-matching problem limit its application to modeling a single approximate object rather than a complete environment. Image-based rendering (IBR) research focuses on the rendering process and bypasses the difficult modeling process, but the tradeoff is that it either assumes some geometry information about the object is available, requires a very high spatial sampling rate, or supports only constrained viewpoints.
In this project, we are developing image-based techniques to support walkthroughs of scalable environments. To support unconstrained navigation while reducing the sampling rate, we treat the modeling and rendering processes as a whole. Our method is based on two novel techniques:
Recently, we have been extending the above method to support walkthroughs of scalable environments based on panoramic images. A walkthrough environment is first triangulated, and a panoramic image is prepared for each node of the triangulation. The panoramic image nearest to the viewer, plus the six surrounding panoramic images, is used to construct arbitrary views according to the viewer's location and viewing orientation. To demonstrate the performance of this method, we have created an environment with 15 panoramic images. The following shows a group of seven images, and the videos show a walkthrough of the environment.
|Output video demo 1 (14MB)|
|Output video demo 2 (1.3MB)|
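The panorama-selection step described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: the node positions, the adjacency list representing the triangulation, and the function names are all hypothetical, and the rendering itself (warping the seven panoramas into a novel view) is omitted.

```python
import math

# Hypothetical layout: panorama capture positions at triangulation nodes.
# Node "A" is interior, so its six neighbours form a hexagon around it.
nodes = {
    "A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.5, 0.866),
    "D": (-0.5, 0.866), "E": (-1.0, 0.0), "F": (-0.5, -0.866),
    "G": (0.5, -0.866),
}
# Adjacency of the triangulation (only node "A" shown for brevity).
adjacency = {"A": ["B", "C", "D", "E", "F", "G"]}

def nearest_node(nodes, viewer_pos):
    """Return the node whose panorama centre is closest to the viewer."""
    return min(nodes, key=lambda n: math.dist(nodes[n], viewer_pos))

def panoramas_for_view(nodes, adjacency, viewer_pos):
    """Select the nearest panorama plus its six surrounding panoramas."""
    centre = nearest_node(nodes, viewer_pos)
    return [centre] + adjacency.get(centre, [])

# A viewer standing near the origin draws on all seven panoramas.
print(panoramas_for_view(nodes, adjacency, (0.1, 0.1)))
```

For a viewer near node "A", the call returns that node and its six neighbours, matching the group of seven images used to construct each arbitrary view.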