View Synthesis for Virtual Walk through in Real Scene Based on Catadioptric Omnidirectional Images

2009 ◽  
Vol 8 (4) ◽  
pp. 87-92
Author(s):  
Wang Chen ◽  
Wei Xu ◽  
Zhihui Xiong ◽  
Maojun Zhang

Virtual walk-through is widely applicable in fields such as virtual environment construction, historical heritage conservation and scenic site exhibition. This paper proposes a more convenient and efficient approach for creating a realistic virtual walk-through of a real scene from catadioptric omni-directional images via view synthesis. Our contribution lies in three aspects: omni-directional image preprocessing, image rectification and novel view interpolation. Acquisition and unwarping of omni-directional images are discussed first. Then, owing to the special geometry of cylindrical panoramic imaging, an epiline-sampling method is adopted for rectification: grounded in epipolar geometry, it samples the reference images along epipolar lines as far as possible. In this way, the rectified images are spared the deformation and resolution degeneration that typically arise from the perspective transformation used in other algorithms. For novel view generation, a corresponding interpolation algorithm is developed in which the pixels of the novel view are formulated according to the cylindrical panoramic imaging model. Experiments on both synthetic and real scenes are presented at the end of this paper, together with a demonstration of the method's application to realistic virtual walk-through.
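The unwarping step can be made concrete with a short sketch: each column of the cylindrical panorama corresponds to a viewing angle around the mirror axis, and each row to a radius in the omnidirectional image, so the panorama is obtained by resampling the source image along concentric circles. The following Python/OpenCV sketch illustrates this mapping under assumed calibration values (mirror center, inner and outer radii); it is not the paper's implementation.

```python
import numpy as np
import cv2

def unwarp_omni_to_cylinder(omni, center, r_min, r_max, out_w=1024, out_h=256):
    """Unwarp a catadioptric omnidirectional image into a cylindrical panorama
    by resampling along concentric circles around the mirror center.
    center, r_min and r_max come from a prior mirror calibration (assumed known)."""
    cx, cy = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)  # column -> angle
    radius = np.linspace(r_max, r_min, out_h)       # row -> radius (top = outer rim)
    tt, rr = np.meshgrid(theta, radius)             # one (angle, radius) per pixel
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

# Usage with illustrative calibration values (not from the paper):
# omni = cv2.imread("omni.jpg")
# pano = unwarp_omni_to_cylinder(omni, center=(512, 512), r_min=120, r_max=480)
```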


Author(s):  
Hong-Chang Shin ◽  
Gwangsoon Lee ◽  
Ho-min Eum ◽  
Jeong-Il Seo

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has developed rapidly thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems accurately and at acceptable computational cost. Omnidirectional vision systems have emerged as a robust choice thanks to the large quantity of information they extract from the environment. The images must be processed to obtain the relevant information that permits robustly solving the mapping and localization problems. The classical frameworks for this purpose are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit in-depth study to assess their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, in terms of both accuracy and computational cost. Sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
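As a concrete illustration of the global-appearance approach, the sketch below describes each image with a single holistic vector and localizes a query by nearest-neighbor search over a stored map. HOG is used here merely as a stand-in for the six description techniques evaluated in the paper, and the names map_descriptors and map_poses are hypothetical.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def global_descriptor(image):
    """Describe the whole panorama with one holistic vector. HOG stands in
    here for the global-appearance methods compared in the paper."""
    gray = rgb2gray(image)
    return hog(gray, orientations=8, pixels_per_cell=(32, 32),
               cells_per_block=(1, 1), feature_vector=True)

def localize(query_image, map_descriptors, map_poses):
    """Return the stored pose whose descriptor is nearest to the query's.
    map_descriptors: (N, D) array of map descriptors (hypothetical name);
    map_poses: list of N poses, e.g. (x, y, theta), captured with the map."""
    q = global_descriptor(query_image)
    dists = np.linalg.norm(map_descriptors - q, axis=1)
    best = int(np.argmin(dists))
    return map_poses[best], float(dists[best])
```

Since each image collapses to one vector, localization reduces to distance computations against the map, which is what makes these methods conceptually simpler than local-feature pipelines.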


2021 ◽  
Vol 11 (5) ◽  
pp. 2174
Author(s):  
Xiaoguang Li ◽  
Feifan Yang ◽  
Jianglu Huang ◽  
Li Zhuo

Images captured in a real scene usually suffer from complex non-uniform degradation that includes both global and local blur. Such complex blur variation is difficult to handle with a single unified processing model. We propose a global-local blur disentangling network that extracts global and local blur features effectively via two branches. A phased training scheme is designed to disentangle the global and local blur features; that is, the two branches are trained on task-specific datasets, respectively. A branch attention mechanism is introduced to fuse the global and local features dynamically, and complex blurry images are used to train the attention module and the reconstruction module. The visualized feature maps of the two branches indicate that our dual-branch network decouples the global and local blur features efficiently. Experimental results show that the proposed dual-branch blur disentangling network improves both the subjective and objective deblurring quality for real captured images.
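A minimal PyTorch sketch of the dual-branch idea is given below: two encoders extract global- and local-blur features, a small attention gate predicts per-branch weights, and a decoder reconstructs a residual correction. The layer sizes and the exact gating are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BranchAttentionFusion(nn.Module):
    """Predict per-branch weights from the concatenated features and fuse
    the global and local branches dynamically (a sketch of the idea)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # squeeze to (B, 2C, 1, 1)
            nn.Conv2d(2 * channels, 2, kernel_size=1),
            nn.Softmax(dim=1),                       # one weight per branch
        )

    def forward(self, f_global, f_local):
        w = self.gate(torch.cat([f_global, f_local], dim=1))  # (B, 2, 1, 1)
        return w[:, :1] * f_global + w[:, 1:] * f_local

class GlobalLocalDeblurNet(nn.Module):
    """Two feature branches, trained in phases on task-specific data
    (global blur vs. local blur), then fused and decoded jointly."""
    def __init__(self, channels=32):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.global_branch = branch()   # phase 1: globally blurred images
        self.local_branch = branch()    # phase 1: locally blurred images
        self.fuse = BranchAttentionFusion(channels)  # phase 2: complex blur
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        fused = self.fuse(self.global_branch(x), self.local_branch(x))
        return x + self.decoder(fused)  # residual deblurring

# x = torch.randn(1, 3, 128, 128); y = GlobalLocalDeblurNet()(x)
```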


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4719
Author(s):  
Huei-Yung Lin ◽  
Yuan-Chi Chung ◽  
Ming-Liang Wang

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived through catadioptric camera calibration. The geometric properties of the camera projection model are used to find the intersections of vertical lines in the scene with the ground plane. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated from consecutive image frames. The motion trajectory is then derived by computing the rotation and translation between robot positions. We also develop an algorithm for feature correspondence matching based on the invariance of the structure in 3D space. Experimental results obtained on real scene images demonstrate the feasibility of the proposed method for mobile robot localization applications.
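The depth-from-known-plane step can be sketched as follows: a pixel is back-projected onto the unit sphere of the unified catadioptric model, and the resulting viewing ray is intersected with the ground plane at known camera height. The Python sketch below uses the standard unified-model lifting; the axis convention and parameter names are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def lift_to_sphere(u, v, K_inv, xi):
    """Back-project pixel (u, v) onto the unit sphere of the unified
    catadioptric model; K_inv and the mirror parameter xi come from the
    camera calibration (assumed known)."""
    mx, my, _ = K_inv @ np.array([u, v, 1.0])
    r2 = mx * mx + my * my
    lam = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([lam * mx, lam * my, lam - xi])  # unit-norm ray direction

def ground_point(u, v, K_inv, xi, cam_height):
    """Intersect the viewing ray of pixel (u, v) with the ground plane,
    taken as z = cam_height in the camera frame (an illustrative convention,
    not the paper's). Returns the 3D base point, or None if the ray misses."""
    d = lift_to_sphere(u, v, K_inv, xi)
    if d[2] <= 1e-9:
        return None            # ray does not point toward the ground
    t = cam_height / d[2]      # the known plane replaces stereo triangulation
    return t * d

# Usage with illustrative calibration values:
# K_inv = np.linalg.inv(K)
# p = ground_point(420, 610, K_inv, xi=0.9, cam_height=0.5)
```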


2021 ◽  
Vol 1852 (2) ◽  
pp. 022080
Author(s):  
Rui Liu ◽  
Liwu Yao ◽  
Lei Yan ◽  
Heping Li ◽  
Xiaolong Liu ◽  
...  