Local depth image enhancement scheme for view synthesis

Author(s):  
Yongzhe Wang ◽  
Dong Tian ◽  
Anthony Vetro

Nutrients ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 2005 ◽  
Author(s):  
Frank Lo ◽  
Yingnan Sun ◽  
Jianing Qiu ◽  
Benny Lo

An objective dietary assessment system can help users to understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research studies mostly focused on using model-based or stereo-based approaches which rely on manual intervention or require users to capture multiple frames from different viewing angles which can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database with depth images of food object items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated by the synthesized 3D point cloud with the ground truth volume of the object items.
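The fusion step above registers the synthesized points against the initial points with Iterative Closest Point (ICP). A minimal sketch of point-set ICP — nearest-neighbour correspondences plus a Kabsch (SVD) rigid fit, not the authors' implementation and without their point cloud completion stage — could look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=30):
    """Align point cloud src to dst by alternating correspondence and Kabsch fit."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t               # apply the incremental rigid motion
    return cur
```

As in any ICP variant, convergence assumes the two clouds start roughly aligned, which holds here because both views describe the same object.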


Author(s):  
MICHAEL SCHMEING ◽  
XIAOYI JIANG

In this paper, we address the disocclusion problem that occurs during view synthesis in depth image-based rendering (DIBR). We propose a method that can recover faithful texture information for disoccluded areas. In contrast to common disocclusion filling methods, which usually work frame-by-frame, our algorithm can take information from temporally neighboring frames into account. This way, we are able to reconstruct a faithful filling for the disocclusion regions and not just an approximate or plausible one. Our method avoids artifacts that occur with common approaches and can additionally reduce compression artifacts at object boundaries.
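The core idea — filling a disoccluded pixel from a temporally neighbouring frame in which it is visible, rather than hallucinating it frame-by-frame — can be sketched as follows. This is a static-scene simplification with hypothetical inputs (a list `frames` of warped views with NaN marking disoccluded pixels), not the paper's full method:

```python
import numpy as np

def fill_from_neighbors(frames, t):
    """Fill disoccluded (NaN) pixels of frame t from the nearest
    temporal neighbour in which that pixel is visible."""
    out = frames[t].copy()
    holes = np.isnan(out)
    # search outwards in time: t-1, t+1, t-2, t+2, ...
    for d in range(1, len(frames)):
        for s in (t - d, t + d):
            if 0 <= s < len(frames):
                take = holes & ~np.isnan(frames[s])
                out[take] = frames[s][take]
                holes &= ~take
        if not holes.any():
            break
    return out
```

The real algorithm must additionally compensate for camera and object motion before copying values across frames.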


2020 ◽  
Vol 10 (5) ◽  
pp. 1562 ◽  
Author(s):  
Xiaodong Chen ◽  
Haitao Liang ◽  
Huaiyuan Xu ◽  
Siyu Ren ◽  
Huaiyu Cai ◽  
...  

Depth image-based rendering (DIBR) plays an important role in 3D video and free viewpoint video synthesis. However, artifacts might occur in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes usually arise from out-of-field regions and disocclusions, and filling them appropriately becomes a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method is applied to detect and correct unreliable depth values around the foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only the selected regions are warped, which contain the contents that are not visible in the primary view. This approach reduces the computational cost and prevents irrelevant foreground pixels from being warped to the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with bidirectional DIBR, the proposed rendering method can reduce rendering time by about 37% and achieve a 97% reduction in holes. In terms of visual quality and objective evaluation, our approach performs better than the previous methods.
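For readers unfamiliar with the warping step that both this and the other DIBR papers build on: in a rectified setup, each pixel shifts horizontally by the disparity d = f·B/Z, and a z-buffer resolves overlaps so nearer surfaces win; unfilled target pixels are the holes discussed above. A minimal sketch (single-channel image, integer disparities; not any paper's implementation):

```python
import numpy as np

def dibr_warp(color, depth, focal, baseline):
    """Forward-warp a rectified view by per-pixel disparity d = f*B/Z.
    Returns the warped image (NaN = hole) and the hole mask."""
    h, w = depth.shape
    warped = np.full((h, w), np.nan)
    zbuf = np.full((h, w), np.inf)
    disp = np.rint(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disp[y, x]                # target column in the virtual view
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]      # z-buffer: nearer surface wins
                warped[y, xv] = color[y, x]
    return warped, np.isnan(warped)
```

With uniform depth the warp is a pure horizontal shift; depth discontinuities are what open up disocclusion holes behind foreground objects.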


Author(s):  
Chia-Ming Cheng ◽  
Shu-Jyuan Lin ◽  
Shang-Hong Lai ◽  
Jinn-Cherng Yang

2020 ◽  
Author(s):  
Guoliang Liu

In this paper, we propose a deep neural network that can estimate camera poses and reconstruct the full-resolution depths of the environment simultaneously using only monocular consecutive images. In contrast to traditional monocular visual odometry methods, which cannot estimate scaled depths, we here demonstrate the recovery of the scale information using a sparse depth image as a supervision signal in the training step. In addition, based on the scaled depth, the relative poses between consecutive images can be estimated using the proposed deep neural network. Another novelty lies in the deployment of view synthesis, which can synthesize a new image of the scene from a different view (camera pose) given an input image. The view synthesis is the core technique used for constructing a loss function for the proposed neural network, which requires the knowledge of the predicted depths and relative poses, such that the proposed method couples the visual odometry and depth prediction together. In this way, both the estimated poses and the predicted depths from the neural network are scaled using the sparse depth image as the supervision signal during training. The experimental results on the KITTI dataset show that our method performs competitively in challenging environments.
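The view-synthesis loss described above — project each target pixel into the source view using the predicted depth and relative pose, sample the source image there, and penalize the photometric difference — can be sketched in NumPy. All names here (`photometric_loss`, the 4×4 `pose`, intrinsics `K`) are illustrative, and a real training pipeline would implement this differentiably:

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img at continuous coordinates (xs, ys) with bilinear interpolation."""
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    dx, dy = xs - x0, ys - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy) + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy + img[y0 + 1, x0 + 1] * dx * dy)

def photometric_loss(target, source, depth, pose, K):
    """Mean L1 error between the target view and the source view warped
    into it via the predicted depth and 4x4 relative pose."""
    h, w = target.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)          # homogeneous pixels
    cam = (np.linalg.inv(K) @ pix[..., None])[..., 0] * depth[..., None]
    R, t = pose[:3, :3], pose[:3, 3]
    proj = (K @ (cam @ R.T + t)[..., None])[..., 0]              # into source frame
    xs2, ys2 = proj[..., 0] / proj[..., 2], proj[..., 1] / proj[..., 2]
    return np.abs(bilinear_sample(source, xs2, ys2) - target).mean()
```

Because the loss depends on both the predicted depths and the relative pose, minimizing it couples the two estimation problems, as the abstract notes.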


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 906
Author(s):  
Hui-Yu Huang ◽  
Shao-Yu Huang

The recent emergence of three-dimensional (3D) movies and 3D television (TV) indicates an increasing interest in 3D content. Stereoscopic displays have enabled visual experiences to be enhanced, allowing the world to be viewed in 3D. Virtual view synthesis is the key technology to present 3D content, and depth image-based rendering (DIBR) is a classic virtual view synthesis method. With a texture image and its corresponding depth map, a virtual view can be generated using the DIBR technique. The depth and camera parameters are used to project every pixel in the image to the 3D world coordinate system. The results in the world coordinates are then reprojected into the virtual view, based on 3D warping. However, these projections will result in cracks (holes). Hence, we herein propose a new method of DIBR for free viewpoint videos to solve the hole problem due to these projection processes. First, the depth map is preprocessed to reduce the number of holes, which does not produce large-scale geometric distortions; subsequently, improved 3D warping projection is performed collectively to create the virtual view. A median filter is used to filter the hole regions in the virtual view, followed by 3D inverse warping blending to remove the holes. Next, brightness adjustment and adaptive image blending are performed. Finally, the synthesized virtual view is obtained using the inpainting method. Experimental results verify that our proposed method can produce pleasing visual quality in the synthesized virtual view, maintain a high peak signal-to-noise ratio (PSNR) value, and decrease execution time compared with state-of-the-art methods.
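The median-filter step in this pipeline targets the thin cracks that 3D warping leaves between projected pixels, while larger disocclusion holes are left for the later blending and inpainting stages. A minimal sketch of that idea — filling a crack pixel with the median of its valid neighbours, on a hypothetical single-channel view with NaN holes — might look like:

```python
import numpy as np

def fill_cracks(warped, k=1):
    """Fill small cracks (NaN pixels) with the median of valid neighbours
    in a (2k+1)x(2k+1) window; larger holes are left for later inpainting."""
    out = warped.copy()
    ys, xs = np.where(np.isnan(warped))
    for y, x in zip(ys, xs):
        win = warped[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
        valid = win[~np.isnan(win)]
        if valid.size:                 # only fill where there is local support
            out[y, x] = np.median(valid)
    return out
```

One-pixel cracks always have valid neighbours and get filled; the interior of a wide disocclusion has none, so it survives for the inverse-warping and inpainting stages described in the abstract.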

