DepthComp: Real-time Depth Image Completion Based on Prior Semantic Scene Segmentation

Author(s):  
Amir Atapour-Abarghouei ◽  
Toby Breckon

2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared between the two tasks, computation is saved, which enables real-time performance. A hybrid dataset is constructed to train the network, combining real data (to learn real-world distributions) with synthetic data (to cover variations in objects, motions, and viewpoints). Next, the depth of the two targets and the predicted keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
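
Given the pipeline above, the shared-backbone idea is easy to sketch. The following PyTorch snippet is a minimal illustration, not the authors' network: the layer sizes, class name, and 21-keypoint convention are assumptions. A single depth encoder feeds both a segmentation head and a keypoint-regression head, so most of the per-frame computation is shared between the two tasks:

```python
import torch
import torch.nn as nn

class JointHandObjectNet(nn.Module):
    """Hypothetical shared-backbone network: one encoder, two task heads."""

    def __init__(self, num_classes=3, num_keypoints=21):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Shared encoder: the bulk of the computation runs once per frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation head: per-pixel background / hand / object labels.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, num_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        # Keypoint head: regresses (x, y, z) for each hand keypoint.
        self.kp_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_keypoints * 3),
        )

    def forward(self, depth):               # depth: (B, 1, H, W)
        feat = self.encoder(depth)          # shared features, computed once
        seg = self.seg_head(feat)           # (B, num_classes, H, W) logits
        kp = self.kp_head(feat)             # (B, num_keypoints * 3)
        return seg, kp.view(-1, self.num_keypoints, 3)
```

The segmentation and keypoint outputs would then seed the unified optimization stage; only the two small heads add cost on top of the shared encoder, which is what makes the joint design cheap enough for real time.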


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 546
Author(s):  
Zhenni Li ◽  
Haoyi Sun ◽  
Yuliang Gao ◽  
Jiao Wang

Depth maps obtained through sensors are often unsatisfactory because of their low resolution and noise interference. In this paper, we propose a real-time depth map enhancement system based on a residual network that uses dual channels to process depth maps and intensity maps separately and eliminates the need for preprocessing; the proposed algorithm achieves real-time processing speeds of more than 30 fps. Furthermore, an FPGA design and implementation for depth sensing is introduced, in which the intensity image and depth image are captured by a dual-camera synchronous acquisition system and fed to the neural network as input. Experiments on various depth map restoration tasks show that our algorithm outperforms the existing LRMC, DE-CNN, and DDTF algorithms on standard datasets and achieves better depth map super-resolution. Tests of the FPGA system confirm that the data throughput of the USB 3.0 interface of the acquisition system is stable at 226 Mbps and supports both cameras working at full speed, i.e., 54 fps @ (1280 × 960 + 328 × 248 × 3).
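
To make the dual-channel residual idea concrete, here is a minimal PyTorch sketch under stated assumptions: the branch widths, block count, and class name are hypothetical, not the paper's architecture. The depth and intensity maps enter through separate branches, the fused features pass through residual blocks, and the network predicts a correction that is added back to the raw depth:

```python
import torch
import torch.nn as nn

class DualChannelDepthEnhancer(nn.Module):
    """Hypothetical dual-branch residual network for guided depth enhancement."""

    def __init__(self, feats=32, num_res_blocks=4):
        super().__init__()
        # Separate shallow branches for the two input channels.
        self.depth_branch = nn.Conv2d(1, feats, 3, padding=1)
        self.intensity_branch = nn.Conv2d(1, feats, 3, padding=1)
        # Residual body operating on the fused features.
        self.body = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(2 * feats, 2 * feats, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(2 * feats, 2 * feats, 3, padding=1),
            ) for _ in range(num_res_blocks)
        ])
        self.tail = nn.Conv2d(2 * feats, 1, 3, padding=1)

    def forward(self, depth, intensity):    # both: (B, 1, H, W)
        x = torch.cat([self.depth_branch(depth),
                       self.intensity_branch(intensity)], dim=1)
        for block in self.body:
            x = x + block(x)                # local residual connections
        return depth + self.tail(x)         # global residual: refine, not replace
```

Predicting a residual rather than the depth map itself is a standard design choice for restoration networks: it speeds convergence and avoids degrading regions of the input depth that were already correct.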


2011 ◽  
Vol 271-273 ◽  
pp. 229-234
Author(s):  
Yun Ling ◽  
Hai Tao Sun ◽  
Jian Wei Han ◽  
Xun Wang

Image completion techniques can be used to repair unknown image regions, but existing techniques are too slow for real-time applications. In this paper, an image completion technique based on randomized correspondence is presented to accelerate the completion process. Good patch matches are found via random sampling and propagated to surrounding areas, so approximate nearest-neighbor matches between image patches can be found in real time. For images with strong structure, straight lines or curves crossing the unknown regions can be manually specified to preserve the important structures; in such cases, the search is performed only along the specified lines or curves. Finally, the remaining unknown regions are filled using randomized correspondence with the structural constraint. Experiments show that both the quality and the speed of the presented technique are much better than those of existing methods.
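
The randomized correspondence search described here follows the PatchMatch pattern: initialize a nearest-neighbor field (NNF) at random, propagate good matches from already-scanned neighbors, then refine each match with an exponentially shrinking random search. The NumPy sketch below shows that inner loop only; the patch size, iteration count, and the omission of hole masking and structural constraints are simplifying assumptions:

```python
import numpy as np

def patch_dist(img, ay, ax, by, bx, p):
    """Sum of squared differences between two p-by-p patches."""
    a = img[ay:ay + p, ax:ax + p].astype(np.float64)
    b = img[by:by + p, bx:bx + p].astype(np.float64)
    return np.sum((a - b) ** 2)

def patchmatch(img, p=7, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape[0] - p + 1, img.shape[1] - p + 1  # valid patch origins
    # Random initialization: every patch points at a random candidate.
    nnf = np.stack([rng.integers(0, h, (h, w)),
                    rng.integers(0, w, (h, w))], axis=-1)
    dist = np.array([[patch_dist(img, y, x, *nnf[y, x], p)
                      for x in range(w)] for y in range(h)])
    for it in range(iters):
        # Alternate scan direction so matches propagate both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: shift the previous pixel's match by one.
                for dy, dx in ((-step, 0), (0, -step)):
                    py, px = y + dy, x + dx
                    if 0 <= py < h and 0 <= px < w:
                        cy = int(np.clip(nnf[py, px][0] - dy, 0, h - 1))
                        cx = int(np.clip(nnf[py, px][1] - dx, 0, w - 1))
                        d = patch_dist(img, y, x, cy, cx, p)
                        if d < dist[y, x]:
                            nnf[y, x], dist[y, x] = (cy, cx), d
                # Random search: sample around the current best match with
                # an exponentially shrinking radius.
                r = max(h, w)
                while r >= 1:
                    cy = int(np.clip(nnf[y, x][0] + rng.integers(-r, r + 1), 0, h - 1))
                    cx = int(np.clip(nnf[y, x][1] + rng.integers(-r, r + 1), 0, w - 1))
                    d = patch_dist(img, y, x, cy, cx, p)
                    if d < dist[y, x]:
                        nnf[y, x], dist[y, x] = (cy, cx), d
                    r //= 2
    return nnf
```

A real completion system would additionally restrict source patches to known pixels, bias the search toward any user-specified lines or curves, and vectorize the loops; this pure-Python version only illustrates why random sampling plus propagation converges to good matches in very few iterations.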


2010 ◽  
Vol 10 (1) ◽  
pp. 98-105 ◽  
Author(s):  
Songhao Zhu ◽  
Zhiwei Liang

2020 ◽  
pp. 1-13
Author(s):  
Fei Wang ◽  
Yan Zhuang ◽  
Hong Zhang ◽  
Hong Gu

2014 ◽  
Vol 41 (2) ◽  
pp. 473-486 ◽  
Author(s):  
Dong-Luong Dinh ◽  
Myeong-Jun Lim ◽  
Nguyen Duc Thang ◽  
Sungyoung Lee ◽  
Tae-Seong Kim

2018 ◽  
Vol 318 ◽  
pp. 182-195 ◽  
Author(s):  
Liang Zhang ◽  
Le Wang ◽  
Xiangdong Zhang ◽  
Peiyi Shen ◽  
Mohammed Bennamoun ◽  
...  
