Adaptive Block-based Depth-map Coding Method

2009 ◽  
Vol 14 (5) ◽  
pp. 601-615
Author(s):  
Kyung-Yong Kim ◽  
Gwang-Hoon Park ◽  
Doug-Young Suh
Author(s):  
Minghui WANG ◽  
Xun HE ◽  
Xin JIN ◽  
Satoshi GOTO

2019 ◽  
Vol 5 (9) ◽  
pp. 73 ◽  
Author(s):  
Wen-Nung Lie ◽  
Chia-Che Ho

In this paper, a multi-focus image stack captured by varying the position of the imaging plane is processed to synthesize an all-in-focus (AIF) image and to estimate its corresponding depth map. In contrast to traditional pixel- and block-based techniques, our focus measures are computed over irregularly shaped regions that are iteratively refined or split to adapt to different image contents. An initial all-in-focus image is first computed and then segmented to obtain a region map. The spatial-focal property of each region is then analyzed to decide whether the region should be iteratively split into sub-regions. After iterative splitting, the final region map is used to perform regionally best focusing under the winner-take-all (WTA) strategy, i.e., choosing the best-focused pixels from the image stack (see the sketch below). The depth image is converted directly from the resulting label image, where each pixel's label is the index of the stack image from which that best-focused pixel was chosen. Regions whose focus profiles do not yield a confident winner instead obtain their labels by spatial propagation from neighboring confident regions. Our experiments show that the adaptive region-splitting algorithm outperforms other state-of-the-art methods and commercial software in synthesis quality (in terms of the well-known Q metric), depth-map quality (judged subjectively), and processing speed (a gain of 17.81~40.43%).
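As a minimal sketch of the final winner-take-all selection step, the following Python snippet fuses a hypothetical NumPy stack of shape (N, H, W), one grayscale slice per imaging-plane position, into an AIF image and a label map. It substitutes a simple locally averaged squared-Laplacian focus measure for the paper's region-based measures, and omits the iterative region splitting and confidence-based propagation.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def wta_fuse(stack, window=9):
    # Focus measure per slice: squared Laplacian response, averaged over a
    # local window to suppress noise (a common stand-in for the paper's
    # region-based measures).
    focus = np.stack([uniform_filter(laplace(img.astype(np.float64)) ** 2, size=window)
                      for img in stack])
    labels = np.argmax(focus, axis=0)      # index of the best-focused slice per pixel
    rows, cols = np.indices(labels.shape)
    aif = stack[labels, rows, cols]        # winner-take-all pixel selection
    return aif, labels

Since each label is a stack index tied to a known imaging-plane position, the label map converts directly to a depth map.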


2009 ◽  
Vol 14 (5) ◽  
pp. 551-560
Author(s):  
Kyung-Yong Kim ◽  
Gwang-Hoon Park ◽  
Doug-Young Suh

2011 ◽  
Vol 16 (2) ◽  
pp. 274-292
Author(s):  
Kyung-Yong Kim ◽  
Gwang-Hoon Park

2021 ◽  
Vol 9 ◽  
Author(s):  
Yunpeng Liu ◽  
Xingpeng Yan ◽  
Xinlei Liu ◽  
Xi Wang ◽  
Tao Jing ◽  
...  

In this paper, an optical field coding method for fusing real and virtual scenes is proposed to implement an augmented reality (AR)-based holographic stereogram. The occlusion relationship between the real and virtual scenes is analyzed, and a fusion strategy based on instance segmentation and depth determination is proposed. A sampling system for real three-dimensional (3D) scenes is built, and the foreground contour of each sampled perspective image is extracted with the Mask R-CNN instance segmentation algorithm. The virtual 3D scene is rendered by computer to obtain the virtual sampled images together with their depth maps. According to the occlusion relation of the fused scenes, a pseudo-depth map of the real scene is derived, and fusion coding of the real and virtual 3D scene information is implemented by per-pixel depth comparison (see the sketch below). An optical experiment indicates that the AR-based holographic stereogram fabricated with our coding method reconstructs the fused real and virtual 3D scenes with correct occlusion and depth cues under full parallax.
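As a minimal sketch of the depth-comparison step for one perspective view, the following Python snippet fuses a real sampled image with a virtual rendered one. All array names are illustrative: fg_mask stands for the Mask R-CNN foreground mask, real_pseudo_depth for the pseudo-depth map derived for the real scene, and virt_depth for the depth map rendered with the virtual view.

import numpy as np

def fuse_views(real_rgb, real_pseudo_depth, virt_rgb, virt_depth, fg_mask):
    # Outside its foreground mask the real scene contributes nothing,
    # so treat it as infinitely far there.
    real_depth = np.where(fg_mask, real_pseudo_depth, np.inf)
    # Per-pixel occlusion test: the nearer surface hides the farther one.
    take_real = real_depth < virt_depth
    # Broadcast the boolean mask over the color channels.
    return np.where(take_real[..., None], real_rgb, virt_rgb)

Applying this test to every sampled view yields the fused perspective images from which the holographic stereogram is coded.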


2010 ◽  
Vol 15 (2) ◽  
pp. 232-235
Author(s):  
Kyung-Yong Kim ◽  
Gwang-Hoon Park

2012 ◽  
Vol 15 (4) ◽  
pp. 492-500
Author(s):  
Jin-Mi Kang ◽  
Hye-Jeong Jeong ◽  
Ki-Dong Chung
