scene depth
Recently Published Documents

TOTAL DOCUMENTS: 72 (FIVE YEARS: 23)
H-INDEX: 9 (FIVE YEARS: 2)

2021 ◽  
Vol 38 (6) ◽  
pp. 1719-1726
Author(s):  
Tanbo Zhu ◽  
Die Wang ◽  
Yuhua Li ◽  
Wenjie Dong

In real training, the training conditions are often undesirable and the use of equipment is severely limited. These problems can be solved by virtual practical training, which removes spatial constraints and lowers training costs while ensuring training quality. However, existing methods perform poorly in image reconstruction because they fail to consider that the environmental perception of an actual scene is strongly regular by nature. Therefore, this paper investigates three-dimensional (3D) image reconstruction for a virtual talent training scene. Specifically, a fusion network model was designed, and the deep-seated correlation between target detection and semantic segmentation in images shot from two-dimensional (2D) scenes was exploited to enhance the extraction of image features. Next, the vertical and horizontal parallaxes of the scene were computed, and the depth-based virtual talent training scene was reconstructed in three dimensions based on the continuity of scene depth. Finally, the proposed algorithm was proved effective through experiments.
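As an illustration of the depth-from-parallax step described above (not the authors' implementation), the following sketch recovers depth from a horizontal-disparity map under a standard pinhole stereo model and back-projects pixels into 3D; the function names and parameters are assumptions.

```python
# Illustrative sketch only: depth from horizontal disparity (pinhole stereo
# model) and back-projection of pixels to a 3D point cloud. All names and
# parameter choices here are assumptions, not the paper's implementation.
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, eps=1e-6):
    """Depth (metres) from a horizontal-disparity map (pixels)."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to 3D points using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (H, W, 3)
```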


2021 ◽  
Vol 19 (6) ◽  
pp. 644-652
Author(s):  
Emanuel Trabes ◽  
Luis Avila ◽  
Julio Dondo Gazzano ◽  
Carlos Sosa Páez

This work presents a novel approach to monocular dense Simultaneous Localization and Mapping. The surface to be estimated is represented as a piecewise planar surface, defined as a group of surfels, each parameterized by its position and normal. These parameters are estimated directly from the raw camera pixel measurements by a Gauss-Newton iterative process. Representing the surface as a group of surfels has several advantages. It allows the recovery of robust and accurate pixel depths without a computationally demanding depth regularization scheme, which in turn avoids a physically unlikely surface smoothness prior. New surfels can be correctly initialized from the information in nearby surfels, also avoiding the expensive initialization routine commonly needed in Gauss-Newton methods. The method was written in the GLSL shading language, allowing GPU execution and thus real-time performance. It was tested against several datasets, demonstrating the correctness of both its depth and normal estimates and the quality of its scene reconstructions. The results showcase the usefulness of the more physically grounded piecewise planar scene depth prior over the more common pixel-depth independence and smoothness priors.
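For readers unfamiliar with the optimization machinery, the sketch below shows a generic Gauss-Newton update of the kind used for direct photometric refinement of surfel parameters; it is a minimal illustration under assumed residual and Jacobian callbacks, not the paper's GLSL implementation.

```python
# Minimal sketch of a Gauss-Newton refinement loop, as used when fitting
# surfel parameters (e.g. position and normal) directly to raw pixel
# intensities. residual_fn and jacobian_fn are assumed user-supplied callbacks.
import numpy as np

def gauss_newton_step(residuals, jacobian, damping=1e-6):
    """One damped Gauss-Newton update: dx = -(J^T J + lambda*I)^-1 J^T r."""
    JtJ = jacobian.T @ jacobian
    Jtr = jacobian.T @ residuals
    return -np.linalg.solve(JtJ + damping * np.eye(JtJ.shape[0]), Jtr)

def refine_surfel(params, residual_fn, jacobian_fn, iters=10):
    """Iteratively refine surfel parameters against photometric residuals."""
    for _ in range(iters):
        r = residual_fn(params)   # per-pixel photometric errors
        J = jacobian_fn(params)   # derivatives of those errors w.r.t. params
        params = params + gauss_newton_step(r, J)
    return params
```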


2021 ◽  
Vol 15 ◽  
Author(s):  
Qiuzhuo Liu ◽  
Yaqin Luo ◽  
Ke Li ◽  
Wenfeng Li ◽  
Yi Chai ◽  
...  

Bad weather conditions (such as fog and haze) seriously degrade the visual quality of images. Physical model-based methods use scene depth information to improve image visibility for further image restoration. However, the unstable acquisition of scene depth information seriously limits the defogging performance of such methods. Additionally, most image enhancement-based methods focus on the global adjustment of image contrast and saturation and neglect local details during restoration. Therefore, this paper proposes a single-image defogging method based on image patch decomposition and multi-exposure fusion. First, a single foggy image is processed by gamma correction to obtain a set of underexposed images. Then the saturation of the obtained underexposed and original images is enhanced. Next, each image in the multi-exposure image set (the underexposed images plus the original image) is decomposed into base and detail layers by a guided filter. The base layers are decomposed into image patches, and fusion weight maps of the image patches are constructed. For the detail layers, exposure features are first extracted from the luminance components of the images and then evaluated by constructing Gaussian functions. Finally, both base and detail layers are combined to obtain the defogged image. The proposed method is compared with state-of-the-art methods, and the comparative experimental results confirm its effectiveness and superiority.
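A rough sketch of the first two stages (multi-exposure generation by gamma correction with saturation enhancement, followed by a guided-filter base/detail split) is given below; the gamma values, saturation gain, and filter radius are assumed, and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package.

```python
# Sketch with assumed parameters, not the paper's exact settings.
import cv2
import numpy as np

def exposure_set(img_bgr, gammas=(1.5, 2.0, 2.5), sat_gain=1.2):
    """Gamma-correct a foggy image into underexposed variants (gamma > 1
    darkens a normalized image), then boost saturation of the whole set."""
    imgs = [img_bgr] + [(np.power(img_bgr / 255.0, g) * 255.0).astype(np.uint8)
                        for g in gammas]
    out = []
    for im in imgs:
        hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)
        out.append(cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR))
    return out

def base_detail_split(img_bgr, radius=8, eps=1e-3):
    """Decompose an image into a base (guided-filtered) and a detail layer."""
    guide = img_bgr.astype(np.float32) / 255.0
    base = cv2.ximgproc.guidedFilter(guide, guide, radius, eps)
    return base, guide - base
```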


Author(s):  
Lei Chen ◽  
Zongqing Lu ◽  
Qingmin Liao ◽  
Haoyu Ma ◽  
Jing-Hao Xue

2021 ◽  
Author(s):  
Bo Jin ◽  
Leandro Cruz ◽  
Nuno Goncalves
Keyword(s):  

2021 ◽  
Vol 13 (13) ◽  
pp. 2432
Author(s):  
Zhiqin Zhu ◽  
Yaqin Luo ◽  
Hongyan Wei ◽  
Yong Li ◽  
Guanqiu Qi ◽  
...  

Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by suspended aerosols in the air, especially under poor weather conditions such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operation of computer vision systems, so haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may suffer varying degrees of color distortion. This paper proposes a novel dehazing algorithm based on atmospheric light estimation to obtain remote sensing images of high visual quality. First, a differentiable function is used to train the parameters of a linear scene depth model that generates the scene depth map of a remote sensing image. Second, the atmospheric light of each hazy remote sensing image is estimated from the corresponding scene depth map. Then, the corresponding transmission map is estimated from the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are consistent with human visual perception across different scenes. A dataset of 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
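The final recovery step follows the standard atmospheric scattering model I = J·t + A·(1 − t); a minimal sketch is shown below, where the depth-to-transmission relation t = exp(−β·d) and the β value are generic assumptions rather than the paper's trained linear depth model.

```python
# Sketch of haze removal by inverting the standard atmospheric scattering
# model. beta and t_min are assumed illustrative values.
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Transmission map from a scene depth map under homogeneous haze."""
    return np.exp(-beta * depth)

def dehaze(hazy, atmospheric_light, transmission, t_min=0.1):
    """Recover the scene radiance J from I = J*t + A*(1 - t)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid division by ~0
    return (hazy - atmospheric_light) / t + atmospheric_light
```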


Author(s):  
Junning Zhang ◽  
Qunxing Su ◽  
Bo Tang ◽  
Cheng Wang ◽  
Yining Li

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3737
Author(s):  
Lu Xiong ◽  
Yongkun Wen ◽  
Yuyao Huang ◽  
Junqiao Zhao ◽  
Wei Tian

We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation, and the ground normal vector from only monocular RGB video sequences. In our approach, estimation of the different scene structures can mutually benefit through joint optimization. Specifically, we use a mutual information loss to pre-train the ground segmentation network before adding the corresponding self-supervised labels obtained by a geometric method. By exploiting the static nature of the ground and its normal vector, the scene depth and ego-motion can be efficiently learned by the self-supervised learning procedure. Extensive experimental results on both the Cityscapes and KITTI benchmarks demonstrate the significant improvement in estimation accuracy for both scene depth and ego-pose achieved by our approach. We also achieve an average error of about 3° for the estimated ground normal vectors. By deploying our proposed geometric constraints, the IoU accuracy of unsupervised ground segmentation increases by 35% on the Cityscapes dataset.
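As a hedged illustration of the geometric ground labels mentioned above (not the authors' code), the sketch below fits a plane normal to back-projected ground pixels by least squares and measures the angular error of the estimate.

```python
# Illustrative sketch: estimate a ground normal from masked 3D points by
# least-squares plane fitting. Function names are assumptions.
import numpy as np

def ground_normal(points_3d, ground_mask):
    """Fit a plane to 3D points (N, 3) under a boolean ground mask and
    return the plane's unit normal."""
    pts = points_3d[ground_mask]
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def normal_angle_error(n_est, n_gt):
    """Angular error in degrees between estimated and reference normals."""
    cos = np.clip(abs(np.dot(n_est, n_gt)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```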

