Velodyne LiDAR and monocular camera data fusion for depth map and 3D reconstruction

Author(s):  
Rayyan Akhtar ◽  
Huabiao Qin ◽  
Guancheng Chen
Author(s):  
Y. Song ◽  
K. Köser ◽  
T. Kwasnitschka ◽  
R. Koch

Abstract. With the rapid development and availability of underwater imaging technology, underwater visual recording is widely used for a variety of tasks. However, quantitative imaging and photogrammetry underwater pose many challenges (strong geometric distortion and radiometric issues) that limit the traditional photogrammetric workflow in underwater applications. This paper presents an iterative refinement approach that copes with refraction-induced distortion while building on top of a standard photogrammetry pipeline. The approach uses approximate geometry to compensate for water refraction effects in the images and then feeds the corrected images into the next iteration of 3D reconstruction, until the update to the resulting depth maps becomes negligible. The corrected depth map can then also be used to compensate for the attenuation effect in order to obtain more realistic colors for the 3D model. To verify the geometric improvement of the proposed approach, a set of images with air-water refraction effects was rendered from a ground-truth model, and the iterative refinement approach was applied to improve the 3D reconstruction. Finally, this paper also shows application results for the 3D reconstruction of an underwater munitions dump site in the Baltic Sea, for which a visual monitoring approach is desired.
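The iterative refinement loop described in the abstract can be sketched as a simple fixed-point iteration. The sketch below is a minimal illustration only: `correct_refraction` and `reconstruct` are hypothetical stand-ins for the paper's refraction-compensation and photogrammetric reconstruction steps, and the toy operators at the bottom merely demonstrate the convergence criterion on the depth-map update.

```python
import numpy as np

def refine_depth(depth_init, correct_refraction, reconstruct, tol=1e-3, max_iter=50):
    """Iteratively re-run reconstruction on refraction-corrected images
    until the depth-map update becomes negligible (hypothetical sketch)."""
    depth = depth_init
    for _ in range(max_iter):
        corrected = correct_refraction(depth)   # compensate water refraction
        new_depth = reconstruct(corrected)      # next 3D reconstruction pass
        if np.max(np.abs(new_depth - depth)) < tol:
            return new_depth                    # update is negligible: stop
        depth = new_depth
    return depth

# Toy stand-ins: the "reconstruction" is a contraction with fixed point 2.0,
# so the loop converges to a stable depth map.
refined = refine_depth(np.zeros((4, 4)), lambda d: d, lambda d: 0.5 * d + 1.0)
```

In the real pipeline the two callables would wrap image undistortion and a full photogrammetry run, so each iteration is expensive; the stopping criterion on the depth update keeps the number of reconstruction passes small.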


2021 ◽  
Vol 1920 (1) ◽  
pp. 012075
Author(s):  
Tiansheng Wu ◽  
Hui Wang ◽  
Yanling Wang ◽  
Min Liang ◽  
Jie Li

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Li-fen Tu ◽  
Qi Peng

Robot detection, recognition, positioning, and other applications require not only real-time video image information but also the distance from the target to the camera, i.e., depth information. This paper proposes a method to automatically generate a depth map for any monocular camera based on RealSense camera data. With this method, any existing single-camera detection system can be upgraded online: without changing the original system, depth information for the original monocular camera can be obtained simply, realizing the transition from 2D detection to 3D detection. To verify the effectiveness of the proposed method, a hardware system was built from a Micro-vision RS-A14K-GC8 industrial camera and an Intel RealSense D415 depth camera, and the depth map fitting algorithm proposed in this paper was used to test the system. The results show that, apart from a few areas with missing depth, the depth estimates in the remaining areas are good and can basically describe the distance difference between the target and the camera. In addition, to verify the scalability of the method, a new hardware system was built with different cameras, and images were collected in a complex farmland environment; the generated depth map was again good and could basically describe the distance difference between the target and the camera.
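Transferring depth from a RealSense-style depth camera into a separate monocular camera's image plane is commonly done by back-projecting the depth pixels to 3D and reprojecting them through the second camera. The sketch below shows that standard pinhole-model reprojection; it is not the paper's actual depth map fitting algorithm, and all calibration parameters (`K_src`, `K_dst`, `R`, `t`) are hypothetical.

```python
import numpy as np

def transfer_depth(depth_src, K_src, K_dst, R, t, dst_shape):
    """Reproject a depth map from the depth camera into a monocular
    camera's image plane (pinhole-model sketch, hypothetical calibration).
    depth_src: HxW metric depth; K_*: 3x3 intrinsics; [R|t]: depth->mono."""
    h, w = depth_src.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each depth pixel to a 3D point in the depth-camera frame
    pts = np.linalg.inv(K_src) @ pix * depth_src.reshape(1, -1)
    # Transform into the monocular-camera frame and project
    pts_dst = R @ pts + t.reshape(3, 1)
    proj = K_dst @ pts_dst
    z = pts_dst[2]
    valid = z > 0
    u_d = np.round(proj[0, valid] / z[valid]).astype(int)
    v_d = np.round(proj[1, valid] / z[valid]).astype(int)
    # Scatter depths into the monocular image; untouched pixels stay 0 (missing)
    depth_dst = np.zeros(dst_shape)
    inb = (u_d >= 0) & (u_d < dst_shape[1]) & (v_d >= 0) & (v_d < dst_shape[0])
    depth_dst[v_d[inb], u_d[inb]] = z[valid][inb]
    return depth_dst
```

Pixels that no 3D point lands on remain zero, which corresponds to the depth-missing areas the paper reports; a real system would additionally filter occlusions and interpolate such holes.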

