A method of 3D reconstruction based on panoramic cameras

Author(s):  
Hua-Gang Liang ◽  
Wen-Xiu Qian ◽  
Yong-Kui Liu ◽  
Feng Ru

In this paper, a method of 3D reconstruction from two images acquired by two panoramic cameras is presented. First, the features of the reconstructed object detected in each image are matched using the DP matching method. Second, optical correction is applied to the two cameras, and the internal parameters of the panoramic cameras are calculated. Finally, following the calibration method, the geometric relationship between corresponding points in space and in the two panoramic images is derived. The results indicate that this 3D reconstruction method based on two panoramic cameras is simple, and that its accuracy reaches 98.82%.
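The abstract does not give the paper's reconstruction equations, but the final step it describes, recovering a 3D point from matched features seen by two calibrated cameras, is classically done by intersecting the two viewing rays. A minimal illustrative sketch (not the authors' implementation; camera centres and ray directions are assumed given by the panoramic calibration) is the midpoint method:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays.

    c1, c2 : camera centres; d1, d2 : ray directions toward the matched
    feature (e.g. derived from the panoramic image mapping).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    d12 = d1 @ d2
    denom = 1.0 - d12 ** 2  # rays assumed non-parallel
    # Closed-form ray parameters minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    t1 = (b @ d1 - (b @ d2) * d12) / denom
    t2 = ((b @ d1) * d12 - b @ d2) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

With noise-free matches the two rays intersect exactly and the midpoint is the true 3D point; with noisy panoramic matches it returns the point closest to both rays.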

2021 ◽  
Vol 71 ◽  
pp. 102136
Author(s):  
Mingyang Li ◽  
Zhijiang Du ◽  
Xiaoxing Ma ◽  
Wei Dong ◽  
Yongzhuo Gao

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 765
Author(s):  
Hugo Álvarez ◽  
Marcos Alonso ◽  
Jairo R. Sánchez ◽  
Alberto Izaguirre

This paper describes a method for calibrating multi-camera and multi-laser 3D triangulation systems, particularly those using Scheimpflug adapters. Under this configuration, the focus plane of the camera is located at the laser plane, making it difficult to use traditional calibration methods, such as chessboard-pattern-based strategies. Our method uses a conical calibration object whose intersections with the laser planes generate stepped line patterns that can be used to calculate the camera-laser homographies. The calibration object has been designed to calibrate scanners for revolving surfaces, but it can easily be extended to linear setups. The experiments carried out show that the proposed system has a precision of 0.1 mm.
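The camera-laser homography mentioned above is a 3×3 projective map between the laser plane and the image plane; given point correspondences (here, from the stepped line patterns), it is commonly estimated with the direct linear transform (DLT). The sketch below is a generic, unnormalised DLT, not the paper's calibration pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) with dst ~ H @ src from >= 4 point correspondences.

    src, dst : lists of (x, y) pixel / plane coordinates.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

For real calibration data, Hartley normalisation of the coordinates before the SVD markedly improves conditioning.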


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Di Jia ◽  
Yuxiu Li ◽  
Si Wu ◽  
Ying Liu

3D reconstruction using straight-line segments as features offers high precision at low computational cost and is especially suitable for large-scale urban datasets. However, the line matching step in existing methods suffers from mismatches, for two main reasons: the detected lines are not located at the true edges of the image, and no consistency check is applied to the matching pairs. To solve this problem, a line correction and matching method for 3D reconstruction of target line structures is proposed in this paper. First, the edge features of the image are extracted to obtain a binarized edge map. An extended gradient map is then computed from the edge map and the image gradient to establish a gradient gravitational map. Second, a straight-line detection method extracts all the linear features used for the 3D reconstruction, and the line positions are corrected with the gradient gravitational map. Finally, the point-feature matching result is used to compute the epipolar lines, and the line matching results of three adjacent images determine the final partial check feature area; random sampling within this small neighborhood then yields the feature-similarity-checked line matching result. These steps eliminate the mismatched lines. The experimental results demonstrate that the 3D model obtained with the proposed method has higher integrity and accuracy than those of existing methods.
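The epipolar-line computation used in the final consistency check is standard two-view geometry: for a fundamental matrix F, a point x in the first image constrains its match to the line l = F x in the second. A small generic sketch (not the paper's check-area construction) of the computation and the point-to-line residual it enables:

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l = F @ x in image 2 for homogeneous point x in image 1.

    Normalised so that l @ y gives the signed pixel distance of point y.
    """
    l = F @ x
    return l / np.linalg.norm(l[:2])

def point_line_distance(l, y):
    """Distance of homogeneous point y = (u, v, 1) from normalised line l."""
    return abs(l @ y)
```

A candidate line match can then be scored by the epipolar distances of its endpoints; matches exceeding a pixel threshold are rejected as the kind of mismatch the paper targets.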


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Yanping Mui ◽  
Youzheng Zhang ◽  
Guitao Cao

In this paper, a new geometric structure of projective invariants is proposed. Compared with the traditional invariant calculation method based on 3D reconstruction, this method is comparable in the reliability of the computed invariants. With this method, only the geometric relationship between 3D points and their 2D projections needs to be determined, and the invariant can be obtained from a single frame. In the 3D-reconstruction-based method, the fundamental matrix of two images is estimated first, and the 3D projective invariants are then calculated from it; in terms of algorithmic complexity, the method proposed in this paper is therefore superior to the traditional one. We also study the projective transformation from 3D points to 2D points in space. From this relationship, the geometric invariant relations of other point structures can easily be derived, which has important applications in model-based object recognition. The experimental results show that the proposed eight-point structure invariants effectively describe the essential characteristics of the target's 3D structure, are unaffected by viewpoint, scaling, lighting, and other such factors, and exhibit good stability and reliability.
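The paper's eight-point invariant is not specified in the abstract, but the prototype of all projective invariants is the cross-ratio of four collinear points, which survives any projective transformation. A minimal illustration of that invariance (not the paper's construction):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a,b; c,d) of four collinear points given as
    scalar parameters along their common line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))
```

Applying any 1D projective map x -> (px + q) / (rx + s) to all four points leaves the cross-ratio unchanged, which is the mechanism that makes such invariants independent of viewpoint and scaling.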


2006 ◽  
Vol 39 (4) ◽  
pp. 282-288 ◽  
Author(s):  
LiMei Song ◽  
DaNv Wang

Author(s):  
Huanbing Gao ◽  
Lei Liu ◽  
Ya Tian ◽  
Shouyin Lu

This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver-assistance systems, and car navigation. However, errors often arise during reconstruction because of shadows cast by moving objects in the scene; the presented method, with obstacle-detection feedback, avoids this problem. First, the paper offers a framework for 3D reconstruction of road scenes by laser scanning and vision: a calibration method based on the location of the horizon is proposed, and an attitude-angle measurement method based on the vanishing point is used to refine the reconstruction result. Second, the framework is extended with an object-recognition stage that automatically detects and discriminates obstacles in the input video streams using a RANSAC approach and threshold filtering, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. The experimental results verify the feasibility and practicality of the proposed method.
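The abstract names RANSAC with threshold filtering as the obstacle-detection mechanism but gives no detail. A common instantiation for laser-scanned road scenes, shown here purely as an illustrative sketch (not the authors' code), is RANSAC plane fitting: the dominant plane is taken as the road surface and points beyond the distance threshold become obstacle candidates.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.05, rng=None):
    """Fit a dominant plane to an Nx3 point cloud with RANSAC.

    Returns ((normal, d), inlier_mask) with normal @ p + d ~ 0 for inliers;
    points outside `thresh` can be treated as obstacle candidates.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh  # threshold filter
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

The complement of the inlier mask, clustered in 3D, yields the obstacle regions to be localized in the model.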


Author(s):  
W. Wahbeh ◽  
S. Nebiker ◽  
G. Fangi

This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments, either using public-domain touristic imagery alone or combining it with professional panoramic imagery. Our work focuses on the reconstruction of the Temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great Temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigation and reconstruction were carried out using two types of imagery: freely available generic touristic photos collected from the web, and panoramic images captured in 2010 to document these monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public-domain imagery with panoramic images over very wide baselines. We subsequently investigate the accuracy and completeness obtainable from the public-domain touristic images alone and from their combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point-cloud fragments resulting from the limited coverage of the touristic photos, and then describe an approach using spherical photogrammetry as a virtual topographic survey, allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.


2019 ◽  
Vol 16 (5) ◽  
pp. 172988141986446
Author(s):  
Xiaojun Wu ◽  
XingCan Tang

Owing to refraction, light changes its direction of propagation before entering a camera enclosed in a waterproof housing, which means that in-air perspective imaging models cannot be used directly underwater. In this article, we propose an accurate binocular stereo measurement system for underwater environments. First, based on a physical underwater imaging model without approximation and on Tsai's calibration method, the proposed system is calibrated to acquire the extrinsic parameters, while the intrinsic parameters are pre-calibrated in air. Then, based on the calibrated camera parameters, an image correction method is proposed to convert underwater images to their in-air equivalents, so that the epipolar constraint can be used to search for matching points directly. The experimental results show that the proposed method effectively eliminates the effect of refraction in binocular vision, and that its measurement accuracy is comparable to measurements made in air.
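The refraction at the flat housing port that this paper models is governed by Snell's law in vector form. The following sketch (a textbook formula, not the paper's full imaging model) bends a viewing ray at a planar interface between two media:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell refraction of direction d at an interface with unit normal n
    (pointing toward the incoming ray), from medium index n1 into n2.

    Returns the refracted unit direction, or None on total internal
    reflection.
    """
    d = d / np.linalg.norm(d)
    cos_i = -d @ n                      # cosine of the incidence angle
    r = n1 / n2
    k = 1.0 - r ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                     # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n
```

Chaining two such refractions (air to glass, glass to water) per pixel ray is the usual way an in-air calibration is carried into the underwater geometry before triangulation.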

