Robust parameter estimation from point cloud data with noises for augmented reality

Author(s):  
Yingzi Wei ◽  
Tianhao Zhang ◽  
Kanfeng Gu ◽  
Zhengjin Shi


Author(s):  
S. Gupta ◽  
B. Lohani

The mobile augmented reality system is a next-generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and addressed in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points that lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity values. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, yielding a pipeline that locates the point in the point cloud corresponding to a given point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information for pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method, uses an experimental setup to mimic the mobile phone and server system, and presents some initial but encouraging results.
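As an illustration of the registration step described above, the sketch below matches SIFT features between a mobile photograph and a pseudo-intensity image rendered from LiDAR points, then estimates a homography linking the two. The file names, the ratio-test matching, and the RANSAC homography model are assumptions for illustration; the abstract does not specify the matching or transform details.

import cv2
import numpy as np

# Hypothetical inputs: the mobile photo and a pseudo-intensity image
# rendered from the LiDAR points in the camera's viewshed.
mobile = cv2.imread("mobile_image.png", cv2.IMREAD_GRAYSCALE)
pseudo = cv2.imread("pseudo_intensity.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp_m, des_m = sift.detectAndCompute(mobile, None)
kp_p, des_p = sift.detectAndCompute(pseudo, None)

# Match descriptors, keeping matches that pass Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des_m, des_p, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate a homography from mobile pixels to pseudo-intensity pixels;
# through the rendering, each pseudo-intensity pixel indexes a LiDAR point.
src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_p[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

A pixel pair selected on the mobile image can then be mapped through H into the pseudo-intensity image, and from there to the corresponding 3D LiDAR points, whose Euclidean distance gives the dimension to display.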


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 836 ◽  
Author(s):  
Young-Hoon Jin ◽  
In-Tae Hwang ◽  
Won-Hyung Lee

Augmented reality (AR) is a useful visualization technology that displays information by adding virtual images to the real world. In AR systems that require three-dimensional information, point cloud data is easy to use after real-time acquisition; however, objects are difficult to measure and visualize in real time because of the large amount of data and the matching process involved. In this paper, we explore a method of estimating pipes from point cloud data and visualizing them in real time through augmented reality devices. In general, pipe estimation in a point cloud uses a Hough transform and requires preprocessing such as noise filtering, normal estimation, or segmentation; its disadvantage is slow execution due to the large amount of computation. Therefore, for real-time visualization on augmented reality devices, a fast cylinder matching method using random sample consensus (RANSAC) is required. In this paper, we propose parallel processing, multiple frames, an adjustable scale, and error correction for real-time visualization. The method obtains a depth image from the sensor and constructs a uniform point cloud using a voxel grid algorithm; the constructed data is then analyzed with the fast cylinder matching method using RANSAC. With the spread of various AR devices, this real-time visualization method is expected to be used to identify problems, such as the sagging of pipes, through real-time measurement at plant sites.
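The abstract does not spell out the cylinder model, so the following is only a minimal numpy sketch of one common RANSAC cylinder formulation: each hypothesis samples two oriented points, exploiting the fact that both surface normals of a cylinder are perpendicular to its axis. The inputs are assumed to be a voxel-grid-downsampled point cloud with precomputed unit normals; thresholds and iteration counts are placeholders.

import numpy as np

def fit_cylinder_ransac(points, normals, thresh=0.01, iters=500):
    # points: (N, 3) downsampled point cloud; normals: (N, 3) unit normals.
    # Returns (axis_point, axis_dir, radius) of the best hypothesis.
    rng = np.random.default_rng(0)
    best_count, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p1, n1, p2, n2 = points[i], normals[i], points[j], normals[j]
        # Both surface normals of a cylinder are perpendicular to its axis.
        d = np.cross(n1, n2)
        if np.linalg.norm(d) < 1e-6:
            continue  # nearly parallel normals give no stable axis
        d /= np.linalg.norm(d)
        # Build a 2D frame in the plane perpendicular to the axis.
        u = n1 - np.dot(n1, d) * d
        u /= np.linalg.norm(u)
        v = np.cross(d, u)
        def to2d(x):
            return np.array([x @ u, x @ v])
        q1, m1, q2, m2 = to2d(p1), to2d(n1), to2d(p2), to2d(n2)
        # The projected normal lines intersect on the axis: q1 + t*m1 = q2 + s*m2.
        A = np.column_stack([m1, -m2])
        if abs(np.linalg.det(A)) < 1e-9:
            continue
        t, _ = np.linalg.solve(A, q2 - q1)
        c2d = q1 + t * m1
        radius = np.linalg.norm(q1 - c2d)
        c = c2d[0] * u + c2d[1] * v  # a 3D point on the axis
        # Inliers lie at distance ~radius from the axis line.
        w = points - c
        w_perp = w - np.outer(w @ d, d)
        dist = np.abs(np.linalg.norm(w_perp, axis=1) - radius)
        count = int(np.count_nonzero(dist < thresh))
        if count > best_count:
            best_count, best_model = count, (c, d, radius)
    return best_model

In practice the depth image would first be converted to a point cloud and downsampled with a voxel grid, and the hypothesis loop parallelized, in the spirit of the parallel processing the paper proposes.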


Author(s):  
Rafael Radkowski

The paper introduces a method for an augmented reality (AR) assembly assistance application that allows one to quantify the alignment of two parts. Point cloud-based tracking is one method to recognize and track physical parts. However, the correct fitting of two parts cannot be determined with high fidelity from point cloud tracking data due to occlusion and other challenges. A Maximum Likelihood Estimate (MLE) of an error model is suggested to quantify the probability that two parts are correctly aligned. An initial solution was investigated. The results of an offline simulation with point cloud data are promising and indicate the efficacy of the suggested method.
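The abstract leaves the error model unspecified, so the sketch below is a toy stand-in rather than the paper's method: tracking residuals are modeled as Gaussian, the model parameters are fit by maximum likelihood from a hypothetical "known correctly fitted" calibration run, and the mean log-likelihood of current residuals scores how consistent they are with correct alignment.

import numpy as np
from scipy.stats import norm

def mle_gaussian(residuals):
    # Maximum-likelihood estimates for a Gaussian error model:
    # the sample mean and the (biased) sample standard deviation.
    return residuals.mean(), residuals.std()

def alignment_score(residuals_now, mu_ref, sigma_ref):
    # Mean log-likelihood of the current tracking residuals under the
    # "correctly aligned" reference model; a low score flags a
    # probable misalignment.
    return norm.logpdf(residuals_now, mu_ref, sigma_ref).mean()

# Synthetic illustration: calibration residuals (parts known to fit)
# versus shifted residuals from a misaligned part.
rng = np.random.default_rng(1)
ref = rng.normal(0.0, 0.5, 1000)
mu, sigma = mle_gaussian(ref)
misaligned = rng.normal(1.5, 0.5, 200)
print(alignment_score(ref, mu, sigma))         # high: consistent with fit
print(alignment_score(misaligned, mu, sigma))  # noticeably lower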


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1325 ◽  
Author(s):  
Kumar ◽  
Patil ◽  
Kang ◽  
Chai

Augmented reality (AR) systems are becoming next-generation technologies for intelligently visualizing the real world in 3D. This research proposes a sensor-fusion-based pipeline inspection and retrofitting approach for an AR system, intended for pipeline inspection and retrofitting processes in industrial plants. The proposed methodology utilizes prebuilt 3D point cloud data of the environment, a real-time Light Detection and Ranging (LiDAR) scan, and an image sequence from the camera. First, we estimate the current pose of the sensor platform by matching the LiDAR scan against the prebuilt point cloud data; from this pose, the prebuilt point cloud is augmented onto the camera image using the LiDAR and camera calibration parameters. Next, based on the user's selection in the augmented view, the geometric parameters of a pipe are estimated. In addition to pipe parameter estimation, retrofitting in the existing plant using the augmented scene is illustrated. Finally, the step-by-step procedure of the proposed method was experimentally verified at a water treatment plant. The results show that integrating AR with building information modelling (BIM) greatly benefits the post-occupancy evaluation process and the pre-retrofitting and renovation process for identifying, evaluating, and updating the geometric specifications of a construction environment.
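A minimal sketch of the augmentation step, assuming a standard pinhole model: once the platform pose is estimated against the prebuilt map, map points are transformed into the camera frame with the combined pose and LiDAR-camera extrinsics and projected with the intrinsic matrix. The names and the 4x4-transform convention are illustrative assumptions, not the paper's exact interfaces.

import numpy as np

def project_points(points_map, T_map_to_cam, K):
    # Transform Nx3 map points into the camera frame using a 4x4
    # homogeneous transform (platform pose composed with the
    # LiDAR-to-camera extrinsic calibration).
    pts_h = np.hstack([points_map, np.ones((len(points_map), 1))])
    pts_cam = (T_map_to_cam @ pts_h.T).T[:, :3]

    # Discard points behind the image plane.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection with the 3x3 intrinsic matrix K.
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Example with an identity pose and invented intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.1, 0.2, 2.0], [0.0, 0.0, 5.0]])
print(project_points(pts, np.eye(4), K))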


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the higher registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
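In computer-vision terms, the collinearity-equation solve for exterior orientation is a camera resection (PnP) problem: matched 3D LiDAR points and optical-image pixels jointly determine the camera's rotation and translation. The sketch below uses OpenCV's solvePnP as a stand-in for the collinearity adjustment, with synthetic correspondences so it runs end to end; the intrinsics and pose values are invented for illustration.

import cv2
import numpy as np

# Invented camera intrinsics for illustration.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

# Synthetic 3D LiDAR points (object space) standing in for points whose
# intensity-image features matched features in the optical image.
object_pts = np.array([[10.0, 2.0, 50.0],
                       [12.0, -3.0, 48.0],
                       [8.0, 1.0, 55.0],
                       [11.0, 4.0, 52.0],
                       [9.0, -2.0, 47.0],
                       [13.0, 0.0, 53.0]])

# Simulate the matched pixel observations by projecting through a known pose.
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.5, -0.3, 2.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Recover the exterior orientation parameters from the correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
print("rotation:\n", R)
print("translation:", tvec.ravel())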


Author(s):  
Keisuke YOSHIDA ◽  
Shiro MAENO ◽  
Syuhei OGAWA ◽  
Sadayuki ISEKI ◽  
Ryosuke AKOH
