REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES

Author(s):  
T. Yamakawa ◽  
K. Fukano ◽  
R. Onodera ◽  
H. Masuda

Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to point-clouds. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, calibration errors between the cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.
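The basic post-process coloring step described above can be illustrated with a short sketch: each 3D point is transformed into the camera frame, projected with the intrinsic matrix, and the RGB value at the projected pixel is copied to the point. This is only a minimal illustration, not the authors' implementation; the single-image setup and the calibration inputs `K`, `R`, `t` are simplifying assumptions.

```python
import numpy as np

def color_points_from_image(points, image, K, R, t):
    """Assign an RGB color to each 3D point by projecting it into one camera image.

    points : (N, 3) array of 3D points in world coordinates
    image  : (H, W, 3) RGB image
    K      : (3, 3) camera intrinsic matrix
    R, t   : rotation (3, 3) and translation (3,) from world to camera frame
    """
    h, w = image.shape[:2]
    cam = (R @ points.T).T + t            # world -> camera coordinates
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    valid = cam[:, 2] > 0                 # keep only points in front of the camera
    uv = (K @ cam[valid].T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective division to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(valid)[inside]
    colors[idx] = image[v[inside], u[inside]]  # copy the RGB value at the projection
    return colors
```

Any misalignment or calibration error in `R`, `t`, or `K` shifts the projected position, which is exactly how the incorrect colors discussed in the abstract arise.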


Author(s):  
C. Wen ◽  
S. Lin ◽  
C. Wang ◽  
J. Li

Point clouds acquired by RGB-D camera-based indoor mobile mapping systems are noisy, unevenly distributed, and incomplete, which makes planar surface segmentation difficult. This paper presents a novel color-enhanced hybrid planar surface segmentation model for RGB-D camera-based indoor mobile mapping point clouds based on a region growing method; the model specifically addresses planar surface extraction from such noisy and incomplete point clouds. The proposed model combines color moments features with the curvature feature to better select seed points. Additionally, a more robust growing criterion based on the hybrid features is developed to avoid the generation of excessive over-segmentation debris. A segmentation evaluation process with a small set of labeled segmented data is used to determine the optimal hybrid weight. Several comparative experiments were conducted to evaluate the segmentation model, and the experimental results demonstrate the effectiveness and efficiency of the proposed hybrid segmentation method for indoor mobile mapping three-dimensional (3D) point cloud data.
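As a rough illustration of hybrid seed selection for region growing, the sketch below scores each point by mixing a curvature proxy (from the local covariance eigenvalues) with a local color-variance term; lower scores indicate flatter, more color-homogeneous neighborhoods and hence better seeds. The specific features, their combination, and the weight `w` are assumptions for illustration, not the paper's tuned model.

```python
import numpy as np
from scipy.spatial import cKDTree

def hybrid_seed_scores(points, colors, k=20, w=0.5):
    """Score each point for seed selection; lower scores = better seeds.

    points : (N, 3) point coordinates
    colors : (N, 3) point colors on a consistent scale (e.g. normalized RGB)
    w      : placeholder hybrid weight between geometry and color terms
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                     # k nearest neighbors per point
    scores = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        cov = np.cov(nbr_pts.T)
        evals = np.sort(np.linalg.eigvalsh(cov))
        curvature = evals[0] / max(evals.sum(), 1e-12)   # surface variation proxy
        color_var = colors[nbrs].var(axis=0).mean()      # local color spread (moment-based)
        scores[i] = w * curvature + (1.0 - w) * color_var
    return scores
```

Points with the smallest scores would then be taken as seeds, and a similar hybrid measure could serve as the growing criterion.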


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, as well as the instrument itself, point clouds contain many uncertainties, which directly affect data quality and the accuracy of subsequent processing, such as point cloud segmentation, 3D modeling, etc. In this paper, to address this problem, the stochastic information of point cloud coordinates is taken into account, and on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating five additional parameters and six exterior orientation parameters. For cases where the instrument accuracy differs from the nominal values, a variance component estimation algorithm is implemented to reweight the outliers after the residual errors of the observations have been obtained. Since the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to derive the solutions for the additional parameters and exterior orientation parameters. We conducted experiments using simulated and real data and compared the results with two existing methods. The experimental results showed that the proposed method could improve the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known) and 10⁻⁷ (a priori unknown), and reduced the correlation among the parameters (approximately 60% of the volume). However, some correlations increased instead, which is a limitation of the general method.
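The abstract states that the nonlinear calibration model is solved by Gauss–Newton iteration. The generic sketch below shows that iteration for an unweighted least-squares problem; `residual_fn` and `jacobian_fn` are placeholders, and the Gauss–Helmert structure and variance-component reweighting of the paper are omitted.

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, max_iter=20, tol=1e-10):
    """Generic Gauss-Newton solver for min ||r(x)||^2.

    residual_fn(x) -> (m,) residual vector
    jacobian_fn(x) -> (m, n) Jacobian of the residuals
    x0             -> initial guess for the n parameters
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual_fn(x)
        J = jacobian_fn(x)
        # Normal equations: (J^T J) dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:      # stop when the update is negligible
            break
    return x
```

With observation weighting, the normal matrix J.T @ J would become J.T @ W @ J, and variance component estimation would update W between iterations.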


2018 ◽  
Vol 106 (1) ◽  
pp. 19-24
Author(s):  
Damian Biel ◽  
Tomasz Lipecki

Nowadays, the growing popularity of terrestrial laser scanners (TLS) makes it possible to obtain point clouds of many industrial objects alongside classic surveying. However, the quality and accuracy of the resulting models in comparison to the real shape remain a question that requires further research. This is especially crucial for Finite Element Method (FEM) analysis, which, as part of technical design, estimates the displacement and deformation of a structure. The article describes objects such as a headgear with a steel support and a four-post headframe with steel sheers. Both the supports and the sheers were modelled based on point clouds. All models were compared to the point cloud; the differences in the models' shape were calculated and the maximum values were determined. The usefulness of the results for FEM analysis is described.
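A minimal sketch of the model-versus-cloud comparison, assuming the modelled surfaces have been densely sampled into points: each scanned point is matched to its nearest model point and the mean and maximum deviations are reported. This is only an illustration of the comparison step, not the authors' workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_model_deviation(cloud_points, model_points):
    """Distance from each scanned point to its nearest point sampled on the model.

    cloud_points : (N, 3) TLS point cloud
    model_points : (M, 3) points densely sampled on the modelled surfaces
    Returns the mean and maximum deviation; the maximum indicates the largest
    shape difference between the model and the real object.
    """
    tree = cKDTree(model_points)
    dist, _ = tree.query(cloud_points)
    return dist.mean(), dist.max()
```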


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud segmentation is a fundamental problem in point processing. Segmenting a point cloud fully automatically is very challenging due to the properties of point clouds as well as the differing requirements of users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively, indicating the target object and the background respectively. The drawn strokes are sparse and do not necessarily cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem. The problem is solved efficiently using the max-flow/min-cut algorithm. In the experiments, mobile mapping data of a city area is utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
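A small sketch of a stroke-based graph-cut formulation, assuming a k-nearest-neighbour graph with Gaussian weights on point distances; the actual edge weights and energy terms of the paper may differ. Stroke-labeled points are tied to source and sink terminals, and the object is recovered from the source side of the minimum cut.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def graph_cut_segmentation(points, fg_idx, bg_idx, k=8, sigma=0.05):
    """Two-label point segmentation via an s-t minimum cut.

    points : (N, 3) point cloud
    fg_idx : indices of points covered by the 'object' stroke
    bg_idx : indices of points covered by the 'background' stroke
    """
    tree = cKDTree(points)
    dist, nbrs = tree.query(points, k=k + 1)             # first neighbor is the point itself
    G = nx.Graph()
    # n-links: neighbouring points, weighted by spatial proximity
    for i in range(len(points)):
        for d, j in zip(dist[i, 1:], nbrs[i, 1:]):
            G.add_edge(i, int(j), capacity=float(np.exp(-(d / sigma) ** 2)))
    # t-links: hard constraints from the two user strokes
    for i in fg_idx:
        G.add_edge('src', int(i), capacity=1e9)
    for i in bg_idx:
        G.add_edge(int(i), 'snk', capacity=1e9)
    _, (object_side, _) = nx.minimum_cut(G, 'src', 'snk')
    return np.array(sorted(i for i in object_side if i != 'src'))
```

The returned indices form the object segment; all remaining points belong to the background.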


Author(s):  
H. A. Lauterbach ◽  
D. Borrmann ◽  
A. Nüchter

3D laser scanners are typically not able to collect color information. Therefore coloring is often done by projecting photos from an additional camera onto the 3D scans. The capturing process is time consuming and therefore prone to changes in the environment. The appearance of the colored point cloud is mainly affected by changes in lighting conditions and the corresponding camera settings. In the case of panorama images, these exposure variations are typically corrected by radiometrically aligning the input images to each other. In this paper we adopt existing methods for panorama optimization in order to correct the coloring of point clouds. To this end, corresponding pixels from overlapping images are selected using the geometrically closest points of the registered 3D scans and their neighboring pixels in the images. The dynamic range of images in raw format allows for the correction of large exposure differences. Two experiments demonstrate the capabilities of the approach.
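One common way to radiometrically align overlapping images, sketched below under the assumption of a single multiplicative exposure gain per image: gains are estimated in log space by linear least squares from corresponding pixel intensities and then divided out. The paper's raw-image processing and correspondence selection are not reproduced here.

```python
import numpy as np

def estimate_exposure_gains(pairs, n_images):
    """Estimate one multiplicative exposure gain per image from pixel correspondences.

    pairs : list of (img_a, img_b, intensity_a, intensity_b) tuples taken at
            corresponding pixels of overlapping images (e.g. at geometrically
            closest points of the registered scans)
    Model: observed = gain * true, so log g_a - log g_b = log I_a - log I_b.
    """
    rows, rhs = [], []
    for a, b, ia, ib in pairs:
        r = np.zeros(n_images)
        r[a], r[b] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(ia) - np.log(ib))
    # Fix the gauge: anchor the first image's gain at 1 (log gain = 0)
    anchor = np.zeros(n_images)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    log_g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_g)   # divide each image by its gain to align exposures
```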


Author(s):  
S. Hofmann ◽  
C. Brenner

Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. Comparing this reference data to the acquired mobile mapping point clouds of several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data sources concerning availability, cost, accuracy and applicability are discussed.


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6043
Author(s):  
Yujun Jiao ◽  
Zhishuai Yin

A two-phase cross-modality fusion detector is proposed in this study for robust and high-precision 3D object detection with RGB images and LiDAR point clouds. First, a two-stream fusion network is built into the framework of Faster R-CNN to perform accurate and robust 2D detection. The visible stream takes the RGB images as inputs, while the intensity stream is fed with intensity maps generated by projecting the reflection intensity of the point clouds to the front view. A multi-layer feature-level fusion scheme is designed to merge multi-modal features across multiple layers in order to enhance the expressiveness and robustness of the produced features, upon which region proposals are generated. Second, a decision-level fusion is implemented by projecting the 2D proposals into the space of the point cloud to generate 3D frustums, on the basis of which the second-phase 3D detector is built to accomplish instance segmentation and 3D-box regression on the filtered point cloud. The results on the KITTI benchmark show that features extracted from RGB images and intensity maps complement each other, and our proposed detector achieves state-of-the-art performance on 3D object detection with a substantially lower running time than available competitors.
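The projection of 2D proposals into point-cloud frustums can be sketched as follows, assuming a known (3, 4) projection matrix `P` from the LiDAR frame to the image; the function name and single-box interface are illustrative, not the paper's code.

```python
import numpy as np

def points_in_frustum(points, box2d, P):
    """Select LiDAR points whose image projections fall inside a 2D proposal box,
    i.e. the 3D frustum associated with that proposal.

    points : (N, 3) LiDAR points
    box2d  : (x_min, y_min, x_max, y_max) proposal in pixel coordinates
    P      : (3, 4) projection matrix from the LiDAR frame to the image
    """
    homog = np.hstack([points, np.ones((len(points), 1))])
    proj = (P @ homog.T).T
    in_front = proj[:, 2] > 0                         # keep points in front of the camera
    uv = proj[:, :2] / proj[:, 2:3]                   # pixel coordinates
    x_min, y_min, x_max, y_max = box2d
    inside = (uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) & \
             (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max)
    return points[in_front & inside]
```

The filtered points of each frustum would then be passed to the second-phase 3D detector for instance segmentation and 3D-box regression.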


Author(s):  
Gülhan Benli

Since the 2000s, terrestrial laser scanning, as one of the methods used to document historical edifices in protected areas, has taken on greater importance because it mitigates the difficulties associated with working on large areas and saves time, while also making it possible to better understand all the particularities of the area. Through this technology, comprehensive point data (point clouds) about the surface of an object can be generated in a highly accurate three-dimensional manner. Furthermore, with the proper software, this three-dimensional point cloud data can be transformed into three-dimensional renderings/mappings/models and quantitative orthophotographs. In this chapter, the study presents the results of terrestrial laser scanning and surveying used to obtain three-dimensional point clouds through survey measurements and scans of street silhouettes in Fatih on the Historic Peninsula in Istanbul, which were then transposed into survey images and drawings. The study also cites examples of facade mapping using terrestrial laser scanning data in the Istanbul Historic Peninsula Project.


Author(s):  
Lee J. Wells ◽  
Mohammed S. Shafae ◽  
Jaime A. Camelio

Ever-advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: 1) extracting feature parameters, which does not complement the nature of 3D point clouds as it wastes valuable data, and 2) an ad hoc manual process where a visual representation of a point cloud (usually as deviations from nominal) is analyzed, which tends to suffer from slow, inefficient, and inconsistent inspection results. Therefore, our paper proposes an approach to automating the latter form of 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad hoc visual inspection approach into a statistically viable automated inspection solution. In order to aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting faults of specific sizes and magnitudes, in addition to the probability of false alarms.
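As a simplified illustration of a generalized likelihood ratio scan (not the adaptive 2D AGLR of the paper), the sketch below tests a 1D profile of deviations from nominal for a mean shift over candidate window positions and sizes, keeping the largest statistic for comparison against a threshold chosen for a target false-alarm rate.

```python
import numpy as np

def glr_scan(deviations, window_sizes, sigma):
    """Scan a 1D profile of surface deviations with a GLR statistic for a mean shift.

    deviations   : (N,) deviations from nominal (e.g. along one scan line)
    window_sizes : candidate fault sizes (in samples) to test
    sigma        : noise standard deviation under the fault-free model
    Returns the best (statistic, start index, window size).
    """
    best = (0.0, None, None)
    for w in window_sizes:
        for start in range(len(deviations) - w + 1):
            seg = deviations[start:start + w]
            # GLR for an unknown mean shift within the window: n * mean^2 / sigma^2
            stat = w * seg.mean() ** 2 / sigma ** 2
            if stat > best[0]:
                best = (stat, start, w)
    return best
```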

