Architecture of Solution for Panoramic Image Blurring in GIS Projects Application


2021 ◽  
Vol 10 (2) ◽  
pp. 287-296
Author(s):  
Dejan Vasić ◽  
Marina Davidović ◽  
Ivan Radosavljević ◽  
Đorđe Obradović

Abstract. Panoramic images captured using laser scanning technologies, which principally produce point clouds, are readily applicable to point cloud colorization, detailed visual inspection, road defect detection, spatial entity extraction, the creation of diverse maps, etc. This paper underlines the importance of images in modern surveying technologies and different GIS projects, while also addressing their anonymization in accordance with the law. The General Data Protection Regulation (GDPR) is a legal framework that sets guidelines for the collection and processing of personal information from individuals who live in the European Union (EU). Namely, it is a legislative requirement that faces of persons and license plates of vehicles in the collected data be blurred. The objective of this paper is to present a novel architecture of a solution for blurring particular objects. The architecture is designed as a pipeline of object detection algorithms that progressively narrows the search space until it detects the objects to be blurred. The methodology was tested on four data sets containing 5000, 10 000, 15 000 and 20 000 panoramic images. The accuracy, i.e., the percentage of successfully detected and blurred objects of interest, was higher than 97 % for each data set. Additionally, our aim was to achieve efficiency and broad usability.
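The cascaded detect-then-blur architecture described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector stages, the box-blur kernel, and all function names are assumptions invented for the sketch.

```python
# Hypothetical sketch of a cascaded detect-then-blur pipeline: each stage
# narrows the search space (e.g. vehicle -> license plate), and every
# confirmed box is anonymized with a simple mean (box) blur.

def box_blur(image, box, k=1):
    """Blur pixels inside box (x0, y0, x1, y1) with a (2k+1)^2 mean filter."""
    x0, y0, x1, y1 = box
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(max(y0, 0), min(y1, h)):
        for x in range(max(x0, 0), min(x1, w)):
            neigh = [image[j][i]
                     for j in range(max(y - k, 0), min(y + k + 1, h))
                     for i in range(max(x - k, 0), min(x + k + 1, w))]
            out[y][x] = sum(neigh) / len(neigh)
    return out

def anonymize(image, stage_detectors):
    """Run detectors stage by stage; each later stage searches only inside
    the boxes found by the previous one, then blur the final boxes."""
    regions = [(0, 0, len(image[0]), len(image))]   # start: whole image
    for detect in stage_detectors:
        regions = [box for region in regions for box in detect(image, region)]
    for box in regions:
        image = box_blur(image, box)
    return image
```

A detector here is any callable `(image, region) -> list of boxes`, so trained models can be slotted in per stage.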


2020 ◽  
Vol 1 (1) ◽  
pp. 01-07
Author(s):  
Bashar Alsadik

The coregistration of terrestrial laser point clouds is widely investigated, and different techniques have been presented to solve this problem. The techniques are divided into target-based and targetless approaches for coarse and fine coregistration. The targetless approach is more challenging since no physical reference targets are placed in the field during scanning. Mainly, targetless methods are image-based and are applied by projecting the point clouds back to the scanning stations. The projected 360° point cloud images normally take the form of panoramic images utilizing either intensity or RGB values, and image matching is then applied to align the scan stations. However, point cloud coregistration remains a challenge, since ICP-like methods are applicable only for fine registration. Furthermore, image-based approaches are restricted when there is limited overlap between point clouds, no RGB data accompanying the intensity values, or unstructured scanned objects in the point clouds. Therefore, we present in this paper the concept of a multi-surrounding-scan (MSS) image-based approach to overcome the difficulty of registering point clouds in challenging cases. The multi-surrounding-scan approach creates multiple perspective images per laser scan point cloud. These multi-perspective images offer different viewpoints per scan station to overcome the viewpoint distortion that causes image matching to fail in challenging situations. Two experimental tests were applied using point clouds collected in the city of Enschede and the published 3D Toolkit data set from the city of Bremen. The experiments showed a successful coregistration approach even in challenging settings with different constellations.
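The back-projection step underlying such image-based approaches can be sketched as follows: points are projected onto an equirectangular (360°) image from a chosen perspective centre, storing one intensity value per pixel. Generating several such images from different centres per scan is the core of the MSS idea. The resolution and the nearest-point depth test here are simplifying assumptions, not the paper's exact formulation.

```python
import math

def panorama_from_points(points, centre, width=360, height=180):
    """points: list of (x, y, z, intensity); centre: (cx, cy, cz).
    Returns a height x width intensity image (None where no point projects)."""
    img = [[None] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    cx, cy, cz = centre
    for x, y, z, inten in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        if r == 0:
            continue
        az = math.atan2(dy, dx)             # azimuth, -pi .. pi
        el = math.asin(dz / r)              # elevation, -pi/2 .. pi/2
        u = int((az + math.pi) / (2 * math.pi) * (width - 1))
        v = int((math.pi / 2 - el) / math.pi * (height - 1))
        if r < depth[v][u]:                 # keep the nearest point per pixel
            depth[v][u] = r
            img[v][u] = inten
    return img
```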


Author(s):  
M. Lemmens

Abstract. A knowledge-based system exploits the knowledge which a human expert uses for completing a complex task, through a database containing decision rules and an inference engine. Knowledge-based systems were already proposed for automated image classification in the early nineties. Lack of success faded out the initial interest and enthusiasm, the same fate that struck neural networks at that time. Today the latter enjoy a steady revival. This paper aims at demonstrating that a knowledge-based approach to automated classification of mobile laser scanning point clouds has promising prospects. An initial experiment exploiting only two features, height and reflectance value, resulted in an overall accuracy of 79 % for the Paris-rue-Madame point cloud benchmark data set.
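Decision rules over the two features named in the abstract (height and reflectance) might look like the following. The thresholds and the class set are invented for this sketch and are not taken from the paper.

```python
# Illustrative expert decision rules in the spirit of a knowledge-based
# classifier, using only height above ground and reflectance.

def classify_point(height, reflectance):
    """Return a semantic label from simple hand-written rules (all
    thresholds are hypothetical)."""
    if height < 0.2:
        return "road" if reflectance < 0.3 else "road_marking"
    if height < 3.0:
        return "car_or_street_furniture"
    return "facade_or_vegetation"

def classify_cloud(points):
    """points: iterable of (height, reflectance) tuples."""
    return [classify_point(h, r) for h, r in points]
```

In a full system these rules would live in a rule database and be applied by an inference engine rather than hard-coded.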


2021 ◽  
Vol 906 (1) ◽  
pp. 012091
Author(s):  
Petr Kalvoda ◽  
Jakub Nosek ◽  
Petra Kalvodova

Abstract. Mobile mapping systems (MMS) have become widely used in standard geodetic tasks in recent years. The paper focuses on the influence of the number and configuration of control points (CPs) on mobile laser scanning accuracy. The mobile laser scanning (MLS) data was acquired by an MMS RIEGL VMX-450. The resulting point cloud was compared with two different reference data sets. The first reference data set consisted of a high-accuracy test point field (TPF) measured by a Trimble R8s GNSS system and a Trimble S8 HP total station. The second reference data set was a point cloud from terrestrial laser scanning (TLS) using two Faro Focus3D X 130 laser scanners. The coordinates of both reference data sets were determined with significantly higher accuracy than the coordinates of the tested MLS point cloud. The accuracy testing is based on coordinate differences between the reference data set and the tested MLS point cloud. A minimum of 6–7 CPs is required in our scanned area (based on MLS trajectory length) to achieve the relative trajectory positioning accuracy declared in the RIEGL datasheet. We tested two types of ground control point (GCP) configurations for 7 GCPs, using the TPF reference data. The first type is a trajectory-based CP configuration, and the second is a geometry-based CP configuration. The accuracy differences between the MLS point clouds with the trajectory-based and the geometry-based CP configurations are not statistically significant. From a practical perspective, a geometry-based CP configuration is more advantageous in a nonlinear type of urban area such as ours. The following analyses were performed on geometry-based CP configuration variants. We tested the influence of moving two CPs from the ground to a roof. No effect of the vertical configuration of the CPs on the accuracy of the tested MLS point cloud was demonstrated.
The effect of the number of control points on the accuracy of the MLS point cloud was also tested. In the overall statistics using the TPF, the accuracy increases significantly with the number of GCPs up to 6. This number corresponds to the manufacturer's requirement. Although further increasing the number of CPs does not significantly increase the global accuracy, local accuracy improves with up to 10 CPs (average spacing 50 m) according to the comparison with the TLS reference point cloud. The accuracy test of the MLS point cloud was divided into a horizontal accuracy test on the façade data subset and a vertical accuracy test on the road data subset using the TLS reference point cloud. The results of this paper can help improve the efficiency and accuracy of the mobile mapping process in geodetic practice.
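An accuracy test based on coordinate differences, as described above, can be sketched as per-axis RMSE between reference points and their nearest tested MLS points. The brute-force nearest-neighbour search and the function name are assumptions for illustration; real evaluations use spatial indices and the paper's own matching scheme.

```python
import math

def rmse_against_reference(reference, tested):
    """reference, tested: lists of (x, y, z). For each reference point,
    take the closest tested point and accumulate per-axis squared
    coordinate differences; return (rmse_x, rmse_y, rmse_z)."""
    sq = [0.0, 0.0, 0.0]
    for ref in reference:
        near = min(tested,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p, ref)))
        for i in range(3):
            sq[i] += (near[i] - ref[i]) ** 2
    n = len(reference)
    return tuple(math.sqrt(s / n) for s in sq)
```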


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (iXU 1000) was used to capture UAV image data in RAW format. This investigation aims at evaluating the influence of the image data format on point clouds generated by a dense image matching process. Furthermore, the effects of different data filters, which are part of the evaluation programs, were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of the generated TIF and JPG data sets. The point clouds generated are the basis for the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height point deviations from the dense point clouds were determined using check points. Differences in the results generated by the two software packages could be detected. The reasons for these differences are the filtering settings used for the generation of dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for the point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.
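The point cloud comparison against a TLS reference, as used above, can be sketched as a cloud-to-cloud distance: for each point of the image-based cloud, the distance to its nearest reference point is taken and summarised. Brute force search is used for clarity; production tools use spatial indices, and the exact statistics reported in the paper may differ.

```python
import math

def cloud_to_cloud(test_cloud, reference_cloud):
    """Return (mean, max) nearest-neighbour distance from each point of
    test_cloud to reference_cloud (both lists of (x, y, z) tuples)."""
    dists = []
    for p in test_cloud:
        d = min(math.dist(p, q) for q in reference_cloud)
        dists.append(d)
    return sum(dists) / len(dists), max(dists)
```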


2020 ◽  
Vol 12 (18) ◽  
pp. 2884
Author(s):  
Qingwang Liu ◽  
Liyong Fu ◽  
Qiao Chen ◽  
Guangxing Wang ◽  
Peng Luo ◽  
...  

Forest canopy height is one of the most important spatial characteristics for forest resource inventories and forest ecosystem modeling. Light detection and ranging (LiDAR) can be used to accurately detect canopy surface and terrain information from the backscattering signals of laser pulses, while photogrammetry tends to accurately depict the canopy surface envelope. The spatial differences between the canopy surfaces estimated by LiDAR and photogrammetry have not been investigated in depth. Thus, this study aims to assess LiDAR and photogrammetry point clouds and analyze the spatial differences in canopy heights. The study site is located in the Jigongshan National Nature Reserve of Henan Province, Central China. Six data sets, including one LiDAR data set and five photogrammetry data sets captured from an unmanned aerial vehicle (UAV), were used to estimate the forest canopy heights. Three spatial distribution descriptors, namely, the effective cell ratio (ECR), point cloud homogeneity (PCH) and point cloud redundancy (PCR), were developed to assess the LiDAR and photogrammetry point clouds in the grid. The ordinary neighbor (ON) and constrained neighbor (CN) interpolation algorithms were used to fill void cells in digital surface models (DSMs) and canopy height models (CHMs). The CN algorithm could be used to distinguish small and large holes in the CHMs. The optimal spatial resolution was analyzed according to the changes in the ECR of the DSMs and CHMs produced by the CN algorithm. Large negative and positive variations were observed between the LiDAR and photogrammetry canopy heights. The stratified mean difference in canopy heights increased gradually from negative to positive when the canopy heights were greater than 3 m, which means that photogrammetry tends to overestimate low canopy heights and underestimate high canopy heights. The CN interpolation algorithm achieved smaller relative root mean square errors than the ON interpolation algorithm. This article provides an operational method for the spatial assessment of point clouds and suggests that the variations between LiDAR and photogrammetry CHMs should be considered when modeling forest parameters.
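One of the descriptors named above, the effective cell ratio (ECR), can plausibly be read as the share of grid cells over the study area that contain at least one point. The exact definition in the paper may differ; this version, and the parameter names, are assumptions for illustration.

```python
import math

def effective_cell_ratio(points, cell_size, extent):
    """points: (x, y) tuples; extent: (xmin, ymin, xmax, ymax).
    Returns the fraction of grid cells occupied by at least one point."""
    xmin, ymin, xmax, ymax = extent
    ncols = max(1, math.ceil((xmax - xmin) / cell_size))
    nrows = max(1, math.ceil((ymax - ymin) / cell_size))
    occupied = {(int((x - xmin) // cell_size), int((y - ymin) // cell_size))
                for x, y in points
                if xmin <= x < xmax and ymin <= y < ymax}
    return len(occupied) / (ncols * nrows)
```

A denser, more evenly spread cloud yields an ECR closer to 1 at a given resolution, which is what makes the descriptor useful for choosing an optimal grid size.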


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Siyuan Huang ◽  
Limin Liu ◽  
Jian Dong ◽  
Xiongjun Fu ◽  
Leilei Jia

Purpose. Most of the existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between points in spherical coordinates.
Design/methodology/approach. First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the point cloud and to generate a matrix containing this adjacency information. Then, according to the adjacency information, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to further process the point cloud details, and the algorithm is finally used to filter a point cloud in a scene from the KITTI data set.
Findings. The results show that the accuracy of KITTI point cloud sorting is 96.3 % and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy.
Research limitations/implications. The steps of the algorithm are computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different data sets from urban streets. However, due to the lack of point clouds from field environments with labeled ground points, the filtering performance of the algorithm in field environments needs further study.
Originality/value. In this study, the point cloud adjacency information was obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in point cloud data. In addition, changing the thresholds has little effect on the algorithm's results.
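The local slope analysis mentioned above can be illustrated on a single ordered scan line (the ordering that the modified CS step recovers): each point is compared with its predecessor, and a point stays "ground" while the slope between them is below a threshold. The threshold value and the single-line simplification are assumptions for the sketch, not the paper's full algorithm.

```python
import math

def label_ground(scan_line, max_slope=0.3):
    """scan_line: (x, y, z) points ordered along one scan line.
    Returns a list of booleans (True = ground) from a local slope test."""
    labels = [True]  # assume the first (nearest) return is ground
    for prev, cur in zip(scan_line, scan_line[1:]):
        run = math.hypot(cur[0] - prev[0], cur[1] - prev[1])  # horizontal step
        rise = abs(cur[2] - prev[2])                          # vertical step
        labels.append(run > 0 and rise / run <= max_slope)
    return labels
```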


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings, providing accurate data for creating a 3D model. The output is a set of data points in space, the so-called point cloud. While point clouds can be directly rendered and inspected, they do not hold any semantics. Typically, engineers manually derive floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of the PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities from scanned point cloud data sets, such as floors, rooms, walls, and openings, and for the identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human interaction (e.g., specifying the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration into any existing BIM software.
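One entity-detection step of the kind described above can be sketched as finding candidate floor levels as peaks in a histogram of point heights (floors appear as dense horizontal slabs of points). The bin size and peak threshold are illustrative assumptions, not PointCloud2BIM's actual logic.

```python
from collections import Counter

def detect_floor_levels(z_values, bin_size=0.1, min_share=0.2):
    """Return heights (in metres) of histogram bins that hold at least
    min_share of all points, i.e. candidate floor slab levels."""
    bins = Counter(round(z / bin_size) for z in z_values)
    total = len(z_values)
    return sorted(b * bin_size for b, n in bins.items() if n / total >= min_share)
```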


2011 ◽  
Author(s):  
David Doria

This document presents a GUI application to manually select corresponding points in two data sets. The data sets can each be either an image or a point cloud. If both data sets are images, the functionality is equivalent to Matlab’s ‘cpselect’ function. There are many uses of selecting correspondences. If both data sets are images, the correspondences can be used to compute the fundamental matrix, or to perform registration. If both data sets are point clouds, the correspondences can be used to compute a landmark transformation. If one data set is an image and the other is a point cloud, the camera matrix relating the two can be computed.
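The landmark transformation mentioned above for the point-cloud/point-cloud case can be sketched in closed form; for brevity this version is reduced to 2D (a rigid Procrustes fit of rotation plus translation). Full 3D solutions use an SVD-based Kabsch/Horn method, and the function name here is an invention for the sketch.

```python
import math

def rigid_transform_2d(src, dst):
    """src, dst: equal-length lists of corresponding (x, y) points.
    Returns (angle, tx, ty) such that dst ~= R(angle) @ src + t."""
    n = len(src)
    scx = sum(p[0] for p in src) / n; scy = sum(p[1] for p in src) / n
    dcx = sum(p[0] for p in dst) / n; dcy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - scx, sy - scy          # centred source point
        bx, by = dx - dcx, dy - dcy          # centred destination point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    angle = math.atan2(s_cross, s_dot)       # least-squares rotation
    c, s = math.cos(angle), math.sin(angle)
    tx = dcx - (c * scx - s * scy)           # translation from centroids
    ty = dcy - (s * scx + c * scy)
    return angle, tx, ty
```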


2020 ◽  
Vol 25 ◽  
pp. 545-560
Author(s):  
Gustaf Uggla ◽  
Milan Horemuz

Capturing geographic information from a mobile platform, a method known as mobile mapping, is today one of the best methods for rapid and safe data acquisition along roads and railroads. The digitalization of society and the use of information technology in the construction industry are increasing the need for structured geometric and semantic information about the built environment. This puts an emphasis on automatic object identification in data such as point clouds. Most point clouds are accompanied by RGB images, and a recent literature review showed that these are possibly underutilized for object identification. This article presents a method (image-based point cloud segmentation, IBPCS) in which semantic segmentation of images is used to filter point clouds, which drastically reduces the number of points that have to be considered in object identification and allows simpler algorithms to be used. An example implementation in which IBPCS is used to identify roadside game fences along a country road is provided, and the accuracy and efficiency of the method are compared to the performance of PointNet, a neural network designed for end-to-end point cloud classification and segmentation. The results show that our implementation of IBPCS outperforms PointNet for the given task. The strengths of IBPCS are the ability to filter point clouds based on visual appearance and that it can efficiently process large data sets. This makes the method a suitable candidate for object identification along rural roads and railroads, where the objects of interest are scattered over long distances.
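The filtering idea behind IBPCS can be sketched as follows: each 3D point is projected into the image, and only points whose pixel falls on the target class of the semantic-segmentation mask are kept. The pinhole projection with a single focal length is a simplifying assumption; the real method uses the full camera model of the mobile mapping system.

```python
def filter_points_by_mask(points, mask, focal, cx, cy, target_class):
    """points: (x, y, z) in the camera frame, z forward; mask: 2D grid of
    class ids; (focal, cx, cy): pinhole intrinsics. Keeps points whose
    projected pixel carries target_class."""
    kept = []
    h, w = len(mask), len(mask[0])
    for x, y, z in points:
        if z <= 0:
            continue                          # behind the camera
        u = int(round(focal * x / z + cx))    # column
        v = int(round(focal * y / z + cy))    # row
        if 0 <= u < w and 0 <= v < h and mask[v][u] == target_class:
            kept.append((x, y, z))
    return kept
```

The surviving subset is small enough that simple geometric rules can then identify the objects, which is the efficiency argument made above.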

