GAMING ENGINES AND GEOSPATIAL IMAGING: VISUALIZING HIGH-RESOLUTION POINT CLOUD DATA FROM BIG BAT CAVE IN UNITY

2018 ◽  
Author(s):  
Jeffrey A. Baggett ◽  
Margaret E. McMillan


2017 ◽
Vol 142 ◽  
pp. 1805-1810 ◽  
Author(s):  
Tom Lloyd Garwood ◽  
Ben Richard Hughes ◽  
Dominic O’Connor ◽  
John K Calautit ◽  
Michael R Oates ◽  
...  

Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM)-based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 imagery was estimated at 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent the true ground elevation; hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud was normalized against the DTM to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud: a normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM. 
The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground; after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first- and last-return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process; the LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for the subsequent extraction of 3D (tree and building) features. The PAN-sharpened WV-2 image (0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis yielded an accuracy of 98% for tree feature extraction and 96% for building feature extraction from the LiDAR data. The CHM method extracted a total of 15,143 tree features, of which 14,841 were visually interpreted on the PAN-sharpened WV-2 image; the extracted tree features included both shadowed (13,830) and non-shadowed (1,011) trees. The CHM method overestimated 302 tree features that were not observed on the WV-2 image; one potential source of this overestimation was tree features adjacent to buildings. For building feature extraction, the algorithm extracted 6,117 building features that were interpreted on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was the limiting factor in the 3D feature extraction process, owing to incorrect filtering of the point cloud in these areas. 
Another potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confused with and extracted as buildings. This can be attributed to low point density at building edges and on flat roofs, and to occlusions, because of which LiDAR cannot match the planimetric accuracy of photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls set back from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or features lying inside water bodies, and multiple water height levels were not considered. Based on these findings, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising approach for understanding and characterizing urban environments.
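The normalization step described in this workflow (CHM = DSM − DEM, followed by a height threshold to isolate above-ground features) can be sketched with plain NumPy. The grids and the 2 m cutoff below are invented illustrative values, not the study's data:

```python
import numpy as np

# Illustrative 4x4 elevation grids (metres); real DSM/DEM rasters would be
# derived from the LiDAR point cloud (e.g. as GeoTIFFs).
dsm = np.array([
    [12.0, 12.5, 30.0, 30.2],
    [12.1, 18.0, 30.1, 30.3],
    [12.0, 17.5, 12.2, 12.1],
    [12.0, 12.1, 12.0, 12.2],
])                            # digital surface model: tops of trees/buildings
dem = np.full((4, 4), 12.0)   # bare-earth digital elevation model

# nDSM / CHM: height of above-ground features relative to the ground
chm = dsm - dem

# A simple height threshold separates above-ground features from ground
# returns; the 2 m cutoff here is illustrative and scene dependent.
above_ground = chm > 2.0
```

Cells where the CHM exceeds the cutoff are candidates for tree or building extraction; in practice the threshold must be tuned per scene, which is one reason scene-dependent criteria were needed above.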


2020 ◽  
Author(s):  
Meiert W. Grootes ◽  
Christiaan Meijer ◽  
Zsofia Koma ◽  
Bouwe Andela ◽  
Elena Ranguelova ◽  
...  

<p>LiDAR as a remote sensing technology, enabling the rapid 3D characterization of an area from an air- or spaceborne platform, has become a mainstream tool in the (bio)geosciences and related disciplines. For instance, LiDAR-derived metrics are used for characterizing vegetation type, structure, and prevalence and are widely employed across ecosystem research, forestry, and ecology/biology. Furthermore, these types of metrics are key candidates in the quest for Essential Biodiversity Variables (EBVs) suited to quantifying habitat structure, reflecting the importance of this property in assessing and monitoring the biodiversity of flora and fauna, and consequently in informing policy to safeguard it in the light of climate change and human impact.</p><p>In all these use cases, the power of LiDAR point cloud datasets resides in the information encoded within the spatial distribution of LiDAR returns, which can be extracted by calculating domain-specific statistical/ensemble properties of well-defined subsets of points.  </p><p>Facilitated by technological advances, the volume of point cloud data sets provided by LiDAR has steadily increased, with modern airborne laser scanning surveys now providing high-resolution, (super-)national scale datasets, tens to hundreds of terabytes in size and encompassing hundreds of billions of individual points, many of which are available as open data.</p><p>Representing a trove of data and, for the first time, enabling the study of ecosystem structure at meter resolution over the extent of tens to hundreds of kilometers, these datasets represent highly valuable new resources. 
However, their scientific exploitation is hindered by the scarcity of Free Open Source Software (FOSS) tools capable of handling the challenges of accessing, processing, and extracting meaningful information from massive multi-terabyte datasets, as well as by the domain-specificity of any existing tools.</p><p>Here we present Laserchicken, a FOSS, user-extendable, cross-platform Python tool for extracting user-defined statistical properties of flexibly defined subsets of point cloud data, aimed at enabling efficient, scalable, and distributed processing of multi-terabyte datasets. Laserchicken can be seamlessly employed on computing architectures ranging from desktop systems to distributed clusters, and supports standard point cloud and geo-data formats (LAS/LAZ, PLY, GeoTIFF, etc.), making it compatible with a wide range of (FOSS) tools for geoscience.</p><p>The Laserchicken feature extraction tool is complemented by a FOSS Python processing pipeline tailored to the scientific exploitation of massive nation-scale point cloud datasets, together forming the Laserchicken framework.</p><p>The ability of the Laserchicken framework to unlock nation-scale LiDAR point cloud datasets is demonstrated on the basis of its use in the eEcoLiDAR project, a collaborative project between the University of Amsterdam and the Netherlands eScience Center. Within the eEcoLiDAR project, Laserchicken has been instrumental in defining classification methods for wetland habitats, as well as in facilitating the use of high-resolution vegetation structure metrics in modelling species distributions at national scales, with preliminary results highlighting the importance of including this information.</p><p>The Laserchicken Framework rests on FOSS, including the GDAL and PDAL libraries as well as numerous packages hosted on the open source Python Package Index (PyPI), and is itself also available as FOSS (https://pypi.org/project/laserchicken/ and https://github.com/eEcoLiDAR/ ).</p>
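The core operation such tools perform, computing a statistical property over a well-defined subset of points, can be illustrated with plain NumPy. This is a simplified sketch of the concept, not the Laserchicken API; the point cloud, target location, and radius are invented for illustration:

```python
import numpy as np

# Synthetic point cloud: N x 3 array of (x, y, z) coordinates. A real dataset
# would be loaded from LAS/LAZ; the values here are invented for illustration.
rng = np.random.default_rng(0)
points = np.column_stack([
    rng.uniform(0, 100, 1000),  # x (m)
    rng.uniform(0, 100, 1000),  # y (m)
    rng.uniform(0, 5, 1000),    # z (m), e.g. vegetation height
])

def neighborhood_stats(points, target, radius):
    """Mean and std of z over all points within `radius` of `target` in x-y."""
    d2 = np.sum((points[:, :2] - target) ** 2, axis=1)
    zs = points[d2 <= radius ** 2, 2]
    return zs.mean(), zs.std()

# Height statistics of the points within 10 m of an arbitrary target location
mean_z, std_z = neighborhood_stats(points, np.array([50.0, 50.0]), 10.0)
```

Vegetation-structure metrics of this kind, evaluated over millions of neighborhoods, are what makes distributed processing of nation-scale datasets necessary.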


Author(s):  
P. M. Mat Zam ◽  
N. A. Fuad ◽  
A. R. Yusoff ◽  
Z. Majid

<p><strong>Abstract.</strong> Terrestrial Laser Scanning (TLS) technology is gaining popularity for monitoring and predicting landslide movement owing to its capability for high-speed data capture without direct contact with the monitored surface. It offers very high-density point cloud data at high resolution and can be an effective tool for detecting surface movement in a landslide area. The aim of this research is to determine the optimal scanning resolution for landslide monitoring using TLS. The Topcon Geodetic Laser Scanner (GLS) 2000 was used in this research to obtain three-dimensional (3D) point cloud data of the landslide area. Four scanning resolutions were used: very high, high, medium, and low. After data collection, the point cloud datasets underwent registration and filtering in ScanMaster software, and the registered datasets were then analyzed in CloudCompare software. Comparing TLS points picked manually against those computed automatically by ScanMaster, the maximum root mean square (RMS) values of the coordinate differences were 0.013 m at very high resolution, 0.017 m at high resolution, 0.031 m at medium resolution, and 0.052 m at low resolution. Comparing manually picked TLS points against total station data obtained by the intersection method, the maximum RMS values were 0.013 m, 0.018 m, 0.033 m, and 0.054 m, respectively. 
Hence, it can be concluded that high or very high resolution is needed for landslide monitoring using the Topcon GLS-2000, as these settings provide more accurate slope results, while the low and medium resolutions are not suitable because the accuracy of the TLS point cloud data decreases as the scanning resolution is coarsened.</p>
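The RMS comparison underlying these figures is straightforward to reproduce for any set of coordinate differences; the values below are made-up placeholders, not the study's measurements:

```python
import numpy as np

# Placeholder coordinate differences (metres) between manually picked TLS
# points and reference coordinates; real values would come from the
# ScanMaster/CloudCompare comparison described above.
dx = np.array([0.010, -0.008, 0.012, -0.005])
dy = np.array([0.004, 0.009, -0.011, 0.006])
dz = np.array([-0.007, 0.010, 0.003, -0.009])

def rms(diffs):
    """Root mean square of a set of coordinate differences."""
    return np.sqrt(np.mean(np.square(diffs)))

rms_x, rms_y, rms_z = rms(dx), rms(dy), rms(dz)
max_rms = max(rms_x, rms_y, rms_z)  # the per-resolution figure reported above
```

Reporting the maximum RMS across the coordinate axes, as the study does, gives a conservative single-number accuracy per scanning resolution.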


Author(s):  
Naga Madhavi lavanya Gandi

Land cover classification information plays an important role in various applications. Airborne Light Detection and Ranging (LiDAR) data are widely used in remote sensing for land cover classification. The present study presents a spatial classification method using Terrasolid macros. The data used in this study are LiDAR point cloud data at wavelengths of 532 nm (green), 1064 nm (near-infrared), and 1550 nm (mid-infrared), together with high-resolution RGB data. The classification was carried out in the TerraScan module with twelve land cover classes, and the classification accuracies were assessed using the high-resolution RGB data. From the results it is concluded that the LiDAR data classification achieved an overall accuracy of 85.2% and a kappa coefficient of 0.7562.
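Both accuracy measures reported here, overall accuracy and the kappa coefficient, are derived from a confusion matrix. A minimal sketch using an invented 3-class matrix (the study itself used twelve classes):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference, columns: predicted)."""
    n = confusion.sum()
    po = np.trace(confusion) / n  # observed agreement (= overall accuracy)
    # chance agreement from row and column marginals
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1 - pe)

# Invented 3-class confusion matrix; the study's matrix had twelve classes.
cm = np.array([
    [50,  2,  3],
    [ 4, 40,  6],
    [ 2,  3, 45],
])
overall_accuracy = np.trace(cm) / cm.sum()
k = kappa(cm)
```

Kappa is lower than overall accuracy because it discounts the agreement expected by chance, which is why the study reports both figures.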


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show higher registration accuracy and fusion speed for the proposed method, demonstrating its accuracy and effectiveness.
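The first step of the proposed method, producing an intensity image from a point cloud, amounts to rasterizing a per-cell intensity statistic. A minimal sketch with synthetic points (not the authors' implementation; grid size and cell size are invented):

```python
import numpy as np

# Synthetic LiDAR returns: columns x (m), y (m), intensity. Real data would
# come from the ULS; the values here are invented for illustration.
rng = np.random.default_rng(1)
pts = np.column_stack([
    rng.uniform(0, 10, 500),
    rng.uniform(0, 10, 500),
    rng.uniform(0, 255, 500),
])

def intensity_image(points, cell=1.0, shape=(10, 10)):
    """Rasterize mean intensity per grid cell (a minimal intensity image)."""
    rows = np.clip((points[:, 1] // cell).astype(int), 0, shape[0] - 1)
    cols = np.clip((points[:, 0] // cell).astype(int), 0, shape[1] - 1)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (rows, cols), points[:, 2])   # scatter-add intensities
    np.add.at(counts, (rows, cols), 1)            # returns per cell
    with np.errstate(invalid="ignore"):
        return sums / counts                      # NaN where a cell is empty

img = intensity_image(pts)
```

Once the point cloud is in image form, standard 2D feature matching can relate it to the optical image, which is the core idea of the registration method.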

