3D LAND COVER CLASSIFICATION BASED ON MULTISPECTRAL LIDAR POINT CLOUDS

Author(s):  
Xiaoliang Zou ◽  
Guihua Zhao ◽  
Jonathan Li ◽  
Yuanxi Yang ◽  
Yong Fang

A multispectral Lidar system emits simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are processed with GNSS/IMU data in post-processing to form high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system collects point cloud data in three channels: 532 nm visible (Green), 1064 nm near infrared (NIR), and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. This paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. First, multispectral intensity images are segmented into image objects by multi-resolution segmentation integrating different scale parameters. Second, the intensity objects are classified into nine categories using customized classification-index features and a combination of multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most land cover types; an overall accuracy of over 90% is achieved using multispectral Lidar point clouds for 3D land cover classification.
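
For illustration, a minimal sketch of one possible classification-index feature of the kind the abstract mentions is shown below: a pseudo-NDVI computed from gridded NIR (1064 nm) and green (532 nm) intensity rasters. The rasters, the index form, and all names here are assumptions for illustration, not the paper's actual features.

```python
import numpy as np

def pseudo_ndvi(nir_intensity: np.ndarray, green_intensity: np.ndarray) -> np.ndarray:
    """Normalized difference of NIR (1064 nm) and green (532 nm) Lidar intensities.

    A common spectral index for separating vegetation from built surfaces;
    the band choice and normalization are illustrative, not the paper's exact features.
    """
    nir = nir_intensity.astype(np.float64)
    green = green_intensity.astype(np.float64)
    denom = nir + green
    return np.where(denom > 0, (nir - green) / denom, 0.0)

# Example on tiny synthetic intensity rasters (stand-ins for channel images
# gridded from the Titan point clouds).
nir = np.array([[120.0, 30.0], [80.0, 10.0]])
green = np.array([[40.0, 25.0], [20.0, 12.0]])
print(pseudo_ndvi(nir, green))
```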


Author(s):  
L. Ma ◽  
Z. Chen ◽  
Y. Li ◽  
D. Zhang ◽  
J. Li ◽  
...  

Abstract. This paper presents an automated workflow for pixel-wise land cover (LC) classification from multispectral airborne laser scanning (ALS) data using deep learning methods. It contains three main procedures: data pre-processing, land cover classification, and accuracy assessment. First, a total of nine raster images carrying different information were generated from the pre-processed point clouds and assembled into six input data combinations. Meanwhile, the labelled dataset was created using the orthophotos as the ground truth, and three deep learning networks were established. Then, each input data combination was used to train and validate each network, yielding eighteen LC classification models with different parameters to predict LC types for pixels. Finally, accuracy assessments and comparisons were made for the eighteen classification results to determine an optimal scheme. The proposed method was tested on six input datasets with three deep learning classification networks (i.e., 1D CNN, 2D CNN, and 3D CNN). The highest overall classification accuracy of 97.2% was achieved using the proposed 3D CNN. The overall accuracy (OA) of the 2D and 3D CNNs was, on average, 8.4% higher than that of the 1D CNN. Although the OA of the 2D CNN was at most 0.3% lower than that of the 3D CNN, the runtime of the 3D CNN was five times longer than that of the 2D CNN. Thus, the 2D CNN is the best choice for multispectral ALS LC classification when efficiency is considered. The results demonstrate that the proposed methods can successfully classify land covers from multispectral ALS data.
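
As an illustration of the 2D-CNN variant described above, here is a minimal Keras sketch that assigns a land cover label to a pixel from a small multi-channel patch centred on it. The patch size, channel count, class count, and layer widths are assumptions, not the paper's architecture.

```python
import tensorflow as tf

NUM_CHANNELS = 9   # e.g., one band per rasterized ALS attribute (assumed)
PATCH_SIZE = 9     # pixels per side of the patch around the target pixel (assumed)
NUM_CLASSES = 6    # number of land cover classes (assumed)

# A small 2D CNN mapping a patch to a land cover label for its centre pixel.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(PATCH_SIZE, PATCH_SIZE, NUM_CHANNELS)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```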


Author(s):  
Bambang Trisakti ◽  
Dini Oktaviana Ambarwati

Abstract. The Advanced Land Observation Satellite (ALOS) is a Japanese satellite equipped with three sensors: PRISM, AVNIR, and PALSAR. The Advanced Visible and Near Infrared Radiometer (AVNIR) provides multispectral bands from the visible to the near infrared for observing land and coastal zones. It has a 10-meter spatial resolution, which can be used to map land cover at a scale of 1:25,000. The purpose of this research was to determine a classification method for land cover mapping using ALOS AVNIR data. Training samples were collected for 11 land cover classes around Bromo volcano by visual reference to very high resolution IKONOS panchromatic data. The training samples were divided into samples for classification input and samples for accuracy evaluation. Principal component analysis (PCA) was conducted on the AVNIR data, and the generated PCA bands were classified using the Maximum Likelihood Enhanced Neighbor method. The classification result was filtered and re-classed into 8 classes. Misclassifications were evaluated and corrected in the post-processing stage. The accuracy of the classification results, before and after post-processing, was evaluated using the confusion matrix method. The result showed that the Maximum Likelihood Enhanced Neighbor classifier with post-processing can produce a land cover classification of AVNIR data with good accuracy (total accuracy 94% and kappa statistic 0.92). ALOS AVNIR has thus proven to be a potential satellite data source for mapping land cover in the study area with good accuracy.
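
A minimal sketch of this kind of pipeline is shown below using scikit-learn: PCA on band values, a Gaussian maximum likelihood classifier (QuadraticDiscriminantAnalysis as a stand-in, since it fits one multivariate normal per class), and a confusion-matrix/kappa evaluation. The synthetic data, band count, and class count are assumptions, not the study's actual setup or its Enhanced Neighbor step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for AVNIR pixel samples: 4 bands, 3 classes (assumed sizes).
n_per_class, n_bands, n_classes = 200, 4, 3
X = np.vstack([rng.normal(loc=10 * c, scale=3.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Split into classification-input and accuracy-evaluation samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Transform bands into principal components, then fit a Gaussian
# maximum likelihood classifier (one multivariate normal per class).
pca = PCA(n_components=3).fit(X_train)
clf = QuadraticDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
y_pred = clf.predict(pca.transform(X_test))

# Confusion-matrix evaluation, as in the abstract.
print(confusion_matrix(y_test, y_pred))
print("total accuracy:", accuracy_score(y_test, y_pred))
print("kappa:", cohen_kappa_score(y_test, y_pred))
```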


Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, using simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in the single-medium case. This study is part of a larger project on 3D measurement of temporal change in coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras. A precisely measured calibration frame was placed in the target scene to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with those measured by the system camera in air (taken as the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
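
As a simple illustration of comparing a test point cloud against a reference one, the sketch below computes the RMSE of nearest-neighbour distances with a k-d tree. This is a generic cloud-to-cloud check under assumed synthetic data; the study's own accuracy analysis (calibration frame, per-object comparison against the in-air system camera) is more involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(test_cloud: np.ndarray, reference_cloud: np.ndarray) -> float:
    """RMSE of nearest-neighbour distances from each test point to the reference cloud."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(test_cloud, k=1)
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic example: a reference cloud and a noisy copy standing in for the
# GoPro-derived and system-camera-derived point clouds (illustrative only).
rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 0.2, size=(5000, 3))              # metres
test = reference + rng.normal(0.0, 0.002, size=reference.shape)  # ~2 mm noise
print(f"cloud-to-cloud RMSE: {cloud_to_cloud_rmse(test, reference) * 1000:.1f} mm")
```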

