Hyperspectral and LiDAR data fusion in features based classification

2021 ◽ Vol 14 (24) ◽ Author(s): Farsat Heeto Abdulrahman

2011 ◽ Vol 19 (21) ◽ pp. 20916 ◽ Author(s): A. V. Kanaev, B. J. Daniel, J. G. Neumann, A. M. Kim, K. R. Lee

Sensors ◽ 2018 ◽ Vol 18 (11) ◽ pp. 3960 ◽ Author(s): Jeremy Castagno, Ella Atkins

Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this information, along with building roof structure, to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset covering small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Satellite image and LiDAR data fusion is shown to provide greater classification accuracy than using either data type alone. Model confidence thresholds are adjusted, leading to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and Ann Arbor, Michigan.
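The confidence-thresholding step described above can be illustrated with a minimal sketch: keep only predictions whose confidence exceeds a threshold, trading coverage for precision. The function name, toy roof-shape labels, and probability values below are all hypothetical, not taken from the paper.

```python
import numpy as np

def precision_at_threshold(probs, preds, labels, tau):
    """Retain only predictions whose confidence exceeds tau, then
    compute precision over the retained subset (correct / retained)."""
    keep = probs >= tau
    if keep.sum() == 0:
        return float("nan"), 0
    correct = (preds[keep] == labels[keep]).sum()
    return correct / keep.sum(), int(keep.sum())

# toy example: six roof-shape predictions with confidences and true labels
probs  = np.array([0.95, 0.55, 0.90, 0.60, 0.85, 0.52])
preds  = np.array(["flat", "gable", "hip", "flat", "gable", "hip"])
labels = np.array(["flat", "hip",  "hip", "gable", "gable", "flat"])

p_all, n_all = precision_at_threshold(probs, preds, labels, 0.0)  # no filtering
p_hi, n_hi = precision_at_threshold(probs, preds, labels, 0.8)    # confident only
```

In this toy setup the unfiltered precision is 0.5 over six predictions, while thresholding at 0.8 retains three predictions, all correct: the filter sacrifices recall for precision, as the abstract describes.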


2020 ◽ Vol 12 (20) ◽ pp. 3274 ◽ Author(s): Keke Geng, Ge Dong, Guodong Yin, Jingyu Hu

Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, which can efficiently address the problem of target detection in complex environments through multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze the adaptations of early-, middle-, and late-stage fusion architectures in depth. By comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with a weight assignment mechanism, which performs best, is selected. This work has great significance for exploring the best feature fusion scheme for a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted mask. A dual-modal traffic object instance segmentation dataset is established using 7481 camera and LIDAR data pairs from the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, there is no existing instance annotation for the KITTI dataset of such quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own developed autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions.
Experimental results on the dual-modal KITTI Benchmark demonstrate that DM-ISDNN using middle-stage data fusion and the weight assignment mechanism has better detection performance than single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Meanwhile, compared to the state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
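The per-level weighting idea behind the middle-stage fusion can be sketched as a weighted sum of camera and LiDAR feature maps at each pyramid level. This is only an illustration of the concept: the function name, the complementary (1 − w) camera weighting, and the specific weight values are assumptions, not the paper's actual weight assignment function.

```python
import numpy as np

def fuse_middle(cam_feats, lidar_feats, lidar_weights):
    """Middle-stage fusion sketch: per-level weighted sum of camera and
    LiDAR feature maps. lidar_weights gives one coefficient per pyramid
    level; camera features get the complementary weight (1 - w)."""
    assert len(cam_feats) == len(lidar_feats) == len(lidar_weights)
    fused = []
    for c, l, w in zip(cam_feats, lidar_feats, lidar_weights):
        fused.append((1.0 - w) * c + w * l)
    return fused

# three pyramid levels of toy 2x2 feature maps
cam = [np.full((2, 2), 1.0) for _ in range(3)]
lidar = [np.full((2, 2), 3.0) for _ in range(3)]
# hypothetical weights: trust the sparse LiDAR branch less at fine levels,
# more at coarse levels where sparsity matters less after pooling
weights = [0.2, 0.5, 0.8]

out = fuse_middle(cam, lidar, weights)
```

Giving each pyramid level its own coefficient lets the network down-weight the LiDAR branch where point-cloud sparsity degrades the feature maps, which is the motivation the abstract gives for the weight assignment function.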


2020 ◽ Vol 12 (21) ◽ pp. 3506 ◽ Author(s): Nuria Sanchez-Lopez, Luigi Boschetti, Andrew T. Hudak, Steven Hancock, Laura I. Duncanson

Stand-level maps of past forest disturbances (expressed as time since disturbance, TSD) are needed to model forest ecosystem processes, but the conventional approaches based on remotely sensed satellite data can only extend as far back as the first available satellite observations. Stand-level analysis of airborne LiDAR data has been demonstrated to accurately estimate long-term TSD (~100 years), but large-scale coverage of airborne LiDAR remains costly. NASA’s spaceborne LiDAR Global Ecosystem Dynamics Investigation (GEDI) instrument, launched in December 2018, is providing billions of measurements of tropical and temperate forest canopies around the globe. GEDI is a spatial sampling instrument and, as such, does not provide wall-to-wall data. GEDI’s lasers illuminate ground footprints, which are separated by ~600 m across-track and ~60 m along-track, so new approaches are needed to generate wall-to-wall maps from the discrete measurements. In this paper, we studied the feasibility of a data fusion approach between GEDI and Landsat for wall-to-wall mapping of TSD. We tested the methodology on a ~52,500-ha area located in central Idaho (USA), where an extensive record of stand-replacing disturbances is available, starting in 1870. GEDI data were simulated over the nominal two-year planned mission lifetime from airborne LiDAR data and used for TSD estimation using a random forest (RF) classifier. Image segmentation was performed on Landsat-8 data, obtaining image-objects representing forest stands needed for the spatial extrapolation of estimated TSD from the discrete GEDI locations. We quantified the influence of (1) the forest stand map delineation, (2) the sample size of the training dataset, and (3) the number of GEDI footprints per stand on the accuracy of estimated TSD. 
The results show that GEDI-Landsat data fusion would allow for TSD estimation in stands covering ~95% of the study area, having the potential to reconstruct the long-term disturbance history of temperate even-aged forests with accuracy (median root mean square deviation = 22.14 years, median bias = 1.70 years, 60.13% of stands classified within 10 years of the reference disturbance date) comparable to the results obtained in the same study area with airborne LiDAR.
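The two-step workflow described above — a random forest classifier over discrete GEDI footprints, then extrapolation to Landsat-derived stands — can be sketched as footprint-level prediction followed by a per-stand majority vote. Everything below is a toy illustration: the feature choices (canopy height and cover), class labels, stand IDs, and aggregation rule are assumptions, not the paper's actual variables.

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def stand_level_tsd(footprint_feats, footprint_stand_ids, clf):
    """Predict a TSD class per GEDI footprint, then assign each stand the
    majority vote of its footprints (the wall-to-wall extrapolation step)."""
    preds = clf.predict(footprint_feats)
    stands = {}
    for sid, p in zip(footprint_stand_ids, preds):
        stands.setdefault(sid, []).append(p)
    return {sid: Counter(ps).most_common(1)[0][0] for sid, ps in stands.items()}

# hypothetical training data: [canopy height (m), canopy cover] -> TSD class
X_train = np.array([[5, 0.2], [6, 0.25], [30, 0.9], [28, 0.85]])
y_train = np.array(["young", "young", "old", "old"])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# simulated footprints grouped into two image-object stands
X_fp = np.array([[5.5, 0.22], [29, 0.88], [6.2, 0.3], [27, 0.8]])
sids = ["stand_A", "stand_B", "stand_A", "stand_B"]
tsd_by_stand = stand_level_tsd(X_fp, sids, clf)
```

The majority vote is one simple aggregation choice; because GEDI footprints fall ~60 m apart along-track, the number of footprints per stand (one of the factors the paper quantifies) directly controls how robust this vote is.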


2019 ◽ Vol 79 (47-48) ◽ pp. 35503-35518 ◽ Author(s): Huafeng Liu, Yazhou Yao, Zeren Sun, Xiangrui Li, Ke Jia, ...

2007 ◽ Vol 28 (19) ◽ pp. 4263-4284 ◽ Author(s): G. W. Geerling, M. Labrador-Garcia, J. G. P. W. Clevers, A. M. J. Ragas, A. J. M. Smits
