A decision-level fusion approach to tree species classification from multi-source remotely sensed data

Author(s):  
Baoxin Hu ◽  
Qian Li ◽  
G. Brent Hall
2016 ◽  
Vol 186 ◽  
pp. 64-87 ◽  
Author(s):  
Fabian Ewald Fassnacht ◽  
Hooman Latifi ◽  
Krzysztof Stereńczak ◽  
Aneta Modzelewska ◽  
Michael Lefsky ◽  
...  

Author(s):  
B. Hu ◽  
Q. Li

Abstract. The objective of this study was to explore the use of multi-source remotely sensed data for individual tree species classification. To achieve this, a neutrosophic logic-based method was developed for tree species classification using combined spectral, textural, and structural information derived from the WorldView-2 (WV-2) multispectral bands, the WV-2 panchromatic band, and a LiDAR (Light Detection And Ranging)-derived canopy height model (CHM), respectively. The developed method was tested on data acquired over the Keele campus of York University, Toronto, Canada, and compared with the KNN (K Nearest Neighbour) classification method. Twenty-one spectral, three textural, and three structural features were used to classify five species (Norway maple, honey locust, Austrian pine, blue spruce, and white spruce). For this study, 522 trees were used for training and 223 for testing. The overall classification accuracy obtained by the proposed method was 0.82, a significant improvement over the KNN (0.73), weighted KNN (0.76), and fuzzy KNN (0.75) methods. In addition, Dempster-Shafer (DS) theory was explored to perform information fusion at the decision level, in comparison with fusion at the feature level. The accuracies obtained by fusion at the decision level were generally lower than those at the feature level. Even though promising results based on neutrosophic logic were obtained during this proof-of-concept stage, studies are underway to perform more tests with a larger number of tree crowns and more species, and to exploit other classification methods, such as support vector machines.
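A minimal Python sketch of the KNN and weighted-KNN baselines this abstract reports against, assuming per-crown feature vectors (21 spectral + 3 textural + 3 structural) have already been extracted; the arrays, labels, and choice of k below are placeholders, not the study's actual data or settings.

```python
# Baseline comparison sketch: plain KNN vs. distance-weighted KNN on
# crown-level features (placeholder data, 522 training + 223 test crowns).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(745, 27))    # placeholder 27-dimensional feature vectors
y = rng.integers(0, 5, size=745)  # placeholder codes for the 5 species

X_train, y_train = X[:522], y[:522]
X_test, y_test = X[522:], y[522:]

for weights in ("uniform", "distance"):
    knn = KNeighborsClassifier(n_neighbors=5, weights=weights)
    knn.fit(X_train, y_train)
    acc = accuracy_score(y_test, knn.predict(X_test))
    print(f"{weights} KNN overall accuracy: {acc:.2f}")
```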


Author(s):  
Priti Shivaji Sanjekar ◽  
Jayantrao B. Patil

Multimodal biometrics extends unimodal biometrics by integrating the information obtained from multiple biometric sources at various fusion levels, i.e., the sensor level, feature extraction level, match score level, or decision level. In this article, fingerprint, palmprint, and iris are used for verification of an individual. The wavelet transform is used to extract features from the fingerprint, palmprint, and iris, and PCA is then used for dimensionality reduction. The fusion of traits is employed at three levels: the feature level; the feature level combined with the match score level; and the feature level combined with the decision level. The main objective of this research is to observe the effect of combined fusion levels on verification of an individual. The performance of the three fusion cases is measured in terms of EER and represented with ROC curves. Experiments performed on 100 different subjects from publicly available databases demonstrate that combining the feature level with the match score level and combining the feature level with the decision level both outperform fusion at the feature level alone.
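An illustrative sketch, not the authors' code, of the wavelet-plus-PCA feature pipeline and feature-level fusion described above; image arrays, wavelet choice, and the number of PCA components are assumptions for demonstration only.

```python
# Wavelet feature extraction, PCA reduction, and feature-level fusion
# across three modalities (placeholder images, Haar wavelet assumed).
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(img):
    """Single-level 2-D DWT; keep the approximation coefficients as features."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return cA.ravel()

rng = np.random.default_rng(1)
fingerprints = rng.random((100, 64, 64))  # placeholder grayscale samples
palmprints = rng.random((100, 64, 64))
irises = rng.random((100, 64, 64))

def reduced(images, n_components=20):
    feats = np.array([wavelet_features(im) for im in images])
    return PCA(n_components=n_components).fit_transform(feats)

# Feature-level fusion: concatenate the PCA-reduced vectors of all traits
fused = np.hstack([reduced(fingerprints), reduced(palmprints), reduced(irises)])
print(fused.shape)  # (100, 60) fused feature vectors, one per sample
```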


Author(s):  
M. Lazari Zare ◽  
F. Tabib Mahmoudi

Abstract. Road recognition and extraction from remotely sensed data is efficient and applicable in many urban management studies. In this research, the capabilities of SPOT and SAR images are investigated for road recognition. Spectral and textural similarities between roads and other urban objects, such as building roofs, may cause difficulties in road recognition based on the SPOT image alone. On the other hand, SAR images are well suited to recognizing small roads but may have difficulty detecting roads among vegetation. The method proposed in this paper is a decision-level fusion of the SPOT and SAR classification results in order to refine the extracted road regions. The method has three main steps: 1) texture feature extraction from each of the SPOT and SAR images; 2) classification of each of the SPOT and SAR images with an SVM classifier; and 3) decision-level fusion of the classification results in order to reduce road recognition difficulties and obtain optimal road regions. Applying the proposed decision-level fusion algorithm for road recognition improves classification quality by about 21%.
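A hedged sketch of the three-step idea: per-sensor SVM classification followed by a simple decision-level rule over the two road maps. The texture features, training split, and fusion rule below are assumptions, not the paper's actual settings.

```python
# Per-sensor SVMs plus one possible decision-level fusion rule for road pixels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 2000
X_spot = rng.normal(size=(n, 8))     # placeholder SPOT spectral + texture features
X_sar = rng.normal(size=(n, 4))      # placeholder SAR texture features
y = rng.integers(0, 2, size=n)       # 1 = road, 0 = non-road (placeholder labels)

spot_svm = SVC(kernel="rbf").fit(X_spot[:1500], y[:1500])
sar_svm = SVC(kernel="rbf").fit(X_sar[:1500], y[:1500])

road_spot = spot_svm.predict(X_spot[1500:])
road_sar = sar_svm.predict(X_sar[1500:])

# Illustrative fusion rule only: keep a road pixel when either sensor flags it,
# so SAR can recover narrow roads missed by SPOT and SPOT can recover roads
# that SAR misses among vegetation.
fused_road = np.logical_or(road_spot, road_sar).astype(int)
print(fused_road.mean())
```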


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1284 ◽  
Author(s):  
Sean Hartling ◽  
Vasit Sagan ◽  
Paheding Sidike ◽  
Maitiniyazi Maimaitijiang ◽  
Joshua Carron

Urban areas feature complex and heterogeneous land covers which create challenging issues for tree species classification. The increased availability of high spatial resolution multispectral satellite imagery and LiDAR datasets, combined with the recent evolution of deep learning within remote sensing for object detection and scene classification, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, there are knowledge gaps related to the contribution of WorldView-3 SWIR bands, the very high resolution PAN band, and LiDAR data in detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, the Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR and LiDAR datasets. DenseNet results were compared against two popular machine learning classifiers in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) utilizing a data fusion approach beginning with VNIR and adding SWIR, LiDAR, and panchromatic (PAN) bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1% and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM for the classification of eight dominant tree species, with an overall accuracy of 82.6% compared to 51.8% and 52% for the SVM and RF classifiers, respectively; and (3) DenseNet maintained superior performance over the RF and SVM classifiers under restricted training sample quantities, which is a major limiting factor for deep learning techniques. Overall, the study reveals that DenseNet is more effective for urban tree species classification, as it outperforms the popular RF and SVM techniques when working with highly complex image scenes regardless of training sample size.
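An illustrative sketch, under stated assumptions rather than the study's code, of how a torchvision DenseNet-121 can be adapted to a multi-band fused stack (VNIR + SWIR + PAN + LiDAR-derived layers) and eight species classes; the band count and patch size are placeholders.

```python
# DenseNet-121 adapted to an n-band fused image stack for 8 species classes.
import torch
import torch.nn as nn
from torchvision.models import densenet121

n_bands, n_species = 14, 8  # placeholder band count, 8 dominant species
model = densenet121(num_classes=n_species)

# Replace the 3-channel RGB input stem so the network accepts all fused bands
model.features.conv0 = nn.Conv2d(n_bands, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)

# Forward a dummy batch of crown-centred patches to check the wiring
patches = torch.randn(4, n_bands, 64, 64)
logits = model(patches)
print(logits.shape)  # torch.Size([4, 8])
```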


2019 ◽  
Vol 11 (21) ◽  
pp. 2516 ◽  
Author(s):  
Xiaolong Ma ◽  
Chengming Li ◽  
Xiaohua Tong ◽  
Sicong Liu

Recent advances in the fusion of remotely sensed data have led to an increased availability of urban information extracted from multiple spatial resolutions and multi-temporal acquisitions. Despite the existing extraction methods, fully exploiting the characteristics of multisource remote sensing data, each of which has its own advantages, remains a challenging task. In this paper, a new fusion approach for accurately extracting urban built-up areas from multisource remotely sensed data, i.e., the DMSP-OLS nighttime light data, the MODIS land cover product (MCD12Q1) and Landsat 7 ETM+ images, is proposed. The proposed method consists of two main components: (1) multi-level data fusion, including initial sample selection, unified pixel resolution and feature-weighted calculation at the feature level, as well as pixel attribution determination at the decision level; and (2) optimized sample selection with multi-factor constraints, in which an iterative optimization with the normalized difference vegetation index (NDVI), the modified normalized difference water index (MNDWI), and the bare soil index (BSI), along with sample training of a support vector machine (SVM) and the extraction of urban built-up areas, produces results with high credibility. Nine Chinese provincial capitals along the Silk Road Economic Belt, such as Chengdu, Chongqing, Kunming, Xining, and Nanning, were selected to test the proposed method with data from 2001 to 2010. Compared with the results obtained by the traditional threshold dichotomy and the improved neighborhood focal statistics (NFS) method, the following could be concluded. (1) The proposed approach achieved high accuracy and eliminated natural elements to a great extent, while obtaining extraction results very consistent with those of the more precise improved NFS approach at a fine scale; the average overall accuracy (OA) and average Kappa values of the extracted urban built-up areas were 95% and 0.83, respectively. (2) The proposed method not only identified the characteristics of the urban built-up area from the nighttime light data and other daylight images at the feature level, but also optimized the samples of the urban built-up area category at the decision level, making it possible to provide valuable information for urban planning, construction, and management with high accuracy.
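A hedged sketch of the multi-factor sample-screening step: NDVI, MNDWI, and BSI are computed from Landsat 7 ETM+ bands and used to reject vegetation, water, and bare-soil pixels from candidate built-up samples before SVM training. The band arrays and thresholds below are placeholders, not the paper's values, and the subsequent SVM training step is omitted.

```python
# Spectral-index screening of candidate built-up samples (placeholder data).
import numpy as np

rng = np.random.default_rng(3)
blue, green, red, nir, swir = (rng.random((100, 100)) for _ in range(5))

ndvi = (nir - red) / (nir + red + 1e-9)                        # vegetation
mndwi = (green - swir) / (green + swir + 1e-9)                 # water
bsi = ((swir + red) - (nir + blue)) / ((swir + red) + (nir + blue) + 1e-9)  # bare soil

# Keep pixels that are neither vegetated, water, nor bare soil as built-up
# sample candidates; thresholds here are illustrative assumptions.
candidate = (ndvi < 0.3) & (mndwi < 0.0) & (bsi < 0.1)
print(f"retained {candidate.mean():.1%} of pixels as built-up sample candidates")
```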

