A new deep learning approach for classification of hyperspectral images: feature and decision level fusion of spectral and spatial features in multiscale CNN

2021, pp. 1-26
Author(s): Obeid Sharifi, Mehdi Mokhtarzadeh, Behnam Asghari Beirami

Author(s): S. Daneshtalab, H. Rastiveis

Automatic extraction of urban objects from airborne remote sensing data is essential for processing and efficiently interpreting the vast amount of airborne imagery and LiDAR data available today. The aim of this study is to propose a new approach that integrates high-resolution aerial imagery and LiDAR data to improve classification accuracy in complex urban areas. In the proposed method, each dataset is first classified separately using the Support Vector Machine (SVM) algorithm: the extracted normalized Digital Surface Model (nDSM) and pulse intensity are used to classify the LiDAR data, while the three visible spectral bands (red, green, blue) form the feature vector for the orthoimage classification. A third classification is then performed on the combined feature set of the image and LiDAR data. The outputs of these classifications are integrated in a decision-level fusion system, weighted according to their confusion matrices, to obtain the final classification result. The proposed method was evaluated on an urban area of Zeebruges, Belgium. The results demonstrated several advantages of fusion over classification of a single dataset: with the proposed decision-level fusion, most object-extraction difficulties and uncertainties were reduced, and the overall accuracy and kappa coefficient improved by 7% and 10%, respectively.
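The weighted decision-level fusion described above can be sketched compactly. The following is a minimal illustration, assuming scikit-learn SVMs and using per-class recall derived from each classifier's confusion matrix as the fusion weight; the synthetic data, the exact weighting scheme, and all variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_classes = 4
X_lidar = rng.normal(size=(400, 2))   # stand-ins for nDSM and pulse intensity
X_image = rng.normal(size=(400, 3))   # stand-ins for the red, green, blue bands
X_stack = np.hstack([X_lidar, X_image])
y = rng.integers(0, n_classes, size=400)

idx_train, idx_test = train_test_split(np.arange(400), test_size=0.5, random_state=0)

def reliability(clf, X):
    # Per-class recall from the confusion matrix, used as the fusion weight.
    # In practice these weights would come from a held-out validation set.
    cm = confusion_matrix(y[idx_test], clf.predict(X[idx_test]),
                          labels=list(range(n_classes)))
    return np.diag(cm) / np.maximum(cm.sum(axis=1), 1)

# One SVM per feature set: LiDAR only, image only, and the stacked features.
classifiers = [(SVC(probability=True, random_state=0).fit(X[idx_train], y[idx_train]), X)
               for X in (X_lidar, X_image, X_stack)]

# Decision-level fusion: weight each classifier's class probabilities by its
# confusion-matrix reliability, sum, and take the most likely class.
scores = sum(reliability(clf, X) * clf.predict_proba(X[idx_test])
             for clf, X in classifiers)
y_fused = scores.argmax(axis=1)
```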


2019, Vol. 11 (15), pp. 1794
Author(s): Wenju Wang, Shuguang Dou, Sen Wang

The connection structure in the convolutional layers of most deep-learning-based algorithms for the classification of hyperspectral images (HSIs) has typically been in the forward direction. In this study, an end-to-end alternately updated spectral–spatial convolutional network (AUSSC) with a recurrent feedback structure is used to learn refined spectral and spatial features for HSI classification. The proposed AUSSC consists of alternately updated blocks in which each layer serves as both an input and an output for the other layers, so spectral and spatial features can be refined repeatedly under a fixed parameter budget. A center loss is introduced as an auxiliary objective function to improve the discriminative power of the features learned by the model. Additionally, the AUSSC uses smaller convolutional kernels than other convolutional neural network (CNN)-based methods, which reduces the number of parameters and alleviates overfitting. The proposed method was evaluated on four HSI data sets: Indian Pines, Kennedy Space Center, Salinas Scene, and Houston. Experimental results demonstrated that the AUSSC outperformed state-of-the-art deep-learning-based methods in classification accuracy with a small number of training samples.
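For reference, the center loss used as an auxiliary objective (originally proposed by Wen et al. for face recognition) can be written as L_c = ½ Σ_i ||f_i − c_{y_i}||², where c_{y_i} is the learnable center of sample i's class. Below is a minimal PyTorch sketch of this objective combined with cross-entropy; the feature dimension, the stand-in backbone, and the weight lambda are illustrative assumptions, not the AUSSC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    # L_c = 1/2 * mean_i ||f_i - c_{y_i}||^2, with one learnable center per class.
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, features, labels):
        return 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()

n_classes, feat_dim, lam = 16, 128, 0.01                       # lam is assumed
backbone = nn.Sequential(nn.Linear(200, feat_dim), nn.ReLU())  # stand-in for the CNN
head = nn.Linear(feat_dim, n_classes)
center_loss = CenterLoss(n_classes, feat_dim)

params = (list(backbone.parameters()) + list(head.parameters())
          + list(center_loss.parameters()))                    # centers are learned too
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, 200)                  # a batch of spectral vectors
y = torch.randint(0, n_classes, (32,))
feats = backbone(x)
loss = F.cross_entropy(head(feats), y) + lam * center_loss(feats, y)
opt.zero_grad(); loss.backward(); opt.step()
```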


2021, Vol. 13 (22), pp. 4668
Author(s): Stella Ofori-Ampofo, Charlotte Pelletier, Stefan Lang

Crop maps are key inputs for crop inventory production and yield estimation, and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that are laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), i.e., temporal sequences of images over the same area. Traditionally, crop type mapping has relied on single-sensor inputs and classical learning algorithms such as random forests or support vector machines. Deep learning techniques have since brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to use them jointly; however, optimizing their synergy has been understudied with deep learning techniques. In this work, we analyze and compare three fusion strategies (input, layer, and decision level) to identify which best optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder–temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments are carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-score, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved minority classes by up to 13%. Against the single-sensor baselines, the multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed Sentinel-2 and Sentinel-1 alone by 3% and 9% in F-score, respectively. Additional experiments showed the importance of fusion for early time-series classification and under high cloud cover conditions.
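Of the three strategies, input-level fusion is the simplest to illustrate: the two sensors' time series are brought onto a common temporal grid and concatenated along the channel axis before entering a single network. Below is a minimal NumPy sketch under that assumption; the band counts, date grids, and linear interpolation are illustrative choices, not the exact PSE-TAE pipeline.

```python
import numpy as np

def resample_to_dates(series, src_dates, dst_dates):
    # Linearly interpolate a (T, C) time series onto a target set of dates.
    out = np.empty((len(dst_dates), series.shape[1]))
    for c in range(series.shape[1]):
        out[:, c] = np.interp(dst_dates, src_dates, series[:, c])
    return out

# One sample's time series (band counts are assumptions): 10 Sentinel-2 bands
# over 30 acquisition dates, 2 Sentinel-1 polarizations (VV, VH) over 45 dates.
s2, s2_dates = np.random.rand(30, 10), np.linspace(0, 360, 30)
s1, s1_dates = np.random.rand(45, 2), np.linspace(0, 360, 45)

# Input-level fusion: resample both sensors onto a common temporal grid and
# concatenate along the channel axis; the fused series feeds one classifier.
common_dates = np.arange(0, 361, 10)
fused = np.concatenate([resample_to_dates(s2, s2_dates, common_dates),
                        resample_to_dates(s1, s1_dates, common_dates)], axis=1)
print(fused.shape)   # (37, 12): 12 channels (10 optical + 2 radar) per date
```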


IEEE Access, 2021, pp. 1-1
Author(s): Mubariz Manzoor, Muhammad Umer, Saima Sadiq, Abid Ishaq, Saleem Ullah, ...

2019, Vol. 48 (6), pp. 626001
Author(s): Tang Cong (唐聪), Ling Yongshun (凌永顺), Yang Hua (杨华), Yang Xing (杨星), Lu Yuan (路远)
