Sea Ice Image Classification Based on Heterogeneous Data Fusion and Deep Learning

2021
Vol 13 (4)
pp. 592
Author(s):
Yanling Han
Yekun Liu
Zhonghua Hong
Yun Zhang
Shuhu Yang
...  

Sea ice is a typical cause of marine disasters, and sea ice image classification is an important component of sea ice detection. Optical data contain rich spectral information, but they make it difficult to distinguish different ground objects that share similar spectra, or the same object whose spectrum varies. Synthetic aperture radar (SAR) data contain rich texture information, but they come from a single source, and this limitation of single-source data prevents further improvement of the accuracy of remote sensing sea ice classification. In this paper, we propose a method for sea ice image classification based on deep learning and heterogeneous data fusion. Exploiting the strength of convolutional neural networks (CNNs) in deep feature extraction, we designed a deep learning network structure for SAR and optical images and achieved sea ice image classification through feature extraction and feature-level fusion of the heterogeneous data. For the SAR images, an improved spatial pyramid pooling (SPP) network extracts texture information on sea ice at multiple scales. For the optical data, multi-level feature information on sea ice, such as spatial and spectral information on different sea ice types, is extracted through a path aggregation network (PANet), whose gradual, layer-by-layer feature extraction allows low-level features to be fully utilized. To verify the effectiveness of the method, two sets of heterogeneous Sentinel satellite data were used for sea ice classification in the Hudson Bay area. The experimental results show that, compared with typical image classification methods and other heterogeneous data fusion methods, the proposed method fully integrates multi-scale, multi-level texture and spectral information from the heterogeneous data and achieves better classification accuracy (96.61% and 95.69% on the two datasets).
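The abstract does not include code; as a rough illustration of the multi-scale pooling idea behind the SPP branch, the following minimal numpy sketch max-pools a feature map into fixed grids at several scales and concatenates the results, yielding a fixed-length descriptor regardless of input size. The function name and pooling levels are hypothetical, not taken from the paper.

```python
import numpy as np

def spp_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over n x n grids for each level
    and concatenate, giving a fixed-length vector for any H, W."""
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # cell boundaries for an n x n pooling grid
        ys = np.linspace(0, H, n + 1).astype(int)
        xs = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)  # length C * sum(n*n for n in levels)

fmap = np.random.rand(8, 17, 23)   # arbitrary spatial size
vec = spp_pool(fmap)
print(vec.shape)                   # (168,) = 8 * (1 + 4 + 16)
```

Because the descriptor length is independent of the input size, features pooled this way from SAR patches of varying extent can be concatenated with the optical branch's features for feature-level fusion.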

2021
Vol 13 (8)
pp. 1602
Author(s):
Qiaoqiao Sun
Xuefeng Liu
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples for HSI is labor-intensive. In addition, single-level features from a single layer are usually considered, which may lose important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and higher computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked entirely from 3D convolutional and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Moreover, the 3D-CAE can be trained in an unsupervised way without labeled samples, and the multi-level features are obtained directly from the encoded layers at different scales and resolutions, which is more efficient than using multiple networks to get them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method holds great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
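The key efficiency claim is that multi-level features come from the encoder's intermediate layers in a single forward pass, rather than from separate networks. As a toy stand-in (not the paper's architecture), the sketch below replaces each strided 3D convolution with a 2x downsampling along the band, height, and width axes and taps the representation at every stage:

```python
import numpy as np

def encode_multilevel(cube, n_levels=3):
    """Toy stand-in for a 3D-CAE encoder: each stage halves the
    spectral and spatial resolution; features are tapped at every
    stage, so one pass yields features at multiple scales."""
    feats = []
    x = cube
    for _ in range(n_levels):
        # 2x striding along all three axes stands in for a strided 3D conv
        x = x[::2, ::2, ::2]
        feats.append(x.copy())
    return feats

cube = np.random.rand(16, 32, 32)   # (bands, height, width)
feats = encode_multilevel(cube)
print([f.shape for f in feats])     # [(8, 16, 16), (4, 8, 8), (2, 4, 4)]
```

Each tapped tensor covers the same scene at a coarser scale, which is the sense in which the encoded layers provide "multi-level" features without training multiple networks.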


Landslides
2021
Author(s):
Sansar Raj Meena
Omid Ghorbanzadeh
Cees J. van Westen
Thimmaiah Gudiyangada Nachappa
Thomas Blaschke
...  

Rainfall-induced landslide inventories can be compiled from remote sensing and topographical data using either traditional or semi-automatic supervised methods. In this study, we used PlanetScope imagery and deep learning convolutional neural networks (CNNs) to map the 2018 rainfall-induced landslides in the Kodagu district of Karnataka state in the Western Ghats of India. We used fourfold cross-validation (CV) to select the training and testing data, removing any random effects on the model's results. Topographic slope data were used as auxiliary information to increase the performance of the model. The resulting landslide inventory map, created using the slope data together with the spectral information, reduces false positives, helping to distinguish landslide areas from similar-looking features such as barren land and riverbeds. Although including the slope data did not increase the true positives, the overall accuracy was higher than when the model was trained on spectral information alone: the mean accuracy of correctly classified landslide values rose from 65.5% with optical data only to 78% with the slope data added. The methodology presented in this research can be applied in other landslide-prone regions, and the results can support hazard mitigation in such regions.
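The paper states that slope was used as auxiliary information alongside the spectral bands but does not specify the exact mechanism; one common way to do this, sketched below with numpy, is to normalize the slope raster and stack it as an extra input channel on the co-registered optical bands before feeding the CNN. The function name and normalization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def stack_slope_channel(optical, slope):
    """Append a topographic-slope raster as an extra input channel
    to a (C, H, W) optical stack for a segmentation CNN."""
    assert optical.shape[1:] == slope.shape, "rasters must be co-registered"
    # normalize slope (degrees) to [0, 1] so its scale matches reflectance
    slope_norm = (slope / 90.0)[np.newaxis, ...]
    return np.concatenate([optical, slope_norm], axis=0)

optical = np.random.rand(4, 64, 64)   # PlanetScope: blue, green, red, NIR
slope = np.random.rand(64, 64) * 60   # slope in degrees
x = stack_slope_channel(optical, slope)
print(x.shape)                        # (5, 64, 64)
```

Stacking at the input lets the first convolutional layer learn joint spectral-topographic filters, which is consistent with the reported effect of slope suppressing false positives on flat features such as riverbeds.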


Author(s):  
M. Schmitt
L. H. Hughes
X. X. Zhu

While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics, one example being the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the SEN1-2 dataset to foster deep learning research in SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and the creation of artificial optical images from SAR input data. Since SEN1-2 is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion.
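All three example applications (colorization, matching, image translation) consume co-registered SAR/optical patch pairs. The toy generator below mimics that data shape with random arrays; the function name, channel counts, and patch size are illustrative assumptions, and real pairs would of course be read from the SEN1-2 Sentinel-1/Sentinel-2 files rather than synthesized.

```python
import numpy as np

def paired_patches(n_pairs, patch=256, seed=0):
    """Yield toy co-registered (SAR, optical) patch pairs with the
    channel layout typical of SAR-optical fusion training data."""
    rng = np.random.default_rng(seed)
    for _ in range(n_pairs):
        sar = rng.random((1, patch, patch))   # single-channel SAR backscatter
        opt = rng.random((3, patch, patch))   # RGB optical patch
        yield sar, opt

pairs = list(paired_patches(2))
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
# 2 (1, 256, 256) (3, 256, 256)
```

A translation or colorization model would map the 1-channel SAR tensor to the 3-channel optical tensor, while a matching model would embed both and compare the embeddings.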

