Nonlinear Manifold Learning Integrated with Fully Convolutional Networks for PolSAR Image Classification

2020 ◽  
Vol 12 (4) ◽  
pp. 655
Author(s):  
Chu He ◽  
Mingxia Tu ◽  
Dehui Xiong ◽  
Mingsheng Liao

Synthetic Aperture Radar (SAR) provides rich ground information for remote sensing surveys and can be used at all times and in all weather conditions. Polarimetric SAR (PolSAR) further reveals differences in surface scattering and broadens radar's range of applications. Most existing classification methods for PolSAR imagery rely on handcrafted features; such fixed-pattern methods suffer from poor data adaptability and low feature utilization when the features are fed directly into a classifier. Combining the characteristics of PolSAR data with deep networks, which can learn features automatically, therefore offers a promising direction. In essence, feature learning in a deep network approximates the function mapping data to labels through the accumulation of multiple layers, but a finite number of layers limits the network's mapping ability. According to the manifold hypothesis, high-dimensional data lie on an underlying low-dimensional manifold, and different classes of data lie on different manifolds. Manifold learning can model the core variables of the target and separate the manifolds of different classes as much as possible, enabling better classification. Therefore, taking the manifold hypothesis as a starting point, this paper proposes a PolSAR image classification method that integrates nonlinear manifold learning with fully convolutional networks. First, high-dimensional polarimetric features are extracted from the scattering matrix and coherence matrix of the original PolSAR data, and a compact representation of these features is mined by manifold learning. Meanwhile, drawing on transfer learning, a pre-trained Fully Convolutional Network (FCN) model is used to learn deep spatial features of the PolSAR imagery. To exploit their complementary advantages, a weighted strategy embeds the manifold representation into the deep spatial features, which are then fed into a support vector machine (SVM) classifier for the final classification. A series of experiments on three PolSAR datasets verifies the effectiveness and superiority of the proposed classification algorithm.
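The fusion step described in this abstract (a weighted embedding of the manifold representation into the deep spatial features, followed by an SVM) can be sketched roughly as follows; the choice of Isomap as the manifold learner, the fusion weight, and the synthetic arrays are assumptions for illustration, not details from the paper.

```python
# A minimal, illustrative sketch of the fusion-and-classify step: manifold-reduced
# polarimetric features are blended with deep spatial features by a scalar weight
# and passed to an SVM. The weight alpha, the use of Isomap, and the array shapes
# are assumptions for illustration, not details taken from the paper.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels = 500
polar_feats = rng.normal(size=(n_pixels, 48))   # high-dimensional polarimetric features
deep_feats = rng.normal(size=(n_pixels, 32))    # deep spatial features from a pre-trained FCN
labels = rng.integers(0, 4, size=n_pixels)      # per-pixel class labels

# Nonlinear manifold learning -> compact representation of polarimetric features.
manifold_repr = Isomap(n_components=16).fit_transform(polar_feats)

# Weighted embedding of the manifold representation into the deep spatial features.
alpha = 0.5  # assumed fusion weight
fused = np.hstack([alpha * StandardScaler().fit_transform(manifold_repr),
                   (1 - alpha) * StandardScaler().fit_transform(deep_feats)])

# SVM for the final pixel-level classification.
clf = SVC(kernel="rbf").fit(fused, labels)
print(clf.score(fused, labels))
```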

2020 ◽  
Vol 12 (9) ◽  
pp. 1467 ◽  
Author(s):  
Chu He ◽  
Bokun He ◽  
Mingxia Tu ◽  
Yan Wang ◽  
Tao Qu ◽  
...  

With the rapid development of artificial intelligence, how to take advantage of deep learning and big data to classify polarimetric synthetic aperture radar (PolSAR) imagery is a hot topic in the field of remote sensing. As a key step in PolSAR image classification, feature extraction based on target decomposition is relatively mature; the core issue is how to extract discriminative spatial features and integrate them with polarimetric information to maximize classification accuracy. In this context, this paper proposes a PolSAR image classification algorithm based on fully convolutional networks (FCNs) and a manifold graph embedding model. First, to describe different types of land objects more comprehensively, various polarimetric features of PolSAR images are extracted through seven traditional decomposition methods. Afterwards, drawing on transfer learning, the decomposed features are fed into multiple parallel, pre-trained FCN-8s models to learn deep multi-scale spatial features. The feature maps from the last layer of each FCN model are concatenated to obtain high-dimensional spatial-polarimetric features. A manifold graph embedding model is then adopted to seek an effective and compact representation of these features in a manifold subspace while removing redundant information. Finally, a support vector machine (SVM) performs pixel-level classification in the manifold subspace. Extensive experiments on three PolSAR datasets demonstrate that the proposed algorithm achieves superior classification performance.
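As a rough illustration of the "graph embedding then SVM" stage described above, the following sketch uses Laplacian-eigenmaps-style spectral embedding as a generic stand-in for the paper's manifold graph embedding model; the neighbourhood size, output dimensionality and synthetic data are arbitrary.

```python
# Schematic sketch: concatenated FCN feature maps are embedded via a k-NN graph
# (SpectralEmbedding is only a stand-in for the paper's manifold graph embedding
# model) and classified with an SVM. All sizes are illustrative.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.svm import SVC

rng = np.random.default_rng(1)
stacked = rng.normal(size=(600, 128))        # concatenated FCN feature maps per pixel
labels = rng.integers(0, 5, size=600)

embedding = SpectralEmbedding(n_components=20, n_neighbors=10).fit_transform(stacked)
svm = SVC(kernel="rbf").fit(embedding, labels)
print(svm.score(embedding, labels))
```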


2020 ◽  
Author(s):  
Keiller Nogueira ◽  
William Robson Schwartz ◽  
Jefersson Alex Dos Santos

A lot of information can be extracted from the Earth's surface through aerial images. This information can assist in a myriad of applications, such as urban planning, crop and forest management, and disaster relief. However, distilling this information depends strongly on efficiently encoding spatial features, which is a challenging task. To address this, deep learning can learn specific data-driven features. This PhD thesis introduces deep learning into the remote sensing domain. Specifically, we tackled two main tasks, scene and pixel classification, using deep learning to encode spatial features in high-resolution remote sensing images. First, we proposed an architecture and analyzed different strategies for exploiting convolutional networks for image classification. Second, we introduced a network and proposed a new strategy to better exploit multi-context information in order to improve pixelwise classification. Finally, we proposed a new network based on morphological operations towards better learning of some relevant visual features.


Author(s):  
Jasmin Léveillé ◽  
◽  
Isao Hayashi ◽  
Kunihiko Fukushima ◽  
◽  
...  

Recent advances in machine learning and computer vision have led to the development of several sophisticated learning schemes for object recognition with convolutional networks. One relatively simple learning rule, Winner-Kill-Loser (WKL), was shown to learn higher-order features efficiently in the Neocognitron model when used in a handwritten digit classification task. The WKL rule is a variant of incremental clustering procedures that adapt the number of cluster components to the input data; it seeks to provide a complete, yet minimally redundant, covering of the input distribution. It is difficult to apply this approach directly to high-dimensional spaces, since it leads to a dramatic explosion in the number of clustering components. In this work, a small generalization of the WKL rule is proposed for learning from high-dimensional data. We first show that the learning rule leads mostly to V1-like oriented cells when applied to natural images, suggesting that it captures second-order image statistics not unlike variants of Hebbian learning. We further embed the proposed learning rule into a convolutional network, specifically the Neocognitron, and show its usefulness on a standard handwritten digit recognition benchmark. Although the new learning rule leads to a small reduction in overall accuracy, it yields a major reduction in the number of coding nodes in the network. This confirms that, by learning statistical regularities rather than covering an entire input space, it may be possible to incrementally learn and retain most of the useful structure in the input distribution.
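The incremental, self-sizing character of a winner-kill-loser-style rule can be illustrated schematically as follows; the similarity measure, thresholds and kill criterion below are assumptions for illustration and not the exact rule analysed in the paper.

```python
# A minimal, schematic sketch of an incremental "winner-kill-loser"-style update.
# The cosine similarity measure, thresholds and kill criterion are illustrative
# assumptions, not the exact rule used in the paper.
import numpy as np

def wkl_step(centers, x, create_thresh=0.6, kill_thresh=0.8, lr=0.1):
    """Update the list of unit-norm cluster centres with one input vector x."""
    x = x / np.linalg.norm(x)
    if not centers:
        return [x]
    sims = np.array([c @ x for c in centers])
    winner = int(np.argmax(sims))
    if sims[winner] < create_thresh:
        # No centre covers x well enough: recruit a new cluster component.
        centers.append(x)
        return centers
    # Winner is pulled towards the input...
    w = centers[winner] + lr * (x - centers[winner])
    centers[winner] = w / np.linalg.norm(w)
    # ...and redundant "losers" (other centres too similar to the winner) are killed,
    # keeping the covering of the input distribution minimally redundant.
    return [c for i, c in enumerate(centers)
            if i == winner or c @ centers[winner] < kill_thresh]

rng = np.random.default_rng(2)
centers = []
for _ in range(200):
    centers = wkl_step(centers, rng.normal(size=16))
print(len(centers))
```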


2018 ◽  
Vol 10 (8) ◽  
pp. 1271 ◽  
Author(s):  
Feng Gao ◽  
Qun Wang ◽  
Junyu Dong ◽  
Qizhi Xu

Hyperspectral image classification is acknowledged as a fundamental and challenging task in hyperspectral data processing. The abundance of spectral and spatial information provides great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral-spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method that has rarely been considered in hyperspectral image classification. It has been empirically verified that the semi-supervised RMG deals well with small-sample problems, which are very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; the spatial and spectral features are then stacked into high-dimensional vectors and fed into the RMG for classification. By randomly selecting a subset of features to create each graph, the proposed method achieves excellent classification performance. Experiments on three real hyperspectral datasets demonstrate that the proposed method outperforms several closely related methods.
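The feature-stacking idea (LBP-based spatial features concatenated with spectra, classified by an ensemble over random feature subsets) can be sketched as follows; a bagging classifier with random feature subsets stands in for the Random Multi-Graphs model, and the LBP parameters and toy data are illustrative.

```python
# Hedged sketch: local binary pattern (LBP) spatial features are stacked with raw
# spectra and classified by an ensemble trained on random feature subsets. The
# bagging classifier is only a generic random-subspace stand-in for RMG; the LBP
# parameters and toy cube are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
cube = rng.random(size=(32, 32, 50))            # toy hyperspectral cube (rows, cols, bands)
labels = rng.integers(0, 3, size=(32, 32))

# Spatial feature: LBP code computed on one band (band 0 here, for simplicity).
lbp = local_binary_pattern(cube[:, :, 0], P=8, R=1, method="uniform")

# Stack per-pixel spectral vectors with the spatial feature.
features = np.concatenate([cube.reshape(-1, 50), lbp.reshape(-1, 1)], axis=1)

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=20,
                             max_features=0.5, random_state=0)
ensemble.fit(features, labels.ravel())
print(ensemble.score(features, labels.ravel()))
```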


2021 ◽  
Author(s):  
Ying Bi ◽  
Bing Xue ◽  
Mengjie Zhang

Learning image features automatically from the problem being tackled is more effective for classification, but it is very difficult due to image variations and the high dimensionality of image data. This paper proposes a new feature learning approach based on Gaussian filters and genetic programming (GauGP) for image classification. Genetic programming (GP) is a well-known evolutionary learning technique that has been applied to many visual tasks, showing good learning ability and interpretability. In the proposed GauGP method, a new program structure, function set and terminal set are developed, which allow it to detect small regions in the input image and learn discriminative features using Gaussian filters for image classification. The performance of GauGP is examined on six datasets of varying difficulty and compared with four GP methods, eight traditional approaches, and convolutional neural networks. The experimental results show that GauGP achieves significantly better or similar performance in most cases.
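The kind of building block GauGP evolves (Gaussian filtering of a small image region followed by simple feature summaries) might look roughly like the sketch below; the region location, filter scales and summary statistics are illustrative assumptions, not the evolved programs themselves.

```python
# Sketch of a Gaussian-filter feature extractor over a small image region. The
# region location, sigma values and summary statistics are assumptions for
# illustration, not the programs evolved by GauGP.
import numpy as np
from scipy.ndimage import gaussian_filter

def region_gaussian_features(image, top, left, size=16, sigmas=(1.0, 2.0, 4.0)):
    region = image[top:top + size, left:left + size]
    feats = []
    for sigma in sigmas:
        response = gaussian_filter(region, sigma=sigma)
        feats.extend([response.mean(), response.std(), response.max(), response.min()])
    return np.array(feats)

rng = np.random.default_rng(4)
img = rng.random(size=(64, 64))
print(region_gaussian_features(img, top=10, left=20).shape)  # (12,)
```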


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Houari Youcef Moudjib ◽  
Duan Haibin ◽  
Baochang Zhang ◽  
Mohammed Salah Ahmed Ghaleb

Purpose – Hyperspectral imaging (HSI) systems are becoming potent technologies for computer vision tasks due to the rich information they uncover, where each substance exhibits a distinct spectral distribution. Although the high spectral dimensionality of the data empowers feature learning, joint spatial-spectral features have not yet been well explored. Gabor convolutional networks (GCNs) incorporate Gabor filters into a deep convolutional neural network (CNN) to extract discriminative features of different orientations and frequencies. To the best of the authors' knowledge, this paper introduces the exploitation of GCNs for hyperspectral image classification (HSI-GCN) for the first time. HSI-GCN is able to extract deep joint spatial-spectral features more rapidly and accurately despite the shortage of training samples. The authors thoroughly evaluate the effectiveness of the proposed method on different hyperspectral data sets, where promising results and high classification accuracy have been achieved compared with previously proposed CNN-based and Gabor-based methods.
Design/methodology/approach – The authors implemented the new Gabor convolutional network algorithm on hyperspectral images for classification purposes.
Findings – Implementing the new GCN showed unexpectedly good results with excellent classification accuracy.
Originality/value – To the best of the authors' knowledge, this work is the first to implement this approach.
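The Gabor-filtering idea underlying GCNs can be illustrated with a small filter bank applied to a single spectral band, as in the sketch below; kernel size, wavelength and orientations are arbitrary choices, and the actual HSI-GCN modulates CNN kernels with Gabor filters inside the network rather than filtering bands directly.

```python
# Sketch: build a small bank of Gabor kernels at several orientations and convolve
# them with one hyperspectral band to obtain orientation-selective response maps.
# All parameters are illustrative; this is not the HSI-GCN architecture itself.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_t / wavelength)

rng = np.random.default_rng(5)
band = rng.random(size=(64, 64))                      # one spectral band of an HSI cube
thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]     # four orientations
responses = np.stack([convolve2d(band, gabor_kernel(theta=t), mode="same") for t in thetas])
print(responses.shape)  # (4, 64, 64)
```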


2019 ◽  
Vol 11 (4) ◽  
pp. 415 ◽  
Author(s):  
Yanqiao Chen ◽  
Yangyang Li ◽  
Licheng Jiao ◽  
Cheng Peng ◽  
Xiangrong Zhang ◽  
...  

Polarimetric synthetic aperture radar (PolSAR) image classification has been applied more and more widely in recent years. It is well known that PolSAR image classification is a dense prediction problem. The recently proposed fully convolutional networks (FCN) model, which is very good at dealing with dense prediction problems, has great potential for the task of PolSAR image classification. Nevertheless, FCN faces several problems in PolSAR image classification. Fortunately, Li et al. proposed the sliding-window fully convolutional networks (SFCN) model to tackle these problems. However, only when labeled training samples are sufficient can SFCN achieve good classification results. To address this problem, we propose adversarial reconstruction-classification networks (ARCN), which build on SFCN and introduce reconstruction-classification networks (RCN) and adversarial training. The merit of our method is twofold: (i) a single composite representation that encodes information for supervised image classification and unsupervised image reconstruction can be constructed; (ii) by introducing adversarial training, the higher-order inconsistencies between the true image and the reconstructed image can be detected and corrected. Our method achieves impressive performance in PolSAR image classification with fewer labeled training samples. We have validated its performance by comparing it against several state-of-the-art methods. Experimental results obtained by classifying three PolSAR images demonstrate the efficiency of the proposed method.
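A reconstruction-classification objective with an adversarial term, in the spirit of the approach summarised above, can be sketched as follows in PyTorch; the tiny layers, loss weights and single forward pass are illustrative assumptions, not the ARCN architecture itself.

```python
# Schematic sketch: a shared encoder feeds both a classifier head and a decoder,
# and a discriminator penalises inconsistencies between true and reconstructed
# patches. Layer sizes, loss weights and data are illustrative assumptions.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU())   # shared encoder
cls_head = nn.Conv2d(16, 5, 1)                                   # per-pixel classifier
dec = nn.Conv2d(16, 6, 3, padding=1)                             # reconstruction decoder
disc = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1), nn.Flatten(), nn.Linear(16 * 16, 1))

x = torch.randn(4, 6, 16, 16)                 # toy PolSAR feature patches
y = torch.randint(0, 5, (4, 16, 16))          # per-pixel labels

z = enc(x)
loss_cls = nn.functional.cross_entropy(cls_head(z), y)           # supervised classification
x_rec = dec(z)
loss_rec = nn.functional.mse_loss(x_rec, x)                      # unsupervised reconstruction
# Adversarial term: the generator side tries to make reconstructions look "real".
loss_adv = nn.functional.binary_cross_entropy_with_logits(disc(x_rec), torch.ones(4, 1))
loss = loss_cls + 0.5 * loss_rec + 0.1 * loss_adv                # assumed loss weights
loss.backward()
print(float(loss))
```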


2019 ◽  
Vol 11 (13) ◽  
pp. 1552 ◽  
Author(s):  
Dong ◽  
Naghedolfeizi ◽  
Aberra ◽  
Zeng

Sparse representation classification (SRC) is being widely applied to target detection in hyperspectral images (HSI). However, because high-dimensional HSI data contain redundant information, SRC methods may fail to achieve high classification performance even with a large number of spectral bands. Selecting a subset of predictive features in a high-dimensional space is an important and challenging problem for hyperspectral image classification. In this paper, we propose a novel discriminant feature learning (DFL) method that combines spectral and spatial information into a hypergraph Laplacian. First, a subset of discriminative features is selected that preserves the spectral structure of the data and the inter- and intra-class constraints on labeled training samples; a feature evaluator is obtained by semi-supervised learning with the hypergraph Laplacian. Second, the selected features are mapped into a lower-dimensional eigenspace through a generalized eigendecomposition of the Laplacian matrix. The finally extracted discriminative features are used in a joint sparsity-model algorithm. Experiments conducted on benchmark data sets under different experimental settings show that our proposed method increases classification accuracy and outperforms state-of-the-art HSI classification methods.
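The generalized eigendecomposition step can be sketched as follows, using an ordinary k-NN graph Laplacian as a stand-in for the paper's hypergraph Laplacian; the neighbourhood size and embedding dimensionality are arbitrary.

```python
# Sketch: build a neighbourhood graph over the selected features, form its
# Laplacian L and degree matrix D, and solve L v = lambda D v, keeping the
# eigenvectors with the smallest eigenvalues as the low-dimensional embedding.
# A plain k-NN graph Laplacian stands in for the paper's hypergraph Laplacian.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 30))                 # selected spectral-spatial features

W = kneighbors_graph(X, n_neighbors=8, mode="connectivity").toarray()
W = np.maximum(W, W.T)                         # symmetrise the adjacency matrix
D = np.diag(W.sum(axis=1))
L = D - W                                      # combinatorial graph Laplacian

# Generalized eigenproblem L v = lambda D v; skip the trivial constant eigenvector.
eigvals, eigvecs = eigh(L, D)
embedding = eigvecs[:, 1:11]                   # 10-dimensional embedding
print(embedding.shape)
```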

