Remote Sensing Image Land Classification Based on Deep Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kai Zhang ◽  
Chengquan Hu ◽  
Hang Yu

To address the problems that high-resolution remote sensing images contain many features and that a single feature description yields low classification accuracy, a remote sensing image land classification model based on deep learning is proposed from the perspective of ecological resource utilization. First, the remote sensing imagery obtained by the Gaofen-1 satellite is preprocessed, including both the multispectral and the panchromatic data. Then, color, texture, shape, and local features are extracted from the image data, and a feature-level image fusion method associates these features to realize the fusion of remote sensing image features. Finally, the fused image features are input into a trained deep belief network (DBN), and the land type is obtained by a Softmax classifier. Experimental analysis of the proposed model, implemented on the Keras and TensorFlow platform, shows that it can clearly classify all land types; the overall accuracy, F1 score, and inference time of the classification results are 97.86%, 87.25%, and 128 ms, respectively, which are better than those of the comparison models.
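The final stage of the pipeline described above, fusing the extracted feature vectors and handing the result to a trained network with a Softmax output, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the fusion rule (simple concatenation), the weight matrix `W`, the bias `b`, and the four land-type names are hypothetical stand-ins, and the single linear layer stands in for the trained DBN.

```python
import numpy as np

def fuse_features(*feature_vectors):
    """Feature-level fusion by concatenating the color, texture, shape,
    and local feature vectors into one descriptor (an assumption: the
    paper's exact fusion rule is not specified in the abstract)."""
    return np.concatenate(feature_vectors)

def softmax(z):
    """Numerically stable Softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(fused, weights, bias, class_names):
    """Map the fused feature to logits, then pick the land type
    with the highest Softmax probability."""
    probs = softmax(weights @ fused + bias)
    return class_names[int(np.argmax(probs))], probs

# toy example with four hypothetical 8-D feature vectors and 4 land types
rng = np.random.default_rng(0)
color, texture, shape, local = (rng.random(8) for _ in range(4))
fused = fuse_features(color, texture, shape, local)   # length 32
W, b = rng.random((4, 32)), np.zeros(4)
label, probs = classify(fused, W, b, ["water", "forest", "cropland", "urban"])
```

In the paper the classification head sits on top of DBN-learned representations; the sketch only shows how fused features flow into a Softmax decision.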

Author(s):  
Z. L. Cai ◽  
Q. Weng ◽  
S. Z. Ye

Abstract. With deepening research and cross-fusion in the modern remote sensing image area, the classification of high-spatial-resolution remote sensing images has captured the attention of researchers in the remote sensing field. However, because of the serious “same object, different spectrum” and “same spectrum, different object” phenomena in high-resolution remote sensing images, traditional classification strategies struggle to handle this challenge. In this paper, a remote sensing image scene classification model based on SENet and Inception-V3 is proposed, utilizing deep learning and a transfer learning strategy. The model first adds a dropout layer before the fully connected layer of the original Inception-V3 model to avoid over-fitting. We then embed the SENet module into the Inception-V3 model to optimize network performance. Global average pooling serves as the squeeze operation, and two fully connected layers form a bottleneck structure for the excitation. The proposed model introduces more non-linearity, better fits the complex correlations between channels, and greatly reduces the number of parameters and the amount of computation. During training, we adopt a transfer learning strategy, making full use of existing models and knowledge to improve training efficiency, and finally obtain the scene classification results. Experimental results on the AID high-resolution remote sensing scene images show that SE-Inception converges faster and trains more stably than the original Inception-V3. Compared with other traditional methods and deep learning networks, the improved model achieves a greater accuracy improvement.
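The squeeze-and-excitation step described above, global average pooling followed by a two-layer fully connected bottleneck that gates each channel, is compact enough to sketch directly. A minimal NumPy version, assuming a channels-last feature map and hypothetical bottleneck weights `w1`/`w2` with reduction ratio r = 2:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: squeeze spatial information into a
    channel descriptor, pass it through a two-layer FC bottleneck,
    and rescale each channel by the resulting sigmoid gate."""
    z = feature_map.mean(axis=(0, 1))        # squeeze: (H, W, C) -> (C,)
    s = np.maximum(w1 @ z, 0.0)              # FC reduce (C -> C/r) + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # FC expand (C/r -> C) + sigmoid
    return feature_map * gate                # excitation: channel-wise rescale

# toy example: an 8-channel 5x5 feature map, reduction ratio r = 2
rng = np.random.default_rng(0)
fmap = rng.random((5, 5, 8))
w1 = rng.random((4, 8))                      # hypothetical bottleneck weights
w2 = rng.random((8, 4))
out = se_block(fmap, w1, w2)
```

The bottleneck (C → C/r → C) is what keeps the parameter count low: two small FC layers replace a full C × C channel-interaction matrix, while the sigmoid gate lets the network emphasize informative channels.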


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition field, presented with the goal of drawing machine learning nearer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.


2021 ◽  
Vol 13 (4) ◽  
pp. 1917
Author(s):  
Alma Elizabeth Thuestad ◽  
Ole Risbøl ◽  
Jan Ingolf Kleppe ◽  
Stine Barlindhaug ◽  
Elin Rose Myrvoll

What can remote sensing contribute to archaeological surveying in subarctic and arctic landscapes? The pros and cons of remote sensing data vary, as do areas of utilization and methodological approaches. We assessed the applicability of remote sensing for archaeological surveying of northern landscapes, using airborne laser scanning (LiDAR) and satellite and aerial images to map archaeological features as a basis for (a) assessing the pros and cons of the different approaches and (b) assessing the potential detection rate of remote sensing. Interpretation of the images and of a LiDAR-based bare-earth digital terrain model (DTM) was based on visual analyses aided by processing and visualization techniques. A total of 368 features were identified in the aerial images, 437 in the satellite images, and 1186 in the DTM. LiDAR yielded the best results, especially for hunting pits, while image data proved suitable for dwellings and settlement sites. Feature characteristics proved a key factor for detectability in both LiDAR and image data. This study has shown that LiDAR and remote sensing image data are highly applicable for archaeological surveying in northern landscapes and that a multi-sensor approach contributes to high detection rates. Our results have improved the inventory of archaeological sites in a non-destructive and minimally invasive manner.


2018 ◽  
Vol 10 (8) ◽  
pp. 1243 ◽  
Author(s):  
Xu Tang ◽  
Xiangrong Zhang ◽  
Fang Liu ◽  
Licheng Jiao

Due to the specific characteristics and complicated contents of remote sensing (RS) images, remote sensing image retrieval (RSIR) has long been an open and challenging research topic in the RS community. RSIR comprises two basic blocks: feature learning and similarity matching. In this paper, we focus on developing an effective feature learning method for RSIR. With the help of deep learning techniques, the proposed feature learning method is designed under the bag-of-words (BOW) paradigm; we therefore name the resulting feature deep BOW (DBOW). The learning process consists of two parts: image descriptor learning and feature construction. First, to explore the complex contents within an RS image, we extract image descriptors at the image patch level rather than from the whole image. In addition, instead of describing the patches with handcrafted features, we propose a deep convolutional auto-encoder (DCAE) model to learn discriminative descriptors for the RS image. Second, the k-means algorithm generates the codebook from the obtained deep descriptors. The final histogram-based DBOW features are then acquired by counting the frequency of each code word. Once the DBOW features are extracted from the RS images, the similarities between images are measured using the L1-norm distance, and the retrieval results are obtained according to the similarity order. Encouraging experimental results on four public RS image archives demonstrate that our DBOW feature is effective for the RSIR task. Compared with existing RS image features, DBOW achieves improved performance on RSIR.
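The feature-construction and matching steps described above, quantizing patch descriptors against a codebook, building a normalized word histogram, and ranking archive images by L1 distance, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: in the paper the descriptors come from the DCAE and the codebook from k-means; here both are stand-in random arrays, and the archive is handcrafted.

```python
import numpy as np

def dbow_histogram(descriptors, codebook):
    """Assign each patch descriptor to its nearest code word and
    count word frequencies -> the normalized DBOW histogram."""
    # pairwise squared distances, shape (n_patches, n_words)
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def l1_rank(query_hist, archive_hists):
    """Rank archive images by L1-norm distance to the query histogram."""
    dists = np.abs(archive_hists - query_hist).sum(axis=1)
    return np.argsort(dists)

# stand-in descriptors and codebook (DCAE output + k-means in the paper)
rng = np.random.default_rng(1)
codebook = rng.random((5, 4))          # 5 code words in a 4-D descriptor space
patches = rng.random((20, 4))          # 20 patch descriptors for one image
query = dbow_histogram(patches, codebook)

# handcrafted archive: the first entry matches the query exactly
archive = np.stack([query, np.array([1.0, 0.0, 0.0, 0.0, 0.0])])
order = l1_rank(query, archive)
```

Normalizing the histogram makes images with different patch counts comparable, and the L1 distance on histograms is the matching rule the paper reports.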


2021 ◽  
Vol 13 (4) ◽  
pp. 747
Author(s):  
Yanghua Di ◽  
Zhiguo Jiang ◽  
Haopeng Zhang

Fine-grained visual categorization (FGVC) is an important and challenging problem due to large intra-class differences and small inter-class differences caused by deformation, illumination, viewing angles, etc. Although major advances have been achieved on natural images in the past few years thanks to the release of popular datasets such as CUB-200-2011, Stanford Cars, and the Aircraft dataset, fine-grained ship classification in remote sensing images has rarely been studied because of the relative scarcity of publicly available datasets. In this paper, we investigate a large amount of remote sensing image data of sea ships and determine the 42 most common categories for fine-grained visual categorization. Building on our previous DSCR dataset, a dataset for ship classification in remote sensing images, we collect additional remote sensing images containing warships and civilian ships of various scales from Google Earth and from other popular remote sensing image datasets, including DOTA, HRSC2016, and NWPU VHR-10. We call our dataset FGSCR-42, a dataset for Fine-Grained Ship Classification in Remote sensing images with 42 categories. The whole FGSCR-42 dataset contains 9320 images of the most common types of ships. We evaluate popular object classification algorithms and fine-grained visual categorization algorithms to build a benchmark. Our FGSCR-42 dataset is publicly available at our webpage.


2018 ◽  
Vol 10 (12) ◽  
pp. 1934 ◽  
Author(s):  
Bao-Di Liu ◽  
Wen-Yang Xie ◽  
Jie Meng ◽  
Ye Li ◽  
Yanjiang Wang

In recent years, the collaborative representation-based classification (CRC) method has achieved great success in visual recognition by directly utilizing training images as dictionary bases. However, it describes a test sample with all training samples to extract shared attributes and does not consider representing the test sample with the training samples of a specific class to extract class-specific attributes. For remote-sensing images, both the shared attributes and the class-specific attributes are important for classification. In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method improves the classification of remote-sensing images by embedding class-specific collaborative representation into conventional collaborative representation-based classification. Moreover, we extend the proposed method to arbitrary kernel spaces to explore the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. Extensive experiments on several benchmark remote-sensing image datasets clearly demonstrate the superior performance of our proposed algorithm over state-of-the-art approaches.
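Conventional CRC, the baseline the abstract builds on, codes a test sample over the whole training dictionary with a ridge-regularized least-squares fit and assigns the class whose own coefficients give the smallest reconstruction residual. A minimal NumPy sketch of that baseline (the paper's hybrid class-specific term and kernel extension are not shown; the toy dictionary, labels, and λ are hypothetical):

```python
import numpy as np

def crc_classify(y, X, labels, lam=1e-3):
    """Conventional CRC: solve the ridge-regularized coding
    alpha = (X^T X + lam*I)^(-1) X^T y over ALL training samples
    (columns of X) to capture shared attributes, then pick the
    class whose coefficients reconstruct y with the smallest
    residual."""
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# toy dictionary: two classes in R^3, two training samples each (columns)
X = np.array([[1.0, 0.9, 0.0, 0.00],
              [0.0, 0.1, 1.0, 0.95],
              [0.0, 0.0, 0.0, 0.05]])
labels = np.array([0, 0, 1, 1])
y = np.array([1.0, 0.05, 0.0])         # lies near the span of class 0
pred = crc_classify(y, X, labels)
```

The hybrid method described in the abstract would add a second coding of `y` over each class's own samples to capture class-specific attributes before combining the residuals.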

