Classification of Remote Sensing Image Scenes Using Double Feature Extraction Hybrid Deep Learning Approach

Author(s):  
Akey Sungheetha ◽  
Rajesh Sharma R

Over the last decade, remote sensing technology has advanced dramatically, bringing significant improvements in image quality, data volume, and application scope. These images have essential applications because they support quick and easy interpretation. However, many standard detection algorithms fail to accurately categorize a scene from a remote sensing image recorded from the Earth. A method that uses bilinear convolutional neural networks to produce a lighter-weight set of models yields better visual recognition in remote sensing images through fine-grained techniques. The proposed hybrid method extracts scene feature information twice from remote sensing images for improved recognition. In layman's terms, these features are raw and have only a single defined frame, so they allow basic recognition from remote sensing images. This research work proposes a double feature extraction hybrid deep learning approach to classify remotely sensed image scenes based on feature abstraction techniques. The proposed algorithm is also applied to the feature values to convert them into feature vectors with pure black and white values after many product operations. The next stage is pooling and normalization, which follows the CNN feature extraction process. The resulting hybrid framework achieves a better level of accuracy and recognition rate than prior models.
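The bilinear step above can be illustrated with a minimal sketch: the outer product of two feature maps pooled over spatial locations, followed by the signed square root and L2 normalization commonly used in bilinear CNNs (the shapes and the normalization chosen here are illustrative assumptions, not the paper's exact pipeline).

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    # Outer product of channel activations, averaged over spatial positions
    b = feat_a @ feat_b.T / feat_a.shape[1]        # (C1, C2)
    v = b.flatten()
    v = np.sign(v) * np.sqrt(np.abs(v))            # signed square root
    n = np.linalg.norm(v)
    return v / n if n > 0 else v                   # L2 normalization

# Two hypothetical 8-channel feature maps over 16 spatial locations
rng = np.random.default_rng(0)
fa = rng.random((8, 16))
fb = rng.random((8, 16))
vec = bilinear_pool(fa, fb)                        # 64-dimensional descriptor
```

The pooled vector captures pairwise channel interactions, which is what makes bilinear features effective for fine-grained recognition.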

2018 ◽  
Vol 10 (6) ◽  
pp. 964 ◽  
Author(s):  
Zhenfeng Shao ◽  
Ke Yang ◽  
Weixun Zhou

Benchmark datasets are essential for developing and evaluating remote sensing image retrieval (RSIR) approaches. However, most existing datasets are single-labeled: each image is annotated with a single label representing its most significant semantic content. This is sufficient for simple problems, such as distinguishing between a building and a beach, but multiple labels, and sometimes even dense (pixel-level) labels, are required for more complex problems such as RSIR and semantic segmentation. We therefore extended the existing multi-labeled dataset collected for multi-label RSIR and present a dense labeling remote sensing dataset termed "DLRSD". DLRSD contains a total of 17 classes, and each pixel of every image is assigned one of the 17 predefined labels. We used DLRSD to evaluate the performance of RSIR methods ranging from traditional handcrafted-feature-based methods to deep learning-based ones, from both single-label and multi-label perspectives. The results demonstrate the advantages of multiple labels over single labels for interpreting complex remote sensing images. DLRSD provides the literature with a benchmark for RSIR and other pixel-based problems such as semantic segmentation.
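One common way to score a retrieved image against a multi-labeled query, sketched here under the assumption of set-valued labels (the label names are hypothetical, not DLRSD's class list), is the Jaccard similarity between the two label sets:

```python
def label_similarity(query_labels, retrieved_labels):
    """Jaccard similarity between the label sets of a query image and a
    retrieved image -- one common relevance score for multi-label RSIR."""
    q, r = set(query_labels), set(retrieved_labels)
    return len(q & r) / len(q | r) if q | r else 0.0

# Hypothetical label sets for a query and a retrieved image
sim = label_similarity({"building", "pavement", "tree"},
                       {"building", "tree", "car"})
```

Under a single-label regime the same pair would score either 0 or 1, which is exactly the coarseness that multi-label evaluation avoids.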


2019 ◽  
Vol 11 (4) ◽  
pp. 430 ◽  
Author(s):  
Yunyun Dong ◽  
Weili Jiao ◽  
Tengfei Long ◽  
Lanfa Liu ◽  
Guojin He ◽  
...  

Feature matching via local descriptors is one of the most fundamental problems in many computer vision tasks, as well as in the remote sensing image processing community. For example, in feature-based remote sensing image registration, feature matching is a vital process that determines the quality of the transform model, and within the matching process the quality of the feature descriptor directly determines the matching result. At present, the most commonly used descriptors are hand-crafted from the designer's expertise or intuition. However, it is hard to cover all the different cases, especially for remote sensing images with nonlinear grayscale deformation. Recently, deep learning has shown explosive growth and has improved performance in various fields, especially in the computer vision community. Here, we created remote sensing image training patch samples, named Invar-Dataset, in a novel and automatic way, and then trained a deep convolutional neural network, named DescNet, to generate a robust feature descriptor for feature matching. A dedicated experiment illustrates that our training dataset is more helpful for training a network to generate a good feature descriptor. A qualitative experiment then shows that the feature descriptor vectors learned by DescNet can successfully register remote sensing images with large grayscale differences. A quantitative experiment further illustrates that the feature vectors generated by DescNet acquire more matched points than the hand-crafted Scale Invariant Feature Transform (SIFT) descriptor and other networks; on average, DescNet acquired almost twice as many matched points as the other methods. Finally, we analyze the advantages of Invar-Dataset and DescNet and discuss possible future developments in training deep descriptor networks.
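The matching stage that such learned descriptors feed into can be sketched as nearest-neighbour search with Lowe's ratio test; the descriptors below are random stand-ins, not DescNet outputs:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    a match is kept only if the best distance is clearly smaller than
    the second-best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Simulated scenario: the same 5 keypoints seen in two images,
# with slight radiometric noise in the second view
rng = np.random.default_rng(1)
base = rng.random((5, 16))
noisy = base + 0.01 * rng.standard_normal((5, 16))
pairs = match_descriptors(base, noisy)
```

A more discriminative descriptor separates the best and second-best distances further apart, so more correct correspondences survive the ratio test, which is the effect the quantitative experiment measures.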


Change detection determines whether changes have occurred between remote sensing images acquired at two different times. Both machine learning and deep learning techniques can be used for change detection analysis on remote sensing images. This paper focuses on the computational and performance analysis of both families of techniques as applied to change detection. For each approach, we considered ten different algorithms and evaluated their performance. Moreover, we analyzed the merits and demerits of each method used for change detection.
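As a point of reference for the methods surveyed, the simplest change detection baseline is per-pixel image differencing with a threshold (the threshold value here is an arbitrary illustration):

```python
import numpy as np

def change_map(img_t1, img_t2, threshold=0.2):
    """Absolute-difference change map: 1 where the radiometric change
    between the two acquisition dates exceeds the threshold."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    return (diff > threshold).astype(np.uint8)

# Toy bi-temporal scene: a 2x2 "new structure" appears at time t2
t1 = np.zeros((4, 4))
t2 = np.zeros((4, 4))
t2[1:3, 1:3] = 1.0
cm = change_map(t1, t2)
```

Machine learning and deep learning approaches replace the fixed threshold with a learned decision rule, which is what the paper's comparative analysis evaluates.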


2019 ◽  
Vol 11 (9) ◽  
pp. 1044 ◽  
Author(s):  
Wei Cui ◽  
Fei Wang ◽  
Xin He ◽  
Dongyou Zhang ◽  
Xuxiang Xu ◽  
...  

A comprehensive interpretation of remote sensing images involves not only remote sensing object recognition but also the recognition of spatial relations between objects. Especially in the case of different objects with the same spectrum, the spatial relationship can help interpret remote sensing objects more accurately. Compared with traditional remote sensing object recognition methods, deep learning has the advantages of high accuracy and strong generalizability regarding scene classification and semantic segmentation. However, it is difficult to simultaneously recognize remote sensing objects and their spatial relationships end-to-end relying only on existing deep learning networks. To address this problem, we propose a multi-scale remote sensing image interpretation network, called the MSRIN. The architecture of the MSRIN is a parallel deep neural network based on a fully convolutional network (FCN), a U-Net, and a long short-term memory network (LSTM). The MSRIN recognizes remote sensing objects and their spatial relationships through three processes. First, the MSRIN defines a multi-scale remote sensing image caption strategy and simultaneously segments the same image using the FCN and U-Net on different spatial scales so that a two-scale hierarchy is formed. The outputs of the FCN and U-Net are masked to obtain the locations and boundaries of remote sensing objects. Second, using an attention-based LSTM, the remote sensing image captions include the remote sensing objects (nouns) and their spatial relationships described in natural language. Finally, we designed a remote sensing object recognition and correction mechanism that builds the relationship between nouns in captions and object mask graphs using an attention weight matrix, transferring the spatial relationship from captions to object mask graphs.
In other words, the MSRIN simultaneously realizes the semantic segmentation of remote sensing objects and the identification of their spatial relationships end-to-end. Experimental results demonstrate that the matching rate between samples and the mask graph increased by 67.37 percentage points, and the matching rate between nouns and the mask graph increased by 41.78 percentage points compared to before correction. The proposed MSRIN has achieved remarkable results.
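The core idea of the correction mechanism, assigning each caption noun to the object mask it attends to most strongly, can be sketched with a toy attention weight matrix (the values below are invented for illustration, not learned MSRIN weights):

```python
import numpy as np

# Hypothetical attention weight matrix: rows = caption nouns,
# columns = candidate object masks from the segmentation branch
attn = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.2, 0.1, 0.7]])

# Each noun is linked to the mask it attends to most strongly,
# transferring the spatial relation described in the caption
# onto concrete object masks
assignment = attn.argmax(axis=1)
```

Once each noun is tied to a mask, any spatial relation expressed between nouns in the caption becomes a relation between the corresponding segmented objects.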


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 495
Author(s):  
Liang Jin ◽  
Guodong Liu

Compared with ordinary images, remote sensing images contain many kinds of objects with large scale variation and provide more detail. Ship detection, a typical remote sensing task, plays an essential role in the field. With the rapid development of deep learning, remote sensing image detection methods based on convolutional neural networks (CNNs) have taken a key position. In remote sensing images, small-scale objects account for a large proportion of the targets and are often closely arranged. In addition, the convolution layers in a CNN lack ample context information, leading to low detection accuracy on remote sensing imagery. To improve detection accuracy while preserving real-time speed, this paper proposes an efficient object detection algorithm for ship detection in remote sensing images based on an improved SSD. First, we add a feature fusion module to the shallow feature layers to refine the feature extraction ability for small objects. Then, we add a Squeeze-and-Excitation (SE) module to each feature layer, introducing an attention mechanism into the network. Experimental results on the Synthetic Aperture Radar ship detection dataset (SSDD) show that the mAP reaches 94.41% and the average detection speed is 31 FPS. Compared with SSD and other representative object detection algorithms, the improved algorithm achieves better detection accuracy and can realize real-time detection.
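The SE module added to each feature layer can be sketched as follows: global average pooling squeezes each channel to a scalar, a two-layer bottleneck with a sigmoid produces per-channel weights, and the feature map is rescaled channel-wise (the random weights and the reduction ratio below are illustrative assumptions, not the paper's trained parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation: squeeze by global average pooling,
    excite through a two-layer bottleneck, then rescale channels."""
    c = feat.shape[0]
    z = feat.reshape(c, -1).mean(axis=1)           # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))      # excitation: (C,)
    return feat * s[:, None, None]                 # channel-wise rescale

rng = np.random.default_rng(2)
x = rng.random((8, 4, 4))                          # C x H x W feature map
w1 = rng.standard_normal((2, 8)) * 0.1             # bottleneck, ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
```

The learned channel weights let the network emphasize feature channels that respond to small, densely packed ships and suppress background clutter.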


Author(s):  
Y. Dai ◽  
J. S. Xiao ◽  
B. S. Yi ◽  
J. F. Lei ◽  
Z. Y. Du

Abstract. Aiming at multi-class artificial object detection in remote sensing images, a deep learning detection framework is used to extract and localize the numerous targets present in very high resolution remote sensing images. To realize rapid and efficient detection of typical artificial targets, this paper proposes an end-to-end multi-category object detection method based on a convolutional neural network that addresses several challenges, including dense objects and objects with arbitrary orientation and large aspect ratios. Specifically, the feature extraction process is improved by utilizing a more advanced backbone network with deeper layers and by combining multiple feature maps, including high-resolution feature maps with more location detail and low-resolution feature maps with highly abstracted information. A Rotating Region Proposal Network is adopted in the Faster R-CNN framework to generate candidate object-like regions with different orientations and to improve sensitivity to dense and cluttered objects. A rotation factor is added to the region proposal network to control the angle of the generated anchor boxes and to cover enough orientations of typical man-made objects. Meanwhile, the misalignment caused by the two quantization operations in the pooling process is eliminated, and a convolution layer is appended before the fully connected layer of the final classification network to reduce the feature parameters and avoid overfitting. Compared with current generic object detection methods, the proposed algorithm focuses on arbitrarily oriented and dense artificial targets in remote sensing images. After comprehensive evaluation against several state-of-the-art object detection algorithms, our method proves effective at detecting multi-class artificial objects in remote sensing images.
Experiments on the public DOTA dataset demonstrate that combining the powerful features extracted by the improved convolutional neural network with multi-scale features and the rotating region network yields higher accuracy.
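The rotating proposal idea amounts to enumerating anchors with an extra angle parameter at each feature-map cell; a minimal sketch, with illustrative scales, aspect ratios, and angles (not the paper's actual configuration):

```python
import numpy as np

def rotated_anchors(cx, cy, scales, ratios, angles):
    """Enumerate (cx, cy, w, h, theta) anchors at one feature-map cell,
    as a rotating region proposal network does to cover arbitrarily
    oriented targets such as ships or vehicles."""
    anchors = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)  # preserve area s^2
            for a in angles:
                anchors.append((cx, cy, w, h, a))
    return anchors

boxes = rotated_anchors(16, 16,
                        scales=[32, 64],
                        ratios=[1.0, 2.0],
                        angles=[-60, -30, 0, 30, 60, 90])
```

Adding the angle dimension multiplies the anchor count per cell, which is why the angle set is kept small while still covering the dominant orientations of man-made objects.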


2021 ◽  
Author(s):  
Yue Wang ◽  
Ye Ni ◽  
Xutao Li ◽  
Yunming Ye

Wildfires are a serious disaster that often causes severe damage to forests and plants. Without early detection and suitable control action, a small wildfire can grow into a big and serious one. The problem is especially acute at night, as firefighters generally miss the chance to detect wildfires in the very first few hours. Low-light satellites, which take pictures at night, offer an opportunity to detect night fires in a timely manner. However, previous studies identify night fires with threshold methods or conventional machine learning approaches, which are not robust and accurate enough. In this paper, we develop a new deep learning approach that determines night fire locations by pixel-level classification on low-light remote sensing images. Experimental results on VIIRS data demonstrate the superiority and effectiveness of the proposed method, which outperforms conventional threshold and machine learning approaches.
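The threshold baselines the paper improves upon can be sketched in a few lines; the fixed cutoff is exactly what makes them brittle under varying background brightness (the cutoff value below is an arbitrary illustration, not an operational VIIRS threshold):

```python
import numpy as np

def threshold_fire_mask(radiance, cutoff=0.8):
    """Baseline threshold detector: flag pixels whose low-light
    radiance exceeds a fixed value. Simple, but a single global
    cutoff cannot adapt to city lights, moonlight, or haze."""
    return radiance > cutoff

# Toy low-light scene with one bright fire-like pixel
scene = np.full((4, 4), 0.1)
scene[2, 2] = 0.95
mask = threshold_fire_mask(scene)
```

A pixel-level deep classifier replaces the global cutoff with a decision that conditions on the local spatial context of each pixel, which is the robustness gain the paper reports.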


Author(s):  
H. Teffahi ◽  
N. Teffahi

Abstract. The classification of hyperspectral images (HSI) with high spectral and spatial resolution is an important and challenging task in image processing and remote sensing (RS), owing to computational complexity and the high dimensionality of remote sensing images. Spatial and spectral pixel characteristics are both crucial for hyperspectral image classification, and to account for these two types of characteristics, various classification and feature extraction methods have been developed to improve spectral-spatial classification for thematic mapping purposes such as agricultural mapping, urban mapping, and emergency mapping in case of natural disasters. In recent years, mathematical morphology and deep learning (DL) have been recognized as prominent feature extraction techniques that lead to remarkable spectral-spatial classification performance. Among them, Extended Multi-Attribute Profiles (EMAP) and Dense Convolutional Neural Networks (DCNN) are considered robust and powerful approaches; the work in this paper uses these two techniques in the feature extraction stage, combined in two manners to construct the EMAP-DCNN framework. Experiments were conducted on two popular hyperspectral datasets, "Indian Pines" and "Houston". Experimental results demonstrate that the two proposed variants of the EMAP-DCNN framework, denoted EMAP-DCNN 1 and EMAP-DCNN 2, provide competitive performance compared with some state-of-the-art spectral-spatial classification methods based on deep learning.
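The morphological side of EMAP builds on openings and closings of each band at increasing structuring-element sizes; a minimal sketch with flat square structuring elements (EMAP itself uses attribute filters, which this simplified profile only approximates):

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    # Erosion is dilation of the negated image
    return -dilate(-img, k)

def morphological_profile(band, sizes=(3, 5)):
    """Stack openings and closings of one band at several structuring
    element sizes -- the spatial feature stack EMAP generalises."""
    layers = [band]
    for k in sizes:
        layers.append(dilate(erode(band, k), k))   # opening
        layers.append(erode(dilate(band, k), k))   # closing
    return np.stack(layers)

rng = np.random.default_rng(3)
band = rng.random((8, 8))
profile = morphological_profile(band)
```

Each opening removes bright structures smaller than the structuring element and each closing removes dark ones, so the stacked profile encodes the spatial scale of structures around every pixel before the DCNN stage.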


Photonics ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 431
Author(s):  
Yuwu Wang ◽  
Guobing Sun ◽  
Shengwei Guo

With the widespread use of remote sensing images, low-resolution target detection in remote sensing images has become a hot research topic in computer vision. In this paper, we propose a Target Detection on Super-Resolution Reconstruction (TDoSR) method to address the low target recognition rate in low-resolution remote sensing images under foggy conditions. TDoSR uses the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to perform defogging and super-resolution reconstruction of foggy low-resolution remote sensing images. In the target detection part, the Rotation Equivariant Detector (ReDet) algorithm, which currently offers a high recognition rate, is used to identify and classify various types of targets. Extensive experiments on the remote sensing image dataset DOTA-v1.5 suggest that the proposed method achieves good results in detecting targets in low-resolution foggy remote sensing images; in particular, the recognition rate of TDoSR increases by roughly 20% compared with detection on the original low-resolution foggy images.

