Deep Memory Connected Neural Network for Optical Remote Sensing Image Restoration

2018 ◽  
Vol 10 (12) ◽  
pp. 1893 ◽  
Author(s):  
Wenjia Xu ◽  
Guangluan Xu ◽  
Yang Wang ◽  
Xian Sun ◽  
Daoyu Lin ◽  
...  

The spatial resolution and clarity of remote sensing images are crucial for many applications such as target detection and image classification. In the last several decades, image restoration methods have shown great success on ordinary images. However, since remote sensing images are more complex and more blurry than ordinary images, most existing methods are not good enough for remote sensing image restoration. To address this problem, we propose a novel method named deep memory connected network (DMCN), based on the convolutional neural network, to reconstruct high-quality images. We build local and global memory connections to combine image detail with global information. To further reduce the parameter count and running time, we propose Downsampling Units, which shrink the spatial size of the feature maps. We verify its capability on two representative applications, Gaussian image denoising and single-image super-resolution (SR). DMCN is tested on three remote sensing datasets with various spatial resolutions. Experimental results indicate that our method yields promising improvements and better visual performance over the current state-of-the-art. The PSNR and SSIM improvements over the second-best method are up to 0.3 dB.
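
A minimal sketch of the memory-connection idea, assuming a PyTorch implementation: each block carries a local residual ("memory") connection, a global connection reinjects shallow features into the deep path, and a stride-2 Downsampling Unit shrinks the feature maps to save parameters and time. Layer counts and channel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of DMCN-style memory connections (not the authors' code).
import torch
import torch.nn as nn

class MemoryBlock(nn.Module):
    """Two conv layers with a local memory (residual) connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # local memory: reuse nearby features

class DMCNSketch(nn.Module):
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Downsampling Unit: stride-2 conv halves the spatial size.
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.blocks = nn.Sequential(*[MemoryBlock(channels) for _ in range(num_blocks)])
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        shallow = self.head(x)
        deep = self.up(self.blocks(self.down(shallow)))
        # Global memory: combine shallow detail with deep global information.
        return x + self.tail(shallow + deep)

model = DMCNSketch()
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```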

Author(s):  
Xiaochuan Tang ◽  
Mingzhe Liu ◽  
Hao Zhong ◽  
Yuanzhen Ju ◽  
Weile Li ◽  
...  

Landslide recognition is widely used in natural disaster risk management. Traditional landslide recognition is mainly conducted by geologists, which is accurate but inefficient. This article introduces multiple instance learning (MIL) to perform automatic landslide recognition. An end-to-end deep convolutional neural network is proposed, referred to as Multiple Instance Learning-based Landslide classification (MILL). First, MILL uses a large-scale remote sensing image classification dataset to build pre-trained networks for landslide feature extraction. Second, MILL extracts instances and assigns instance labels without pixel-level annotations. Third, MILL uses a new channel attention-based MIL pooling function to map instance-level labels to a bag-level label. We apply MILL to detect landslides in a loess area. Experimental results demonstrate that MILL is effective in identifying landslides in remote sensing images.
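
Attention-based MIL pooling can be pictured as a learned weighted sum of instance features, so instance evidence is aggregated into one bag-level prediction. Below is a minimal sketch of this pooling pattern; the feature dimension, hidden size, and class count are assumptions, not MILL's published configuration.

```python
# Hypothetical sketch of attention-based MIL pooling (not the authors' code).
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, instances):  # instances: (num_instances, feat_dim)
        weights = torch.softmax(self.attention(instances), dim=0)  # (N, 1)
        bag_feature = (weights * instances).sum(dim=0)             # (feat_dim,)
        return self.classifier(bag_feature)                        # bag-level logits

pool = AttentionMILPooling()
bag = torch.randn(30, 512)  # 30 instance features extracted from one image
print(pool(bag).shape)      # torch.Size([2])
```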


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4115 ◽  
Author(s):  
Yuxia Li ◽  
Bo Peng ◽  
Lei He ◽  
Kunlong Fan ◽  
Zhenxu Li ◽  
...  

Roads are vital components of infrastructure, and their extraction has become a topic of significant interest in the field of remote sensing. Because deep learning has become a popular method in image processing and information extraction, researchers have paid increasing attention to extracting roads using neural networks. This article proposes improved neural networks to extract roads from Unmanned Aerial Vehicle (UAV) remote sensing images. D-LinkNet was first considered for its high performance; however, the huge scale of the network reduced computational efficiency. Focusing on this low computational efficiency, this article makes the following improvements: (1) replace the initial block with a stem block; (2) rebuild the entire network based on ResNet units with a new structure, yielding an improved neural network, D-LinkNetPlus; (3) add a 1 × 1 convolution layer before DBlock to reduce the number of input feature maps, cutting parameters and improving computational efficiency, and add another 1 × 1 convolution layer after DBlock to recover the required number of output channels (sketched below). Accordingly, another improved neural network, B-D-LinkNetPlus, was built. Comparisons were performed between the networks, and verification was carried out on the Massachusetts Roads Dataset. The results show that the improved neural networks help reduce network size while delivering the precision needed for road extraction.
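
Improvement (3) is the classic bottleneck pattern: squeeze channels with a 1 × 1 convolution before the expensive dilated block, then expand them back afterwards. The sketch below illustrates the pattern; the DBlock body is a two-layer dilated-convolution stand-in and all channel counts are assumptions.

```python
# Hypothetical sketch of the 1x1 bottleneck around DBlock (not the authors' code).
import torch
import torch.nn as nn

class BottleneckedDBlock(nn.Module):
    def __init__(self, channels=512, squeezed=128):
        super().__init__()
        self.squeeze = nn.Conv2d(channels, squeezed, 1)  # 1x1: fewer inputs to DBlock
        self.dblock = nn.Sequential(                     # stand-in dilated convs
            nn.Conv2d(squeezed, squeezed, 3, padding=1, dilation=1), nn.ReLU(True),
            nn.Conv2d(squeezed, squeezed, 3, padding=2, dilation=2), nn.ReLU(True),
        )
        self.expand = nn.Conv2d(squeezed, channels, 1)   # 1x1: restore channel count

    def forward(self, x):
        return self.expand(self.dblock(self.squeeze(x)))

block = BottleneckedDBlock()
print(block(torch.randn(1, 512, 32, 32)).shape)  # torch.Size([1, 512, 32, 32])
```

Running the 3 × 3 convolutions on 128 channels instead of 512 cuts their parameter count by roughly 16×, at the small extra cost of the two 1 × 1 layers.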


2021 ◽  
Vol 13 (4) ◽  
pp. 747
Author(s):  
Yanghua Di ◽  
Zhiguo Jiang ◽  
Haopeng Zhang

Fine-grained visual categorization (FGVC) is an important and challenging problem due to large intra-class differences and small inter-class differences caused by deformation, illumination, angles, etc. Although major advances have been achieved on natural images in the past few years thanks to the release of popular datasets such as CUB-200-2011, Stanford Cars, and Aircraft, fine-grained ship classification in remote sensing images has rarely been studied because of the relative scarcity of publicly available datasets. In this paper, we investigate a large amount of remote sensing image data of sea ships and determine the 42 most common categories for fine-grained visual categorization. Based on our previous DSCR dataset, a dataset for ship classification in remote sensing images, we collect more remote sensing images containing warships and civilian ships of various scales from Google Earth and other popular remote sensing image datasets, including DOTA, HRSC2016, and NWPU VHR-10. We call our dataset FGSCR-42, meaning a dataset for Fine-Grained Ship Classification in Remote sensing images with 42 categories. The whole FGSCR-42 dataset contains 9320 images of the most common types of ships. We evaluate popular object classification algorithms and fine-grained visual categorization algorithms to build a benchmark. Our FGSCR-42 dataset is publicly available at our webpages.


2018 ◽  
Vol 10 (12) ◽  
pp. 1934 ◽  
Author(s):  
Bao-Di Liu ◽  
Wen-Yang Xie ◽  
Jie Meng ◽  
Ye Li ◽  
Yanjiang Wang

In recent years, the collaborative representation-based classification (CRC) method has achieved great success in visual recognition by directly utilizing training images as dictionary bases. However, it describes a test sample with all training samples to extract shared attributes and does not consider representing the test sample with the training samples of a specific class to extract class-specific attributes. For remote-sensing images, both the shared attributes and the class-specific attributes are important for classification. In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method improves the classification of remote-sensing images by embedding class-specific collaborative representation into conventional collaborative representation-based classification. Moreover, we extend the proposed method to arbitrary kernel spaces to explore the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. Extensive experiments on several benchmark remote-sensing image datasets were conducted; the results clearly demonstrate the superior performance of our algorithm over state-of-the-art approaches.
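
The conventional CRC core codes a test sample over all training samples with l2-regularized least squares and assigns the class whose samples best reconstruct it; the hybrid and kernel variants in the paper build on this step. A minimal NumPy sketch with an arbitrary regularization weight and toy data:

```python
# Minimal CRC sketch (conventional collaborative representation, not the
# paper's hybrid/kernel extension).
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """X: (d, n) training samples as columns; labels: (n,); y: (d,) test sample."""
    n = X.shape[1]
    # Collaborative coding: alpha = (X^T X + lam * I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Class-specific reconstruction using only class-c coefficients.
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 12))             # 12 training samples, 3 classes
labels = np.repeat([0, 1, 2], 4)
y = X[:, 5] + 0.01 * rng.standard_normal(20)  # noisy copy of a class-1 sample
print(crc_classify(X, labels, y))             # expected: 1
```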


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Cheng Zhang ◽  
Dan He

Urban data provide a wealth of information that can support people's life and work. In this work, we study object saliency detection in optical remote sensing images, which is conducive to the interpretation of urban scenes. Saliency detection selects the regions with important information in remote sensing images, closely imitating the human visual system, and plays a powerful role in other image processing tasks; it has achieved great success in change detection, object tracking, temperature reversal, and other tasks. Traditional methods suffer from poor robustness and high computational complexity. Therefore, this paper proposes a deep multiscale fusion method via low-rank sparse decomposition for object saliency detection in optical remote sensing images. First, we perform multiscale segmentation of the remote sensing images. Then, we calculate the saliency value and generate the proposal regions. The superpixel blocks of the remaining proposal regions of the segmentation map are input into a convolutional neural network; by extracting deep features, the saliency value is calculated and the proposal regions are updated. The feature transformation matrix is obtained with the gradient descent method, and high-level semantic prior knowledge is obtained using the convolutional neural network. The process is iterated to obtain the saliency map at each scale. The low-rank sparse decomposition of the transformed matrix is carried out by robust principal component analysis. Finally, the weighted cellular automata method is used to fuse the multiscale saliency maps with the saliency map calculated from the sparse noise obtained by the decomposition. Meanwhile, the object prior knowledge filters out most of the background information, reduces unnecessary deep feature extraction, and meaningfully improves the saliency detection rate. The experimental results show that the proposed method effectively improves the detection effect compared to other deep learning methods.
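
The low-rank sparse decomposition step can be illustrated with a basic robust principal component analysis loop: the feature matrix M is split into a low-rank part L (background structure) and a sparse part S (salient deviations). The sketch below uses a simple augmented Lagrangian iteration with common default parameters; it is one standard way to solve RPCA, not necessarily the paper's exact solver.

```python
# Illustrative robust PCA (low-rank + sparse) sketch, not the paper's solver.
import numpy as np

def rpca(M, max_iter=200, tol=1e-6):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))      # standard sparsity weight
    mu = 1.25 / np.linalg.norm(M, 2)    # penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank update.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Entrywise soft thresholding -> sparse update.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (M - L - S)           # dual update
        if np.linalg.norm(M - L - S) / np.linalg.norm(M) < tol:
            break
    return L, S

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank-5 background
M[10, 10] += 20.0                                                # sparse "salient" spike
L, S = rpca(M)
print(np.linalg.matrix_rank(L, tol=1e-3), abs(S[10, 10]) > 5.0)
```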


2021 ◽  
Vol 13 (23) ◽  
pp. 4743
Author(s):  
Wei Yuan ◽  
Wenbo Xu

The segmentation of remote sensing images by deep learning is the main approach to remote sensing image interpretation. However, segmentation models based on convolutional neural networks cannot capture global features very well. A transformer, whose self-attention mechanism can supply each pixel with a global feature, makes up for this deficiency. Therefore, a multi-scale adaptive segmentation network model (MSST-Net) based on a Swin Transformer is proposed in this paper. First, a Swin Transformer is used as the backbone to encode the input image. Then, the feature maps of different levels are decoded separately. Third, a convolution is used for fusion, so that the network can automatically learn the weight of the decoding results of each level. Finally, we adjust the channels with a 1 × 1 convolution to obtain the final prediction map. Compared with other segmentation network models on the WHU building data set, the evaluation metrics mIoU, F1-score, and accuracy are all improved. The proposed network model is a multi-scale adaptive model that pays more attention to global features for remote sensing segmentation.
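
A minimal sketch of the fusion and channel-adjustment steps, assuming the per-level decoder outputs have already been upsampled to a common resolution; channel counts and the fusion kernel size are illustrative assumptions.

```python
# Hypothetical sketch of the multi-scale fusion head (not the authors' code).
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, levels=4, feat=64, num_classes=2):
        super().__init__()
        # Convolution over the concatenated levels learns per-level weights.
        self.fuse = nn.Conv2d(levels * feat, feat, 3, padding=1)
        # Final 1x1 convolution adjusts channels to the prediction map.
        self.head = nn.Conv2d(feat, num_classes, 1)

    def forward(self, decoded):  # decoded: list of (B, feat, H, W), one per level
        return self.head(self.fuse(torch.cat(decoded, dim=1)))

fusion = MultiScaleFusion()
maps = [torch.randn(1, 64, 128, 128) for _ in range(4)]
print(fusion(maps).shape)  # torch.Size([1, 2, 128, 128])
```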


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1465 ◽  
Author(s):  
Lili Zhang ◽  
Jisen Wu ◽  
Yu Fan ◽  
Hongmin Gao ◽  
Yehong Shao

In this paper, we consider building extraction from high-spatial-resolution remote sensing images. At present, most building extraction methods are based on handcrafted features; however, the diversity and complexity of buildings mean that building extraction still faces great challenges, so methods based on deep learning have recently been proposed. In this paper, a building extraction framework based on a convolutional neural network and an edge detection algorithm is proposed, called Mask R-CNN Fusion Sobel. Because of the outstanding achievements of Mask R-CNN in image segmentation, this paper improves it and applies it to building extraction from remote sensing images. Our method consists of three parts. First, the convolutional neural network performs rough localization and pixel-level classification, and the problem of false and missed extractions is addressed by automatically discovering semantic features. Second, the Sobel edge detection algorithm segments building edges accurately, addressing the weak edge extraction and object-integrity problems of deep convolutional neural networks in semantic segmentation. Third, buildings are extracted by the fusion algorithm. We utilize the proposed framework to extract buildings in high-resolution remote sensing images from the Chinese satellite GF-2; the experiments show that the average IoU (intersection over union) of the proposed method was 88.7% and the average Kappa was 87.8%. Therefore, our method can be applied to the recognition and segmentation of complex buildings and is superior to the classical method in accuracy.
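
A sketch of the Sobel stage with OpenCV: gradient magnitudes give an edge map, which can then be combined with the network's coarse mask. The fusion rule here (keep the eroded interior, keep boundary pixels only where a strong Sobel edge supports them) is an illustrative choice, not necessarily the paper's exact fusion algorithm.

```python
# Illustrative Sobel edge refinement of a coarse building mask.
import cv2
import numpy as np

def sobel_edges(gray, threshold=80.0):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return (magnitude > threshold).astype(np.uint8)

def refine_mask(coarse_mask, gray):
    """Snap the boundary of a 0/1 network mask to strong image edges."""
    kernel = np.ones((3, 3), np.uint8)
    edges = sobel_edges(gray)
    boundary = cv2.morphologyEx(coarse_mask, cv2.MORPH_GRADIENT, kernel)
    interior = cv2.erode(coarse_mask, kernel)
    # Keep interior pixels; keep boundary pixels supported by Sobel edges.
    return np.clip(interior + (boundary & edges), 0, 1)

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), np.uint8); mask[16:48, 16:48] = 1
print(refine_mask(mask, gray).shape)  # (64, 64)
```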

