Regions of Interest Detection in Panchromatic Remote Sensing Images Based on Multiscale Feature Fusion

Author(s):  
Libao Zhang ◽  
Kaina Yang ◽  
Hao Li

2005 ◽
Author(s):  
Zhenwei Geng ◽  
Wenxian Yu ◽  
Yongmei Jiang ◽  
Yi Su

2021 ◽  
Vol 10 (3) ◽  
pp. 125
Author(s):  
Junqing Huang ◽  
Liguo Weng ◽  
Bingyu Chen ◽  
Min Xia

Analyzing land cover with remote sensing images has broad prospects, and precise segmentation of land cover is the key to applying this technology. Convolutional neural networks (CNNs) are now widely used in many image semantic segmentation tasks; however, existing CNN models often exhibit poor generalization ability and low segmentation accuracy when dealing with land cover segmentation. To solve this problem, this paper proposes the Dual Function Feature Aggregation Network (DFFAN), which combines image context information, gathers image spatial information, and extracts and fuses features. DFFAN uses a residual neural network as its backbone and obtains feature information of remote sensing images at different scales through multiple downsampling stages. This work designs an Affinity Matrix Module (AMM) to obtain the context of each feature map and proposes a Boundary Feature Fusion Module (BFF) to fuse the context information and spatial information of an image and determine the spatial distribution of each category. Compared with existing methods, the proposed method is significantly more accurate; its mean intersection over union (MIoU) on the LandCover dataset reaches 84.81%.
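For reference, the MIoU figure reported above is the per-class intersection over union averaged across classes. Below is a minimal sketch of this metric, assuming NumPy and integer label maps; the function name and the confusion-matrix approach are illustrative and not taken from the DFFAN code.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Compute mean intersection over union from integer label maps.

    pred, target: arrays of the same shape holding class indices.
    num_classes: total number of land-cover classes.
    """
    # Build a num_classes x num_classes confusion matrix (rows: target, cols: pred).
    mask = (target >= 0) & (target < num_classes)
    conf = np.bincount(
        num_classes * target[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

    # Per-class IoU: TP / (TP + FP + FN). Classes absent from both maps count as 0 here.
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(denom, 1)
    return iou.mean()
```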


2020 ◽  
Vol 9 (7) ◽  
pp. 462
Author(s):  
Josephina Paul ◽  
B. Uma Shankar ◽  
Balaram Bhattacharyya

Change detection (CD) using remote sensing images has been a challenging problem over the years, and it is even more difficult in the unsupervised domain. A novel automatic change detection technique in the unsupervised framework is proposed to address the real challenges involved in remote sensing change detection. As the accuracy of the change map is highly dependent on the quality of the difference image (DI), a set of normalized difference images and a complementary set of normalized ratio images are fused in the nonsubsampled contourlet transform (NSCT) domain to generate high-quality difference images. The NSCT is chosen because its multidirectionality and shift-invariance efficiently suppress noise, characteristics that are well suited to change detection. The low-frequency sub-bands are fused by averaging to combine the complementary information in the two DIs, and the higher-frequency sub-bands are merged by a minimum-energy rule to preserve the edges and salient features in the image. By employing a novel Particle Swarm Optimization algorithm with Leader Intelligence (LIPSO), change maps are generated from the fused sub-bands in two different ways: (i) from a single spectral band, and (ii) from a combination of spectral bands. In LIPSO, the concept of leader and followers is modified with intelligent particles that perform random Lévy flights for better exploration, in order to reach the global optimum. The proposed method achieved overall accuracies of 99.64%, 98.49%, and 97.66% on the three datasets considered. The results have been compared with relevant algorithms: the quantitative metrics demonstrate the superiority of the proposed technique over the other methods and are found to be statistically significant with McNemar's test, and the visual quality of the results also corroborates this superiority.
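To make the fusion rules concrete, here is a minimal NumPy sketch of the averaging rule for low-frequency sub-bands and a windowed minimum-energy rule for higher-frequency sub-bands, assuming the NSCT decomposition of the two difference images is produced by an external library (the NSCT itself is not implemented here); the window size and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_a, low_b):
    """Low-frequency sub-bands: average to combine complementary information."""
    return 0.5 * (low_a + low_b)

def fuse_high(high_a, high_b, win=3):
    """Higher-frequency sub-bands: pick coefficients by a minimum-energy rule.

    Local energy is estimated as the windowed mean of squared coefficients;
    at each pixel the coefficient with the smaller local energy is kept.
    """
    energy_a = uniform_filter(high_a ** 2, size=win)
    energy_b = uniform_filter(high_b ** 2, size=win)
    return np.where(energy_a <= energy_b, high_a, high_b)

# Usage sketch: sub-bands come from an NSCT decomposition of the two DIs.
# fused_low = fuse_low(low_di, low_ri)
# fused_highs = [fuse_high(a, b) for a, b in zip(highs_di, highs_ri)]
```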


2020 ◽  
Vol 12 (9) ◽  
pp. 1366 ◽  
Author(s):  
Jun Li ◽  
Daoyu Lin ◽  
Yang Wang ◽  
Guangluan Xu ◽  
Yunyan Zhang ◽  
...  

In recent years, convolutional neural networks (CNNs) have shown great success in the scene classification of computer vision images. Although these CNNs can achieve excellent classification accuracy, the discriminative ability of the feature representations they extract is still limited when distinguishing more complex remote sensing images. Therefore, this paper proposes a unified feature fusion framework based on an attention mechanism, called Deep Discriminative Representation Learning with Attention Map (DDRL-AM). First, the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm is applied to generate attention maps associated with the predicted results, so that the CNN focuses on the most salient parts of the image. Second, a spatial feature transformer (SFT) is designed to extract discriminative features from the attention maps. An innovative two-channel CNN architecture is then proposed that fuses the features extracted from the attention maps with those from the RGB (red, green, blue) stream. A new objective function that combines center loss and cross-entropy loss is optimized to reduce within-class variance and increase inter-class dispersion. To show its effectiveness in classifying remote sensing images, the proposed DDRL-AM method is evaluated on four public benchmark datasets. The experimental results demonstrate the competitive scene classification performance of the DDRL-AM approach. Moreover, visualization of the features extracted by the proposed method confirms that their discriminative ability has been increased.
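As an illustration of the joint objective described above, below is a minimal PyTorch sketch that adds a center-loss term to the cross-entropy loss; the class names, the weighting factor lambda_c, and the plain squared-distance form of the center loss are assumptions for illustration rather than the exact DDRL-AM formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Center loss: pull each feature toward a learnable center of its class."""

    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Mean squared distance between each feature and its class center.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

def joint_loss(logits, features, labels, center_loss, lambda_c=0.01):
    """Cross-entropy on logits plus a weighted center-loss term on features."""
    return F.cross_entropy(logits, labels) + lambda_c * center_loss(features, labels)
```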


2019 ◽  
Vol 11 (13) ◽  
pp. 1617 ◽  
Author(s):  
Jicheng Wang ◽  
Li Shen ◽  
Wenfan Qiao ◽  
Yanshuai Dai ◽  
Zhilin Li

The classification of very-high-resolution (VHR) remote sensing images is essential in many applications. However, high intraclass and low interclass variations in these kinds of images pose serious challenges. Fully convolutional network (FCN) models, which benefit from a powerful feature learning ability, have shown impressive performance and great potential. Nevertheless, only coarse-resolution classification results can be obtained from the original FCN method. Deep feature fusion is often employed to improve the resolution of the outputs, but existing fusion strategies neither properly utilize the low-level features nor consider the importance of features at different scales. This paper proposes a novel, end-to-end, fully convolutional network that integrates a multiconnection ResNet model and a class-specific attention model into a unified framework to overcome these problems. The former fuses multilevel deep features without introducing redundant information from the low-level features. The latter learns the contribution of each geo-object's features at each scale. Extensive experiments on two open datasets indicate that the proposed method achieves class-specific, scale-adaptive classification results and outperforms other state-of-the-art methods. The results were submitted to the International Society for Photogrammetry and Remote Sensing (ISPRS) online contest for comparison with more than 50 other methods. They indicate that the proposed method (ID: SWJ_2) ranks first in overall accuracy, even though the additional digital surface model (DSM) data offered by ISPRS were not used and no postprocessing was applied.
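As a rough illustration of how class-specific, scale-adaptive weighting can be realized, the PyTorch sketch below learns, for every pixel and class, a softmax weight over per-scale score maps and returns their weighted sum; the module name, the 1x1-convolution attention head, and the assumption that the score maps are already upsampled to a common resolution are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassSpecificScaleAttention(nn.Module):
    """Fuse per-scale class score maps with learned, class-specific attention.

    Expects a list of score maps, one per scale, each of shape
    (B, num_classes, H, W) and already upsampled to a common resolution.
    For every pixel and class, a softmax over scales produces the weights.
    """

    def __init__(self, num_classes, num_scales):
        super().__init__()
        self.num_classes = num_classes
        self.num_scales = num_scales
        # 1x1 convolution predicts one attention logit per (scale, class) pair.
        self.att = nn.Conv2d(num_classes * num_scales,
                             num_classes * num_scales, kernel_size=1)

    def forward(self, score_maps):
        b, _, h, w = score_maps[0].shape
        stacked = torch.cat(score_maps, dim=1)               # (B, S*C, H, W)
        logits = self.att(stacked).view(b, self.num_scales,
                                        self.num_classes, h, w)
        weights = F.softmax(logits, dim=1)                   # softmax over scales
        scores = torch.stack(score_maps, dim=1)              # (B, S, C, H, W)
        return (weights * scores).sum(dim=1)                 # fused (B, C, H, W)

# Usage sketch: fuse three per-scale score maps for 6 geo-object classes.
# att = ClassSpecificScaleAttention(num_classes=6, num_scales=3)
# fused = att([s0, s1, s2])  # each s_i: (B, 6, H, W)
```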


2020 ◽  
Vol 12 (20) ◽  
pp. 3316 ◽  
Author(s):  
Yulian Zhang ◽  
Lihong Guo ◽  
Zengfa Wang ◽  
Yang Yu ◽  
Xinwei Liu ◽  
...  

Intelligent detection and recognition of ships from high-resolution remote sensing images is an extraordinarily useful task in civil and military reconnaissance. It is difficult to detect ships with high precision because of the many disturbances present at sea, such as clouds, mist, islands, coastlines, and ripples. To solve this problem, we propose a novel ship detection network based on multi-layer convolutional feature fusion (CFF-SDN). Our ship detection network consists of three parts. First, a convolutional feature extraction network is used to extract ship features at different levels; residual connections are introduced so that the model can be made very deep while remaining easy to train and converge. Second, the proposed network fuses fine-grained features from shallow layers with semantic features from deep layers, which is beneficial for detecting ship targets of different sizes and helps improve the localization and detection accuracy of small objects. Finally, multiple fused feature maps are used for classification and regression, which can adapt to ships of multiple scales. Since the CFF-SDN model uses a pruning strategy, the detection speed is greatly improved. In the experiments, we create a dataset for ship detection in remote sensing images (DSDR), including actual satellite images from Google Earth and aerial images from an electro-optical pod. The DSDR dataset contains not only visible-light images but also infrared images. To improve robustness to various sea scenes, images at different scales, perspectives, and illuminations are obtained through data augmentation and affine transformations. To reduce the influence of atmospheric absorption and scattering, a dark channel prior is adopted to perform atmospheric correction on the sea scenes. Moreover, soft non-maximum suppression (NMS) is introduced to increase the recall rate for densely arranged ships. In addition, better detection performance is observed than with existing models in terms of precision and recall. The experimental results show that the proposed model achieves superior ship detection performance on optical remote sensing images.
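Soft-NMS replaces the hard suppression step of standard NMS with a score decay for overlapping boxes, which helps retain densely arranged ships. Below is a minimal NumPy sketch of the linear decay variant; the thresholds and the [x1, y1, x2, y2] box convention are illustrative assumptions, not the CFF-SDN implementation.

```python
import numpy as np

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear soft-NMS: decay the scores of overlapping boxes instead of removing them.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of the boxes kept after score decay and thresholding.
    """
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    idx = np.arange(len(scores))
    keep = []
    while len(idx) > 0:
        best = idx[np.argmax(scores[idx])]
        keep.append(int(best))
        rest = idx[idx != best]
        # IoU between the best box and the remaining boxes.
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_b + area_r - inter)
        # Linearly decay scores of boxes whose overlap exceeds the threshold.
        scores[rest] = np.where(iou > iou_thresh, scores[rest] * (1 - iou), scores[rest])
        idx = rest[scores[rest] > score_thresh]
    return keep
```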


2020 ◽  
Vol 11 (6) ◽  
pp. 563-571 ◽  
Author(s):  
Wei Xiong ◽  
Zhenyu Xiong ◽  
Yaqi Cui ◽  
Yafei Lv
