MFANet: A Multi-Level Feature Aggregation Network for Semantic Segmentation of Land Cover

2021 ◽  
Vol 13 (4) ◽  
pp. 731 ◽  
Author(s):  
Bingyu Chen ◽  
Min Xia ◽  
Junqing Huang

Detailed information regarding land utilization/cover is a valuable resource in various fields. In recent years, remote sensing images, especially aerial images, have grown in resolution and in temporal and spatial coverage, and because objects of the same category can exhibit different spectra, relying on spectral features alone is often insufficient to accurately segment the target objects. In convolutional neural networks, down-sampling operations are usually used to extract abstract semantic features, which leads to loss of detail and blurred edges. To solve these problems, this paper proposes a Multi-level Feature Aggregation Network (MFANet), which improves two aspects: deep feature extraction and up-sampling feature fusion. First, the proposed Channel Feature Compression module extracts deep features and filters redundant channel information from the backbone to optimize the learned context. Second, the proposed Multi-level Feature Aggregation Upsample module nests the idea that high-level features provide guidance for low-level features, which is of great significance for localization during the restoration of high-resolution remote sensing images. Finally, the proposed Channel Ladder Refinement module refines the restored high-resolution feature maps. Experimental results show that the proposed method achieves state-of-the-art performance of 86.45% mean IoU on the LandCover dataset.
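
None of the entries in this collection ship with code, so where a sketch helps, one is added below the abstract. Here, a minimal PyTorch sketch of a channel-compression step in the spirit of the Channel Feature Compression module described above; the module name, channel sizes, and the 1x1-convolution-plus-gating design are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelFeatureCompression(nn.Module):
    """Hypothetical sketch: compress backbone channels with a 1x1 convolution,
    then re-weight the surviving channels with a squeeze-and-excitation-style
    gate to suppress redundant channel information."""

    def __init__(self, in_channels: int, out_channels: int, reduction: int = 16):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global context per channel
            nn.Conv2d(out_channels, out_channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // reduction, out_channels, 1),
            nn.Sigmoid(),                                 # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.compress(x)
        return x * self.gate(x)                           # damp redundant channels

# e.g. compress a 2048-channel ResNet stage to 256 channels
feats = torch.randn(1, 2048, 16, 16)
print(ChannelFeatureCompression(2048, 256)(feats).shape)  # torch.Size([1, 256, 16, 16])
```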

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1267
Author(s):  
Sijun Dong ◽  
Zhengchao Chen

High-resolution remote sensing image segmentation is a mature application in many industrial-level image tasks, with both military and civil uses. Scene analysis of high-resolution remote sensing images needs to be automated as much as possible, which plays a significant role in environmental disaster monitoring, forestry, agriculture, urban planning, and road analysis. Target objects in remote sensing images vary greatly in scale, and small objects in particular are often poorly recognized; to address this, the study proposes a multi-level feature fusion network (MFNet) that integrates the multi-level features of the backbone to obtain different types of image information. Experiments demonstrate that the proposed network achieves good segmentation results on the Vaihingen and Potsdam datasets and improves remote sensing image segmentation to a certain extent.
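
A hedged sketch of the generic multi-level fusion idea MFNet builds on (the authors' exact architecture is not given in the abstract): project each backbone stage to a common width, upsample everything to the finest resolution, and fuse by concatenation so one map carries both detail and semantics. Channel counts and the fusion convolution are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusion(nn.Module):
    """Illustrative sketch (not the authors' code) of multi-level feature fusion."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), width: int = 128):
        super().__init__()
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, width, kernel_size=1) for c in in_channels]
        )
        self.fuse = nn.Conv2d(width * len(in_channels), width, kernel_size=3, padding=1)

    def forward(self, feats):
        target = feats[0].shape[-2:]                      # finest spatial size
        ups = [
            F.interpolate(l(f), size=target, mode="bilinear", align_corners=False)
            for l, f in zip(self.laterals, feats)
        ]
        return self.fuse(torch.cat(ups, dim=1))

# four ResNet-style stages at strides 4/8/16/32 of a 256x256 input
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i)
         for i, c in enumerate((256, 512, 1024, 2048))]
print(MultiLevelFusion()(feats).shape)                    # torch.Size([1, 128, 64, 64])
```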


2021 ◽  
Vol 10 (3) ◽  
pp. 125
Author(s):  
Junqing Huang ◽  
Liguo Weng ◽  
Bingyu Chen ◽  
Min Xia

Analyzing land cover using remote sensing images has broad prospects, and precise segmentation of land cover is key to applying this technology. Convolutional Neural Networks (CNNs) are now widely used in many image semantic segmentation tasks. However, existing CNN models often exhibit poor generalization ability and low segmentation accuracy when dealing with land cover segmentation. To solve this problem, this paper proposes the Dual Function Feature Aggregation Network (DFFAN), which combines image context information, gathers spatial information, and extracts and fuses features. DFFAN uses a residual neural network as its backbone, obtaining feature information of remote sensing images at different dimensions through multiple downsampling operations. This work designs an Affinity Matrix Module (AMM) to obtain the context of each feature map and proposes a Boundary Feature Fusion Module (BFF) to fuse the context and spatial information of an image in order to determine the location distribution of each category. Compared with existing methods, the proposed method is significantly more accurate: its mean intersection over union (MIoU) on the LandCover dataset reaches 84.81%.
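
The abstract does not detail the AMM internals; the following is a sketch of the standard affinity-matrix (non-local/self-attention) pattern it appears to build on, with all layer sizes assumed.

```python
import torch
import torch.nn as nn

class AffinityContext(nn.Module):
    """Hedged sketch: compute pairwise similarity between all spatial positions
    and use it to aggregate context for each position, as in non-local blocks."""

    def __init__(self, channels: int, reduced: int = 64):
        super().__init__()
        self.query = nn.Conv2d(channels, reduced, 1)
        self.key = nn.Conv2d(channels, reduced, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, reduced)
        k = self.key(x).flatten(2)                        # (B, reduced, HW)
        affinity = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) affinity matrix
        v = self.value(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        context = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + context                                # residual: features + context

x = torch.randn(1, 256, 32, 32)
print(AffinityContext(256)(x).shape)                      # torch.Size([1, 256, 32, 32])
```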


2020 ◽  
Vol 12 (20) ◽  
pp. 3316 ◽  
Author(s):  
Yulian Zhang ◽  
Lihong Guo ◽  
Zengfa Wang ◽  
Yang Yu ◽  
Xinwei Liu ◽  
...  

Intelligent detection and recognition of ships from high-resolution remote sensing images is an extraordinarily useful task in civil and military reconnaissance. Detecting ships with high precision is difficult because of the many disturbances present at sea, such as clouds, mist, islands, coastlines, and ripples. To solve this problem, we propose a novel ship detection network based on multi-layer convolutional feature fusion (CFF-SDN). Our ship detection network consists of three parts. First, a convolutional feature extraction network extracts ship features at different levels; residual connections are introduced so that the model can be made very deep while remaining easy to train and converge. Second, the network fuses fine-grained features from shallow layers with semantic features from deep layers, which helps detect ship targets of different sizes and improves the localization and detection accuracy of small objects. Finally, multiple fused feature maps are used for classification and regression, adapting to ships of multiple scales. Because the CFF-SDN model uses a pruning strategy, detection speed is greatly improved. For the experiments, we create a dataset for ship detection in remote sensing images (DSDR), including actual satellite images from Google Earth and aerial images from an electro-optical pod. The DSDR dataset contains not only visible-light images but also infrared images. To improve robustness to various sea scenes, images at different scales, perspectives, and illuminations are obtained through data augmentation and affine transformations. To reduce the influence of atmospheric absorption and scattering, a dark channel prior is adopted for atmospheric correction of the sea scenes. Moreover, soft non-maximum suppression (NMS) is introduced to increase the recall rate for densely arranged ships. The proposed model shows better detection performance than existing models in terms of precision and recall; the experimental results show that it achieves superior ship detection in optical remote sensing images.
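
Soft-NMS is a published technique (Bodla et al., 2017), so its core can be sketched faithfully; the box layout (x1, y1, x2, y2) and parameter defaults below are still our choices. Instead of discarding boxes that overlap a higher-scoring box, their scores decay with overlap, which raises recall for densely packed ships.

```python
import torch

def soft_nms(boxes: torch.Tensor, scores: torch.Tensor,
             sigma: float = 0.5, score_thresh: float = 0.001):
    """Gaussian soft-NMS: keep the top box, decay the scores of its overlaps."""
    boxes, scores = boxes.clone().float(), scores.clone().float()
    keep = []
    while scores.numel() > 0:
        top = torch.argmax(scores)
        keep.append(boxes[top].tolist() + [scores[top].item()])
        # IoU of the top box against all current boxes
        x1 = torch.maximum(boxes[:, 0], boxes[top, 0])
        y1 = torch.maximum(boxes[:, 1], boxes[top, 1])
        x2 = torch.minimum(boxes[:, 2], boxes[top, 2])
        y2 = torch.minimum(boxes[:, 3], boxes[top, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (area + area[top] - inter)
        scores = scores * torch.exp(-iou ** 2 / sigma)    # Gaussian score decay
        mask = torch.ones_like(scores, dtype=torch.bool)
        mask[top] = False                                 # drop the selected box
        mask &= scores > score_thresh                     # drop boxes that faded away
        boxes, scores = boxes[mask], scores[mask]
    return keep                                           # [x1, y1, x2, y2, score] rows

boxes = torch.tensor([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]])
print(soft_nms(boxes, torch.tensor([0.9, 0.8, 0.7])))
```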


Author(s):  
N. Mo ◽  
L. Yan

Abstract. Vehicles in high-resolution remote sensing images usually lack detailed information and are difficult to train detectors on because of their small size. In addition, vehicles comprise multiple fine-grained categories that differ only slightly and are randomly located and oriented, so these fine categories are difficult to locate and identify. Considering these problems, this paper proposes an oriented vehicle detection approach for high-resolution remote sensing images. First, we propose an oversampling and stitching method that augments the training dataset by increasing the frequency of objects with fewer training samples, balancing the number of objects across fine-grained vehicle categories. Then, considering the effect of pooling operations on the representation of small objects, we propose to increase the resolution of the feature maps so that the detailed information hidden in them is enriched and the fine-grained vehicle categories can be better distinguished. Finally, we design a joint training loss function for horizontal and oriented bounding boxes with a center loss, to decrease the impact of small between-class diversity on vehicle detection. Experimental verification is performed on the VEDAI dataset, which consists of 9 fine-grained vehicle categories. The results show that the proposed framework outperforms most competitive approaches, with mean average precisions of 60.7% and 60.4% for detecting horizontal and oriented bounding boxes, respectively.
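
Center loss is likewise a published component (Wen et al., 2016); below is a minimal sketch of how it could enter a joint objective. The feature dimension, loss weight, and classification head are chosen for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Each class keeps a learnable feature centre; features are pulled toward
    their class centre so visually similar fine-grained classes stay separable."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # squared distance between each feature and its own class centre
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# joint objective: classification loss plus a weighted centre-loss term
ce = nn.CrossEntropyLoss()
center = CenterLoss(num_classes=9, feat_dim=128)          # 9 VEDAI vehicle categories
feats, logits = torch.randn(4, 128), torch.randn(4, 9)
labels = torch.tensor([0, 3, 3, 7])
loss = ce(logits, labels) + 0.01 * center(feats, labels)  # 0.01 weight is illustrative
print(loss.item())
```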


2020 ◽  
Vol 12 (21) ◽  
pp. 3501
Author(s):  
Qingsong Xu ◽  
Xin Yuan ◽  
Chaojun Ouyang ◽  
Yue Zeng

Unlike conventional natural (RGB) images, the inherent large scale and complex structures of remote sensing images pose major challenges, such as diverse spatial object distributions and spectral information extraction, when existing models are applied directly to image classification. In this study, we develop an attention-based pyramid network for segmentation and classification of remote sensing datasets. Attention mechanisms are used to develop the following modules: (i) a novel and robust attention-based multi-scale fusion method that effectively fuses useful spatial and spectral information at the same and different scales; (ii) a region pyramid attention mechanism using region-based attention to address the diversity of target geometric sizes in large-scale remote sensing images; and (iii) cross-scale attention in our adaptive atrous spatial pyramid pooling network to adapt to varied content in a feature-embedded space. Different feature fusion pyramid frameworks are established by combining these attention-based modules. First, a novel segmentation framework, the heavy-weight spatial feature fusion pyramid network (FFPNet), is proposed to address the spatial problem of high-resolution remote sensing images. Second, an end-to-end spatial-spectral FFPNet is presented for classifying hyperspectral images. Experiments on the ISPRS Vaihingen and ISPRS Potsdam high-resolution datasets demonstrate the competitive segmentation accuracy of the proposed heavy-weight spatial FFPNet. Furthermore, experiments on the Indian Pines and University of Pavia hyperspectral datasets indicate that the proposed spatial-spectral FFPNet outperforms current state-of-the-art methods in hyperspectral image classification.
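
A hedged sketch of pixel-wise attention fusion between two scales, the general pattern behind module (i); FFPNet's actual modules are more elaborate and their details are not in the abstract. The attention head design is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Illustrative sketch: fuse a fine map and a coarse map by letting a
    learned pixel-wise attention map decide, per location, which scale to trust."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),                                 # per-pixel weight in (0, 1)
        )

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                               align_corners=False)
        a = self.attn(torch.cat([fine, coarse], dim=1))
        return a * fine + (1 - a) * coarse                # attention-weighted blend

fine, coarse = torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32)
print(AttentionFusion(64)(fine, coarse).shape)            # torch.Size([1, 64, 64, 64])
```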


2020 ◽  
Vol 12 (5) ◽  
pp. 872 ◽  
Author(s):  
Ronghua Shang ◽  
Jiyu Zhang ◽  
Licheng Jiao ◽  
Yangyang Li ◽  
Naresh Marturi ◽  
...  

Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of complicated backgrounds, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in target sizes. Handling this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation in remote sensing images. It is an encoding and decoding structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds a channel attention mechanism to fuse semantic features: the high- and low-level semantic information are concatenated to generate global features via global average pooling, and these global features are passed through a fully connected layer to obtain adaptive per-channel weights. To accomplish an efficient fusion, these tuned weights are applied to the fused features. The performance of the proposed method has been evaluated against six state-of-the-art networks: fully convolutional networks (FCN), U-Net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments on the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other networks, with overall accuracies of 89.4% and 88.2% and average F1 scores of 90.4% and 86.7%, respectively.
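
The AFM description above is detailed enough to sketch almost directly: concatenate high- and low-level features, squeeze with global average pooling, turn the pooled vector into per-channel weights with a fully connected layer, and re-weight the fused features. Exact layer sizes and the sigmoid gating are still assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Sketch following the AFM description in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(2 * channels, 2 * channels)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = torch.cat([low, high], dim=1)             # (B, 2C, H, W)
        g = fused.mean(dim=(2, 3))                        # global average pooling -> (B, 2C)
        w = torch.sigmoid(self.fc(g)).unsqueeze(-1).unsqueeze(-1)
        return fused * w                                  # adaptive channel re-weighting

low, high = torch.randn(1, 128, 64, 64), torch.randn(1, 128, 16, 16)
print(AdaptiveFusion(128)(low, high).shape)               # torch.Size([1, 256, 64, 64])
```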


2020 ◽  
Vol 12 (6) ◽  
pp. 989 ◽  
Author(s):  
Hao Su ◽  
Shunjun Wei ◽  
Shan Liu ◽  
Jiadian Liang ◽  
Chen Wang ◽  
...  

Instance segmentation in high-resolution (HR) remote sensing imagery is one of the most challenging tasks, more difficult than object detection or semantic segmentation: it must predict class labels and pixel-wise instance masks to locate instances in an image. However, few methods are currently suitable for instance segmentation in HR remote sensing images, and the complex backgrounds of such images make the task harder still. In this article, a novel instance segmentation approach for HR remote sensing imagery based on Cascade Mask R-CNN is proposed, called the high-quality instance segmentation network (HQ-ISNet). HQ-ISNet exploits an HR feature pyramid network (HRFPN) to fully utilize multi-level feature maps and maintain HR feature maps for instance segmentation of remote sensing images. Next, to refine the mask information flow between mask branches, instance segmentation network version 2 (ISNetV2) is proposed to further improve mask prediction accuracy. We then construct a new, more challenging dataset, based on the synthetic aperture radar ship detection dataset (SSDD) and the Northwestern Polytechnical University very-high-resolution 10-class geospatial object detection dataset (NWPU VHR-10), for instance segmentation in remote sensing images; it can serve as a benchmark for evaluating instance segmentation algorithms on HR remote sensing images. Finally, extensive experimental analyses and comparisons on SSDD and NWPU VHR-10 show that (1) HRFPN makes the predicted instance masks more accurate, effectively enhancing instance segmentation performance on HR remote sensing imagery; (2) ISNetV2 is effective and further improves mask prediction accuracy; and (3) the proposed HQ-ISNet framework is more accurate for instance segmentation in remote sensing imagery than existing algorithms.
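
HQ-ISNet itself is not in any standard library, but the Mask R-CNN + FPN family it refines is. As a point of reference only (not the paper's method), a torchvision baseline, assuming torchvision >= 0.13:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Off-the-shelf Mask R-CNN + FPN baseline; HQ-ISNet's HRFPN and ISNetV2 are
# the paper's refinements on top of this family of models.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=11)  # 10 NWPU classes + background
model.eval()

image = torch.rand(3, 512, 512)                           # one HR remote sensing tile
with torch.no_grad():
    out = model([image])[0]                               # dict with boxes, labels, scores, masks
print(out["boxes"].shape, out["masks"].shape, out["labels"].shape)
```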


2021 ◽  
Vol 13 (11) ◽  
pp. 2052
Author(s):  
Dongchuan Yan ◽  
Guoqing Li ◽  
Xiangqiang Li ◽  
Hao Zhang ◽  
Hua Lei ◽  
...  

Dam failure at tailings ponds can cause serious casualties and environmental pollution, so timely and accurate monitoring is crucial for managing tailings ponds and preventing damage from accidents. Remote sensing technology enables the regular extraction and monitoring of tailings pond information. However, traditional remote sensing techniques are inefficient and poorly automated, which hinders the large-scale, high-frequency, and high-precision extraction of tailings pond information, and research into automatic, intelligent extraction of tailings pond information from high-resolution remote sensing images remains rare. End-to-end deep learning models offer a solution to this problem. This study proposes an intelligent, high-precision method for extracting tailings pond information from high-resolution images that improves on a deep learning object detection model, the faster region-based convolutional neural network (Faster R-CNN). A comparison study is conducted and the model input size with the highest precision is selected. A feature pyramid network (FPN) is adopted to obtain multiscale feature maps with rich context information, an attention mechanism is used to improve the FPN, and the contribution degrees of feature channels are recalibrated. Model tests on Google Earth high-resolution remote sensing images show that the average precision (AP) and recall of tailings pond detection increase over Faster R-CNN by 5.6% and 10.9%, reaching 85.7% and 62.9%, respectively. Given the current rapid growth of high-resolution remote sensing imagery, this method will be important for large-scale, high-precision, and intelligent monitoring of tailings ponds and will greatly improve decision-making efficiency in tailings pond management.
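
A sketch of squeeze-and-excitation-style channel recalibration applied to FPN levels, the generic pattern the abstract points at; the paper's actual attention design is not specified there, and the layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class SERecalibration(nn.Module):
    """Hedged sketch: squeeze-and-excitation weights rescale the contribution
    degree of each feature channel."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))                   # per-channel contribution degree
        return x * w.unsqueeze(-1).unsqueeze(-1)

# recalibrate every level of an FPN feature pyramid
recal = SERecalibration(256)
pyramid = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8)]
pyramid = [recal(p) for p in pyramid]
print([p.shape[-1] for p in pyramid])                     # [64, 32, 16, 8]
```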


2021 ◽  
Vol 13 (21) ◽  
pp. 4237
Author(s):  
Xiaoping Zhang ◽  
Bo Cheng ◽  
Jinfen Chen ◽  
Chenbin Liang

Agricultural greenhouses (AGs) are an important component of modern facility agriculture, and accurately mapping and dynamically monitoring their distribution are necessary for agricultural scientific management and planning. Semantic segmentation can be adopted for AG extraction from remote sensing images. However, the feature maps obtained by traditional deep convolutional neural network (DCNN)-based segmentation algorithms blur spatial details, and insufficient attention is usually paid to contextual representation. Meanwhile, maintaining the original morphological characteristics, especially the boundaries, remains a challenge for precise identification of AGs. To alleviate these problems, this paper proposes a novel network called the high-resolution boundary refined network (HBRNet). In this method, we design a new backbone with multiple paths based on HRNetV2, aiming to preserve high spatial resolution and improve feature extraction capability, in which a Pyramid Cross Channel Attention (PCCA) module is embedded into the residual blocks to strengthen the interaction of multiscale information. Moreover, a Spatial Enhancement (SE) module is employed to integrate contextual information at different scales. In addition, we introduce a Spatial Gradient Variation (SGV) unit in the Boundary Refined (BR) module to couple the segmentation task with a boundary learning task, so that the two share latent high-level semantics and interact with each other; combined with a joint loss, this refines the boundary. In our study, GaoFen-2 remote sensing images of Shouguang City, Shandong Province, China are selected to build the AG dataset. The experimental results show that HBRNet significantly improves segmentation performance, reaching an IoU score of 94.89%, implying that this approach has advantages and potential for precise identification of AGs.
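
A hedged sketch of a joint segmentation-plus-boundary objective in the spirit of the BR module; the paper's SGV unit and loss weights are not specified in the abstract, so the morphological-gradient boundary extraction and MSE penalty below are stand-ins.

```python
import torch
import torch.nn.functional as F

def boundary_map(mask: torch.Tensor, width: int = 3) -> torch.Tensor:
    """Extract a soft boundary band from a mask via morphological gradient
    (dilation minus erosion, both implemented with max-pooling)."""
    pad = width // 2
    dilated = F.max_pool2d(mask, width, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, width, stride=1, padding=pad)
    return dilated - eroded                               # nonzero only near edges

def joint_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 1.0):
    """Body segmentation loss plus an extra penalty on the boundary band,
    so greenhouse edges stay sharp. lam is an illustrative weight."""
    seg = F.binary_cross_entropy(pred, target)
    b_pred, b_tgt = boundary_map(pred), boundary_map(target)
    return seg + lam * F.mse_loss(b_pred, b_tgt)

pred = torch.sigmoid(torch.randn(1, 1, 64, 64))           # predicted probabilities
target = (torch.rand(1, 1, 64, 64) > 0.5).float()         # binary AG mask
print(joint_loss(pred, target).item())
```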

