Remotely sensed images
Recently Published Documents

TOTAL DOCUMENTS: 590 (five years: 85)
H-INDEX: 38 (five years: 6)

2022, Vol. 14 (1), pp. 215
Author(s): Xuerui Niu, Qiaolin Zeng, Xiaobo Luo, Liangfu Chen

The semantic segmentation of fine-resolution remotely sensed images is a pressing issue in satellite image processing. Solving this problem helps overcome obstacles in urban planning, land cover classification, and environmental protection, paving the way for scene-level landscape pattern analysis and decision making. Encoder-decoder structures based on attention mechanisms have frequently been used for fine-resolution image segmentation. In this paper, we incorporate a coordinate attention (CA) mechanism, adopt an asymmetric convolution block (ACB), and design a refinement fusion block (RFB), forming a novel convolutional neural network (CNN) named the fusion coordinate and asymmetry-based U-Net (FCAU-Net), which fully captures long-term dependencies and fine-grained details in fine-resolution remotely sensed imagery. This approach has the following advantages: (1) the CA mechanism embeds position information into channel attention to enhance the feature representations produced by the network while effectively capturing position information and channel relationships; (2) the ACB enhances the representation ability of the standard convolution layer, capturing and refining the feature information in each encoder layer; and (3) the RFB effectively integrates low-level spatial information and high-level abstract features to suppress background noise during feature extraction, reduces the fitting residuals of the fused features, and improves the network's ability to capture information flows. Extensive experiments conducted on two public datasets (ZY-3 and DeepGlobe) demonstrate the effectiveness of the FCAU-Net, which outperforms U-Net, Attention U-Net, the pyramid scene parsing network (PSPNet), DeepLab v3+, the multistage attention residual U-Net (MAResU-Net), MACU-Net, and the Transformer U-Net (TransUNet). Specifically, the FCAU-Net achieves 97.97% (95.05%) pixel accuracy (PA), 98.53% (91.27%) mean PA (mPA), 95.17% (85.54%) mean intersection over union (mIoU), and 96.07% (90.74%) frequency-weighted IoU (FWIoU) on the ZY-3 (DeepGlobe) dataset.
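The CA mechanism described in this abstract embeds position information into channel attention by pooling features along each spatial direction. Below is a minimal PyTorch sketch of such a coordinate attention block; the reduction ratio, layer composition, and names are illustrative assumptions, not the authors' FCAU-Net code.

```python
# Minimal sketch of a coordinate attention (CA) block: position-aware channel
# attention built from direction-wise pooling. Reduction ratio and layer
# layout are assumptions for illustration, not the FCAU-Net implementation.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        # Shared transform applied to the concatenated H- and W-direction descriptors.
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        # Direction-specific attention maps.
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Pool along each spatial direction to retain positional information.
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                          # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))      # (n, c, 1, w)
        return x * a_h * a_w  # reweight features with the two attention maps


# Example: attend over a batch of 64-channel encoder feature maps.
feats = torch.randn(2, 64, 128, 128)
out = CoordinateAttention(64)(feats)  # same shape as the input
```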


2021, Vol. 13 (24), pp. 5111
Author(s): Zhen Shu, Xiangyun Hu, Hengming Dai

Accurate building extraction from remotely sensed images is essential for topographic mapping, cadastral surveying and many other applications. Fully automatic segmentation remains a great challenge because of poor generalization ability and inaccurate segmentation results. In this work, we focus on robust click-based interactive building extraction in remote sensing imagery. We argue that stability is vital to an interactive segmentation system, and we observe that the distance of a newly added click to the boundary of the previous segmentation mask conveys how far the interactive segmentation process has progressed. To promote robustness, we combine this information with the previous segmentation mask and the positive and negative clicks to form a progress guidance map, and feed it, together with the original RGB image, to a convolutional neural network (CNN) that we name PGR-Net. In addition, an adaptive zoom-in strategy and an iterative training scheme are proposed to further improve the stability of PGR-Net. Compared with the latest methods FCA and f-BRS, the proposed PGR-Net requires roughly 1–2 fewer clicks to achieve the same segmentation results. Comprehensive experiments demonstrate that PGR-Net outperforms related state-of-the-art methods on five natural image datasets and three building datasets of remote sensing images.
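As a rough illustration of the progress guidance idea in this abstract, the sketch below assembles a guidance map from the previous mask, the positive and negative clicks, and the distance of the new click to the previous mask boundary. The channel layout, distance-transform encoding, and normalization are assumptions, not the published PGR-Net formulation.

```python
# Hedged sketch: build a progress guidance map from the previous mask and
# user clicks. Encoding choices (distance transforms, channel order,
# normalization) are assumptions for illustration only.
import numpy as np
from scipy import ndimage

def progress_guidance_map(prev_mask, pos_clicks, neg_clicks, new_click):
    """Return an (H, W, 4) array: previous mask, positive-click map,
    negative-click map, and a progress channel derived from the distance
    of the new click to the previous mask boundary."""
    h, w = prev_mask.shape

    def click_map(clicks):
        m = np.zeros((h, w), dtype=np.float32)
        if not clicks:
            return m
        for r, c in clicks:
            m[r, c] = 1.0
        # Distance-transform encoding of clicks, common in interactive segmentation.
        dist = ndimage.distance_transform_edt(1.0 - m)
        return np.exp(-dist / 10.0).astype(np.float32)

    # Boundary of the previous mask: mask pixels removed by a one-pixel erosion.
    mask = prev_mask.astype(bool)
    boundary = mask & ~ndimage.binary_erosion(mask)
    if boundary.any():
        dist_to_boundary = ndimage.distance_transform_edt(~boundary)
        progress = dist_to_boundary[new_click] / max(h, w)  # scalar progress cue
    else:
        progress = 1.0  # no previous boundary yet (e.g. the first click)

    return np.stack([prev_mask.astype(np.float32),
                     click_map(pos_clicks),
                     click_map(neg_clicks),
                     np.full((h, w), progress, dtype=np.float32)], axis=-1)


# Example: a 256x256 empty mask, two positive clicks, one negative, one new click.
guidance = progress_guidance_map(np.zeros((256, 256)), [(100, 120), (90, 80)],
                                 [(200, 200)], (110, 115))
# In a PGR-Net-style pipeline this map would be concatenated with the RGB image.
```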


2021, Vol. 13 (21), pp. 4288
Author(s): Zherui Yin, Wenhui Kuang, Yuhai Bao, Yinyin Dou, Wenfeng Chi, ...

Dramatic urban land expansion and changes in its internal sub-fractions took place in Africa during 2000–2020; however, their spatial heterogeneity and dynamics are rarely monitored at the continental scale. Taking the whole of Africa as the study area, a synergistic approach combining the normalized settlement density index and random forest was applied to assess urban land and its sub-land fractions (i.e., impervious surface area and vegetation space) from time series of remotely sensed images on a cloud computing platform. The generated 30-m resolution urban land/sub-land products showed good quality, with an overall accuracy of over 90%. During 2000–2020, the estimated urban land area across Africa increased from 1.93 × 10⁴ km² to 4.18 × 10⁴ km², a total expansion rate of 116.49%, and the expanded urban area of the top six countries accounted for more than half of the total increment, indicating that urban expansion was concentrated in a few major countries. A greening Africa was observed, with a continuously increasing ratio of vegetation space to built-up area and faster growth of vegetation space than of impervious surface area within urban regions (134.43% vs. 108.88%). A better living environment was also found across urbanized regions, as newly expanded urban areas had a lower impervious surface fraction and a higher vegetation fraction than the original urban areas; likewise, humid/semi-humid regions displayed a better living environment than arid/semi-arid regions. The relationships between socioeconomic development factors (gross domestic product and urban population) and impervious surface area were investigated; both passed the significance test (p < 0.05), with a higher goodness of fit for gross domestic product than for urban population. Overall, the changes in urban land and its fractional land cover in Africa during 2000–2020 promoted the well-being of human settlements, indicating a positive effect on the living environment.
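For illustration, the sketch below shows a random-forest step of the kind this abstract describes (separating impervious surface from vegetation inside mapped urban land, here with scikit-learn), followed by the expansion-rate arithmetic implied by the reported areas. The features, labels, and hyperparameters are hypothetical placeholders, not the authors' workflow.

```python
# Hedged sketch of the sub-fraction classification step and the expansion-rate
# arithmetic. Training data, band layout, and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training samples: rows are urban pixels, columns are spectral
# bands (e.g. blue, green, red, NIR, SWIR1, SWIR2); 1 = impervious, 0 = vegetation.
X_train = rng.random((1000, 6))
y_train = rng.integers(0, 2, size=1000)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Classify every pixel of a mapped urban scene (flattened to pixels x bands).
urban_pixels = rng.random((512 * 512, 6))
sub_labels = rf.predict(urban_pixels)
impervious_fraction = sub_labels.mean()  # share of impervious pixels in the scene

# Continental expansion rate from the rounded areas reported in the abstract
# (1.93e4 -> 4.18e4 km^2); the rounded inputs give ~116.6%, while the abstract's
# 116.49% comes from the unrounded areas.
area_2000, area_2020 = 1.93e4, 4.18e4
expansion_rate = (area_2020 - area_2000) / area_2000 * 100
print(f"Urban land expansion 2000-2020: {expansion_rate:.2f}%")
```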

