Saliency and density enhanced region-of-interest extraction for large-scale high-resolution remote sensing images

Author(s): Tong Li, Junping Zhang, Qingle Guo, Bin Zou

Sensors, 2018, Vol. 18 (10), pp. 3232
Author(s): Yan Liu, Qirui Ren, Jiahui Geng, Meng Ding, Jiangyun Li

Efficient and accurate semantic segmentation is a key technique for automatic remote sensing image analysis. While there have been many segmentation methods based on traditional hand-crafted feature extractors, it is still challenging to process high-resolution and large-scale remote sensing images. In this work, a novel patch-wise semantic segmentation method with a new training strategy based on fully convolutional networks is presented to segment common land resources. First, to handle the high-resolution imagery, the images are split into local patches and a patch-wise network is built. Second, the training data are preprocessed in several ways to address the specific characteristics of remote sensing images, i.e., color imbalance, object rotation variations and lens distortion. Third, a multi-scale training strategy is developed to address the severe scale variation problem. In addition, the impact of a conditional random field (CRF) on segmentation precision is studied. The proposed method was evaluated on a dataset collected from a capital city in West China with the Gaofen-2 satellite. The dataset contains ten common land resources (grassland, road, etc.). The experimental results show that the proposed algorithm achieves 54.96% mean intersection over union (MIoU) and outperforms other state-of-the-art methods in remote sensing image segmentation.
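The patch-wise strategy above hinges on splitting a large scene into overlapping local patches that a network can process. A minimal sketch of that tiling step (the function name and the 600 px image / 256 px patch / 200 px stride figures are illustrative assumptions, not values from the paper):

```python
def split_into_patches(height, width, patch, stride):
    """Return the top-left (row, col) coordinates of overlapping patches
    that tile an image of the given size; the last patch in each
    direction is shifted back so it stays inside the image."""
    rows = list(range(0, max(height - patch, 0) + 1, stride))
    cols = list(range(0, max(width - patch, 0) + 1, stride))
    if rows[-1] + patch < height:
        rows.append(height - patch)  # cover the bottom edge
    if cols[-1] + patch < width:
        cols.append(width - patch)   # cover the right edge
    return [(r, c) for r in rows for c in cols]

# A 600 x 600 scene with 256 px patches and a 200 px stride
# yields a 3 x 3 grid of overlapping patches.
coords = split_into_patches(600, 600, 256, 200)
```

At inference time the per-patch predictions are stitched back in the same coordinate order; the overlap lets boundary pixels be predicted from more than one patch.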


Author(s):  
Changmiao Hu ◽  
Ping Tang

In recent years, China's demand for satellite remote sensing images has increased, and the country has launched a series of satellites equipped with high-resolution sensors. The resolutions of these satellites range from 30 m to a few meters, and their spectral range covers the visible to the near-infrared band. These satellite images are mainly used for environmental monitoring, mapping, land surface classification and other fields. However, haze is an important factor that often degrades image quality, so dehazing is becoming a critical step in high-resolution remote sensing image processing. This paper presents a rapid dehazing algorithm based on a semi-physical haze model. A large-scale median filtering technique is used to extract large areas of bright, low-frequency information from an image to estimate the distribution and thickness of the haze. Four images from different satellites are used in the experiments. The results show that the algorithm is valid, fast, and suitable for the rapid dehazing of numerous large-sized high-resolution remote sensing images in engineering applications.
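The haze-veil idea can be illustrated with a toy version of the pipeline: a large-window median filter pulls out the bright, low-frequency layer, which is then removed under the simplified haze model I = J·t + A·(1 − t). This is a hedged sketch on single-band data scaled to [0, 1]; the window size, veil strength, and transmission floor are illustrative assumptions, not the paper's parameters:

```python
import statistics

def median_filter(img, k):
    """Large-window median filter: for each pixel, take the median of
    the k x k neighbourhood (clipped at the image borders)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[ii][jj]
                      for ii in range(max(0, i - r), min(h, i + r + 1))
                      for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = statistics.median(window)
    return out

def dehaze(img, k=7, atmos=1.0, strength=0.9):
    """Estimate the haze veil A*(1 - t) as the bright low-frequency
    layer, then invert I = J*t + A*(1 - t) to recover the scene J."""
    veil = median_filter(img, k)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            v = strength * min(veil[i][j], img[i][j])  # veil cannot exceed the pixel
            t = max(1.0 - v / atmos, 0.1)              # transmission, floored
            out[i][j] = max(min((img[i][j] - v) / t, 1.0), 0.0)
    return out
```

On real imagery the filter window is far larger (hence "large-scale" filtering), so that fine scene detail stays out of the estimated veil.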


2020, Vol. 12 (21), pp. 3501
Author(s): Qingsong Xu, Xin Yuan, Chaojun Ouyang, Yue Zeng

Unlike conventional natural (RGB) images, the inherent large scale and complex structures of remote sensing images pose major challenges, such as spatial object distribution diversity and spectral information extraction, when existing models are directly applied to image classification. In this study, we develop an attention-based pyramid network for segmentation and classification of remote sensing datasets. Attention mechanisms are used to develop the following modules: (i) a novel and robust attention-based multi-scale fusion method that effectively fuses useful spatial or spectral information at the same and different scales; (ii) a region pyramid attention mechanism using region-based attention to address the target geometric size diversity in large-scale remote sensing images; and (iii) cross-scale attention in our adaptive atrous spatial pyramid pooling network that adapts to varied contents in a feature-embedded space. Different forms of feature fusion pyramid frameworks are established by combining these attention-based modules. First, a novel segmentation framework, called the heavy-weight spatial feature fusion pyramid network (FFPNet), is proposed to address the spatial problem of high-resolution remote sensing images. Second, an end-to-end spatial-spectral FFPNet is presented for classifying hyperspectral images. Experiments conducted on the ISPRS Vaihingen and ISPRS Potsdam high-resolution datasets demonstrate the competitive segmentation accuracy achieved by the proposed heavy-weight spatial FFPNet. Furthermore, experiments on the Indian Pines and the University of Pavia hyperspectral datasets indicate that the proposed spatial-spectral FFPNet outperforms the current state-of-the-art methods in hyperspectral image classification.
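To make the multi-scale fusion idea concrete, here is a toy sketch: coarser pyramid levels are upsampled to the finest resolution and blended with softmax attention weights derived from each scale's global response. The scalar-per-scale gating is a deliberate simplification of the learned attention modules in the paper, and the function names are illustrative:

```python
import math

def upsample_nearest(fmap, size):
    """Nearest-neighbour upsampling of a square 2D feature map."""
    h = len(fmap)
    return [[fmap[i * h // size][j * h // size] for j in range(size)]
            for i in range(size)]

def attention_fuse(scales):
    """Fuse square feature maps from different pyramid levels: upsample
    every map to the finest resolution, weight each scale by a softmax
    over its global mean response, and sum the weighted maps."""
    size = max(len(s) for s in scales)
    ups = [upsample_nearest(s, size) for s in scales]
    means = [sum(sum(r) for r in u) / (size * size) for u in ups]
    exps = [math.exp(m) for m in means]
    ws = [e / sum(exps) for e in exps]  # softmax attention weights
    return [[sum(w * u[i][j] for w, u in zip(ws, ups))
             for j in range(size)]
            for i in range(size)]
```

In the paper the weights are spatially varying and learned; here a single scalar per scale keeps the mechanism visible in a few lines.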


2016, Vol. 16 (5), pp. 34-49
Author(s): Fenghua Huang, Zhengyuan Mao, Wenzao Shi

Abstract: When SIFT (Scale-Invariant Feature Transform) features are used to match high-resolution (HR) remote sensing urban images captured at different phases with large scale and view variations, few feature points are found and the matching accuracy is low. Although replacing SIFT with the fully affine-invariant ASIFT (Affine-SIFT) features can increase the number of feature points, it results in matching inefficiency and a non-uniform distribution of matched feature point pairs. To address these problems, this paper proposes a novel matching method, ICA-ASIFT, which matches HR remote sensing urban images captured at different phases by jointly using an Independent Component Analysis (ICA) algorithm and ASIFT features. First, all possible affine deformations are modeled for the image transform, and ASIFT features are extracted from the remote sensing images captured at different times. The ICA algorithm reduces the dimensionality of the ASIFT features and improves the efficiency of subsequent feature-pair matching. Next, coarse matching is performed on the ASIFT feature point pairs with the Nearest Vector Angle Ratio (NVAR), Direction Difference Analysis (DDA) and RANdom SAmple Consensus (RANSAC) algorithms, eliminating apparent mismatches. Then, fine matching is performed on the coarsely matched point pairs using a Neighborhood-based Feature Graph Matching (NFGM) algorithm to obtain the final ASIFT matching point pairs. Finally, the final matching point pairs are used to compute the affine transform matrix, and the HR remote sensing images captured at different phases are matched through the affine transform. Experiments compare the performance of ICA-ASIFT and three other algorithms (Harris-SIFT, PCA-SIFT, TD-ASIFT) on HR remote sensing images captured at different times in different regions.
Experimental results show that the proposed ICA-ASIFT algorithm effectively matches HR remote sensing urban images and outperforms the other algorithms in terms of matching accuracy and efficiency.
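The coarse-matching stage can be illustrated with a Lowe-style ratio test on descriptor-vector angles, in the spirit of the NVAR step: a candidate pair survives only if the angle to the best match is clearly smaller than the angle to the runner-up. This is a hedged sketch; the 0.8 threshold and the function names are assumptions, not the paper's values:

```python
import math

def vec_angle(a, b):
    """Angle (radians) between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Coarse matching by nearest-vector-angle ratio: accept a pair
    only when the angle to the best candidate is clearly smaller than
    the angle to the second-best candidate."""
    matches = []
    for i, da in enumerate(desc_a):
        angles = sorted((vec_angle(da, db), j) for j, db in enumerate(desc_b))
        if len(angles) > 1 and angles[0][0] < ratio * angles[1][0]:
            matches.append((i, angles[0][1]))
    return matches
```

Pairs that pass this test would then go to the DDA and RANSAC stages, which prune matches whose direction or geometry disagrees with the dominant affine transform.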


2021, Vol. 13 (11), pp. 2052
Author(s): Dongchuan Yan, Guoqing Li, Xiangqiang Li, Hao Zhang, Hua Lei, ...

Dam failure of tailings ponds can result in serious casualties and environmental pollution. Therefore, timely and accurate monitoring is crucial for managing tailings ponds and preventing damage from tailings pond accidents. Remote sensing technology facilitates the regular extraction and monitoring of tailings pond information. However, traditional remote sensing techniques are inefficient and have low levels of automation, which hinders the large-scale, high-frequency, and high-precision extraction of tailings pond information. Moreover, research into the automatic and intelligent extraction of tailings pond information from high-resolution remote sensing images is relatively rare. Deep learning end-to-end models offer a solution to this problem. This study proposes an intelligent and high-precision method for extracting tailings pond information from high-resolution images, which improves the deep learning object detection model Faster R-CNN (faster region-based convolutional neural network). A comparison study is conducted and the model input size with the highest precision is selected. The feature pyramid network (FPN) is adopted to obtain multiscale feature maps with rich context information, an attention mechanism is used to improve the FPN, and the contribution degrees of the feature channels are recalibrated. Model test results based on Google Earth high-resolution remote sensing images indicate that the average precision (AP) and recall of tailings pond detection increase over those of Faster R-CNN by 5.6% and 10.9%, reaching 85.7% and 62.9%, respectively. Considering the current rapid increase in high-resolution remote sensing images, this method will be important for the large-scale, high-precision, and intelligent monitoring of tailings ponds, which will greatly improve decision-making efficiency in tailings pond management.
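The reported AP and recall figures rest on IoU-based matching of predicted boxes to ground truth. A minimal sketch of that evaluation step, with greedy one-to-one matching at a 0.5 IoU threshold (the function names and threshold are illustrative, not the paper's evaluation code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(pred, truth, thresh=0.5):
    """Greedily match predicted boxes to ground-truth boxes at an IoU
    threshold and return (precision, recall)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best = max(unmatched, key=lambda t: box_iou(p, t), default=None)
        if best is not None and box_iou(p, best) >= thresh:
            unmatched.remove(best)  # each ground truth matches once
            tp += 1
    return (tp / len(pred) if pred else 0.0,
            tp / len(truth) if truth else 0.0)
```

AP additionally sweeps the detector's confidence threshold and averages precision over recall levels; the matching core is the same.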


2022, Vol. 14 (2), pp. 269
Author(s): Yong Wang, Xiangqiang Zeng, Xiaohan Liao, Dafang Zhuang

Deep learning (DL) shows remarkable performance in extracting buildings from high-resolution remote sensing images. However, how to improve the performance of DL-based methods, especially their perception of spatial information, is worth further study. For this purpose, we propose a building extraction network with feature highlighting, global awareness, and cross-level information fusion (B-FGC-Net). Residual learning and a spatial attention unit are introduced in the encoder of B-FGC-Net, which simplifies the training of deep convolutional neural networks and highlights the spatial information representation of features. A global feature information awareness module is added to capture multiscale contextual information and integrate global semantic information. A cross-level feature recalibration module is used to bridge the semantic gap between low- and high-level features and complete the effective fusion of cross-level information. The performance of the proposed method was tested on two public building datasets and compared with classical methods such as UNet, LinkNet, and SegNet. Experimental results demonstrate that B-FGC-Net accurately extracts and integrates information for both small- and large-scale buildings. The IoU scores of B-FGC-Net on the WHU and INRIA building datasets are 90.04% and 79.31%, respectively. B-FGC-Net is an effective and recommended method for extracting buildings from high-resolution remote sensing images.
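The IoU scores quoted above are the pixel-wise intersection over union of the predicted and reference building masks. A minimal sketch of that metric on binary masks (the function name is illustrative):

```python
def mask_iou(pred, truth):
    """Pixel-wise intersection over union of two binary masks (0/1),
    the score used to compare building-extraction results."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += p and t  # 1 only where both masks are 1
            union += p or t   # 1 where either mask is 1
    return inter / union if union else 1.0
```

An IoU of 90.04% therefore means that, across the WHU test set, roughly nine-tenths of the union of predicted and true building pixels are shared by both masks.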


2021, Vol. 13 (16), pp. 3243
Author(s): Pengfei Shi, Qigang Jiang, Chao Shi, Jing Xi, Guofang Tao, ...

Oil is an important resource for the development of modern society. Accurate detection of oil wells is of great significance to the investigation of oil exploitation status and the formulation of exploitation plans. However, detecting small objects such as oil wells in large-scale, high-resolution remote sensing images is a challenging task because of their large numbers, limited pixels, and complex backgrounds. To overcome this problem, first, we create our own oil well dataset for the experiments, given the lack of a public dataset. Second, we provide a comparative assessment of two state-of-the-art object detection algorithms, SSD and YOLO v4, for oil well detection on our image dataset. The results show that both perform well, but YOLO v4 is more accurate for oil well detection because of its better feature extraction capability for small objects. Given that small objects are currently difficult to detect in large-scale, high-resolution remote sensing images, this article proposes an improved algorithm based on YOLO v4 with sliding slices and discarded edges. The algorithm effectively solves the problems of repeated detection and inaccurate positioning in oil well detection over large-scale, high-resolution remote sensing images, and the accuracy of the detection results increases considerably. In summary, this study investigates an appropriate algorithm for oil well detection, improves it, and achieves excellent results on a large-scale, high-resolution satellite image. It provides a new idea for small-object detection in large-scale, high-resolution remote sensing images.
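The sliding-slices-with-discarded-edges idea can be sketched as follows: the detector runs on overlapping slices of the large image, and each slice keeps only boxes whose centre falls outside its edge margin (except where the slice edge is the image border), so an object straddling a slice boundary is reported exactly once, by the slice that sees it away from its edge. The slice size, overlap, margin, and function names here are illustrative assumptions, not the paper's settings:

```python
def slice_and_merge(detections_fn, height, width, slice_size, overlap, margin):
    """Run a detector over overlapping slices of a large image.
    detections_fn(x0, y0, slice_size) must return boxes in global
    (x1, y1, x2, y2) coordinates for the slice at (x0, y0). Boxes whose
    centre lies in a slice's discarded edge margin are dropped, which
    suppresses duplicates from the overlapping slices."""
    step = slice_size - overlap
    kept = []
    for y0 in range(0, max(height - overlap, 1), step):
        for x0 in range(0, max(width - overlap, 1), step):
            for (x1, y1, x2, y2) in detections_fn(x0, y0, slice_size):
                cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
                # Keep the margin only on edges interior to the image.
                lo_x = x0 + (margin if x0 > 0 else 0)
                lo_y = y0 + (margin if y0 > 0 else 0)
                hi_x = x0 + slice_size - (margin if x0 + slice_size < width else 0)
                hi_y = y0 + slice_size - (margin if y0 + slice_size < height else 0)
                if lo_x <= cx < hi_x and lo_y <= cy < hi_y:
                    kept.append((x1, y1, x2, y2))
    return kept
```

With this scheme an object seen by several overlapping slices survives the margin test in exactly one of them, avoiding both repeated detections and the poor localization of boxes clipped at slice edges.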

