Road Segmentation of Unmanned Aerial Vehicle Remote Sensing Images Using Adversarial Network With Multiscale Context Aggregation

Author(s):  
Yuxia Li ◽  
Bo Peng ◽  
Lei He ◽  
Kunlong Fan ◽  
Ling Tong
Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4115 ◽  

Roads are vital components of infrastructure, and their extraction has become a topic of significant interest in remote sensing. Because deep learning has become a popular approach to image processing and information extraction, researchers have paid increasing attention to extracting roads with neural networks. This article proposes improvements to neural networks for extracting roads from Unmanned Aerial Vehicle (UAV) remote sensing images. D-LinkNet was first considered for its high performance; however, the huge scale of the network reduced computational efficiency. Focusing on this low computational efficiency of the popular D-LinkNet, this article makes the following improvements: (1) replace the initial block with a stem block; (2) rebuild the entire network from ResNet units with a new structure, yielding an improved neural network, D-LinkNetPlus; (3) add a 1 × 1 convolution layer before the DBlock to reduce the number of input feature maps, cutting parameters and improving computational efficiency, and another 1 × 1 convolution layer after the DBlock to restore the required number of output channels. Accordingly, a further improved neural network, B-D-LinkNetPlus, was built. Comparisons were performed between the networks, and verification was carried out on the Massachusetts Roads Dataset. The results show that the improved neural networks help reduce network size and improve the precision of road extraction.
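The parameter saving from wrapping a dilated 3 × 3 convolution between 1 × 1 reduce/expand layers can be sketched with simple arithmetic. The channel sizes below (512 in/out, 128 bottleneck) are illustrative assumptions, not values from the paper:

```python
# Parameter counts for a dilated 3x3 convolution ("DBlock") with and without
# 1x1 bottleneck layers. Channel widths are assumed for illustration.

def conv_params(c_in, c_out, k):
    """Weights of a k x k convolution (biases omitted for simplicity)."""
    return c_in * c_out * k * k

C, R = 512, 128  # assumed full width and bottleneck width

# Plain DBlock: one 3x3 convolution at full width.
plain = conv_params(C, C, 3)

# Bottlenecked DBlock: 1x1 reduce -> 3x3 at reduced width -> 1x1 expand.
bottleneck = conv_params(C, R, 1) + conv_params(R, R, 3) + conv_params(R, C, 1)

print(plain, bottleneck, round(plain / bottleneck, 1))  # 2359296 278528 8.5
```

With these assumed widths the bottlenecked block needs roughly 8.5 times fewer weights, which is the efficiency mechanism the abstract describes.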


2020 ◽  
pp. 002029402092226
Author(s):  
Cheng Xu ◽  
Chanjuan Yin ◽  
Daqing Huang ◽  
Wei Han ◽  
Dongzhen Wang

Ground target three-dimensional positions measured from optical remote-sensing images taken by an unmanned aerial vehicle play an important role in related military and civil applications. When a single unmanned aerial vehicle is used, however, localization accuracy is unstable and efficiency is low. In this paper, a novel multi–unmanned aerial vehicle cooperative target localization measurement method is proposed to overcome these issues. In the target localization measurement stage, three or more unmanned aerial vehicles simultaneously observe the same ground target and acquire multiple remote-sensing images. According to the principle of perspective projection, the target point, its image point, and the camera’s optic center are collinear, and nonlinear observation equations are established. These equations are then converted to linear equations using a Taylor expansion. Robust weighted least-squares estimation is used to solve the equations with the objective function of minimizing the weighted square sum of re-projection errors from target points to multiple pairs of images, which makes the best use of the effective information and avoids interference from outliers in the observation data. An automatic calculation strategy using a weight matrix is designed, and the weight matrix and target-position coordinate value are updated in each iteration until the iteration stopping condition is satisfied. Compared with the stereo-image-pair cross-target localization method, the multi–unmanned aerial vehicle cooperative target localization method can use more observation information, which results in higher rendezvous accuracy and improved performance. Finally, the effectiveness and robustness of this method are verified by numerical simulation and flight testing.
The results show that the proposed method can effectively improve the precision of the target’s localization and demonstrates great potential for providing more accurate target localization in engineering applications.
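The core of the approach above, multiple collinearity rays combined by iteratively reweighted least squares, can be sketched in a few lines. Each view contributes a ray (camera optic center plus line-of-sight direction); the target is the point minimizing the weighted sum of squared distances to the rays, and Huber-style weights suppress inconsistent observations. The camera positions, target, and Huber threshold below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def triangulate_irls(origins, dirs, delta=0.1, iters=20):
    """Robust weighted least-squares intersection of 3D rays (IRLS sketch)."""
    origins = np.asarray(origins, float)
    dirs = np.asarray(dirs, float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    w = np.ones(len(dirs))                        # start with equal weights
    for _ in range(iters):
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for wi, o, d in zip(w, origins, dirs):
            P = np.eye(3) - np.outer(d, d)        # projector orthogonal to ray
            A += wi * P
            b += wi * P @ o
        X = np.linalg.solve(A, b)                 # weighted least-squares point
        r = np.array([np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (X - o))
                      for o, d in zip(origins, dirs)])
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))  # Huber weights
    return X

target = np.array([10.0, 20.0, 5.0])
origins = [[0, 0, 100], [50, 0, 100], [0, 50, 100], [50, 50, 100]]
dirs = [target - o for o in origins[:3]] + [[0.0, 0.0, -1.0]]  # last ray is bad
est = triangulate_irls(origins, dirs)
print(est)  # close to the true target despite the outlier ray
```

Re-running the solve inside the weight update is what the abstract calls the automatic calculation strategy: each iteration the residuals shrink for consistent rays and the bad ray's weight collapses.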


Author(s):  
T. J. Lei ◽  
R. R. Xu ◽  
J. H. Cheng ◽  
W. L. Song ◽  
W. Jiang ◽  
...  

Abstract. A remote sensing system fitted on a UAV (Unmanned Aerial Vehicle) can obtain clear images and high-resolution aerial photographs, with the advantages of flexibility, convenience, and the ability to operate at any time. However, UAV images present problems such as small coverage area, large image counts, and irregular overlap, and obtaining a large regional map quickly has become a major obstacle to UAV remote sensing applications. In this paper, a new method for fast registration of UAV remote sensing images is proposed to meet practical needs. The Progressive Sample Consensus (PROSAC) algorithm is used to improve matching accuracy by removing a large number of mismatched point pairs in remote sensing image registration based on the SURF (Speeded-Up Robust Features) algorithm, and a GPU (Graphics Processing Unit) is used to accelerate the improved SURF algorithm. Finally, geometric verification is used to achieve mosaic accuracy over the survey area. The number of feature points obtained by the improved SURF algorithm with PROSAC was only 9.5% of that of the original SURF algorithm. The accuracy rate of the improved method was about 99.7%, roughly 8% higher than that of the original SURF algorithm. Moreover, the GPU-accelerated SURF implementation registered UAV remote sensing images around 16 times faster than the original SURF algorithm, bringing image matching time down to the millisecond level. Thus, the improved SURF algorithm offers the matching accuracy and execution speed required for real-time, robust UAV remote sensing image registration.
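PROSAC's key idea, exploiting the match-quality ranking so that hypotheses come from the best-scored correspondences first, can be illustrated with a heavily simplified sketch. True PROSAC draws random minimal samples from a progressively growing ranked pool; here hypotheses are tried deterministically in descending score order, and the motion model is a plain 2D translation (one match per hypothesis) rather than the paper's full registration model. All data below are synthetic:

```python
import numpy as np

def prosac_translation(src, dst, scores, tol=0.5):
    """Rank-ordered hypothesize-and-verify loop (simplified PROSAC sketch)."""
    order = np.argsort(scores)[::-1]             # best-scored matches first
    best_mask, best_count = None, -1
    for i in order:
        t = dst[i] - src[i]                      # minimal sample: one match
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_count:
            best_count, best_mask = inliers.sum(), inliers
    # refine the translation over the best consensus set
    return (dst[best_mask] - src[best_mask]).mean(axis=0)

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (20, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.1, (20, 2))
dst[15:] = rng.uniform(0, 100, (5, 2))           # five gross mismatches
scores = np.r_[rng.uniform(0.7, 1.0, 15), rng.uniform(0.1, 0.4, 5)]
t_hat = prosac_translation(src, dst, scores)
print(t_hat)  # close to the true shift (5, -3)
```

Because the mismatches carry low scores, a correct hypothesis is found within the first few trials, which is the mechanism behind PROSAC's speed advantage over plain RANSAC.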


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Liang Huang ◽  
Xuequn Wu ◽  
Qiuzhi Peng ◽  
Xueqin Yu

Tobacco in plateau mountain areas is characterized by fragmented planting, uneven growth, and mixed cropping or interplanting, which makes it difficult for object-oriented image analysis methods to extract effective features and accurately delineate tobacco planting areas. To this end, this paper relies on the self-learning capability of deep learning features and proposes an accurate extraction method for tobacco planting areas based on deep semantic segmentation of unmanned aerial vehicle (UAV) remote sensing images of plateau mountains. Firstly, a tobacco semantic segmentation dataset is built using Labelme. Four deep semantic segmentation models, DeeplabV3+, PSPNet, SegNet, and U-Net, are trained on the sample data; to reduce training time, the lightweight MobileNet series replaces the original backbone networks of the four models. Finally, the prediction images are semantically segmented by the trained networks, and the mean Intersection over Union (mIoU) is used to evaluate accuracy. The experimental results show that, on 71 prediction images, DeeplabV3+, PSPNet, SegNet, and U-Net achieve mIoU values of 0.9436, 0.9118, 0.9392, and 0.9473, respectively, indicating high segmentation accuracy. The feasibility of deep semantic segmentation for extracting tobacco planting areas from UAV remote sensing images is thus verified, and the method can serve as a reference for subsequent automatic extraction of tobacco planting areas.
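The mIoU metric used above is standard: for each class, IoU is TP / (TP + FP + FN) computed from a confusion matrix, and mIoU averages over classes. A minimal sketch with tiny illustrative label arrays:

```python
import numpy as np

def mean_iou(gt, pred, n_classes):
    """Mean Intersection over Union from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for g, p in zip(gt.ravel(), pred.ravel()):
        cm[g, p] += 1                             # rows: ground truth, cols: prediction
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp  # TP + FP + FN per class
    return (tp / union).mean()                    # assumes every class occurs

gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
print(mean_iou(gt, pred, 2))  # (1/2 + 2/3) / 2 = 7/12 ≈ 0.5833
```

In practice the confusion matrix is accumulated over all prediction images before the mean is taken, and classes absent from both prediction and ground truth need to be excluded from the average.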


2019 ◽  
Vol 11 (17) ◽  
pp. 2008 ◽  
Author(s):  
Qinchen Yang ◽  
Man Liu ◽  
Zhitao Zhang ◽  
Shuqin Yang ◽  
Jifeng Ning ◽  
...  

With increasing consumption, plastic mulch benefits agriculture by promoting crop quality and yield, but the resulting environmental and soil pollution is becoming increasingly serious, so research on monitoring plastic mulched farmland (PMF) has received growing attention. Because of the high resolution of unmanned aerial vehicle (UAV) remote sensing images, PMF shows a prominent spatial pattern in them, which complicates the monitoring task. In this paper, through a comparison between two deep semantic segmentation methods, SegNet and fully convolutional networks (FCN), and a traditional classification method, Support Vector Machine (SVM), we propose an end-to-end deep-learning method for accurately recognizing PMF in UAV remote sensing images from the Hetao Irrigation District, Inner Mongolia, China. Experiments with single-band, three-band, and six-band image data show that deep semantic segmentation models built on single-band data, which use only the texture pattern of PMF, can already identify it well; SegNet, for example, reaches the highest accuracy of 88.68% on the 900 nm band. With three visible bands and with six-band data (three visible and three near-infrared bands), models that combine texture and spectral features further improve PMF identification, and six-band data yields the optimal performance for both FCN and SegNet. In addition, FCN and SegNet, owing to their strong feature extraction capability and direct pixel-wise classification, clearly outperform the traditional SVM method in both precision and speed. Among the three methods, the SegNet model built on three-band and six-band data obtains the best average accuracies of 89.62% and 90.6%, respectively.
Therefore, the proposed deep semantic segmentation model, tested against the traditional classification method, provides a promising path for mapping PMF in UAV remote sensing images.
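The per-pixel classification baseline against which the segmentation networks are compared amounts to flattening the stacked bands into a pixels-by-bands matrix and classifying each row independently. The sketch below uses a nearest-centroid classifier as a stand-in for the paper's SVM so it needs only NumPy; the 6-band image and class statistics are synthetic assumptions:

```python
import numpy as np

# Per-pixel classification of a stacked multiband image. A nearest-centroid
# rule stands in for the SVM; image, labels, and spectra are synthetic.

rng = np.random.default_rng(42)
H, W, B = 32, 32, 6                       # image height, width, bands

# Synthetic scene: left half is "mulch" (class 1), right half background (0).
labels = np.zeros((H, W), dtype=int)
labels[:, : W // 2] = 1
means = np.array([[0.2] * B, [0.7] * B])  # per-class spectral means
image = means[labels] + rng.normal(0, 0.05, (H, W, B))

# Flatten to (pixels, bands) and classify each pixel independently.
pixels = image.reshape(-1, B)
centroids = np.array([pixels[labels.ravel() == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1).reshape(H, W)

accuracy = (pred == labels).mean()
print(accuracy)
```

Because each pixel is classified in isolation, such a baseline ignores the spatial texture that the segmentation networks exploit, which is one reason FCN and SegNet outperform it on PMF.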

