Detection of anthropogenic objects based on the spatial characteristics of their contour in aerial image

Author(s):  
Hayder Makki Hammed ◽  
Osama Majeed Hilal Almiahi ◽  
Oksana Shauchuk

In this paper, the spatial characteristics of the contours of brightness-homogeneous regions in aerial images are analyzed to determine the parameters that make it possible to identify anthropogenic objects with higher probability. A complex criterion is proposed for regions that are homogeneous in brightness, together with an algorithm for detecting anthropogenic objects that takes into account the region's area, the ratio of the total length of long contour fragments to the total length of the contour, and the concentration of corners and endpoints of the contour. When deciding on anthropogenicity, particular characteristics of the contour of a homogeneous segment are also used, depending on the number of long contour fragments. Using the proposed criterion, an algorithm for searching for anthropogenic objects based on the analysis of contour elements (their shape, size, and concentration) is developed. The proposed algorithm reduces the probability of missing an anthropogenic object in comparison with the spatial-anomaly search algorithm.
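The contour features named in the abstract (total length, share of long straight fragments, corner concentration) can be sketched for a closed polyline. The fragment-length threshold and the 30° corner angle below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def contour_features(points, long_frac=0.1):
    """Hypothetical contour features for flagging anthropogenic (man-made)
    regions: total length, ratio of 'long' straight fragments to total
    length, and corner concentration (corners per unit length).
    `points` is a closed polyline [(x, y), ...]; thresholds are assumptions."""
    n = len(points)
    seg = [math.dist(points[i], points[(i + 1) % n]) for i in range(n)]
    total = sum(seg)
    # Fragments longer than a fraction of the total contour count as "long".
    long_len = sum(s for s in seg if s >= long_frac * total)
    # Corners: vertices where the direction changes by more than ~30 degrees.
    corners = 0
    for i in range(n):
        ax, ay = points[i - 1]; bx, by = points[i]; cx, cy = points[(i + 1) % n]
        v1 = (bx - ax, by - ay); v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm and dot / norm < math.cos(math.radians(30)):
            corners += 1
    return {"total_length": total,
            "long_ratio": long_len / total if total else 0.0,
            "corner_density": corners / total if total else 0.0}
```

A rectangular building footprint scores a high long-fragment ratio and a regular corner density, whereas a ragged natural boundary scores low on both, which is the intuition behind the complex criterion.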

2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field conditions: complex lighting and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate comparison among different algorithms.
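The encoder-decoder idea (compress to context, expand back to a per-pixel mask) can be illustrated with a toy NumPy pipeline. The excess-green-style channel weights and the single pooling stage are illustrative assumptions, not the trained CNN from the paper:

```python
import numpy as np

def tiny_encoder_decoder(rgb, w=np.array([-0.5, 1.0, -0.5]), pool=2):
    """Minimal sketch of a per-pixel crop/non-crop scorer with an
    encoder-decoder shape: a 1x1 'convolution' over RGB channels,
    2x2 average pooling (encoder), then nearest-neighbour upsampling
    (decoder). Weights `w` are an assumption for the sketch."""
    h, wd, _ = rgb.shape
    score = rgb.astype(float) @ w                 # 1x1 conv: channel mixing
    # Encoder: 2x2 average pooling halves the spatial resolution.
    pooled = score[:h // pool * pool, :wd // pool * pool]
    pooled = pooled.reshape(h // pool, pool, wd // pool, pool).mean(axis=(1, 3))
    # Decoder: nearest-neighbour upsampling restores per-pixel resolution.
    up = pooled.repeat(pool, axis=0).repeat(pool, axis=1)
    return up > 0.0                               # binary crop mask
```

A real encoder-decoder stacks many learned convolution and pooling/upsampling stages, but the in-and-out spatial shape, which is what makes dense per-pixel labels possible, is the same.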


2021 ◽  
Vol 13 (14) ◽  
pp. 2656 ◽  
Author(s):  
Furong Shi ◽  
Tong Zhang

Deep-learning technologies, especially convolutional neural networks (CNNs), have achieved great success in building extraction from aerial images. However, shape details are often lost during the down-sampling process, which results in discontinuous segmentations or inaccurate segmentation boundaries. To compensate for the loss of shape information, two shape-related auxiliary tasks (i.e., boundary prediction and distance estimation) were jointly learned with the building segmentation task in our proposed network. Meanwhile, two consistency-constraint losses were designed on top of the multi-task network to exploit the duality between the mask prediction and the two shape-related predictions. Specifically, an atrous spatial pyramid pooling (ASPP) module was appended to the top of the encoder of a U-shaped network to obtain multi-scale features. Based on the multi-scale features, one regression loss and two classification losses were used for predicting the distance-transform map, the segmentation, and the boundary. Two inter-task consistency-loss functions were constructed to ensure consistency between distance maps and masks, and between masks and boundary maps. Experimental results on three public aerial image data sets showed that our method achieved superior performance over recent state-of-the-art models.
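The structure of such a multi-task objective can be sketched as one scalar loss combining the three task terms plus a consistency penalty. The loss weights and the specific consistency formulation below are assumptions for illustration, not the paper's exact losses:

```python
import numpy as np

def multitask_loss(mask_p, boundary_p, dist_p, mask_t, boundary_t, dist_t,
                   lam=(1.0, 1.0, 1.0, 0.5)):
    """Illustrative combination of three task losses plus one consistency
    term: binary cross-entropy for mask and boundary, L1 regression for the
    distance map, and a mask/distance consistency penalty. The weights
    `lam` and the consistency formulation are assumptions for the sketch."""
    eps = 1e-7
    bce = lambda p, t: -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    l_mask = bce(mask_p, mask_t)
    l_bnd = bce(boundary_p, boundary_t)
    l_dist = np.mean(np.abs(dist_p - dist_t))
    # Consistency: inside the predicted mask, the predicted distance to the
    # boundary should be non-negative; penalize negative distances there.
    l_cons = np.mean(mask_p * np.maximum(0.0, -dist_p))
    return (lam[0] * l_mask + lam[1] * l_bnd
            + lam[2] * l_dist + lam[3] * l_cons)
```

The consistency term is what couples the tasks: a network cannot lower it by getting the mask right while predicting an incompatible distance map, which is the duality the abstract refers to.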


Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images is still a challenging task, since it is heavily influenced by spectral reflectance, shadows, and occlusions. To increase road detection accuracy, this paper studies a method for road detection using a geodesic active contour (GAC) model with edge feature extraction and segmentation. First, the edge feature is extracted using the proposed gradient magnitude with the Canny operator. Then, a reconstructed gradient map is used in a watershed transformation, whose segmentation provides the initial contour. Last, combining the edge feature and the initial contour, the boundary stopping function is applied in the GAC model, yielding the final road boundary. Experimental results, compared with other methods under the [Formula: see text]-measure system, show that the proposed method achieves satisfying results.
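The boundary stopping function is the standard GAC ingredient: a map that is close to 1 in flat regions and near 0 on strong edges, so the evolving contour slows and stops at road boundaries. A minimal sketch of the classical form g = 1 / (1 + |∇I|²) (the paper builds its own stopping function from the Canny-based edge feature; this is only the base formulation):

```python
import numpy as np

def edge_stopping_function(image):
    """Classical GAC edge-stopping function g = 1 / (1 + |grad I|^2):
    ~1 in homogeneous regions, ~0 on strong edges, so the contour
    evolution halts at boundaries. Not the paper's full Canny + watershed
    pipeline, just the standard GAC term it plugs into."""
    gy, gx = np.gradient(image.astype(float))  # gradients along rows, cols
    grad_mag2 = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + grad_mag2)
```

On a step edge the function drops sharply at the transition columns while staying at 1 in the flat interior, which is exactly the behaviour that anchors the contour to the road boundary.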


Author(s):  
WANG WEI ◽  
YANG XIN

This paper describes an innovative aerial image segmentation algorithm based on multiscale geometric analysis using the contourlet transform, which can extract an image's intrinsic geometric structure efficiently. The contourlet transform is introduced to represent the most distinguishing, rotation-invariant features of the image. A modified Mumford–Shah model segments the aerial image through a multifeature level set evolution. To avoid local minima in the level set evolution, we adjust the weighting coefficients of the multiscale features across evolution periods: the global features have larger weighting coefficients in the beginning stages, which roughly define the shape of the contour, and larger weighting coefficients are then assigned to the detailed features to segment the precise shape. When the algorithm is applied to segment aerial images containing several classes, satisfactory experimental results are achieved.
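The coarse-to-fine weighting schedule can be sketched as a simple interpolation between two weight pairs over the evolution. The linear schedule and the endpoint values below are illustrative assumptions; the paper does not specify its exact schedule in the abstract:

```python
def feature_weights(t, T, w_global=(0.9, 0.1), w_detail=(0.2, 0.8)):
    """Sketch of evolution-dependent feature weighting: global (coarse)
    features dominate early to avoid local minima, detailed features
    dominate late for precise boundaries. Linear interpolation and the
    endpoint weight pairs are assumptions for illustration."""
    a = min(max(t / T, 0.0), 1.0)          # progress through the evolution
    return tuple((1 - a) * g + a * d for g, d in zip(w_global, w_detail))
```

At iteration 0 the (global, detail) weights are (0.9, 0.1); by the final iteration they have crossed over to (0.2, 0.8), shifting the level set's attention from overall shape to fine boundary detail.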


Author(s):  
Yi-Ta Hsieh ◽  
Shou-Tsung Wu ◽  
Chaur-Tzuhn Chen ◽  
Jan-Chang Chen

The shadows in optical remote sensing images are regarded as image nuisances in numerous applications. The classification and interpretation of shadow areas in a remote sensing image are challenging because of the reduction or total loss of spectral information in those areas. In recent years, airborne multispectral aerial imaging devices such as the Leica ADS-40 and Intergraph DMC have been developed to deliver data with 12-bit or higher radiometric resolution. The increased radiometric resolution of digital imagery provides more radiometric detail of potential use in classifying or interpreting the land cover of shadow areas. The objectives of this study are therefore to analyze the spectral properties of land cover in shadow areas using ADS-40 high-radiometric-resolution aerial images, and to investigate the spectral and vegetation index differences between various shadowed and non-shadowed land covers. The spectral analysis of the ADS-40 imagery found that: (i) DN values in shadow areas are much lower than in non-shadow areas; (ii) DN values received from shadowed areas are still affected by the underlying land cover, which indicates that land cover properties can be retrieved there as in non-shadow areas; (iii) DN values received from shadowed regions decrease in the visible bands from short to long wavelengths due to scattering; (iv) the vegetation category still shows strong NIR reflection in shadow areas; and (v) vegetation indices such as NDVI generally remain useful for separating vegetation from non-vegetation in shadow areas. The spectral data of high-radiometric-resolution images such as the ADS-40's show potential for extracting land cover information from shadow areas.
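Finding (v) rests on the standard NDVI formula, which is a ratio and therefore partly cancels the overall brightness drop in shadow. A minimal implementation (the epsilon guard is an implementation detail, not part of the definition):

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel on DN or
    reflectance values. Because it is a band ratio, a uniform brightness
    reduction in shadow largely cancels, which is why the index can still
    separate vegetation in shadow when the radiometric resolution is high
    enough to resolve the shadowed DNs. Epsilon guards division by zero."""
    return (nir - red) / (nir + red + 1e-9)
```

Healthy vegetation (strong NIR, weak red reflectance) yields NDVI well above zero even at shadow-reduced DN levels, while soil and built surfaces stay near zero.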


Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advancements in computer vision and deep learning. This research has largely focused on indoor scenes or outdoor scenes captured at ground level; single image depth estimation from aerial images has been limited by the additional complexities of increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single image depth estimation, two, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images, which can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and produces depth similar to that of Pix4D. This comparison helps streamline efforts to produce depth from a single aerial image.
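The disparity-to-depth conversion mentioned above is the standard stereo relation depth = f · B / d. A minimal sketch (the focal length and baseline values in the usage below are illustrative, not the paper's camera parameters):

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Standard stereo relation for turning a per-pixel disparity (in
    pixels) into metric depth: depth = f * B / d, with focal length in
    pixels and baseline in metres. Zero disparity corresponds to a point
    at infinity."""
    if disparity <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity
```

For example, with a 1000 px focal length and a 0.5 m baseline, a disparity of 10 px maps to 50 m of depth; halving the disparity doubles the depth, which is why small disparity errors dominate at long UAV-to-ground distances.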


Author(s):  
H. Sun ◽  
Y. Ding ◽  
Y. Huang ◽  
G. Wang

Aerial images record large areas of the earth's surface with ever-improving spatial and radiometric resolution. They have become a powerful tool for earth observation, land-coverage survey, geographical census, etc., and help delineate the boundaries of different kinds of objects on the earth, both manually and automatically. In light of the geo-spatial correspondence between the pixel locations of an aerial image and the spatial coordinates of ground objects, there is an increasing need for super-pixel segmentation and high-accuracy positioning of objects in aerial images. Besides the commercial software packages eCognition and ENVI, many algorithms have been developed in the literature to segment objects in aerial images. But how to evaluate the segmentation results remains a challenge, especially in the context of this geo-spatial correspondence. The Geo-Hausdorff Distance (GHD) is proposed to measure the geo-spatial distance between the results of various object segmentations, whether produced against the manual ground truth or by automatic algorithms. Based on an early-breaking and random-sampling design, the GHD calculates the geographical Hausdorff distance with nearly-linear complexity. Segmentation results of several state-of-the-art algorithms, including those of the commercial packages, are evaluated on a diverse set of aerial images, which have different signal-to-noise ratios around the object boundaries and are hard to trace correctly even for human operators. The GHD values are analyzed to comprehensively measure the suitability of different object segmentation methods for aerial images of different spatial resolutions. By critically assessing the strengths and limitations of the existing algorithms, the paper provides valuable insight and guidance for further research on automating object detection and classification in aerial images for the nation-wide geographic census. It is also promising for the optimal design of operational specifications for remote sensing interpretation under the constraints of limited resources.
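The symmetric Hausdorff distance underlying the GHD can be sketched directly; the optional random sampling mirrors the random-sampling design mentioned in the abstract, while the early-breaking speed-up and the geographic (ellipsoidal) distance are not reproduced here, so coordinates are treated as planar metres:

```python
import math
import random

def geo_hausdorff(a, b, sample=None, seed=0):
    """Sketch of the symmetric Hausdorff distance between two boundary
    point sets (coordinates treated as planar metres for simplicity).
    Optional random subsampling of both sets mirrors the abstract's
    random-sampling design; the exact early-breaking GHD algorithm and
    geodesic distances are not reproduced."""
    rng = random.Random(seed)
    if sample:
        a = rng.sample(a, min(sample, len(a)))
        b = rng.sample(b, min(sample, len(b)))
    def directed(p, q):
        # Worst-case nearest-neighbour distance from p into q.
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))
```

The naive double loop is quadratic in the number of boundary points; the sampling (and, in the paper, early breaking of the inner minimum) is what brings the cost down to nearly linear.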


2019 ◽  
Vol 11 (19) ◽  
pp. 2219 ◽  
Author(s):  
Fatemeh Alidoost ◽  
Hossein Arefi ◽  
Federico Tombari

In this study, a deep learning (DL)-based approach is proposed for the detection and reconstruction of buildings from a single aerial image. The knowledge required to reconstruct the 3D shapes of buildings, including the height data as well as the linear elements of individual roofs, is derived from the RGB image using an optimized multi-scale convolutional–deconvolutional network (MSCDN). The proposed network is composed of two feature extraction levels that first predict coarse features and then automatically refine them. The predicted features include the normalized digital surface models (nDSMs) and the linear elements of roofs in three classes: eave, ridge, and hip lines. The prismatic models of buildings are then generated by analyzing the eave lines, and the parametric models of individual roofs are reconstructed using the predicted ridge and hip lines. The experiments show that, even in the presence of noise in the height values, the proposed method performs well on the 3D reconstruction of buildings of different shapes and complexities. The average root mean square error (RMSE) and normalized median absolute deviation (NMAD) metrics are about 3.43 m and 1.13 m, respectively, for the predicted nDSM. Moreover, the quality of the extracted linear elements is about 91.31% and 83.69% for the Potsdam and Zeebrugge test data, respectively. Unlike state-of-the-art methods, the proposed approach needs no additional or auxiliary data, employing a single image to reconstruct the 3D models of buildings with a competitive precision of about 1.2 m and 0.8 m for the horizontal and vertical RMSEs over the Potsdam data, and about 3.9 m and 2.4 m over the Zeebrugge test data.
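The two height-error metrics quoted above have standard definitions; note that NMAD (1.4826 · median(|e − median(e)|)) is robust to outliers, which is why it comes out much smaller than the RMSE on noisy nDSM predictions:

```python
import numpy as np

def rmse_nmad(pred, truth):
    """Root mean square error and normalized median absolute deviation of
    height errors. NMAD = 1.4826 * median(|e - median(e)|); the 1.4826
    factor makes it consistent with the standard deviation for Gaussian
    errors while staying robust to outliers."""
    e = np.asarray(pred, float) - np.asarray(truth, float)
    rmse = float(np.sqrt(np.mean(e ** 2)))
    nmad = float(1.4826 * np.median(np.abs(e - np.median(e))))
    return rmse, nmad
```

A single 6 m blunder among otherwise perfect heights drives the RMSE to 3 m while leaving the NMAD at zero, illustrating why the two metrics are reported together.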


2019 ◽  
Vol 8 (1) ◽  
pp. 47 ◽  
Author(s):  
Franz Kurz ◽  
Seyed Azimi ◽  
Chun-Yu Sheu ◽  
Pablo d’Angelo

The 3D information of road infrastructure is growing in importance with the development of autonomous driving. In this context, the exact 2D position of road markings, as well as height information, plays an important role in, e.g., the lane-accurate self-localization of autonomous vehicles. In this paper, the overall task is divided into an automatic segmentation followed by a refined 3D reconstruction. For the segmentation task, we applied a wavelet-enhanced fully convolutional network to multiview high-resolution aerial imagery. Based on the resulting 2D segments in the original images, we propose a successive workflow for the 3D reconstruction of road markings based on least-squares line fitting in multiview imagery. The 3D reconstruction exploits the line character of road markings, optimizing the 3D line location by minimizing the distance from its back projection to the detected 2D line in all covering images. Results showed an improved IoU for the automatic road marking segmentation, obtained by exploiting the multiview character of the aerial images, and a more accurate 3D reconstruction of the road surface compared to the semiglobal matching (SGM) algorithm. Further, the approach avoids the matching problem in non-textured image parts and is not limited to lines of finite length. The approach is presented and validated on several aerial image data sets covering different scenarios such as motorways and urban regions.
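The least-squares line fit at the core of the workflow has a standard closed form via SVD: the best-fit 3D line passes through the centroid of the points along their first principal direction. A minimal sketch (the multiview reprojection-distance minimization described in the abstract is not reproduced here):

```python
import numpy as np

def fit_3d_line(points):
    """Least-squares 3D line fit via SVD: the line passes through the
    centroid of the points along the first principal direction (the right
    singular vector with the largest singular value). This is the standard
    building block; minimizing back-projection distance across views is a
    separate, more involved step."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]  # unit vector along the fitted line
    return centroid, direction
```

For perfectly collinear points the residuals vanish and the recovered direction is exact up to sign, which is why line-based reconstruction is well conditioned even for thin, elongated markings.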


2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models have demonstrated acceptable performance by using image pyramids and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have successfully been used for object detection and have demonstrated considerably better performance than traditional detection methods; however, this success has not been extended to aerial images. To overcome these problems, we propose a detection model based on two CNNs. One CNN is designed to propose many object-like regions, generated from multi-scale, hierarchical feature maps that carry orientation information. With this design, small objects are positioned more accurately, and the generated regions, carrying orientation information, are better suited to objects arranged at arbitrary orientations. The other CNN is designed for object recognition; it first extracts the features of each generated region and then makes the final decisions. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
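Detection accuracy on benchmarks like VEDAI and OIRDS is ultimately scored by intersection-over-union between proposed and ground-truth regions. The axis-aligned base case is easy to sketch; the oriented-region variant the proposals above require is more involved and is not shown:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A proposal typically counts as a correct detection
    when its IoU with a ground-truth box exceeds a threshold such as 0.5.
    Rotated-box IoU, needed for oriented proposals, is not covered here."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

Because small aerial objects span few pixels, a localization error of only a couple of pixels can drop the IoU below threshold, which is why accurate positioning of small objects matters so much in this setting.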

