AUTOMATIC FEATURE DETECTION, DESCRIPTION AND MATCHING FROM MOBILE LASER SCANNING DATA AND AERIAL IMAGERY

Author(s):  
Zille Hussnain ◽  
Sander Oude Elberink ◽  
George Vosselman

In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often unreliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method that uses corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique that exploits the arrangement of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences achieves pixel-level accuracy at an image resolution of 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
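The described outlier filtering — checking that the relative Euclidean distances and angles between feature points agree across the two image patches — can be sketched as follows. This is a minimal illustration under the assumption that the patches are already roughly aligned; the tolerances and support threshold are illustrative, not the paper's actual parameters.

```python
import numpy as np

def filter_matches(pts_a, pts_b, dist_tol=2.0, ang_tol=np.deg2rad(5), min_support=0.5):
    """Keep matches whose pairwise distances and directions agree across images.

    pts_a, pts_b: (N, 2) arrays of matched feature coordinates (same units).
    A match i is an inlier if, for at least `min_support` of the other matches j,
    |d_a(i,j) - d_b(i,j)| <= dist_tol and the directions of the vectors i->j agree
    within `ang_tol`. Assumes no large rotation between the two patches.
    """
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    n = len(pts_a)
    va = pts_a[None, :, :] - pts_a[:, None, :]   # (n, n, 2) vectors i->j in image A
    vb = pts_b[None, :, :] - pts_b[:, None, :]
    da = np.linalg.norm(va, axis=2)
    db = np.linalg.norm(vb, axis=2)
    ang_a = np.arctan2(va[..., 1], va[..., 0])
    ang_b = np.arctan2(vb[..., 1], vb[..., 0])
    dang = np.abs(np.angle(np.exp(1j * (ang_a - ang_b))))  # wrapped angle difference
    agree = (np.abs(da - db) <= dist_tol) & (dang <= ang_tol)
    np.fill_diagonal(agree, False)               # a match does not support itself
    support = agree.sum(axis=1) / max(n - 1, 1)
    return support >= min_support                # boolean inlier mask
```

A match is kept only when enough of the other matches are geometrically consistent with it, which discards descriptor matches that landed on the wrong road-marking vertex.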



2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field conditions: complex lighting and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground truth segmentation with pixel precision, to facilitate comparison among different algorithms.
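The abstract does not define "mean accuracy"; assuming it is the mean of the per-class pixel accuracies — a common choice when crop pixels are a minority of the image — it can be computed as:

```python
import numpy as np

def mean_class_accuracy(pred, truth):
    """Mean of per-class pixel accuracies for a binary crop/non-crop mask.

    pred, truth: boolean arrays of the same shape (True = crop pixel).
    Averaging per-class recalls keeps a large background from dominating
    the score, which matters when crop pixels are a minority of the image.
    """
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    accs = []
    for cls in (True, False):
        sel = truth == cls
        if sel.any():                            # skip a class absent from truth
            accs.append(np.mean(pred[sel] == cls))
    return float(np.mean(accs))
```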


2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models have demonstrated acceptable performance by using an image pyramid and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have successfully been used for object detection, demonstrating considerably superior performance to that of traditional detection methods; however, this success has not been extended to aerial images. To overcome these problems, we propose a detection model based on two CNNs. One CNN is designed to propose many object-like regions, generated from feature maps at multiple scales and hierarchies together with orientation information. With this design, the positioning of small objects becomes more accurate, and the generated regions with orientation information are better suited to objects arranged at arbitrary orientations. The other CNN is designed for object recognition; it first extracts the features of each generated region and subsequently makes the final decision. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
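In the simplest form, oriented region proposals of this kind start from a grid of anchors enumerated over scales, aspect ratios, and angles at each feature-map location; a sketch, with illustrative values rather than the paper's hyper-parameters:

```python
import numpy as np

def oriented_anchors(cx, cy, scales=(16, 32), ratios=(1.0, 2.0),
                     angles_deg=(0, 45, 90, 135)):
    """Enumerate oriented anchor boxes (cx, cy, w, h, angle) at one
    feature-map location. The scales, aspect ratios, and angle grid here
    are illustrative defaults; the abstract does not give the actual ones.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            # keep the area s*s constant while varying the aspect ratio
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            for a in angles_deg:
                anchors.append((cx, cy, w, h, float(a)))
    return anchors
```

A recognition network then scores each anchor, so adding the angle dimension multiplies the proposal count; that trade-off is why the angle grid is usually kept coarse.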


2020 ◽  
Vol 12 (21) ◽  
pp. 3630
Author(s):  
Jin Liu ◽  
Haokun Zheng

Object detection and recognition in aerial and remote sensing images has become a hot topic in the field of computer vision in recent years. As these images are usually taken from a bird's-eye view, the targets often have different shapes and are densely arranged; using an oriented bounding box to mark a target is therefore the mainstream choice. However, the general detection method is designed around horizontal box annotations, while improved methods that detect oriented bounding boxes have high computational complexity. In this paper, we propose a method called ellipse field network (EFN) to organically integrate semantic segmentation and object detection. It predicts the probability distribution of the target and obtains accurate oriented bounding boxes through a post-processing step. We tested our method on the HRSC2016 and DOTA data sets, achieving mAP values of 0.863 and 0.701, respectively. We also tested the performance of EFN on natural images and obtained a mAP of 84.7 on the VOC2012 data set. These extensive experiments demonstrate that EFN achieves state-of-the-art results on aerial image tests and obtains a good score on natural images.
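An oriented bounding box is typically parameterised by centre, size, and rotation angle; recovering its corner coordinates is the generic geometry below (EFN's own post-processing step is not specified in the abstract):

```python
import numpy as np

def obb_corners(cx, cy, w, h, theta):
    """Corner coordinates of an oriented bounding box.

    (cx, cy) is the centre, (w, h) the side lengths, theta the rotation in
    radians (counter-clockwise). Returns a (4, 2) array of corners.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])              # 2D rotation matrix
    local = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                      [w / 2,  h / 2], [-w / 2,  h / 2]])
    return local @ R.T + np.array([cx, cy])      # rotate, then translate
```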


2017 ◽  
Vol 30 ◽  
pp. 447-468 ◽  
Author(s):  
Nick Hannon ◽  
Darrell J. Rohl ◽  
Lyn Wilson

The “Hidden Landscape of a Roman Frontier” is a collaborative research project run and jointly funded by Canterbury Christ Church University (CCCU) and Historic Environment Scotland (HES). Intended to run for a 3-year period, it began in October 2015. The project focuses on the landscape archaeology, history, and heritage management of the Roman frontier in Scotland, part of the “Frontiers of the Roman Empire” transnational UNESCO World Heritage Site since 2008. The project's primary data set comprises aerial LiDAR at 0.5-m resolution covering the World Heritage Site, combined with terrestrial laser-scanning coverage of the forts at Bar Hill and Rough Castle and the fortlet at Kinneil. All data were commissioned under the auspices of the Scottish Ten Project; the aerial data were captured in spring 2010, the terrestrial data in July 2013 and April 2016. The project also draws upon a number of supplemental data sources, including the National Monuments Record of Scotland (https://canmore.org.uk/), geophysical survey data, archive aerial images, colour infra-red imagery, and additional LiDAR data from the UK Environment Agency.


Author(s):  
C. Chen ◽  
W. Gong ◽  
Y. Hu ◽  
Y. Chen ◽  
Y. Ding

Automated building detection in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the Region-based CNN model (R-CNN) for object detection has been receiving increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is a vertical view, and buildings possess a significant directional feature; however, in the R-CNN model, the direction of the building is ignored and the detection results are represented by horizontal rectangles. For this reason, detection results with horizontal rectangles cannot describe buildings precisely. To address this problem, we propose a novel model with a key feature related to orientation, namely, Oriented R-CNN (OR-CNN). Our contributions are mainly in the following two aspects: 1) introducing a new oriented layer network for detecting the rotation angle of a building on the basis of the successful VGG-net R-CNN model; 2) proposing the oriented rectangle to leverage the powerful R-CNN for remote-sensing building detection. In our experiments, we establish a complete and brand-new data set for training our oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate state-of-the-art results compared with the previous baseline methods.
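As a stand-in for the oriented layer's angle output, which the abstract describes but does not specify, a building footprint's dominant orientation can be estimated from a binary mask by PCA of its pixel coordinates:

```python
import numpy as np

def principal_orientation(mask):
    """Estimate a footprint's dominant orientation (radians, in [0, pi))
    from a binary mask via PCA of its pixel coordinates. A hypothetical
    stand-in for a learned angle regressor, useful for turning an
    axis-aligned detection mask into an oriented rectangle."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # centre the point cloud
    cov = pts.T @ pts / len(pts)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    major = vecs[:, np.argmax(vals)]             # axis of largest variance
    return np.arctan2(major[1], major[0]) % np.pi
```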


Author(s):  
Z. Hussnain ◽  
S. Oude Elberink ◽  
G. Vosselman

Abstract. In this paper, a method is presented to improve the MLS platform's trajectory in GNSS-denied areas. The method comprises two major steps. The first step is based on a 2D image registration technique described in our previous publication. Internally, this registration technique first performs aerial-to-aerial image matching, which yields correspondences that enable computing 3D tie points by multiview triangulation. Similarly, it registers the rasterized Mobile Laser Scanning Point Cloud (MLSPC) patches with the multiple related aerial image patches. The latter registration provides the correspondence between the aerial-to-aerial tie points and the MLSPC's 3D points. In the second step, which is described in this paper, a procedure utilizes three kinds of observations to improve the MLS platform's trajectory. The first type of observation is the set of 3D tie points computed automatically in the previous step (and already available), the second is based on IMU readings, and the third is a soft constraint over related pose parameters. In this situation, the 3D tie points are considered accurate and precise observations, since they provide both locally and globally strict constraints, whereas the IMU observations and soft constraints only provide locally precise constraints. For 6DOF trajectory representation, the pose [R, t] parameters are first converted to six B-spline functions over time. Then, for the trajectory adjustment, the coefficients of the B-splines are updated from the established observations. We tested our method on an MLS data set acquired at a test area in Rotterdam and verified the trajectory improvement by evaluation with independently and manually measured GCPs. After the adjustment, the trajectory achieves an accuracy of RMSE X = 9 cm, Y = 14 cm and Z = 14 cm. Analysing the error in the updated trajectory suggests that our procedure is effective at adjusting the 6DOF trajectory and at regenerating a reliable MLSPC product.
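Each of the six pose parameters is represented as a B-spline function of time; evaluating one such component can be sketched with De Boor's algorithm (a generic implementation, assuming a clamped knot vector — the paper's exact spline setup is not given):

```python
import numpy as np

def de_boor(t, knots, coeffs, degree=3):
    """Evaluate one B-spline trajectory component at time t via De Boor's
    algorithm. In a setup like the paper's, each of the six pose parameters
    gets its own coefficient vector over a shared knot sequence, and the
    trajectory is adjusted by updating `coeffs` only, which keeps the
    result smooth."""
    k = degree
    # locate the knot interval containing t, clamped to the valid range
    i = int(np.searchsorted(knots, t, side='right')) - 1
    i = min(max(i, k), len(knots) - k - 2)
    d = [coeffs[j + i - k] for j in range(k + 1)]
    for r in range(1, k + 1):                    # triangular De Boor recursion
        for j in range(k, r - 1, -1):
            denom = knots[j + 1 + i - r] - knots[j + i - k]
            alpha = (t - knots[j + i - k]) / denom
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[k]
```

Because the basis has local support, updating one coefficient from the tie-point and IMU observations only bends the trajectory locally, which is what makes the adjustment tractable.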


Author(s):  
T. Kemper ◽  
N. Mudau ◽  
P. Mangara ◽  
M. Pesaresi

Urban areas in sub-Saharan Africa are growing at an unprecedented pace, and much of this growth is taking place in informal settlements. In South Africa, more than 10% of the population live in urban informal settlements. South Africa has established a National Informal Settlement Development Programme (NUSP) to respond to these challenges. This programme is designed to support the National Department of Human Settlement (NDHS) in its implementation of the Upgrading Informal Settlements Programme (UISP), with the objective of eventually upgrading all informal settlements in the country. Currently, the NDHS does not have access to an updated national dataset, captured at the same scale and from the same source data, that can be used to understand the status of informal settlements in the country.

This pilot study is developing a fully automated workflow for the wall-to-wall processing of SPOT-5 satellite imagery of South Africa. The workflow includes automatic image information extraction based on multiscale textural and morphological image features. Advanced image feature compression and optimization, together with innovative learning and classification techniques, allow processing of the SPOT-5 images using the Landsat-based National Land Cover (NLC) of South Africa from the year 2000 as a low-resolution thematic reference layer. The workflow was tested on 42 SPOT scenes selected by stratified sampling. The derived building information was validated against a visually interpreted building point data set and produced an accuracy of 97 per cent. Given this positive result, it is planned to process the most recent wall-to-wall coverage, as well as the archived imagery available since 2007, in the near future.


2020 ◽  
Vol 12 (9) ◽  
pp. 1404
Author(s):  
Saleh Javadi ◽  
Mattias Dahl ◽  
Mats I. Pettersson

Interest in aerial image analysis has increased owing to recent developments in, and the availability of, aerial imaging technologies such as unmanned aerial vehicles (UAVs), as well as a growing need for autonomous surveillance systems. Varying illumination, intensity noise, and different viewpoints are among the main challenges to overcome in determining changes in aerial images. In this paper, we present a robust method for change detection in aerial images. To accomplish this, the method extracts three-dimensional (3D) features for segmentation of objects above a defined reference surface at each instant. The acquired 3D feature maps from the two measurements are then used to determine changes in a scene over time. In addition, the important parameters that affect measurement, such as the camera's sampling rate, the image resolution, the height of the drone, and the pixel's height information, are investigated through a mathematical model. To exhibit its applicability, the proposed method has been evaluated on aerial images of various real-world locations, and the results are promising. The performance indicates the robustness of the method in addressing the problems of conventional change detection methods, such as intensity differences and shadows.
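The core idea — segmenting objects above a reference surface at each instant and differencing the results — can be sketched as follows; the height threshold is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def height_change_mask(height_t0, height_t1, reference, min_height=1.0):
    """Flag pixels where an above-surface object appeared or disappeared.

    height_t0, height_t1: per-pixel height maps (e.g. from 3D reconstruction)
    at two instants; `reference` is the ground/reference surface. Working on
    heights above the reference rather than raw intensities sidesteps the
    illumination and shadow problems that plague intensity differencing.
    """
    above_t0 = (height_t0 - reference) > min_height
    above_t1 = (height_t1 - reference) > min_height
    return above_t0 != above_t1                  # change = object appeared or vanished
```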


2006 ◽  
Vol 33 (10) ◽  
pp. 1320-1331 ◽  
Author(s):  
Jin Gon Kim ◽  
Dong Yeob Han ◽  
Ki Yun Yu ◽  
Yong Il Kim ◽  
Sung Mo Rhee

The efficient extraction of road information is increasingly important with the rapid growth of road-related services, such as car navigation systems, telematics, and location-based services. Conventional methods of creating and updating road information are expensive and time consuming; therefore, a set of processes is required that collects the same information more efficiently. We propose a new method for collecting road information in complex urban areas from road pavement markings in aerial images. This information includes lane and symbol markings that guide direction; the geometric properties of the pavement markings and their spatial relationships are analyzed. Road construction manuals and a series of cutting-edge techniques, including template matching, are used in our analysis. To validate our approach, the accuracy of our results was evaluated by comparison with manually extracted ground truth data. Our approach demonstrates that road information can be extracted efficiently, to an extent, in a complex urban area.

Key words: aerial image, automatic extraction, pavement marking, road information, CNS.
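The template-matching step can be sketched as a brute-force normalized cross-correlation search; this is a generic illustration, not the authors' implementation, which also exploits road-construction-manual geometry:

```python
import numpy as np

def ncc_match(image, template):
    """Locate a pavement-marking template in a grey-level image patch by
    normalized cross-correlation; returns the (row, col) of the best
    match's top-left corner. Zero-mean normalization makes the score
    insensitive to local brightness and contrast."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

In practice the search would be restricted to detected road regions and repeated over the template rotations that the marking's legal orientations allow.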

