URBAN SCENE CLASSIFICATION USING FEATURES EXTRACTED FROM PHOTOGRAMMETRIC POINT CLOUDS ACQUIRED BY UAV

Author(s): G. G. Pessoa, R. C. Santos, A. C. Carrilho, M. Galo, A. Amorim

Abstract. Images and LiDAR point clouds are the two major data sources used by the photogrammetry and remote sensing community. Although different, the synergy between these two data sources has motivated exploration of their combined potential in various applications, especially for classification and information extraction in urban environments. Despite the efforts of the scientific community, integrating LiDAR data and images remains a challenging task. Meanwhile, the development of Unmanned Aerial Vehicles (UAVs), together with the integration and synchronization of positioning receivers, inertial systems and off-the-shelf imaging sensors, has enabled the exploitation of high-density photogrammetric point clouds (PPCs) as an alternative, obviating the need to integrate LiDAR and optical images. This study therefore aims to compare the results of PPC classification in urban scenes using the Random Forest algorithm with radiometric-only, geometric-only, and combined radiometric and geometric data. The following classes were considered: buildings, asphalt, trees, grass, bare soil, sidewalks and power lines, which encompass the most common objects in urban scenes. The classification procedure was performed considering radiometric features (Green band, Red band, NIR band, NDVI and Saturation) and geometric features (Height – nDSM, Linearity, Planarity, Scatter, Anisotropy, Omnivariance and Eigenentropy). The quantitative analyses were performed by means of the classification error matrix using the following metrics: overall accuracy, recall and precision, yielding overall accuracies of 0.80, 0.74 and 0.98 for the radiometric, geometric and combined data, respectively.
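As a rough illustration of the comparison protocol described in this abstract, the sketch below trains a Random Forest on each feature subset and reports the same three metrics. It is a minimal sketch assuming scikit-learn; the per-point feature arrays are synthetic placeholders, not the authors' data or pipeline.

```python
# Minimal sketch of the feature-subset comparison, using scikit-learn.
# All feature values and labels are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
radiometric = rng.random((n, 5))  # Green, Red, NIR, NDVI, Saturation
geometric = rng.random((n, 7))    # nDSM, Linearity, Planarity, Scatter,
                                  # Anisotropy, Omnivariance, Eigenentropy
labels = rng.integers(0, 7, n)    # 7 urban classes (buildings, asphalt, ...)

for name, X in [("radiometric", radiometric),
                ("geometric", geometric),
                ("combined", np.hstack([radiometric, geometric]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"OA={accuracy_score(y_te, y_hat):.2f}",
          f"recall={recall_score(y_te, y_hat, average='macro', zero_division=0):.2f}",
          f"precision={precision_score(y_te, y_hat, average='macro', zero_division=0):.2f}")
```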

Author(s): L. H. Hughes, S. Auer, M. Schmitt

In this paper, we present a workflow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable a simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a WorldView-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55% of the scene is visible in both images and is thus suitable for matching and data fusion endeavours, while about 25% of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can provide support regarding the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.
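The per-pixel bookkeeping behind such joint-visibility statistics can be illustrated with boolean masks, as in the hedged sketch below; the shadow and occlusion masks here are random placeholders, and SimGeoI itself is not used.

```python
# Toy sketch of joint-visibility statistics from per-pixel masks.
import numpy as np

rng = np.random.default_rng(1)
radar_shadow = rng.random((512, 512)) < 0.15       # pixels lost to SAR shadow
optical_occlusion = rng.random((512, 512)) < 0.12  # pixels occluded optically

jointly_visible = ~radar_shadow & ~optical_occlusion  # usable for matching/fusion
affected = radar_shadow | optical_occlusion           # lost in at least one image

print(f"jointly visible: {jointly_visible.mean():.1%}")
print(f"shadow/occlusion affected: {affected.mean():.1%}")
```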


Author(s):  
Angel-Ivan Garcia-Moreno

Abstract. The digitization of geographic environments, such as cities and archaeological sites, is of priority interest to the scientific community due to its potential applications, but several issues remain to be addressed. Various digitization strategies exist, involving terrestrial or airborne platforms equipped with different sensors, most commonly cameras and laser scanners. A comprehensive methodology is presented to reconstruct urban environments using a mobile land platform. All the implemented stages are described, including the acquisition, processing, and correlation of the data delivered by a Velodyne HDL-64E scanner, a spherical camera, GPS, and inertial systems. The process of merging several point clouds to build a large-scale map is described, as well as the generation of surfaces, making it possible to render large urban areas with a low point density without losing the details of the structures within the urban scenes. The proposal is evaluated using several metrics, for example, coverage and Root-Mean-Square Error (RMSE). The results are compared against three methodologies reported in the literature, obtaining better results in the 2D/3D data fusion process and in the generation of surfaces. The described method has a low RMSE (0.79) compared to the other methods and a runtime of approximately 40 seconds to process each data set (point cloud, panoramic image, and inertial data). In general, the proposed methodology shows a more homogeneous density distribution without losing detail; that is, it preserves the spatial distribution of the points but with fewer data.
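A minimal sketch of the two evaluation metrics named above (coverage and RMSE), computed from nearest-neighbour distances between a reconstructed cloud and a reference cloud. The point data and the 1 m coverage threshold are assumptions, not values from the paper.

```python
# Nearest-neighbour RMSE and coverage between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
reference = rng.random((50_000, 3)) * 100.0                   # dense reference cloud
reconstruction = reference[::5] + rng.normal(0, 0.5, (10_000, 3))  # sparser, noisy model

tree = cKDTree(reconstruction)
dists, _ = tree.query(reference, k=1)  # distance from each reference point to the model

rmse = np.sqrt(np.mean(dists ** 2))
coverage = np.mean(dists < 1.0)        # fraction of reference within 1 m of the model
print(f"RMSE = {rmse:.2f} m, coverage = {coverage:.1%}")
```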


2018, Vol. 7 (9), pp. 342
Author(s): Adam Salach, Krzysztof Bakuła, Magdalena Pilarska, Wojciech Ostrowski, Konrad Górski, et al.

In this paper, the results of an experiment on the vertical accuracy of generated digital terrain models are assessed. The created models were based on two techniques: LiDAR and photogrammetry. The data were acquired using an ultralight laser scanner dedicated to Unmanned Aerial Vehicle (UAV) platforms, which provides very dense point clouds (180 points per square meter), and an RGB digital camera that collects data at very high resolution (a ground sampling distance of 2 cm). The vertical error of the digital terrain models (DTMs) was evaluated based on surveying data measured in the field and compared to airborne laser scanning collected with a manned plane. The data were acquired in summer during a corridor flight mission over levees and their surroundings, where various types of land cover were observed. The experimental results showed unequivocally that the terrain models obtained using LiDAR technology were more accurate. An attempt to assess the accuracy and penetration capability of the point cloud from the image-based approach with respect to various types of land cover was conducted based on Real Time Kinematic Global Navigation Satellite System (GNSS-RTK) measurements and compared to archival airborne laser scanning data. The vertical accuracy of the DTMs was evaluated for uncovered and vegetated areas separately, providing information about the influence of vegetation height on the results of bare-ground extraction and DTM generation. In uncovered and low-vegetation areas (0–20 cm), the vertical accuracies of digital terrain models generated from the different data sources were quite similar: for the UAV Laser Scanning (ULS) data, the RMSE was 0.11 m, and for the image-based data collected using the UAV platform, it was 0.14 m; for medium vegetation (higher than 60 cm), the RMSEs from these two data sources were 0.11 m and 0.36 m, respectively. A decrease in accuracy of 0.10 m for every 20 cm of vegetation height was observed for the photogrammetric data; no such dependency was noticed in the case of models created from the ULS data.
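The per-class accuracy assessment described above can be sketched as follows: DTM heights are compared against GNSS-RTK checkpoint heights, with the RMSE computed separately per vegetation-height class. All arrays and class boundaries are synthetic placeholders chosen only to mirror the classes mentioned in the abstract.

```python
# Vertical DTM accuracy per vegetation-height class (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
n = 500
z_rtk = rng.random(n) * 5.0           # surveyed ground heights (m)
veg_cm = rng.integers(0, 120, n)      # vegetation height at each checkpoint (cm)
z_dtm = z_rtk + rng.normal(0, 0.12, n) + veg_cm * 0.002  # synthetic DTM errors

for lo, hi, name in [(0, 20, "uncovered/low"),
                     (20, 60, "medium-low"),
                     (60, 120, "medium")]:
    sel = (veg_cm >= lo) & (veg_cm < hi)
    rmse = np.sqrt(np.mean((z_dtm[sel] - z_rtk[sel]) ** 2))
    print(f"{name:>14} ({lo}-{hi} cm): RMSE = {rmse:.2f} m (n = {sel.sum()})")
```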


Author(s): X.-F. Xing, M. A. Mostafavi, G. Edwards, N. Sabo

Abstract. Automatic semantic segmentation of point clouds observed in a complex 3D urban scene is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithms requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals and on "directional height above" features, which compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results obtained from our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building and ground classes in airborne LiDAR point clouds of urban areas.
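The "directional height above" idea lends itself to a compact sketch: for each point, height differences to neighbours within a search radius are summarised separately in eight azimuthal sectors. This is our own reading of the feature, not the authors' code, and the 2 m radius is an assumption.

```python
# Eight-direction height-difference features per point (illustrative).
import numpy as np
from scipy.spatial import cKDTree

def directional_height_above(xyz, radius=2.0):
    tree = cKDTree(xyz[:, :2])                 # neighbour search in the XY plane
    feats = np.zeros((len(xyz), 8))
    for i, p in enumerate(xyz):
        nbrs = xyz[tree.query_ball_point(p[:2], radius)]
        d = nbrs[:, :2] - p[:2]
        # bin each neighbour into one of eight 45-degree azimuth sectors
        sector = ((np.arctan2(d[:, 1], d[:, 0]) + np.pi) / (np.pi / 4)).astype(int) % 8
        dz = p[2] - nbrs[:, 2]                 # how far the point sits above each neighbour
        for s in range(8):
            m = sector == s
            if m.any():
                feats[i, s] = dz[m].max()      # largest relative elevation per direction
    return feats

xyz = np.random.default_rng(4).random((1000, 3)) * [50, 50, 10]
print(directional_height_above(xyz).shape)     # (1000, 8)
```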


Author(s): Leena Matikainen, Juha Hyyppä, Paula Litkey

During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, including for ground-level objects and classes such as roads. An out-of-bag estimate of the classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other; for buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages for the development of automated classification and change detection procedures.
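The out-of-bag error estimate mentioned above is directly available from common random forest implementations; the sketch below shows the pattern with scikit-learn, on synthetic stand-ins for the multispectral intensity features (the Optech Titan data are not reproduced here).

```python
# Out-of-bag error and feature importances with scikit-learn (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.random((5000, 6))        # e.g. intensities from 3 channels + height features
y = rng.integers(0, 4, 5000)     # asphalt, gravel, rocky areas, low vegetation

clf = RandomForestClassifier(n_estimators=200, oob_score=True,
                             random_state=0).fit(X, y)
print(f"OOB error = {1 - clf.oob_score_:.3f}")
print(clf.feature_importances_)  # basis for the channel-importance comparison
```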


2020
Author(s): Alexander E. Zarebski, Louis du Plessis, Kris V. Parag, Oliver G. Pybus

Inferring the dynamics of pathogen transmission during an outbreak is an important problem in both infectious disease epidemiology and phylodynamics. In mathematical epidemiology, estimates are often informed by time series of infected cases, while in phylodynamics genetic sequences sampled through time are the primary data source. Each data type provides different, and potentially complementary, insights into transmission. However, inference methods are typically highly specialised and field-specific. Recent studies have recognised the benefits of combining data sources, which include improved estimates of the transmission rate and the number of infected individuals. However, the methods they employ are either computationally prohibitive or require intensive simulation, limiting their real-time utility. We present a novel birth-death phylogenetic model, called TimTam, which can be informed by both phylogenetic and epidemiological data. Moreover, we derive a tractable analytic approximation of the TimTam likelihood, the computational complexity of which is linear in the size of the data set. Using TimTam, we show how key parameters of transmission dynamics and the number of unreported infections can be estimated accurately from these heterogeneous data sources. The approximate likelihood facilitates inference on large data sets, an important consideration as such data become increasingly common due to improving sequencing capability.
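To make the two heterogeneous data types concrete, the toy sketch below runs a Gillespie simulation of a linear birth-death-sampling process: births are transmissions, deaths are unobserved removals, and sampling events produce the sequenced cases that feed a phylodynamic analysis. The rates are arbitrary, and TimTam's likelihood itself is not reproduced here.

```python
# Gillespie simulation of a birth-death-sampling process (illustrative rates).
import numpy as np

rng = np.random.default_rng(6)
lam, mu, psi = 1.5, 0.5, 0.3   # birth (transmission), death, sampling rates
n, t, t_end = 1, 0.0, 10.0     # start from a single infected individual
sample_times = []

while n > 0 and t < t_end:
    t += rng.exponential(1.0 / (n * (lam + mu + psi)))
    u = rng.random()
    if u < lam / (lam + mu + psi):
        n += 1                       # transmission: one new infection
    elif u < (lam + mu) / (lam + mu + psi):
        n -= 1                       # unobserved removal
    else:
        n -= 1                       # sampled removal: enters the sequence data
        sample_times.append(t)

print(f"{len(sample_times)} samples; final prevalence {n}")
```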


Author(s): J. Schachtschneider, C. Brenner

Abstract. The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects which change over time. Since a permanent observation of the environment is impossible and there will always be a first visit to an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany, with a Mobile Mapping System over a period of one year in bi-weekly measurements. It covers a variety of urban objects and areas, weather conditions and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results.
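One simple way to picture the scan-versus-map comparison is a Gaussian point-to-map measurement likelihood scored against several map epochs, keeping the epoch that explains the scan best. The sketch below is only an illustration under that assumed noise model; the campaign data and the paper's actual likelihood are not reproduced.

```python
# Score a scan against multi-temporal maps with a Gaussian noise model.
import numpy as np
from scipy.spatial import cKDTree

def log_likelihood(scan, map_points, sigma=0.2):
    d, _ = cKDTree(map_points).query(scan, k=1)   # point-to-map distances
    return np.sum(-0.5 * (d / sigma) ** 2)        # Gaussian model, up to a constant

rng = np.random.default_rng(7)
map_summer = rng.random((20_000, 3)) * 50
map_winter = map_summer + rng.normal(0, 0.3, map_summer.shape)   # seasonal change
scan = map_summer[::20] + rng.normal(0, 0.05, (1000, 3))         # scan taken in summer

scores = {"summer": log_likelihood(scan, map_summer),
          "winter": log_likelihood(scan, map_winter)}
print(max(scores, key=scores.get))  # the best-matching map epoch
```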


Author(s): Han Hu, Chongtai Chen, Bo Wu, Xiaoxia Yang, Qing Zhu, et al.

Textureless areas and geometric discontinuities are major problems for state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost measures, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes and must compromise between smoothness and discontinuities. The aim of this study is to provide a method that overcomes these issues in dense image matching by extending the industry-proven Semi-Global Matching through (1) developing a ternary census transform, which allows three outcomes in a single ordering comparison and encodes the result in two bits rather than one, and (2) using texture information to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms show that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
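A ternary census transform as described can be sketched directly: each neighbour-versus-centre comparison yields one of three outcomes (clearly darker, similar, clearly brighter) encoded in two bits, so near-equal intensities in textureless areas no longer flip randomly between 0 and 1. The window size and similarity threshold eps below are assumptions, not the paper's values.

```python
# Illustrative ternary census transform and Hamming-style matching cost.
import numpy as np

def ternary_census(img, window=5, eps=2):
    r = window // 2
    pad = np.pad(img.astype(np.int32), r, mode="edge")
    h, w = img.shape
    codes = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nbr = pad[r + dy: r + dy + h, r + dx: r + dx + w]
            diff = nbr - img
            # three outcomes per comparison: 0 = darker, 1 = similar, 2 = brighter
            codes.append(np.where(diff < -eps, 0, np.where(diff > eps, 2, 1)))
    return np.stack(codes, axis=-1)   # shape (h, w, window*window - 1)

def cost(c1, c2):
    # matching cost: number of differing per-neighbour codes
    return np.sum(c1 != c2, axis=-1)

img = np.random.default_rng(8).integers(0, 256, (64, 64))
c = ternary_census(img)
print(c.shape, cost(c, c).max())      # (64, 64, 24) 0
```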

