Measuring Streetscape Features with High-Density Aerial Light Detection and Ranging

Author(s):  
Yaneev Golombek ◽  
Wesley E. Marshall

This study investigates the feasibility of extracting streetscape features from high-density United States Geological Survey (USGS) quality level 1 (QL1) light detection and ranging (LiDAR) data and quantifying the features into three-dimensional (3D) volumetric pixel (voxel) zones. As the USGS builds a national LiDAR database with the goal of covering the conterminous U.S.A., it primarily requires QL2 or QL1 as the collection standard. The authors’ previous study thoroughly investigated the limits of extracting streetscape features with QL2 data, which was largely confined to buildings and street trees. Recent studies by other researchers using advanced digital mapping techniques for streetscape measurement acknowledge that most features other than buildings and street trees are too small to detect. QL1 data, however, are four times denser than QL2 data. This study divides streetscapes into Thiessen proximal polygons, sets voxel parameters, classifies QL1 LiDAR point cloud data, and computes quantitative statistics where the classified point cloud data intersect voxels within the streetscape polygons. It demonstrates that most other common streetscape features are detectable in a standard urban QL1 dataset. In addition to street trees and buildings, one can also legitimately extract and statistically quantify walls, fences, landscape vegetation, light posts, traffic lights, power poles, power lines, street signs, and miscellaneous street furniture. Furthermore, as these features are processed into their appropriate voxel height zones, this study introduces a new methodology for obtaining comprehensive tabular descriptive statistics that describe how streetscape features are truly represented in 3D.
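The voxel-zone quantification described above reduces to a binning step: each classified point is assigned to a height zone and counted per (class, zone) pair. The sketch below is illustrative only; the class codes (an ASPRS-style subset) and zone boundaries are assumptions, not the authors' actual QL1 parameters.

```python
from collections import Counter

# Assumed height-zone boundaries in metres above ground (not the study's values)
ZONE_EDGES = [0.0, 0.5, 2.0, 5.0, 10.0, 30.0]

def zone_of(height):
    """Return the index of the voxel height zone containing `height`, or None."""
    for i in range(len(ZONE_EDGES) - 1):
        if ZONE_EDGES[i] <= height < ZONE_EDGES[i + 1]:
            return i
    return None  # outside the zones of interest

def zone_statistics(points):
    """points: iterable of (class_code, height_above_ground) tuples.
    Returns {(class_code, zone_index): point_count}."""
    counts = Counter()
    for cls, h in points:
        z = zone_of(h)
        if z is not None:
            counts[(cls, z)] += 1
    return dict(counts)
```

Per-zone counts like these are what feed the tabular descriptive statistics; real data would add areas, densities, and intensities per zone.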

2020 ◽  
Vol 10 (13) ◽  
pp. 4486 ◽  
Author(s):  
Yongbeom Lee ◽  
Seongkeun Park

In this paper, we propose a deep learning-based perception method for autonomous driving systems using Light Detection and Ranging (LiDAR) point cloud data, called the simultaneous segmentation and detection network (SSADNet). SSADNet can recognize both drivable areas and obstacles, which is necessary for autonomous driving. Unlike previous methods, where separate networks were needed for segmentation and detection, SSADNet performs segmentation and detection simultaneously with a single neural network. The proposed method uses point cloud data obtained from a 3D LiDAR as network input to generate a top-view image consisting of three channels: distance, height, and reflection intensity. The structure of the proposed network includes a branch for segmentation, a branch for detection, and a bridge connecting the two parts. The KITTI dataset, which is often used for experiments on autonomous driving, was used for training. The experimental results show that segmentation and detection can be performed simultaneously for drivable areas and vehicles at a fast inference speed, which is appropriate for autonomous driving systems.
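SSADNet's input encoding, a top-view image with distance, height, and intensity channels, can be illustrated with a minimal grid projection. The grid size, cell size, and max-pooling rule below are assumptions for illustration, not the paper's exact preprocessing.

```python
import math

def top_view_image(points, grid=4, cell=1.0):
    """Project 3D LiDAR points into a top-view grid with three channels:
    distance from the sensor, height, and reflection intensity (max-pooled
    per cell). points: iterable of (x, y, z, intensity); sensor at origin."""
    img = [[[0.0, 0.0, 0.0] for _ in range(grid)] for _ in range(grid)]
    for x, y, z, inten in points:
        col = int(x // cell)
        row = int(y // cell)
        if 0 <= row < grid and 0 <= col < grid:
            px = img[row][col]
            px[0] = max(px[0], math.hypot(x, y))  # distance channel
            px[1] = max(px[1], z)                 # height channel
            px[2] = max(px[2], inten)             # intensity channel
    return img
```

The resulting `grid × grid × 3` array is what a network like SSADNet would consume in place of the raw, unordered point cloud.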


2015 ◽  
Vol 159 ◽  
pp. 374-380 ◽  
Author(s):  
Srikant Srinivasan ◽  
Kaustubh Kaluskar ◽  
Scott Broderick ◽  
Krishna Rajan

Author(s):  
X. Jian ◽  
X. Xiao ◽  
H. Chengfang ◽  
Z. Zhizhong ◽  
W. Zhaohui ◽  
...  

Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy, and highly detailed surface information about terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms so as to separate terrain points from disorganized points, followed by a procedure that interpolates the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and substantial computing resources, a problem addressed by a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (MapReduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM by a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms’ efficiency, coding complexity, and performance-cost ratio were then discussed for comparison. The results demonstrate that the algorithm’s speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
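The Hadoop-based DEM workflow maps naturally onto MapReduce: a map phase emits (grid cell, elevation) pairs and a reduce phase aggregates the elevations in each cell. A minimal in-process sketch in Python (the real implementation runs as Hadoop jobs over HDFS, and the per-cell averaging here is an assumed stand-in for the study's interpolation):

```python
from collections import defaultdict

def map_phase(points, cell=1.0):
    """Map: emit (grid_cell, elevation) pairs, as a Hadoop mapper would."""
    for x, y, z in points:
        yield (int(x // cell), int(y // cell)), z

def reduce_phase(pairs):
    """Reduce: aggregate the elevations falling into each DEM cell
    (simple average; a production DEM would interpolate)."""
    acc = defaultdict(list)
    for key, z in pairs:
        acc[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in acc.items()}

# Toy run: three ground points produce a two-cell DEM
dem = reduce_phase(map_phase([(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.4, 20.0)]))
```

Because the map output is partitioned by grid cell, reducers can process disjoint regions of the DEM in parallel, which is where the performance-cost advantage on large point sets comes from.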


Forests ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 537 ◽  
Author(s):  
Jiarong Tian ◽  
Tingting Dai ◽  
Haidong Li ◽  
Chengrui Liao ◽  
Wenxiu Teng ◽  
...  

Research Highlights: This study carried out a feasibility analysis of tree height extraction in a planted coniferous forest with high canopy density by combining terrestrial laser scanner (TLS) and unmanned aerial vehicle (UAV) image-based point cloud data at small and midsize tree farms. Background and Objectives: Tree height is an important factor in forest resource surveys. This information plays an important role in forest structure evaluation and forest stock estimation. The objectives of this study were to solve the problem of underestimating tree height and to guarantee the precision of tree height extraction in medium- and high-density planted coniferous forests. Materials and Methods: This study developed a novel individual tree localization (ITL)-based tree height extraction method to obtain preliminary results in planted coniferous forest plots with 107 trees (Metasequoia). The final, accurate results were then achieved based on the canopy height model (CHM) and CHM seed points (CSP). Results: The registration accuracy of the TLS and UAV image-based point cloud data reached 6 cm. The authors optimized the precision of ITL-based tree height extraction by improving the CHM resolution from 0.2 m to 0.1 m. Due to the overlapping of forest canopies, the CSP method failed to delineate all individual tree crowns in medium- to high-density forest stands, with a matching rate of about 75%. However, the accuracy of CSP-based tree height extraction showed obvious advantages compared with the ITL-based method. Conclusion: The proposed method provides a solid foundation for dynamically monitoring forest resources in a high-accuracy and low-cost way, especially at planted tree farms.
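The CHM step can be sketched as subtracting a terrain model from a surface model and taking local maxima as candidate tree tops, a simplified stand-in for the paper's CSP seed selection. The 3×3 window and 2 m minimum height are assumptions for illustration.

```python
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, cell by cell (both are row-major 2D grids)."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def tree_tops(chm, min_height=2.0):
    """Candidate tree tops: cells that exceed `min_height` and are strict
    local maxima within a 3x3 window (assumed seed rule)."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            v = chm[r][c]
            if v < min_height:
                continue
            neighbours = [chm[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                tops.append((r, c, v))
    return tops
```

Finer CHM resolution (0.2 m to 0.1 m, as in the study) gives the local-maximum search more cells per crown, which is why it sharpens tree-top localization; in dense stands, overlapping crowns merge into one maximum, matching the reported missed detections.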


Author(s):  
Mas’ud Abdur Rosyid ◽  
Yusuf Suhaimi Daulay ◽  
Denden Mohamad Arifin ◽  
Ardian Infantono ◽  
Arief Suryadi Satyawan ◽  
...  

The application of two-dimensional LiDAR (Light Detection and Ranging) technology is sometimes hampered by data anomalies, or noise, which affect the accuracy of detecting real objects. If not handled properly, this can disrupt operation, all the more so when applied to autonomous electric vehicles. Efforts are therefore needed to reduce noise, implemented in the software that processes the LiDAR data. This study developed techniques for reducing the noise that appears in two-dimensional LiDAR point cloud data. The approach applied was the systematic development of LiDAR data processing algorithms. The algorithm design comprises visualization of object detection, storage of LiDAR point cloud data as information about detected objects, and a noise-reduction method for the two-dimensional LiDAR point cloud data. The algorithms were realized as software on Raspberry Pi 4 hardware, using the Python programming language. Six algorithms were used to reduce or eliminate noise: Algorithm 1, Algorithm 2, Algorithm 3, Algorithm 4, Algorithm 5, and Algorithm 6. The experimental results show that all six algorithms can display data visualizations based on a two-dimensional mapping system corrected for noise. The six algorithms removed up to 100% of the noise, although roughly 80% of the data considered valid could not be presented. Even with only 20% of the valid data, the structure of objects could still be recognized.
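A noise-reduction rule of the kind described (the paper's six algorithms are not reproduced here) might drop isolated range spikes whose readings differ sharply from both neighbours in the scan. A hypothetical sketch, with the jump threshold as an assumed parameter:

```python
def filter_isolated_points(scan, max_jump=0.5):
    """Drop range readings that differ from BOTH neighbours by more than
    `max_jump` metres -- a simple despeckle rule (assumed, illustrative).
    scan: list of range readings ordered by scan angle."""
    kept = []
    for i, r in enumerate(scan):
        prev_r = scan[i - 1] if i > 0 else None
        next_r = scan[i + 1] if i < len(scan) - 1 else None
        jumps = [abs(r - n) > max_jump for n in (prev_r, next_r) if n is not None]
        if jumps and all(jumps):
            continue  # isolated spike: treat as noise and discard
        kept.append(r)
    return kept
```

An aggressive variant of this idea also discards valid returns near genuine range discontinuities, which mirrors the paper's trade-off: all noise removed, but a large share of valid data lost while object structure remains recognizable.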


Author(s):  
Shuzlina Abdul-Rahman ◽  
Mohamad Soffi Abd Razak ◽  
Aliya Hasanah Binti Mohd Mushin ◽  
Raseeda Hamzah ◽  
Nordin Abu Bakar ◽  
...  

Abstract—This paper presents a simulation study of Simultaneous Localization and Mapping (SLAM) using 3D point cloud data from Light Detection and Ranging (LiDAR) technology. Simulation methods are useful for simplifying the process of learning algorithms, particularly when collecting and annotating large volumes of real data is impractical and expensive. In this study, a map of a given environment was constructed on the Robot Operating System platform with the Gazebo simulator. The paper begins by presenting the currently most popular algorithms widely used in SLAM, namely the Extended Kalman Filter, Graph SLAM, and FastSLAM. The study performed simulations using standard SLAM with TurtleBot and Husky robots. The Husky robot was further compared with the AMCL algorithm. The results showed that Hector SLAM could reach the goal faster than AMCL on a pre-defined map. Further studies in this field with other SLAM algorithms would certainly benefit many parties, given the demands of robotic applications.
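The Extended Kalman Filter mentioned above fuses a motion prediction with a range measurement. A toy one-dimensional predict/update cycle, the linear core of an EKF, is shown below; the process and measurement noise values are illustrative assumptions.

```python
def kf_step(x, p, u, z, q=0.1, r=0.2):
    """One predict/update cycle of a 1-D Kalman filter (the linear core of
    the EKF used in EKF SLAM). x, p: state estimate and its variance;
    u: odometry motion; z: measurement; q, r: process/measurement noise."""
    # Predict: apply the motion and inflate uncertainty
    x_pred = x + u
    p_pred = p + q
    # Update: fuse the measurement, weighted by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

In full EKF SLAM the scalar state becomes a joint vector of robot pose and landmark positions, and the motion and measurement models are linearized with Jacobians at each step; the predict/update rhythm is the same.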


Author(s):  
C. Altuntas

Urban areas should be imaged in three dimensions (3D) for planning, inspection, and management. In addition, fast urbanisation requires detecting urban changes that arise from new buildings, additional floors on existing buildings, and excavations. A 3D surface model of an urban area enables rich information to be extracted from it. On the other hand, high-density spatial data must be measured to create a 3D digital terrain surface model. The dense image matching method makes high-density 3D measurements from images in a short time. The aim of this study is to detect urban changes by comparing time-series point cloud data derived from historical stereoscopic aerial images. The changes were detected from the differences between the resulting digital elevation models. The study area was selected from the city of Konya in Turkey, which has a large number of new buildings and changes in topography. Dense point cloud data were created from historical aerial images from 1951, 1975, and 2010. Each three-dimensional point cloud was registered to a global georeferenced coordinate system using ground control points derived from imaged objects such as building corners, fences, and walls. Urban changes were then detected by comparing the dense point cloud data using the iterative closest point (ICP) algorithm. Consequently, the urban changes were detected from point-to-surface distances between the image-based point clouds.
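Once the epochs are co-registered, change detection reduces to measuring, for each point of one cloud, its distance to the other cloud; large distances flag changed areas. The sketch below uses point-to-point nearest-neighbour distances as a simplified stand-in for the point-to-surface distances in the abstract, and assumes ICP registration has already been applied.

```python
import math

def change_distances(cloud_a, cloud_b):
    """For each point in cloud_a, the Euclidean distance to its nearest
    neighbour in cloud_b. Brute force for clarity; real pipelines use a
    spatial index (e.g. a k-d tree) over millions of points."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

def changed_points(cloud_a, cloud_b, threshold=1.0):
    """Points of cloud_a farther than `threshold` from cloud_b
    (assumed change threshold, e.g. in metres)."""
    return [p for p, d in zip(cloud_a, change_distances(cloud_a, cloud_b))
            if d > threshold]
```

Running this in both directions (1951 against 1975, then 1975 against 1951) separates newly built volume from demolished or excavated volume.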

