Double Adaptive Intensity-Threshold Method for Uneven Lidar Data to Extract Road Markings

2021, Vol. 87 (9), pp. 639-648
Author(s): Chengming Ye, Hongfu Li, Ruilong Wei, Lixuan Wang, Tianbo Sui, ...

Due to the large volume and high redundancy of point clouds, road-marking extraction algorithms face many challenges, especially with uneven lidar point clouds. To extract road markings efficiently, this study presents a novel method for handling the uneven density distribution of point clouds and exploiting the high reflection intensity of road markings. The method first segments the point-cloud data into blocks perpendicular to the vehicle trajectory. It then applies the double adaptive intensity-threshold method to extract road markings from the road surface. Finally, it applies an adaptive spatial density filter, based on the density distribution of the point-cloud data, to remove false road-marking points. The average completeness, correctness, and F-measure of road-marking extraction are 0.827, 0.887, and 0.854, respectively, indicating that the proposed method is efficient and robust.
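
As a rough illustration of the idea (not the authors' exact algorithm), the following NumPy-only sketch applies two cascaded adaptive intensity thresholds per trajectory block and then a simple spatial density filter; the block length, the k factor, and the neighborhood parameters are illustrative assumptions.

```python
# Sketch of block-wise double adaptive intensity thresholding plus a density
# filter. points is an (N, 4) array: x, y, z, intensity; block_axis is a
# hypothetical per-point coordinate measured along the vehicle trajectory.
import numpy as np

def extract_markings(points, block_axis, block_len=2.0, k=2.0,
                     radius=0.15, min_neighbors=5):
    """Return a boolean mask of candidate road-marking points."""
    marks = np.zeros(len(points), dtype=bool)
    # First adaptive threshold: computed per block along the trajectory.
    bins = np.floor(block_axis / block_len).astype(int)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        inten = points[idx, 3]
        t1 = inten.mean() + k * inten.std()      # block-level threshold
        cand = idx[inten > t1]
        if cand.size == 0:
            continue
        # Second adaptive threshold: refined on the surviving candidates.
        inten2 = points[cand, 3]
        marks[cand[inten2 > inten2.mean()]] = True
    # Spatial density filter: drop isolated high-intensity returns.
    keep = marks.copy()
    xy = points[:, :2]
    cand_idx = np.where(marks)[0]
    for i in cand_idx:
        d = np.linalg.norm(xy[cand_idx] - xy[i], axis=1)
        if (d < radius).sum() - 1 < min_neighbors:
            keep[i] = False
    return keep
```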

2021, Vol. 13 (13), pp. 2612
Author(s): Lianbi Yao, Changcai Qin, Qichao Chen, Hangbin Wu

Automatic driving technology is becoming one of the main areas of development for future intelligent transportation systems. The high-precision map, an important supplement to on-board sensors when they are occluded or their observation distance is limited, provides a priori information for high-precision positioning and path planning in automatic driving. The position and semantic information of road markings, such as the absolute coordinates of solid and dashed lines, are basic components of the high-precision map. In this paper, we study the automatic extraction and vectorization of road markings. Firstly, scan lines are extracted from the vehicle-borne laser point cloud data, and the pavement is extracted from the scan lines according to the geometric mutation at the road boundary. On this basis, the pavement point clouds are transformed into raster images at a given resolution using inverse distance weighted interpolation. An adaptive threshold segmentation algorithm converts the raster images into binary images, and Euclidean clustering is then used to extract road-marking point clouds from the binary images. Solid lines are detected by feature attribute filtering; all of the solid lines and guidelines in the sample data are correctly identified. The deep learning network framework PointNet++ is used for semantic recognition of the remaining road markings, including dashed lines, guidelines and arrows. Finally, the identified solid lines and dashed lines are vectorized based on a line segmentation self-growth algorithm, and the identified guidelines are vectorized using an alpha shape algorithm. Point cloud data from four experimental areas are used for road marking extraction and identification. The F-scores for the identification of dashed lines, guidelines, straight arrows and right-turn arrows are 0.97, 0.66, 0.84 and 1, respectively.
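
The rasterize-threshold-cluster stage could look roughly like the sketch below, which substitutes per-cell intensity averaging for the paper's inverse distance weighted interpolation, OpenCV's adaptiveThreshold for its adaptive segmentation, and scikit-learn's DBSCAN for Euclidean clustering; the resolution and all thresholds are assumed values.

```python
# Sketch: pavement point cloud -> raster image -> binary image -> clusters.
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

def pavement_to_binary(points, res=0.05):
    """Rasterize pavement points (x, y, intensity) into a binary marking image."""
    xy, inten = points[:, :2], points[:, 2]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / res).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.float32)
    cnt = np.zeros((h, w), dtype=np.float32)
    # Per-cell averaging stands in for inverse-distance-weighted interpolation.
    np.add.at(img, (ij[:, 1], ij[:, 0]), inten)
    np.add.at(cnt, (ij[:, 1], ij[:, 0]), 1.0)
    img = np.where(cnt > 0, img / np.maximum(cnt, 1), 0)
    img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Keep pixels brighter than their local mean: markings reflect strongly.
    return cv2.adaptiveThreshold(img8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -10)

def cluster_markings(binary, res=0.05, eps=0.3, min_samples=20):
    """Group white pixels into marking candidates via Euclidean (DBSCAN) clustering."""
    rows, cols = np.nonzero(binary)
    pts = np.column_stack([cols, rows]) * res    # back to metric units
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return pts, labels
```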



Sensors, 2021, Vol. 21 (3), pp. 884
Author(s): Chia-Ming Tsai, Yi-Horng Lai, Yung-Da Sun, Yu-Jen Chung, Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing. This study expands the use of two- and three-dimensional detection technologies to underwater applications to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out of the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
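
For the 2D branch, a minimal sketch of projecting a cleaned point cloud into a bird's-eye-view image (which a detector such as YOLOv3 or Faster R-CNN could then consume) might look as follows; the cell size, ranges and height encoding are assumptions, not the paper's settings.

```python
# Sketch: encode the maximum height per grid cell as an 8-bit BEV image.
import numpy as np

def to_birds_eye_view(points, res=0.05, x_range=(-10, 10), y_range=(-10, 10)):
    """points: (N, 3) array of x, y, z after noise and seabed removal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[mask], y[mask], z[mask]
    cols = ((x - x_range[0]) / res).astype(int)
    rows = ((y - y_range[0]) / res).astype(int)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.full((h, w), z.min(), dtype=np.float32)
    np.maximum.at(bev, (rows, cols), z)          # keep the highest return per cell
    z_min, z_max = float(z.min()), float(z.max())
    return ((bev - z_min) / max(z_max - z_min, 1e-6) * 255).astype(np.uint8)
```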


Author(s): Y. Hori, T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". Visualizations of the point cloud data, which archaeologists and architects can use in the final report, are usually produced as JPG or TIFF files. Beyond visualization, re-examining older data and newly surveying the construction of Roman buildings with remote-sensing technology for precise and detailed measurements yields new information that may lead to revising drawings of ancient buildings which had been adduced as evidence without any consideration of their degree of accuracy, and ultimately opens new lines of research into these buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. Consequently, we “skipped” much of the post-processing and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that provides graphic results.


2021, Vol. 10 (9), pp. 617
Author(s): Su Yang, Miaole Hou, Ahmed Shaker, Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing complex geometric structures and the geometric details on the surface of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized through the time dimension and across spatial scales representing different levels of geometric detail. The proposed model allows a smart point cloud or any subset of it to be linked with semantic information or related documents, so the framework not only expresses the complex geometric structure of the cultural relics and the geometric details on the surface, but also carries rich semantic information and can even be associated with documents. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, shows that the proposed framework models and processes the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
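
One hypothetical way to represent such a smart point cloud node in code, with point subsets indexed by acquisition time and spatial scale and linked to semantics and documents, is sketched below; the field names are illustrative, not the authors' schema.

```python
# Sketch of a hierarchical "smart point cloud" node: each subset carries its
# epoch (time dimension), spatial scale, semantic labels and linked documents.
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class SmartPointCloudNode:
    name: str                                   # e.g. "left hand, finger 3"
    epoch: str                                  # acquisition time (time dimension)
    scale: int                                  # 0 = coarsest, larger = finer detail
    points: np.ndarray                          # (N, 3) coordinates at this scale
    semantics: Dict[str, str] = field(default_factory=dict)   # label -> value
    documents: List[str] = field(default_factory=list)        # linked report paths/URIs
    children: List["SmartPointCloudNode"] = field(default_factory=list)

    def query(self, key: str, value: str):
        """Return all descendant subsets whose semantics match a key/value pair."""
        hits = [self] if self.semantics.get(key) == value else []
        for child in self.children:
            hits.extend(child.query(key, value))
        return hits
```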


2021, Vol. 65 (1), pp. 10501-1-10501-9
Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the improved registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
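
A simplified sketch of the intensity-image generation and feature-matching steps is given below, using OpenCV's ORB detector and brute-force matching as stand-ins for whatever operators the authors actually employ; solving the collinearity equations for the exterior orientation is omitted.

```python
# Sketch: point cloud -> intensity image, then feature matching against the
# optical image. points is an (N, 4) array: x, y, z, intensity.
import numpy as np
import cv2

def intensity_image(points, res=0.1):
    """Project points onto a gridded 8-bit intensity image (top view)."""
    xy, inten = points[:, :2], points[:, 3]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / res).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.float32)
    np.maximum.at(img, (ij[:, 1], ij[:, 0]), inten)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_features(lidar_img, optical_img, max_matches=100):
    """Match ORB keypoints between the intensity image and the optical image."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(lidar_img, None)
    k2, d2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:max_matches]
    pts_lidar = np.float32([k1[m.queryIdx].pt for m in matches])
    pts_image = np.float32([k2[m.trainIdx].pt for m in matches])
    return pts_lidar, pts_image
```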


2020, Vol. 9 (2), pp. 72
Author(s): Sami El-Mahgary, Juho-Pekka Virtanen, Hannu Hyyppä

The importance of being able to separate the semantics from the actual (X,Y,Z) coordinates in a point cloud has been actively brought up in recent research. However, there is still no widely used or accepted data layout paradigm for efficiently storing and managing such semantic point cloud data. In this paper, we present a simple data layout that makes use of the semantics and allows for quick queries. The underlying idea is especially suited to a programming approach (e.g., queries programmed via Python), but we also present an even simpler implementation of the underlying technique on a well-known relational database management system (RDBMS), namely PostgreSQL. The obtained query results suggest that the presented approach can successfully handle point and range queries on large point clouds.
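
A toy version of such a layout on PostgreSQL, with the semantic class stored alongside the coordinates so point and range queries can filter on either, might look like the following psycopg2 sketch; the table and column names are invented for illustration and are not the paper's schema.

```python
# Sketch: coordinates and semantic class side by side, indexed for both
# spatial range filters and semantic filters.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS semantic_points (
    id    BIGSERIAL PRIMARY KEY,
    x     DOUBLE PRECISION NOT NULL,
    y     DOUBLE PRECISION NOT NULL,
    z     DOUBLE PRECISION NOT NULL,
    class TEXT NOT NULL               -- semantic label, e.g. 'wall', 'vegetation'
);
CREATE INDEX IF NOT EXISTS idx_points_xyz ON semantic_points (x, y, z);
CREATE INDEX IF NOT EXISTS idx_points_class ON semantic_points (class);
"""

RANGE_QUERY = """
SELECT x, y, z FROM semantic_points
WHERE class = %s AND x BETWEEN %s AND %s AND y BETWEEN %s AND %s;
"""

def setup(conn):
    """Create the table and indexes once."""
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()

def range_query(conn, label, xmin, xmax, ymin, ymax):
    """Return all points of one semantic class inside an axis-aligned XY window."""
    with conn.cursor() as cur:
        cur.execute(RANGE_QUERY, (label, xmin, xmax, ymin, ymax))
        return cur.fetchall()
```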


2020, Vol. 12 (11), pp. 1800
Author(s): Maarten Bassier, Maarten Vergauwen

The processing of remote sensing measurements to Building Information Modeling (BIM) is a popular subject in current literature. An important step in the process is the enrichment of the geometry with the topology of the wall observations to create a logical model. However, this remains an unsolved task, as methods struggle to deal with the noise, incompleteness and complexity of point cloud data of building scenes. Current methods impose severe abstractions, such as Manhattan-world assumptions and single-story procedures, to overcome these obstacles, so a general data processing approach is still missing. In this paper, we propose a method that solves these shortcomings and creates a logical BIM model in an unsupervised manner. More specifically, we propose a connection evaluation framework that takes as input a set of preprocessed point clouds of a building’s wall observations and computes the best-fit topology between them. We transcend the current state of the art by processing point clouds of straight, curved and polyline-based walls. We also consider multiple connection types in a novel reasoning framework that decides which operations are best suited to reconstruct the topology of the walls. The geometry and topology produced by our method are directly usable by BIM processes, as they are structured in conformance with the IFC data structure. The experimental results on the Stanford 2D-3D-Semantics dataset (2D-3D-S) show that the proposed method is a promising framework for reconstructing complex multi-story wall elements in an unsupervised manner.
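
As a greatly simplified illustration of pairwise connection evaluation, the Shapely-based sketch below classifies how two wall axes relate; the tolerance and connection labels are assumptions and do not reproduce the paper's reasoning framework.

```python
# Sketch: classify the relation between two wall axes (L/T corner, crossing,
# near-miss that should be extended and snapped, or disjoint).
from shapely.geometry import LineString

def evaluate_connection(axis_a: LineString, axis_b: LineString, tol=0.1):
    """Return a coarse connection label for two wall axes."""
    if axis_a.intersects(axis_b):
        hit = axis_a.intersection(axis_b)
        # A hit near an endpoint suggests an L/T connection; otherwise a crossing.
        if any(hit.distance(ep) < tol for ep in (axis_a.boundary, axis_b.boundary)):
            return "L-or-T-connection"
        return "crossing"
    if axis_a.distance(axis_b) < tol:
        return "near-miss: extend and snap"
    return "disjoint"

# Example: two perpendicular wall axes meeting at (5, 0).
wall_a = LineString([(0, 0), (5, 0)])
wall_b = LineString([(5, 0), (5, 4)])
print(evaluate_connection(wall_a, wall_b))   # -> "L-or-T-connection"
```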


Author(s): M. Bassier, R. Klein, B. Van Genechten, M. Vergauwen

The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed from the intersection of the partial walls, conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls that reflects the as-built conditions of the building. The experiments show that the method is a reliable framework for wall reconstruction from unstructured point cloud data and that the use of room information reduces the rate of false positives in the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.
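
The room-conditioning idea can be sketched as follows: a candidate intersection of two partial walls is only accepted when both walls bound a common room. The Shapely-based test and its tolerance are illustrative assumptions, not the paper's implementation.

```python
# Sketch: accept a wall-wall connection only if some room is bounded by both walls.
from shapely.geometry import LineString, Polygon

def bounds_room(wall_axis: LineString, room: Polygon, tol=0.2):
    """A wall bounds a room if its axis runs along the room boundary."""
    return wall_axis.distance(room.exterior) < tol

def accept_connection(wall_a: LineString, wall_b: LineString, rooms, tol=0.2):
    """Keep the connection only when the walls intersect and share a room."""
    if not wall_a.intersects(wall_b):
        return False
    return any(bounds_room(wall_a, r, tol) and bounds_room(wall_b, r, tol)
               for r in rooms)

# Example: an L-shaped corner shared by one square room.
room = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
wall_a = LineString([(0, 0), (4, 0)])
wall_b = LineString([(4, 0), (4, 4)])
print(accept_connection(wall_a, wall_b, [room]))   # -> True
```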


Author(s): F. Poux, R. Neuville, P. Hallot, R. Billen

This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the adaptation of the model to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, combines semantic and spatial concepts for basic hybrid queries on different point clouds.
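
A purely illustrative hybrid query in the spirit of the prototype, combining a semantic predicate (class labels assumed to have been injected from a domain ontology) with a spatial bounding-box predicate, is sketched below; it is not the prototype's actual code.

```python
# Sketch: combine a semantic filter with a spatial bounding-box filter.
import numpy as np

def hybrid_query(points, labels, wanted_classes, bbox):
    """points: (N, 3) array; labels: length-N array of class names;
    bbox: (xmin, ymin, zmin, xmax, ymax, zmax)."""
    semantic = np.isin(labels, list(wanted_classes))
    lo, hi = np.array(bbox[:3]), np.array(bbox[3:])
    spatial = np.all((points >= lo) & (points <= hi), axis=1)
    return points[semantic & spatial]

# Example: all points labelled 'column' or 'capital' inside a 2 m cube.
pts = np.random.rand(1000, 3) * 10
lab = np.random.choice(["column", "capital", "floor"], size=1000)
print(hybrid_query(pts, lab, {"column", "capital"}, (0, 0, 0, 2, 2, 2)).shape)
```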

