AUTOMATED VISIBILITY FIELD EVALUATION OF TRAFFIC SIGN BASED ON 3D LIDAR POINT CLOUDS

Author(s):  
S. Zhang ◽  
C. Wang ◽  
M. Cheng ◽  
J. Li

Maintaining high visibility of traffic signs is very important for traffic safety. Manual inspection and removal of occlusions in front of traffic signs is one of the daily tasks of traffic management departments. This paper presents a method that automatically detects occlusions and continuously, quantitatively estimates the visibility of a traffic sign over the entire road surface based on Mobile Laser Scanning (MLS) systems. The concept of a traffic sign's visibility field is proposed in this paper. An important innovation of this work is the use of retinal imaging area to evaluate the visibility of a traffic sign, which brings the method in line with human vision. To validate the soundness and accuracy of the method, we use 2D-3D registration to verify that the occlusion ratio computed in the point clouds is consistent with that observed in photographs. Experiments on large-scale traffic environments show that our method is feasible and efficient.
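The abstract does not give the paper's exact retinal-imaging-area formulation, but the underlying idea is simple: the apparent (retinal) area of a planar sign shrinks with viewing distance and tilt, and occlusion further reduces it. A minimal sketch of that idea (function names and the occlusion weighting are our assumptions, not the paper's model):

```python
import math

def apparent_area(sign_area_m2, distance_m, view_angle_rad):
    """Projected (retinal) area of a planar sign: A * cos(theta) / d^2.
    Only relative comparisons between viewpoints matter here."""
    return sign_area_m2 * math.cos(view_angle_rad) / distance_m ** 2

def visibility(sign_area_m2, distance_m, view_angle_rad, occlusion_ratio):
    """Scale the apparent area by the unoccluded fraction of the sign."""
    return apparent_area(sign_area_m2, distance_m, view_angle_rad) * (1.0 - occlusion_ratio)

# A 0.6 m^2 sign seen head-on from 20 m, 30% occluded by foliage:
v = visibility(0.6, 20.0, 0.0, 0.3)
```

Evaluating such a score over a grid of viewpoints on the road surface yields a field of visibility values, which is the spirit of the visibility field proposed in the paper.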

2019 ◽  
Vol 11 (12) ◽  
pp. 1453 ◽  
Author(s):  
Shanxin Zhang ◽  
Cheng Wang ◽  
Lili Lin ◽  
Chenglu Wen ◽  
Chenhui Yang ◽  
...  

Maintaining the high visual recognizability of traffic signs is a key matter for road network management and traffic safety. Mobile Laser Scanning (MLS) systems provide an efficient way of 3D measurement over large-scale traffic environments. This paper presents a quantitative visual recognizability evaluation method for traffic signs in large-scale traffic environments based on traffic recognition theory and MLS 3D point clouds. We first propose the Visibility Evaluation Model (VEM) to quantitatively describe the visibility of a traffic sign from any given viewpoint; we then propose the concept of the visual recognizability field and the Traffic Sign Visual Recognizability Evaluation Model (TSVREM) to measure the visual recognizability of a traffic sign. Finally, we present an automatic TSVREM calculation algorithm for MLS 3D point clouds. Experimental results on real MLS 3D point clouds show that the proposed method is feasible and efficient.


2021 ◽  
Vol 11 (8) ◽  
pp. 3666
Author(s):  
Zoltán Fazekas ◽  
László Gerencsér ◽  
Péter Gáspár

For over a decade, urban road environment detection has been a target of intensive research. The topic is relevant for the design and implementation of advanced driver assistance systems, which are typically deployed as embedded systems. The environments can be categorized into road environment-types. Abrupt transitions between these pose a traffic safety risk. Road environment-type transitions along a route also manifest themselves in changes in the distribution of traffic signs and other road objects. Can the placement and the detection of traffic signs be modelled jointly with an easy-to-handle stochastic point process, e.g., an inhomogeneous marked Poisson process? Does this model lend itself to real-time application, e.g., via analysis of a log generated by a traffic sign detection and recognition system? How can the chosen change detector help in mitigating the traffic safety risk? A change detection method frequently used for Poisson processes is the cumulative sum (CUSUM) method. Herein, this method is tailored to the specific stochastic model and tested on realistic logs. The use of several change detectors is also considered. Results indicate that traffic sign-based road environment-type change detection is feasible, though it is not suitable for an immediate intervention.
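The CUSUM method for Poisson observations accumulates the log-likelihood ratio between a post-change and a pre-change rate and raises an alarm when the sum crosses a threshold. A minimal one-sided sketch (the rates, threshold, and per-segment counts below are illustrative, not the paper's parameters):

```python
import math

def poisson_cusum(counts, lam0, lam1, threshold):
    """One-sided CUSUM for a rate change lam0 -> lam1 in Poisson counts
    (e.g. traffic signs detected per unit road segment). Returns the index
    of the first alarm, or None. lam1 > lam0 tests for a rate increase."""
    llr_slope = math.log(lam1 / lam0)
    s = 0.0
    for i, k in enumerate(counts):
        # Poisson log-likelihood ratio for one count: k*log(l1/l0) - (l1 - l0)
        s = max(0.0, s + k * llr_slope - (lam1 - lam0))
        if s > threshold:
            return i
    return None

# Sign density jumps from ~1 to ~4 per segment at index 5:
counts = [1, 0, 1, 2, 1, 4, 5, 3, 6, 4]
alarm = poisson_cusum(counts, lam0=1.0, lam1=4.0, threshold=5.0)
```

The detection delay (here the alarm fires one segment after the change) versus false-alarm rate is tuned through the threshold, which matches the abstract's conclusion that the detector is useful for monitoring rather than immediate intervention.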


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term “3D building models” can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights for the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
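The core check described above boils down to comparing point-to-plane residuals against a tolerance. A simplified sketch of such a test (the tolerance value and function names are our assumptions; the paper's statistical parameters are richer than an RMSE alone):

```python
def plane_residual_stats(points, plane):
    """RMSE and worst absolute distance of ALS points to a model roof plane.
    plane = (a, b, c, d) with a*x + b*y + c*z + d = 0 and (a, b, c) unit-length."""
    a, b, c, d = plane
    dists = [abs(a * x + b * y + c * z + d) for x, y, z in points]
    rmse = (sum(e * e for e in dists) / len(dists)) ** 0.5
    return rmse, max(dists)

def plane_accepted(points, plane, tolerance_m=0.10):
    """Flag a roof plane whose fit to the reference cloud exceeds tolerance."""
    rmse, _ = plane_residual_stats(points, plane)
    return rmse <= tolerance_m

# Horizontal roof plane z = 10 (i.e. 0x + 0y + 1z - 10 = 0) vs. four ALS points:
pts = [(0, 0, 10.02), (1, 0, 9.97), (0, 1, 10.05), (1, 1, 9.99)]
ok = plane_accepted(pts, (0.0, 0.0, 1.0, -10.0))
```

Run per roof plane over the segmented cloud, such a test yields exactly the kind of per-building, per-plane pass/fail report the abstract describes.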


2018 ◽  
Vol 8 (2) ◽  
pp. 20170048 ◽  
Author(s):  
M. I. Disney ◽  
M. Boni Vicari ◽  
A. Burt ◽  
K. Calders ◽  
S. L. Lewis ◽  
...  

Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods.


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6570
Author(s):  
Chang Sun ◽  
Yibo Ai ◽  
Sheng Wang ◽  
Weidong Zhang

Detecting and classifying real-life small traffic signs in large input images is difficult because they occupy far fewer pixels than larger targets. To address this challenge, we proposed a deep-learning-based model (Dense-RefineDet) that applies a single-shot object-detection framework (RefineDet) to maintain a suitable accuracy–speed trade-off. We constructed a dense connection-related transfer-connection block to combine high-level feature layers with low-level feature layers, optimizing the use of the higher layers to obtain additional contextual information. Additionally, we presented an anchor-design method to provide suitable anchors for detecting small traffic signs. Experiments using the Tsinghua-Tencent 100K dataset demonstrated that Dense-RefineDet achieved competitive accuracy at high-speed detection (0.13 s/frame) of small-, medium-, and large-scale traffic signs (recall: 84.3%, 95.2%, and 92.6%; precision: 83.9%, 95.6%, and 94.0%). Moreover, experiments using the Caltech pedestrian dataset indicated that the miss rate of Dense-RefineDet was 54.03% (pedestrian height > 20 pixels), which outperformed other state-of-the-art methods.


Minerals ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 174 ◽  
Author(s):  
Peter Blistan ◽  
Stanislav Jacko ◽  
Ľudovít Kovanič ◽  
Julián Kondela ◽  
Katarína Pukanská ◽  
...  

A frequently recurring problem in the extraction of mineral resources (especially heterogeneous ones) is the rapid, operative determination of the extracted quantity of raw material in a surface quarry. This paper deals with testing and analyzing the possibility of using unconventional methods, such as digital close-range photogrammetry and terrestrial laser scanning, to determine the bulk density of raw material under in situ conditions. A model example of a heterogeneous deposit is the perlite deposit Lehôtka pod Brehmi (Slovakia). Classical laboratory methods for determining bulk density were used to verify the results of the in situ method. Two large-scale samples (probes) with approximate volumes of 7 m3 and 9 m3 were realized in situ, and six point samples (LITH) were taken for laboratory determination. By terrestrial laser scanning (TLS) measurement from two scanning stations, point clouds with approximately 163,000/143,000 points were obtained for the two probes. For Structure-from-Motion (SfM) photogrammetry, 49/55 images were acquired for the two probes, with final point clouds containing approximately 155,000/141,000 points. Subsequently, the bulk densities of the samples were determined by calculation from the in situ measurements by TLS and SfM photogrammetry. Comparison of the field in situ measurements (1841 kg∙m−3) and the laboratory measurements (1756 kg∙m−3) showed only a 4.5% difference between the two methods for determining the density of heterogeneous raw materials, confirming the accuracy of the in situ methods. For the determination of the loosening coefficient, the material from both large-scale samples was transferred to a horizontal surface and its volume was determined by TLS. A loosening coefficient of 1.38 was calculated from the resulting values.
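The two derived quantities in this workflow are simple ratios: bulk density is the weighed sample mass over the TLS/SfM-derived probe volume, and the loosening coefficient is the loose-pile volume over the original in situ volume. A sketch with hypothetical mass and volume figures chosen only to be consistent with the reported magnitudes (they are not the paper's measurements):

```python
def bulk_density(mass_kg, volume_m3):
    """In situ bulk density from sample mass and the TLS/SfM probe volume."""
    return mass_kg / volume_m3

def loosening_coefficient(loose_volume_m3, in_situ_volume_m3):
    """Ratio of the excavated, loosely piled volume to the in situ volume."""
    return loose_volume_m3 / in_situ_volume_m3

# Hypothetical figures matching the order of magnitude reported above:
rho = bulk_density(12887.0, 7.0)       # ~1841 kg/m^3
k = loosening_coefficient(9.66, 7.0)   # ~1.38
```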


2020 ◽  
Vol 12 (1) ◽  
pp. 178 ◽  
Author(s):  
Jinming Zhang ◽  
Xiangyun Hu ◽  
Hengming Dai ◽  
ShenRun Qu

It is difficult to extract a digital elevation model (DEM) from an airborne laser scanning (ALS) point cloud in a forest area because of the irregular and uneven distribution of ground and vegetation points. Machine learning, especially deep learning, has shown powerful feature extraction capability in point cloud classification. However, most existing deep learning frameworks, such as PointNet, the dynamic graph convolutional neural network (DGCNN), and SparseConvNet, do not consider the particularities of ALS point clouds. For large-scene laser point clouds, current data preprocessing methods are mostly based on random sampling, which is not suitable for DEM extraction tasks. In this study, we propose a novel data sampling algorithm, named T-Sampling, for preparing data for patch-based training and classification. T-Sampling uses the set of the lowest points in a certain area as basic points, supplemented with other points, which guarantees the integrity of the terrain in the sampling area. In the learning part, we propose a new terrain-based convolution model, named Tin-EdgeConv, that fully considers the spatial relationship between ground and non-ground points when constructing a directed graph. We design a new network based on Tin-EdgeConv to extract local features, use the PointNet architecture to extract global context information, and combine this information effectively with a designed attention fusion module. These aspects are important in achieving high classification accuracy. We evaluate the proposed method on large-scale data from forest areas. Results show that our method is more accurate than existing algorithms.
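The abstract gives only the idea behind T-Sampling: always keep the lowest point of each local area so the terrain survives sampling, then top up with other points. A simplified sketch of that idea (the grid-cell partition and the lowest-first top-up rule are our assumptions, not the paper's exact algorithm):

```python
from collections import defaultdict

def t_sample(points, cell_size, target_count):
    """Terrain-preserving sampling: keep the lowest point of each grid cell
    as a basic point, then supplement with remaining points (lowest first)
    until roughly target_count points are selected. Basic points are always
    kept, even if target_count is smaller than the number of cells."""
    cells = defaultdict(list)
    for p in points:
        x, y, z = p
        cells[(int(x // cell_size), int(y // cell_size))].append(p)
    basic, rest = [], []
    for cell_pts in cells.values():
        cell_pts.sort(key=lambda q: q[2])  # sort by height
        basic.append(cell_pts[0])          # lowest point: likely ground
        rest.extend(cell_pts[1:])
    rest.sort(key=lambda q: q[2])
    return basic + rest[:max(0, target_count - len(basic))]

# Two 1 m cells, each with a low (ground-like) and a high (canopy-like) point:
pts = [(0.1, 0.1, 5.0), (0.2, 0.3, 0.2), (1.5, 0.4, 0.1), (1.6, 0.2, 8.0)]
sample = t_sample(pts, cell_size=1.0, target_count=3)
```

Unlike random sampling, this guarantees that every cell contributes its lowest point, so patches never lose the ground surface entirely.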


Author(s):  
F. Li ◽  
S. Oude Elberink ◽  
G. Vosselman

Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In an evaluation involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and components with different functionalities.


Author(s):  
J. Gehrung ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Mobile laser scanning has the potential not only to create detailed representations of urban environments, but also to determine changes down to a very detailed level. An environment representation for change detection in large-scale urban environments based on point clouds has drawbacks in terms of memory scalability. Volumes, however, are a promising building block for memory-efficient change detection methods. The challenge of working with 3D occupancy grids is that the usual raycasting-based methods applied for their generation lead to artifacts caused by the traversal of unfavorably discretized space. These artifacts can distort the state of voxels in close proximity to planar structures. In this work, we propose a raycasting approach that utilizes knowledge about planar surfaces to completely prevent this kind of artifact. To demonstrate the capabilities of our approach, we also propose a method for the iterative volumetric approximation of point clouds that speeds up the raycasting by 36 percent.
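The abstract does not spell out the traversal itself; as a rough illustration of the plane-aware idea, a ray can be stepped through the grid while being forbidden from clearing any voxel known to contain a planar surface. In this sketch, sub-voxel point sampling is a crude stand-in for exact grid traversal, and all names are our assumptions:

```python
def clear_ray(origin, endpoint, voxel_size, planar_voxels, steps_per_voxel=4):
    """Mark voxels along a sensor ray as free space, but never clear a voxel
    that lies on a known planar surface, and never clear the voxel containing
    the measured hit point. Returns the set of voxels marked free."""
    ox, oy, oz = origin
    ex, ey, ez = endpoint
    end_voxel = (int(ex // voxel_size), int(ey // voxel_size), int(ez // voxel_size))
    length = ((ex - ox) ** 2 + (ey - oy) ** 2 + (ez - oz) ** 2) ** 0.5
    n = max(1, int(steps_per_voxel * length / voxel_size))
    freed = set()
    for i in range(n):  # sample points from the origin toward the endpoint
        t = i / n
        v = (int((ox + t * (ex - ox)) // voxel_size),
             int((oy + t * (ey - oy)) // voxel_size),
             int((oz + t * (ez - oz)) // voxel_size))
        if v == end_voxel or v in planar_voxels:  # protect hit and plane voxels
            continue
        freed.add(v)
    return freed

# Ray along x through voxels (0..3, 0, 0); voxel (2, 0, 0) lies on a wall plane:
free = clear_ray((0.5, 0.5, 0.5), (3.5, 0.5, 0.5), 1.0, {(2, 0, 0)})
```

Without the planar-voxel guard, the discretized ray would mark the wall voxel free, which is exactly the kind of artifact near planar structures the paper sets out to prevent.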


Author(s):  
G. Stavropoulou ◽  
G. Tzovla ◽  
A. Georgopoulos

Over the past decade, large-scale photogrammetric products have been extensively used for the geometric documentation of cultural heritage monuments, as they combine metric information with the qualities of an image document. Additionally, the rising technology of terrestrial laser scanning has enabled the easier and faster production of accurate digital surface models (DSM), which have in turn contributed to the documentation of heavily textured monuments. However, due to the required accuracy of control points, photogrammetric methods are always applied in combination with surveying measurements and hence depend on them. Along this line of thought, this paper explores the possibility of limiting the surveying measurements and field work necessary for the production of large-scale photogrammetric products, and proposes an alternative method in which the necessary control points, instead of being measured with surveying procedures, are chosen from a dense and accurate point cloud. Using this point cloud also as a surface model, the only field work necessary is the scanning of the object and image acquisition, which need not be subject to strict planning. To evaluate the proposed method, an algorithm and a complementary interface were produced that allow the parallel manipulation of 3D point clouds and images and through which single-image procedures take place. The paper concludes by presenting the results of a case study at the ancient temple of Hephaestus in Athens and by providing a set of guidelines for effectively implementing the method.

