Symmetry Detection and Analysis of Chinese Paifang Using 3D Point Clouds

Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2011
Author(s):  
Ting On Chan ◽  
Yeran Sun ◽  
Jiayong Yu ◽  
Juan Zeng ◽  
Lixin Liu

The Chinese paifang is an essential constituent element of Chinese and many other oriental architectural traditions. In this paper, a new method for detecting and analysing the reflection symmetry of a paifang based on 3D point clouds is proposed. The method invokes a new model that simultaneously fits two vertical planes of symmetry to the 3D point cloud of a paifang to support further symmetry analysis. Several simulated datasets were used to verify the proposed method. The results indicated that the proposed method was able to quantify the symmetry of a paifang in terms of the RMSE obtained from the ICP algorithm, and remained robust when random noise was added to the simulated measurements. For real datasets, three old Chinese paifangs (with ages from 90 to 500 years) were scanned as point clouds and input into the proposed method. The method quantified the degree of symmetry of the three paifangs in terms of the RMSE, which ranged from 20 to 61 mm. The paifang with apparent asymmetry had the highest RMSE (61 mm). Beyond quantifying overall symmetry, the proposed method can also locate which portion of a paifang is relatively more symmetric. The proposed method can potentially be used for structural health inspection and cultural studies of Chinese paifangs and other similar architecture.
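The abstract's symmetry score, an ICP-style RMSE between a point cloud and its mirror image, can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' implementation: it assumes a known vertical symmetry plane x = c and uses a single brute-force nearest-neighbour pass in place of full ICP.

```python
import numpy as np

def reflect_across_plane(points, c):
    """Mirror points across the vertical plane x = c (assumed known here)."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * c - mirrored[:, 0]
    return mirrored

def symmetry_rmse(points, c):
    """ICP-style score: RMSE of each mirrored point's distance to its
    nearest original point (a real ICP would also refine the transform)."""
    mirrored = reflect_across_plane(points, c)
    d2 = ((mirrored[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = np.sqrt(d2.min(axis=1))
    return float(np.sqrt((nearest ** 2).mean()))
```

A perfectly symmetric cloud scores 0; the score grows with asymmetry, matching the paper's use of RMSE as a symmetry measure.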

Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several real autonomous flights that close the altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
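The two-stage idea, cluster the cloud into horizontal planes and rank them by vertical distance from the sensor, can be illustrated with a deliberately simplified sketch. The function name, the histogram-based plane detection, and the parameters below are assumptions for illustration; the paper's actual clustering and plane mapping are more elaborate.

```python
import numpy as np

def estimate_altitude(points, bin_size=0.05, min_support=30):
    """Toy stand-in for the two-stage pipeline: bin point heights (sensor
    frame, z negative below the sensor), treat well-populated bins as
    horizontal planes, and return the distance to the plane closest below."""
    z = points[:, 2]
    below = z[z < 0.0]
    if below.size == 0:
        return None
    bins = np.floor(below / bin_size).astype(int)
    ids, counts = np.unique(bins, return_counts=True)
    planes = ids[counts >= min_support]   # bins with enough points = candidate planes
    if planes.size == 0:
        return None
    nearest_plane = planes.max()          # least negative = closest below the sensor
    members = below[bins == nearest_plane]
    return float(-members.mean())         # altitude above that plane
```

The `min_support` threshold is what gives the sketch some robustness: a small dynamic obstacle contributes too few points to form a plane and is ignored.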


Author(s):  
Wenju Wang ◽  
Tao Wang ◽  
Yu Cai

Abstract. Classifying 3D point clouds is an important and challenging task in computer vision. Current classification methods using multiple views lose characteristic or detail information during the representation or processing of the views. For this reason, we propose a multi-view attention-convolution pooling network framework for 3D point cloud classification tasks. This framework uses Res2Net to extract features from multiple 2D views. Our attention-convolution pooling method finds more of the information in the input data that is relevant to the current output, effectively addressing the loss of feature information caused by feature representation and the loss of detail information during dimensionality reduction. Finally, we obtain the probability distribution over classes using a fully connected layer and the softmax function. The experimental results show that our framework achieves higher classification accuracy and better performance than other contemporary methods on the ModelNet40 dataset.
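The core of attention-based pooling over view features can be sketched as follows. This is a minimal illustration with a single scoring vector `w_score` (a hypothetical stand-in for learned parameters); the paper's attention-convolution pooling is convolutional and trained end-to-end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def attention_pool(view_features, w_score):
    """Score each view feature, softmax the scores into attention weights,
    and return the weighted sum as the pooled shape descriptor.
    view_features: (n_views, dim); w_score: (dim,) scoring vector."""
    scores = view_features @ w_score   # one relevance score per view
    weights = softmax(scores)          # attention weights over the views
    return weights @ view_features     # (dim,) pooled descriptor
```

Unlike max or average pooling, the weighting lets informative views dominate the descriptor, which is the loss-reduction idea the abstract describes.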


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud. It can generate supervised point clouds, as observed by airborne LiDAR, from aerial images. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet conditioned on the ResNet features. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn to generate point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
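A FoldingNet-style generator deforms a fixed 2D grid into a 3D point cloud, conditioned on latent image features. The sketch below uses random, untrained weights and only illustrates the shape of the computation, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fold(latent, grid, w1, b1, w2, b2):
    """One FoldingNet-style folding step: tile the latent code onto every
    2D grid point and push the concatenation through a tiny two-layer MLP
    that outputs one 3D point per grid point."""
    n = grid.shape[0]
    tiled = np.repeat(latent[None, :], n, axis=0)   # (n, latent_dim)
    x = np.concatenate([grid, tiled], axis=1)       # (n, 2 + latent_dim)
    h = np.maximum(x @ w1 + b1, 0.0)                # ReLU hidden layer
    return h @ w2 + b2                              # (n, 3) generated points

# a 4x4 grid in [0,1]^2 and random (untrained) weights
u, v = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
grid = np.stack([u.ravel(), v.ravel()], axis=1)
latent = rng.normal(size=8)
w1, b1 = rng.normal(size=(10, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)
points = fold(latent, grid, w1, b1, w2, b2)
```

In the trained network the latent code comes from the ResNet encoder, so the same grid folds into different point clouds for different input images.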


2018 ◽  
Vol 9 (2) ◽  
pp. 37-53
Author(s):  
Sinh Van Nguyen ◽  
Ha Manh Tran ◽  
Minh Khai Tran

Building 3D objects or reconstructing their surfaces from 3D point cloud data are active research topics in geometric modeling and computer graphics. In recent years, they have also been studied and used in fields such as graph modeling and simulation, image processing, and the restoration of digital heritage. This article presents an improved method for restoring the shape of 3D point cloud surfaces. The method combines the creation of Bezier surface patches with the computation of tangent planes of 3D points to fill holes in the surface of a 3D point cloud. The method proceeds as follows: first, a boundary is identified for each hole on the surface. The holes are then filled by computing Bezier curves of surface patches to find the missing points. After that, the filled holes are refined in two steps (rough and elaborate) to adjust the inserted points and preserve the local curvature of the holes. The contribution of the proposed method is demonstrated in its processing time, and the novelty of its combined computation preserves the initial shape of the surface.
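The hole-filling step rests on cubic Bezier interpolation between boundary points. A minimal sketch follows; in the paper's method the inner control points would be derived from the computed tangent planes, whereas here `p1` and `p2` are simply given.

```python
import numpy as np

def bezier_curve(p0, p1, p2, p3, ts):
    """Evaluate a cubic Bezier curve at parameters ts: interpolates missing
    points between boundary points p0 and p3, with p1 and p2 as control
    points shaping the curve (tangent-derived in the actual method)."""
    ts = np.asarray(ts)[:, None]
    return ((1 - ts) ** 3 * p0
            + 3 * (1 - ts) ** 2 * ts * p1
            + 3 * (1 - ts) * ts ** 2 * p2
            + ts ** 3 * p3)
```

Sampling `ts` densely between 0 and 1 yields the candidate points inserted into the hole before the rough and elaborate refinement steps.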


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 143
Author(s):  
Yubo Cui ◽  
Zheng Fang ◽  
Sifan Zhou

Person tracking is an important issue in both computer vision and robotics. However, most existing person tracking methods using 3D point clouds are based on the Bayesian filtering framework, which is not robust in challenging scenes. In contrast with these filtering methods, in this paper we propose a neural network, named the Point Siamese Network (PSN), to perform person tracking using only 3D point clouds. PSN consists of two input branches, named the template and search branches. After finding the target person (by reading the label or using a detector), we obtain the inputs of the two branches and create feature spaces for them using a feature extraction network. A similarity map is then computed between the two feature spaces, from which the target person can be located. Furthermore, we add an attention module to the template branch to guide feature extraction. To evaluate the performance of the proposed method, we compare it with the Unscented Kalman Filter (UKF) on three custom-labeled challenging scenes and on the KITTI dataset. The experimental results show that the proposed method outperforms the UKF in robustness and accuracy and runs in real time. In addition, we publicly release our collected dataset and the labeled sequences to the research community.
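The similarity map between the template and search branches can be illustrated with plain cosine similarity over extracted features. This sketch assumes the features are already given; in PSN they are produced by the learned feature extraction network.

```python
import numpy as np

def similarity_map(template_feat, search_feats):
    """Cosine similarity between the template feature and each candidate
    feature from the search branch; the argmax locates the tracked person."""
    t = template_feat / np.linalg.norm(template_feat)
    s = search_feats / np.linalg.norm(search_feats, axis=1, keepdims=True)
    return s @ t   # (n_candidates,) similarity scores
```

The candidate with the highest score is taken as the target's new location, which is the map-reading step the abstract describes.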


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning and the 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using the 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct position error, with a laser measurement model evaluated in the 3D point cloud space. Field experiments were conducted to evaluate the accuracy of our proposal, and confirmed that a localization precision of 0.2 m (RMS) is achievable.
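One correction step of such a particle filter, weighting each pose hypothesis by the agreement between measured and predicted laser ranges and then resampling, can be sketched as follows. The range-prediction function, which in the paper's setting would ray-cast into the 3D point cloud map, is passed in as a stub; the Gaussian noise model and `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_update(particles, weights, ranges_meas, ranges_pred_fn, sigma=0.1):
    """One particle-filter correction step: reweight each pose hypothesis by
    the likelihood of the measured ranges, then resample the particle set."""
    w = weights.copy()
    for i, p in enumerate(particles):
        err = ranges_meas - ranges_pred_fn(p)   # predicted ranges would come from
        w[i] *= np.exp(-0.5 * np.sum(err ** 2) / sigma ** 2)  # the point cloud map
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

With a one-beam toy model, particles far from the true pose receive negligible weight and the resampled set collapses onto the correct hypothesis.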


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4398 ◽  
Author(s):  
Soohee Han

The present study introduces an efficient algorithm to construct a file-based octree for a large 3D point cloud. A conventional file-based octree, however, is very slow compared with a memory-based approach, and performs even worse on 3D point clouds scanned along elongated objects like tunnels and corridors. These defects were addressed by implementing a semi-isometric octree group. The approach arranges several semi-isometric octrees in a group that tightly covers the 3D point cloud, while each octree, along with its leaf nodes, still maintains an isometric shape. The proposed approach was tested using three 3D point clouds: a long tunnel and a short tunnel captured by a terrestrial laser scanner, and an urban area captured by an airborne laser scanner. The experimental results showed that the performance of the semi-isometric approach was no worse than that of a memory-based approach, and considerably better than that of a file-based one. Thus, the proposed semi-isometric approach achieves a good balance between query performance and memory efficiency. In conclusion, given enough main memory and a moderately sized 3D point cloud, a memory-based approach is preferable. When the 3D point cloud is larger than the main memory, a file-based approach becomes inevitable, and the semi-isometric approach is the better option.
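The isometric indexing that the approach preserves can be illustrated with a small sketch: descending an implicit octree yields a chain of child indices, which can serve as a file address in a file-based octree. The helper names below are hypothetical, not from the paper.

```python
def child_index(point, center):
    """Which of the 8 octants of a node (with the given center) holds the point."""
    return (((point[0] >= center[0]) << 2)
            | ((point[1] >= center[1]) << 1)
            | (point[2] >= center[2]))

def locate(point, center, half, depth):
    """Descend an implicit isometric octree: return the chain of child
    indices down to the given depth (usable as a page key on disk)."""
    key = []
    for _ in range(depth):
        key.append(child_index(point, center))
        half = half / 2.0   # child half-size; child center shifts by it
        center = (center[0] + (half if point[0] >= center[0] else -half),
                  center[1] + (half if point[1] >= center[1] else -half),
                  center[2] + (half if point[2] >= center[2] else -half))
    return key
```

Because each node is isometric (a cube), a point's key depends only on its coordinates, which is what a semi-isometric octree group preserves per octree even when the covered region is elongated.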


2020 ◽  
Vol 12 (18) ◽  
pp. 3043 ◽  
Author(s):  
Juan M. Jurado ◽  
Luís Pádua ◽  
Francisco R. Feito ◽  
Joaquim J. Sousa

Optimising vineyard management requires efficient and automated methods able to identify individual plants. In the last few years, Unmanned Aerial Vehicles (UAVs) have become one of the main sources of remote sensing information for Precision Viticulture (PV) applications. In fact, high-resolution UAV-based imagery offers a unique capability for modelling plant structure, making possible the recognition of significant geometrical features in photogrammetric point clouds. Despite the proliferation of innovative technologies in viticulture, the identification of individual grapevines still relies on image-based segmentation techniques, in which grapevine and non-grapevine features are separated and individual plants are estimated, usually assuming a fixed distance between them. In this study, an automatic method for grapevine trunk detection using 3D point cloud data is presented. The proposed method focuses on the recognition of key geometrical parameters to ensure the presence of every plant in the 3D model. The method was tested in different commercial vineyards; to push it to its limit, it was also applied to a vineyard characterised by several missing plants along the vine rows, irregular distances between plants, and trunks occluded by dense vegetation in some areas. The proposed method represents a break from the state of the art: it is able to identify individual trunks, posts and missing plants based on the interpretation and analysis of a 3D point cloud. Moreover, a validation process was carried out, showing that the method performs well, especially when applied to 3D point clouds generated in phases in which the leaves are not yet very dense (January to May). With correct flight parametrisation, however, the method remains effective throughout the entire vegetative cycle.


2019 ◽  
Vol 11 (2) ◽  
pp. 198 ◽  
Author(s):  
Chunhua Hu ◽  
Zhou Pan ◽  
Pingping Li

Leaves are used extensively as an indicator in research on tree growth. Leaf area, one of the most important indices in leaf morphology, is also a comprehensive growth index for evaluating the effects of environmental factors. When scanning tree surfaces using a 3D laser scanner, the resulting point cloud data usually contain many outliers and much noise. The outliers can be clusters or sparse points, whereas the noise is usually non-isolated but exhibits different attributes from valid points. In this study, a 3D point cloud filtering method for leaves based on manifold distance and normal estimation is proposed. First, leaves were extracted from the tree point cloud and initial clustering was performed as a preprocessing step. Second, outlier cluster filtering and outlier point filtering were performed successively using a manifold distance and truncation method. Third, noise points in each cluster were filtered based on local surface normal estimation. The 3D reconstruction results for leaves after applying the proposed filtering method show that it outperforms other classic filtering methods. Comparisons of leaf areas with real values, and assessments of the mean absolute error (MAE) and mean absolute error percentage (MAE%), were also conducted for leaves of different size classes. The root mean square error (RMSE) of the leaf area was 2.49 cm2. The MAE values for small, medium and large leaves were 0.92 cm2, 1.05 cm2 and 3.39 cm2, respectively, with corresponding MAE% values of 10.63, 4.83 and 3.8. These results demonstrate that the proposed method can filter outliers and noise from 3D point clouds of leaves, improving both the authenticity of 3D leaf visualization and the accuracy of leaf area measurement.
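The normal-estimation step in the third stage is typically done by PCA over a point's k nearest neighbours: the eigenvector with the smallest eigenvalue of the neighbourhood covariance approximates the surface normal. A minimal numpy sketch follows (brute-force neighbour search, illustrative only; the paper's noise filter then compares each point's normal with those of its neighbours).

```python
import numpy as np

def estimate_normal(points, query_idx, k=8):
    """PCA normal estimation at one point: gather the k nearest neighbours,
    form their covariance, and return the eigenvector with the smallest
    eigenvalue (the direction of least spread, i.e. the surface normal)."""
    q = points[query_idx]
    d = np.linalg.norm(points - q, axis=1)
    nbrs = points[np.argsort(d)[:k]]            # brute-force k-NN
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)  # 3x3 neighbourhood covariance
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    return vecs[:, 0]                           # normal (up to sign)
```

For a locally planar leaf patch the smallest eigenvalue is near zero and the returned vector is perpendicular to the patch; noise points deviate from this pattern and can be truncated.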

