H-RANSAC: A HYBRID POINT CLOUD SEGMENTATION COMBINING 2D AND 3D DATA

Author(s):  
A. Adam ◽  
E. Chatzilari ◽  
S. Nikolopoulos ◽  
I. Kompatsiaris

In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provides more accurate segmentation results than the typical RANSAC plane-fitting algorithm.
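The core idea, RANSAC plane fitting with an extra 2D-consistency check, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the per-point 2D segment labels, the `purity` threshold, and all parameter values are assumptions for the sketch.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through 3+ points; returns (unit normal, offset d)."""
    centroid = pts.mean(axis=0)
    # Smallest right singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def hybrid_ransac(points, labels_2d, n_iter=200, tol=0.05, purity=0.8, seed=0):
    """RANSAC plane fitting with a 2D-consistency criterion: a candidate
    plane is accepted only if its 3D inliers mostly carry one 2D segment label."""
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal, d = fit_plane(sample)
        dist = np.abs(points @ normal + d)
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) <= len(best):
            continue
        # 2D consistency: the dominant image-segment label must cover
        # at least `purity` of the candidate inliers.
        _, counts = np.unique(labels_2d[inliers], return_counts=True)
        if counts.max() / len(inliers) >= purity:
            best = inliers
    return best
```

The consistency test is what distinguishes this sketch from plain RANSAC: a plane cutting through two different image segments is rejected even if it has many geometric inliers.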

Author(s):  
A. Adam ◽  
L. Grammatikopoulos ◽  
G. Karras ◽  
E. Protopapadakis ◽  
K. Karantzalos

Abstract. 3D semantic segmentation is the joint task of partitioning a point cloud into semantically consistent 3D regions and assigning each of them a semantic class/label. While traditional approaches to 3D semantic segmentation typically rely only on structural information about the objects (i.e. object geometry and shape), in recent years many techniques combining both visual and geometric features have emerged, taking advantage of the progress in SfM/MVS algorithms that reconstruct point clouds from multiple overlapping images. Our work describes a hybrid methodology for 3D semantic segmentation, relying on both 2D and 3D space and aiming to explore whether image selection is critical to the accuracy of 3D semantic segmentation of point clouds. Experimental results are demonstrated on a free online dataset depicting city blocks around Paris. The experimental procedure not only validates that hybrid (geometric and visual) features can achieve more accurate semantic segmentation, but also demonstrates the importance of selecting the most appropriate view for 2D feature extraction.


Author(s):  
P. Tutzauer ◽  
N. Haala

This paper aims at façade reconstruction for the subsequent enrichment of LOD2 building models. We use point clouds from dense image matching, with imagery from both Mobile Mapping systems and oblique airborne cameras. The interpretation of façade structures is based on a geometric reconstruction, which requires a pre-segmentation of the point cloud into façade points and non-façade points. We present an approach for point clouds with limited geometric accuracy, where a geometric segmentation might fail. Our contribution is a radiometric segmentation approach: via local point features, based on a clustering in hue space, the point cloud is segmented into façade points and non-façade points. This way, the initial geometric reconstruction step can be bypassed, and point clouds with limited accuracy can still serve as input for the façade reconstruction and modelling approach.
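A hue-space clustering of this kind can be sketched with a tiny 1-D k-means. This is a simplified stand-in for the paper's local-feature clustering; the choice of two clusters and of the largest cluster as "façade" are assumptions made for illustration.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue in [0,1) from RGB values in [0,1] (vectorized HSV conversion)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    mx, mn = rgb.max(axis=1), rgb.min(axis=1)
    diff = np.where(mx == mn, 1.0, mx - mn)  # avoid division by zero for grey points
    h = np.select(
        [mx == r, mx == g],
        [(g - b) / diff, 2.0 + (b - r) / diff],
        default=4.0 + (r - g) / diff,
    )
    return (h / 6.0) % 1.0

def segment_by_hue(colors, k=2, n_iter=20):
    """Minimal 1-D k-means in hue space; returns a cluster id per point
    and the id of the most populated cluster (assumed to be the façade)."""
    hue = rgb_to_hue(colors)
    centers = np.linspace(hue.min(), hue.max(), k)
    for _ in range(n_iter):
        assign = np.argmin(np.abs(hue[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = hue[assign == j].mean()
    facade_id = np.bincount(assign, minlength=k).argmax()
    return assign, facade_id
```

Working purely in hue discards brightness, which is what makes the split robust to the shading variations that defeat a geometric criterion.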


Author(s):  
M. Chizhova ◽  
A. Gurianov ◽  
M. Hess ◽  
T. Luhmann ◽  
A. Brunn ◽  
...  

For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, in particular on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel hashing-based method in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
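The projection-to-binary-image-to-hash step can be sketched as below. This is a minimal reading of the idea, not the authors' method: the raster resolution, the axis-aligned projection, and the use of SHA-256 as the hash are all assumptions for illustration.

```python
import hashlib
import numpy as np

def binary_projection(points, axis=2, res=8):
    """Drop one coordinate and rasterize the remaining two into a
    res x res binary occupancy image (a stand-in for the projection step)."""
    keep = [i for i in range(3) if i != axis]
    p = points[:, keep]
    lo, hi = p.min(axis=0), p.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1)  # guard degenerate extents
    ij = np.clip(((p - lo) / span * (res - 1)).astype(int), 0, res - 1)
    img = np.zeros((res, res), dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 1
    return img

def projection_hash(points, axis=2, res=8):
    """Hash the binary image so shapes with the same footprint get the
    same key, enabling fast lookup against a typology of known parts."""
    return hashlib.sha256(binary_projection(points, axis, res).tobytes()).hexdigest()
```

Two point sets with identical silhouettes along the projection axis hash to the same key, which is what makes matching against a catalogue of typical building elements a dictionary lookup rather than a geometric search.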


Author(s):  
L. Moradi ◽  
M. Saadatseresht

Abstract. In this paper, a model for the simultaneous registration and 3D modelling of Velodyne VLP 32e laser scanner point clouds, based on least squares adjustment methods, was developed. Most of the proposed methods for the registration of point clouds obtained by mobile mapping systems have applications in navigation and visualization; they usually do not pay enough attention to geometric accuracy, error propagation, and weight analysis. In addition, these methods often rely on point correspondence solutions which increase the computation time and decrease the accuracy. Therefore, the purpose of this paper is to develop a model based on least squares adjustment that focuses on the weights of the plane parameters created by a robust least squares fitting algorithm, and that simultaneously creates a 3D environmental model and registers the point clouds. To do this, it utilizes both point cloud voxelization and differential plane techniques. The results illustrate the high capability of the proposed solution: with the optimum plane-parameter weight of 100, the average distance between two scans can reach below 10 mm. In addition, the best voxel size was 10 cm, which is twice the point cloud resolution.
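The voxelization and per-voxel plane fitting that feed such an adjustment can be sketched as follows. This is a simplified illustration, not the paper's model: the residual-based weight (inverse variance, capped at 100 to echo the reported optimum) and the `min_pts` cutoff are assumptions.

```python
import numpy as np

def voxelize(points, size):
    """Group point indices by voxel cell (common preprocessing step)."""
    keys = np.floor(points / size).astype(int)
    cells = {}
    for i, k in enumerate(map(tuple, keys)):
        cells.setdefault(k, []).append(i)
    return cells

def fit_plane_lsq(pts):
    """Least-squares plane via SVD; returns unit normal, offset, RMS residual."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    n = vt[-1]
    rms = np.sqrt((((pts - c) @ n) ** 2).mean())
    return n, -n.dot(c), rms

def voxel_planes(points, size=0.1, min_pts=10):
    """Fit one plane per sufficiently populated voxel; each plane gets a
    residual-based weight for use as an observation in an adjustment."""
    planes = []
    for idx in voxelize(points, size).values():
        if len(idx) < min_pts:
            continue  # too few points for a reliable plane
        n, d, rms = fit_plane_lsq(points[idx])
        weight = min(1.0 / max(rms, 1e-6) ** 2, 100.0)  # capped inverse variance
        planes.append((n, d, weight))
    return planes
```

Matching such weighted voxel planes between two scans, instead of matching raw points, is what removes the costly point-correspondence search the abstract criticizes.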


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3625 ◽  
Author(s):  
Dawei Li ◽  
Yan Cao ◽  
Xue-song Tang ◽  
Siyuan Yan ◽  
Xin Cai

Leaves account for the largest proportion of all organ areas for most kinds of plants, and comprise the main part of the photosynthetically active material in a plant. Observation of individual leaves can help to recognize their growth status and to measure complex phenotypic traits. Current image-based leaf segmentation methods suffer from being highly restricted in the species they support and from vulnerability to canopy occlusion. In this work, we propose an individual leaf segmentation approach for dense plant point clouds using facet over-segmentation and facet region growing. The approach can be divided into three steps: (1) point cloud pre-processing, (2) facet over-segmentation, and (3) facet region growing for individual leaf segmentation. The experimental results show that the proposed method is effective and efficient in segmenting individual leaves from 3D point clouds of greenhouse ornamentals such as Epipremnum aureum, Monstera deliciosa, and Calathea makoyana, with average precision and recall both above 90%. The results also reveal the wide applicability of the proposed methodology to point clouds scanned by different kinds of 3D imaging systems, such as stereo vision and Kinect v2. Moreover, our method is potentially applicable in a broad range of applications that aim at segmenting regular surfaces and objects from a point cloud.
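Step (3), growing leaf regions from over-segmented facets, can be sketched as a breadth-first merge over a facet adjacency graph. This is a minimal sketch, assuming precomputed facet normals and adjacency; the 15-degree merge threshold is an illustrative value, not the paper's.

```python
import numpy as np
from collections import deque

def region_grow(normals, adjacency, angle_deg=15.0):
    """Group facets whose normals differ by less than angle_deg across an
    adjacency graph; each connected group of near-parallel facets becomes
    one region (e.g. one leaf)."""
    cos_t = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(normals), dtype=int)
    region = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue  # facet already absorbed by an earlier region
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in adjacency[i]:
                # Merge neighbor facets with near-parallel normals.
                if labels[j] == -1 and abs(normals[i] @ normals[j]) >= cos_t:
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```

Growing over facets rather than raw points keeps the merge criterion stable on noisy data, since each facet normal already averages many points.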


2007 ◽  
Vol 7 (4) ◽  
pp. 372-381 ◽  
Author(s):  
Tao Peng ◽  
Satyandra K. Gupta

This paper describes a computational framework for constructing point clouds using digital projection patterns. The basic principle behind the approach is to project known patterns on the object using a digital projector. A digital camera is then used to take images of the object with the known projection patterns imposed on it. Due to the presence of 3D faces of the object, the projection patterns appear distorted in the images. The images are analyzed to construct the 3D point cloud that is capable of introducing the observed distortions in the images. The approach described in this paper presents three advances over the previously developed approaches. First, it is capable of working with the projection patterns that have variable fringe widths and curved fringes and hence can provide improved accuracy. Second, our algorithm minimizes the number of images needed for creating the 3D point cloud. Finally, we use a hybrid approach that uses a combination of reference plane images and estimated system parameters to construct the point cloud. This approach provides good run-time computational performance and simplifies the system calibration.
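Two standard ingredients of such fringe-projection pipelines can be sketched briefly: wrapped-phase recovery from four phase-shifted images, and a reference-plane phase-to-height conversion. These are textbook formulas, not the paper's specific algorithm, and the calibration values `L`, `d`, `f` below are purely illustrative.

```python
import numpy as np

def phase_4step(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2 each
    (classic 4-step phase-shifting algorithm)."""
    return np.arctan2(i4 - i2, i1 - i3)

def height_from_phase(phase_obj, phase_ref, L=1.0, d=0.2, f=10.0):
    """Reference-plane phase-to-height model: the phase difference between
    the object and a flat reference plane maps to height via the camera
    distance L, projector-camera baseline d, and fringe frequency f."""
    dphi = phase_obj - phase_ref
    return L * dphi / (dphi + 2 * np.pi * f * d)
```

The distortion of the projected fringes shows up exactly as this phase difference, which is why analyzing the images recovers the 3D surface.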


Author(s):  
A. Kharroubi ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a data source. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation, without the need to derive 3D models or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure, leveraged through out-of-core algorithms, to load in real time and continuously only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity, with a user interface for VR point cloud interaction and visualisation. Finally, we provide full semantic VR data integration, enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, among them a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
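The out-of-core selection idea, traversing the octree and loading only nodes that matter for the current view, can be sketched with a projected-size cutoff, as popularized by viewers such as Potree. This is an assumed simplification (no real frustum test, toy node tuples, illustrative `focal` and pixel budget), not the paper's implementation.

```python
import numpy as np

def visible_nodes(node, cam_pos, min_pixel_size=1.0, focal=1000.0):
    """Depth-first octree traversal that keeps only nodes whose projected
    size at the camera exceeds a pixel budget; distant or tiny subtrees
    are skipped entirely, so their points never need to be loaded.
    A node is a tuple (center, half_size, children, payload)."""
    center, half, children, payload = node
    dist = max(np.linalg.norm(np.asarray(center, dtype=float) - cam_pos), 1e-6)
    if focal * (2 * half) / dist < min_pixel_size:
        return []  # too small on screen: prune this whole subtree
    out = [payload]
    for child in children:
        out += visible_nodes(child, cam_pos, min_pixel_size, focal)
    return out
```

Because pruning happens at the subtree level, a 2.3-billion-point cloud never has to reside in memory: only the handful of nodes passing the screen-size test are streamed in per frame.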


2020 ◽  
Vol 30 (7) ◽  
pp. 12-17
Author(s):  
Thi Kim Cuc Nguyen ◽  
Van Vinh Nguyen ◽  
Xuan Binh Cao

3D shape measurement by structured light is a high-speed method capable of profiling complex surfaces. The processing of the measured data also greatly affects the accuracy of the obtained point clouds. In this paper, an algorithm to detect multiple planes in point cloud data was developed based on the RANSAC algorithm, in order to evaluate the accuracy of point clouds measured by structured light. To evaluate the accuracy of the obtained point cloud, two-step height parts are used: the planes are detected, and the distance between them needs to be measured with high accuracy. Therefore, the distances between the planes found in the point cloud are compared with data measured by a CMM. The experimental results show that the proposed algorithm can identify multiple planes at the same time, with a maximum standard deviation of 0.068 mm and a maximum relative error of 1.46%.
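Multi-plane detection of this kind is commonly done by running RANSAC sequentially, removing each plane's inliers before the next round; the step height then follows from the fitted plane models. This is a generic sketch of that strategy, not the paper's algorithm, and all thresholds are assumptions.

```python
import numpy as np

def multi_plane_ransac(points, n_planes=2, n_iter=300, tol=0.02, seed=1):
    """Detect several planes by repeated RANSAC: fit the dominant plane,
    remove its inliers, and repeat on the remaining points.
    Returns a list of (unit normal, offset d) models with n.x + d = 0."""
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(points))
    planes = []
    for _ in range(n_planes):
        best_in, best_model = np.array([], dtype=int), None
        for _ in range(n_iter):
            idx = rng.choice(remaining, 3, replace=False)
            p0, p1, p2 = points[idx]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue  # degenerate (collinear) sample
            n = n / norm
            dist = np.abs((points[remaining] - p0) @ n)
            inl = remaining[dist < tol]
            if len(inl) > len(best_in):
                best_in, best_model = inl, (n, -n.dot(p0))
        planes.append(best_model)
        remaining = np.setdiff1d(remaining, best_in)  # peel off this plane
    return planes
```

For a two-step height part, the distance between the two recovered (near-parallel) planes is the quantity compared against the CMM reference measurement.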


DYNA ◽  
2019 ◽  
Vol 86 (209) ◽  
pp. 238-247 ◽  
Author(s):  
Esmeide Alberto Leal Narvaez ◽  
German Sanchez Torres ◽  
John William Branch Bedoya

The human visual system (HVS) can process large quantities of visual information instantly. Visual saliency perception is the process of locating and identifying regions with a high degree of saliency from a visual standpoint. Mesh saliency detection has been studied extensively in recent years, but few studies have focused on 3D point cloud saliency detection. The estimation of visual saliency is important for computer graphics tasks such as simplification, segmentation, shape matching and resizing. In this paper, we present a method for the direct detection of saliency on unorganized point clouds. First, our method computes a set of overlapping neighborhoods and estimates a descriptor vector for each point inside them. Then, the descriptor vectors are used as a natural dictionary in order to apply a sparse coding process. Finally, we estimate a saliency map of the point neighborhoods based on the Minimum Description Length (MDL) principle. Experimental results show that the proposed method achieves results similar to those reported in the literature and in some cases even improves on them. It captures the geometry of the point clouds without using any topological information and achieves acceptable performance. The effectiveness and robustness of our approach are shown by comparison with previous studies in the literature.
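The dictionary-based intuition, that a point is salient when its descriptor is hard to reconstruct sparsely from its neighbors' descriptors, can be sketched as below. This substitutes plain sparse-reconstruction error for the authors' MDL score and uses a toy orthogonal matching pursuit; descriptors and neighborhoods are assumed precomputed.

```python
import numpy as np

def omp(D, x, k):
    """Toy orthogonal matching pursuit: approximate x with up to k columns
    of the (column-normalized) dictionary D; returns the residual."""
    residual, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return residual

def saliency(descriptors, neighborhoods, k=2):
    """Saliency per point: how poorly its descriptor is sparsely coded by
    its neighbors' descriptors used as a natural dictionary. Ordinary
    (well-explained) points score near 0; outlying geometry scores high."""
    scores = np.empty(len(descriptors))
    for i, nbrs in enumerate(neighborhoods):
        D = descriptors[nbrs].T  # columns = neighbor descriptors (atoms)
        norms = np.linalg.norm(D, axis=0)
        D = D / np.where(norms == 0, 1, norms)
        scores[i] = np.linalg.norm(omp(D, descriptors[i], k))
    return scores
```

No mesh or topology enters anywhere: the dictionary is built purely from point neighborhoods, matching the abstract's claim of working directly on unorganized clouds.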


Author(s):  
Taemin Lee ◽  
Changhun Jung ◽  
Kyungtaek Lee ◽  
Sanghyun Seo

Abstract. As augmented reality technologies develop, real-time interactions between objects present in the real world and virtual space are required. Generally, recognition and location estimation in augmented reality are carried out using tracking techniques, typically markers. However, using markers creates spatial constraints in the simultaneous tracking of space and objects. Therefore, we propose a system that enables camera tracking in the real world and visualizes virtual visual information through the recognition and positioning of objects. We scan the space using an RGB-D camera. A three-dimensional (3D) dense point cloud map is created from point clouds generated from the video images. Among the generated point cloud information, objects are detected and retrieved based on pre-learned data. Finally, using the predicted poses of the detected objects, other information may be augmented. Our system performs object recognition and 3D pose estimation from simple camera information, enabling the viewing of virtual visual information based on object location.

