Investigation of Algorithms for Generating Surfaces of 3D Models Based on an Unstructured Point Cloud

Author(s):  
Ekaterina Glumova ◽  
Aleksandr Filinskih

This paper considers methods for creating 3D object models from an unstructured (sparse) point cloud. It addresses the problem of combining point cloud densification with subsequent surface generation. A comparative analysis of surface generation algorithms is carried out to identify the most effective method when the input data are depth maps derived from a sparse point cloud; the comparison uses qualitative, quantitative and timing criteria. On this basis, the optimal method for building a 3D object model from an unstructured (sparse) point cloud and depth map data is selected. A mathematical description is given for a point cloud densification method based on stereo matching, using a two-phase view-search algorithm and depth map extraction from the source image set of Multi-View Stereo for Community Photo Collections. The method is implemented in practice in the open-source software Regard3D.
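The depth-map-to-point-cloud step underlying the compared surface generation methods can be sketched as a pinhole back-projection. The function below is a minimal illustration, not taken from the paper: the intrinsics `fx`, `fy`, `cx`, `cy` and the convention that zero depth marks invalid pixels are assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) to an N x 3 point cloud
    using a pinhole camera model. Pixels with depth <= 0 are treated as
    invalid and dropped (an assumed convention)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # invert the projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

A surface can then be reconstructed from the densified cloud with any of the compared algorithms (e.g. Poisson reconstruction).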

2020 ◽  
Vol 402 ◽  
pp. 336-345
Author(s):  
Xuzhan Chen ◽  
Youping Chen ◽  
Homayoun Najjaran

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6043
Author(s):  
Yujun Jiao ◽  
Zhishuai Yin

A two-phase cross-modality fusion detector is proposed in this study for robust and high-precision 3D object detection with RGB images and LiDAR point clouds. First, a two-stream fusion network is built into the framework of Faster RCNN to perform accurate and robust 2D detection. The visible stream takes the RGB images as inputs, while the intensity stream is fed with the intensity maps which are generated by projecting the reflection intensity of point clouds to the front view. A multi-layer feature-level fusion scheme is designed to merge multi-modal features across multiple layers in order to enhance the expressiveness and robustness of the produced features upon which region proposals are generated. Second, a decision-level fusion is implemented by projecting 2D proposals to the space of the point cloud to generate 3D frustums, on the basis of which the second-phase 3D detector is built to accomplish instance segmentation and 3D-box regression on the filtered point cloud. The results on the KITTI benchmark show that features extracted from RGB images and intensity maps complement each other, and our proposed detector achieves state-of-the-art performance on 3D object detection with a substantially lower running time as compared to available competitors.
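The intensity-map generation described above amounts to projecting each LiDAR return into a front-view image and storing its reflectance there. A minimal sketch follows, assuming a spherical (range-image style) projection with hypothetical field-of-view bounds; the paper's exact projection may differ.

```python
import numpy as np

def intensity_front_view(points, intensity, h=64, w=512,
                         v_fov=(-24.9, 2.0), h_fov=(-45.0, 45.0)):
    """Project N x 3 LiDAR points to an h x w front-view intensity map.
    The vertical/horizontal FOV defaults are illustrative assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.degrees(np.arctan2(y, x))                  # horizontal angle
    pitch = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))
    col = ((yaw - h_fov[0]) / (h_fov[1] - h_fov[0]) * (w - 1)).astype(int)
    row = ((v_fov[1] - pitch) / (v_fov[1] - v_fov[0]) * (h - 1)).astype(int)
    valid = (col >= 0) & (col < w) & (row >= 0) & (row < h)
    img = np.zeros((h, w), dtype=np.float32)
    img[row[valid], col[valid]] = intensity[valid]      # store reflectance
    return img
```

The resulting map is image-like, so it can be fed to the intensity stream of a 2D detector alongside the RGB stream.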


Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from a 3D point cloud, the natural 3D patterns and invariances of the 3D data are often obscured. Objective: In this work, we aim at studying 3D object detection from discrete, disordered and sparse 3D point clouds. Methods: The CNN is composed of the frustum sequence module, the 3D instance segmentation module S-NET, the 3D point cloud transformation module T-NET, and the 3D bounding box estimation module E-NET. The search space of the object is determined by the frustum sequence module. Instance segmentation of the point cloud is performed by the 3D instance segmentation module. The 3D coordinates of the object are obtained by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection with an improved convolutional neural network (CNN) based on image-driven point clouds.
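The search-space restriction performed by a frustum module can be illustrated by a frustum crop: keep only the points whose camera projection falls inside a 2D detection box. This is a generic sketch under assumed pinhole intrinsics, not the paper's implementation.

```python
import numpy as np

def frustum_points(points, box2d, fx, fy, cx, cy):
    """Keep points (camera coordinates, z forward) whose image projection
    lies inside a 2D detection box (x1, y1, x2, y2) -- a frustum crop."""
    x1, y1, x2, y2 = box2d
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx          # project to pixel coordinates
    v = fy * points[:, 1] / z + cy
    mask = (z > 0) & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points[mask]
```

The cropped cloud is small enough for per-instance segmentation and box regression to run in real time.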


2021 ◽  
Author(s):  
Siddharth Katageri ◽  
Sameer Kulmi ◽  
Ramesh Ashok Tabib ◽  
Uma Mudenagudi

2021 ◽  
Author(s):  
Xinrui Yan ◽  
Yuhao Huang ◽  
Shitao Chen ◽  
Zhixiong Nan ◽  
Jingmin Xin ◽  
...  

2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper presents a simple approach that uses a Kinect-based scanner to create models ready for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually requires complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction, and each may require several functions and algorithms to accomplish its specific task. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used to remove noisy points. This paper attempts to develop a simple Kinect-based scanner and a modeling approach that avoids these complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. The scanner generates a set of organized point clouds, which can be aligned precisely by a simple transformation matrix instead of ICP. The surface quality of raw point clouds captured by the Kinect is usually rough; to address this drawback, the paper introduces a solution for obtaining a smooth surface model. In addition, these processes have been efficiently developed with free open-source libraries: VTK, the Point Cloud Library and OpenNI.
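Alignment by a simple transformation matrix instead of ICP works because the turntable angle between consecutive scans is known: each scan only needs to be rotated about the table's vertical axis by that angle. A minimal sketch, assuming the z axis is the table axis and the axis passes through the origin (both are illustrative assumptions):

```python
import numpy as np

def turntable_transform(angle_deg):
    """4x4 homogeneous transform rotating a scan about the table's
    vertical (z) axis by the known turntable angle -- the closed-form
    alignment that replaces ICP when the angle is known."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    return T

def align(points, angle_deg):
    """Apply the turntable transform to an N x 3 scan."""
    T = turntable_transform(angle_deg)
    homo = np.c_[points, np.ones(len(points))]   # homogeneous coordinates
    return (homo @ T.T)[:, :3]
```

In practice the table axis must first be calibrated; once it is known, every scan pair aligns exactly, with no iterative registration.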


2014 ◽  
Vol 644-650 ◽  
pp. 2656-2660
Author(s):  
Yao Cheng ◽  
Guang Xue Chen ◽  
Chen Chen ◽  
Jiang Ping Yuan

In the process of 3D printing, stereo image acquisition is the basis and premise of 3D modeling, so it is important to study acquisition methods and techniques. This paper studies the acquisition of point cloud data of a hand model with the handheld laser scanner REVscan, followed by processing in the reverse engineering software Geomagic Studio. Using the captured object model, we can greatly improve efficiency and accuracy, as well as shorten the 3D printing cycle. This helps achieve the transmission of 3D printing data without geographical restrictions, truly realizing the concept of "What You See Is What You Get".


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4093 ◽  
Author(s):  
Jun Xu ◽  
Yanxin Ma ◽  
Songhua He ◽  
Jiahua Zhu

Three-dimensional (3D) object detection is an important research area in 3D computer vision with significant applications in many fields, such as automated driving, robotics, and human–computer interaction. However, low precision remains an urgent problem in 3D object detection. To address it, we present a framework for 3D object detection in point clouds. Specifically, a purpose-designed Backbone Network fuses low-level and high-level features, making full use of their complementary information. Moreover, the two-dimensional (2D) Generalized Intersection over Union is extended to 3D for use as part of the loss function in our framework. Experiments on Car, Cyclist, and Pedestrian detection have been conducted on the KITTI benchmark. Experimental results measured by average precision (AP) show the effectiveness of the proposed network.
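Extending Generalized IoU from 2D to 3D replaces areas with volumes and the smallest enclosing rectangle with the smallest enclosing box. The sketch below covers the axis-aligned case only; KITTI-style oriented boxes additionally require a bird's-eye-view polygon intersection, which this simplified version omits.

```python
import numpy as np

def giou_3d(box_a, box_b):
    """Generalized IoU for axis-aligned 3D boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax). Returns a value in (-1, 1]."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))       # intersection volume
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    union = vol_a + vol_b - inter
    # volume of the smallest enclosing axis-aligned box
    enc = np.prod(np.maximum(a[3:], b[3:]) - np.minimum(a[:3], b[:3]))
    return inter / union - (enc - union) / enc
```

Unlike plain IoU, the enclosing-box penalty keeps the gradient informative even for non-overlapping boxes, which is what makes GIoU usable as a regression loss.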

