Image slicing approach to computer aided modeling compatible centerline extraction and geometric imperfections from point cloud data

Author(s):
Alan Smith
Rodrigo Sarlo

Bauingenieur, 2017, Vol. 92 (05), pp. 191-199
Author(s):
T. Rabczuk
C. Anitescu
C. L. Chan

A method is proposed for discretizing two- and three-dimensional models obtained from surface representations. The method is based on a hierarchical quadtree (2D) or octree (3D) structure in order to represent surface details efficiently without significantly increasing the number of interior elements. In addition, the surface geometry can be mapped precisely into the model using cubic (or higher-order) splines. The resulting model uses the same basis functions (Bézier-Bernstein polynomials) that are used in Computer Aided Design (CAD) software. The method permits a general surface representation from different data sources, such as image data, data from 3D scanners, and B-Rep models, and minimizes user intervention in the generation and meshing of volumetric models. Finally, the method is applied to the creation of complex three-dimensional models in civil engineering.
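The hierarchical refinement described in this abstract can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; it is a minimal illustration, and the names `Cell`, `crosses_boundary`, `refine`, and `leaves` are hypothetical. It subdivides a 2D quadtree only where cells are cut by an implicit boundary (here a unit circle), so that boundary detail is resolved without increasing the number of interior cells:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Cell:
    x: float              # lower-left corner
    y: float
    size: float
    children: List["Cell"] = field(default_factory=list)


def crosses_boundary(cell: Cell, radius: float = 1.0) -> bool:
    """True if the circle of the given radius (centred at the origin) cuts the cell."""
    x0, x1 = cell.x, cell.x + cell.size
    y0, y1 = cell.y, cell.y + cell.size
    # distance from the origin to the nearest point of the cell ...
    nearest = (max(x0, min(0.0, x1)) ** 2 + max(y0, min(0.0, y1)) ** 2) ** 0.5
    # ... and to the farthest corner of the cell
    farthest = (max(abs(x0), abs(x1)) ** 2 + max(abs(y0), abs(y1)) ** 2) ** 0.5
    return nearest <= radius <= farthest


def refine(cell: Cell, max_depth: int, depth: int = 0) -> None:
    """Subdivide only the cells cut by the boundary, up to max_depth levels."""
    if depth >= max_depth or not crosses_boundary(cell):
        return
    h = cell.size / 2.0
    cell.children = [Cell(cell.x + dx * h, cell.y + dy * h, h)
                     for dx in (0, 1) for dy in (0, 1)]
    for child in cell.children:
        refine(child, max_depth, depth + 1)


def leaves(cell: Cell) -> List[Cell]:
    """Collect the leaf cells of the quadtree."""
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]


if __name__ == "__main__":
    root = Cell(-1.5, -1.5, 3.0)              # square domain enclosing the unit circle
    refine(root, max_depth=6)
    print("leaf cells:", len(leaves(root)))   # far fewer than a uniform 64 x 64 grid
```

In a full pipeline of the kind the abstract describes, the cut leaf cells would additionally carry cubic (or higher-order) Bézier-Bernstein boundary segments; the sketch stops at the adaptive subdivision step.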


Author(s):  
Jiayong Yu
Longchen Ma
Maoyi Tian
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated into a ULS must be small and lightweight, which lowers the density of the collected scanning points and in turn hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data with image data into the matching of feature points between two images. First, a point cloud is selected and converted into an intensity image. Next, corresponding feature points in the intensity image and the optical image are matched, and the exterior orientation parameters are solved with a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. Experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its effectiveness.
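As a rough illustration of this pipeline, the sketch below assumes NumPy and OpenCV are available; the function names `rasterize_intensity` and `estimate_exterior_orientation`, the grid spacing `pixel_size`, the camera matrix `K`, and the distortion vector `dist` are placeholders rather than the authors' code, and `cv2.solvePnPRansac` stands in for the collinearity-equation solution of the exterior orientation. It rasterizes a point cloud into an intensity image, matches ORB features against an optical image, and recovers the camera pose from the matched 2D-3D correspondences:

```python
import numpy as np
import cv2


def rasterize_intensity(points: np.ndarray, pixel_size: float):
    """Project an (N, 4) array of x, y, z, intensity onto the XY plane as an
    8-bit intensity image, keeping a per-pixel lookup of the 3D coordinates."""
    xy, intensity = points[:, :2], points[:, 3]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / pixel_size).astype(int) + 1
    image = np.zeros((rows, cols), dtype=np.uint8)
    xyz_lookup = np.full((rows, cols, 3), np.nan)
    scale = 255.0 / max(float(np.ptp(intensity)), 1e-9)
    for p in points:
        c, r = ((p[:2] - origin) / pixel_size).astype(int)
        image[r, c] = np.uint8((p[3] - intensity.min()) * scale)
        xyz_lookup[r, c] = p[:3]
    return image, xyz_lookup


def estimate_exterior_orientation(intensity_img, xyz_lookup, optical_img, K, dist):
    """Match ORB features between the intensity and optical images and recover
    the camera pose from the resulting 2D-3D correspondences (a stand-in for
    the collinearity-equation solution described in the abstract)."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    object_pts, image_pts = [], []
    for m in matches:
        c, r = (int(v) for v in kp1[m.queryIdx].pt)   # keypoint pt is (x, y) = (col, row)
        xyz = xyz_lookup[r, c]
        if not np.isnan(xyz).any():                   # keep pixels backed by a real 3D point
            object_pts.append(xyz)
            image_pts.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(object_pts, dtype=np.float32),
        np.asarray(image_pts, dtype=np.float32), K, dist)
    return ok, rvec, tvec
```

The GNSS-time-indexed fusion step that yields the true-color point cloud would then follow by reprojecting each 3D point through the recovered pose into the corresponding optical image and sampling its RGB value.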


Author(s):  
Keisuke YOSHIDA
Shiro MAENO
Syuhei OGAWA
Sadayuki ISEKI
Ryosuke AKOH

2019
Author(s):  
Byeongjun Oh
Minju Kim
Chanwoo Lee
Hunhee Cho
Kyung-In Kang
