A Feature-Based Section Curve Reconstructing Strategy

2013 ◽  
Vol 572 ◽  
pp. 155-158
Author(s):  
Hai Tao Zhu ◽  
Liang Cong

Integrating section feature recognition with forward design is an effective way to reconstruct section curves and shift feature architecture patterns from 2D to 3D. This paper proposes solutions to filter the points on the slices of point cloud data, automatically sequence the points on each slice, recognize section curve features, fit each curve segment, and reconstruct the section curves. All the relevant algorithms are implemented in Matlab. The point cloud data of a sighting scope is used to validate the strategy, and error analysis is carried out in Geomagic Studio. The strategy proves feasible and accurate for completing the reverse modeling process.
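As an illustration of two of the steps named in this abstract, the sketch below orders the points of one planar slice and fits a single curve segment. It is a minimal Python sketch under assumed conditions (a closed, roughly convex slice and an arc-like segment); the angular-ordering heuristic and the algebraic circle fit are stand-ins for illustration, not the Matlab algorithms used by the authors.

```python
# Minimal sketch: sequence the points of one slice, then fit one curve segment.
# The ordering heuristic and the Kasa circle fit are assumptions for illustration.
import numpy as np

def order_slice_points(points_2d):
    """Sequence the points of a closed slice by polar angle about the centroid."""
    centroid = points_2d.mean(axis=0)
    angles = np.arctan2(points_2d[:, 1] - centroid[1],
                        points_2d[:, 0] - centroid[0])
    return points_2d[np.argsort(angles)]

def fit_circle_segment(points_2d):
    """Algebraic (Kasa) least-squares circle fit for one arc-like segment."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

if __name__ == "__main__":
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    slice_pts = np.column_stack([np.cos(theta), np.sin(theta)])
    slice_pts += np.random.normal(scale=0.01, size=slice_pts.shape)
    np.random.shuffle(slice_pts)                 # unordered, as on a raw slice
    center, radius = fit_circle_segment(order_slice_points(slice_pts))
    print("fitted center:", center, "radius:", radius)
```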

2021 ◽  
pp. 1-1
Author(s):  
Masamichi Oka ◽  
Ryoichi Shinkuma ◽  
Takehiro Sato ◽  
Eiji Oki ◽  
Takanori Iwai ◽  
...  

2019 ◽  
Vol 141 (12) ◽  
Author(s):  
Yu Jin ◽  
Harry Pierson ◽  
Haitao Liao

Additive manufacturing (AM) has the unprecedented ability to create customized, complex, and nonparametric geometry, and it has made this ability accessible to individuals outside of traditional production environments. Geometric inspection technology, however, has yet to adapt to take full advantage of AM’s abilities. Coordinate measuring machines are accurate, but they are also slow, expensive to operate, and inaccessible to many AM users. On the other hand, 3D-scanners provide fast, high-density measurements, but there is a lack of feature-based analysis techniques for point cloud data. There exists a need for developing fast, feature-based geometric inspection techniques that can be implemented by users without specialized training in inspection according to geometric dimensioning and tolerancing conventions. This research proposes a new scale- and pose-invariant quality inspection method based on a novel location-orientation-shape (LOS) distribution derived from point cloud data. The key technique of the new method is to describe the shape and pose of key features via kernel density estimation and detect nonconformities based on statistical divergence. Numerical examples are provided and tests on physical AM builds are conducted to validate the method. The results show that the proposed inspection scheme is able to identify form, position, and orientation defects. The results also demonstrate how datum features can be incorporated into point cloud inspection, that datum features can be complex, nonparametric surfaces, and how the specification of datums can be more intuitive and meaningful, particularly for users without special training.
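To make the two statistical ingredients named here concrete, the sketch below estimates a kernel density over a simple point-cloud shape feature and scores conformity with a divergence measure. It is a minimal Python sketch, not the paper's LOS distribution; the one-dimensional feature (normalized distance to the centroid) and the decision threshold are assumptions for illustration.

```python
# Minimal sketch: KDE of a scale-free shape feature plus a statistical divergence
# between the nominal and measured distributions. Feature and threshold assumed.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import jensenshannon

def feature_density(points, grid):
    """KDE of normalized distance to the centroid, evaluated on a fixed grid."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    d = d / d.mean()                                # scale invariance
    return gaussian_kde(d)(grid)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nominal = rng.normal(size=(5000, 3))            # stand-in for the CAD sample
    measured = nominal + rng.normal(scale=0.05, size=nominal.shape)
    grid = np.linspace(0.0, 3.0, 256)
    p = feature_density(nominal, grid)
    q = feature_density(measured, grid)
    js = jensenshannon(p, q)                        # 0 = identical distributions
    print("Jensen-Shannon distance:", js)
    print("nonconforming" if js > 0.05 else "conforming (threshold assumed)")
```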


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Kai Chen ◽  
Kai Zhan ◽  
Xiaocong Yang ◽  
Da Zhang

A three-dimensional (3D) laser scanner, with its ability to acquire huge volumes of point cloud data through noncontact measurement, has revolutionized the surveying and mapping industry. Nonetheless, guaranteeing scanner precision remains the critical factor that determines the quality of a 3D laser scanner. Hence, this study proposes an error analysis and calibration method for 3D laser scanners based on the D-H model. The D-H model method from robotics is applied to the 3D laser scanner coordinate system to compute the point cloud coordinates and derive the error model; six external parameters and seven inner structure parameters that affect the point coordinate error are analyzed comprehensively; and two calibration platforms are designed for the inner structure parameters. To validate the proposed method, a SOKKIA total station and the BLSS-PE 3D laser scanner were used to obtain the center coordinates of the test target spheres, from which the external parameters were estimated and the point coordinates corrected. Comparing the corrected coordinates computed with the inner structure parameters against those computed without them, the experiment showed that the precision of the BLSS-PE 3D laser scanner improved when the inner structure parameters were taken into account, demonstrating that the error analysis and calibration method is correct and feasible.
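The abstract's central idea, transferring the Denavit-Hartenberg (D-H) formalism from robotics to the scanner's coordinate computation, can be sketched as below. The two rotation axes, the link parameters, and the zero-offset "ideal" scanner are illustrative assumptions; the paper's six external and seven inner structure parameters are not reproduced here.

```python
# Minimal sketch: chain D-H transforms for the scanner's two rotation axes to map
# one (angle, angle, range) measurement to XYZ. Parameters below are assumptions.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one joint/axis."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def scanner_point(h_angle, v_angle, rng, params):
    """Compose the horizontal- and vertical-axis transforms, then push the
    measured range along the beam axis to get instrument-frame coordinates."""
    T = dh_transform(h_angle, params["d1"], params["a1"], params["alpha1"])
    T = T @ dh_transform(v_angle, params["d2"], params["a2"], params["alpha2"])
    beam_point = np.array([rng, 0.0, 0.0, 1.0])   # range along the beam axis
    return (T @ beam_point)[:3]

if __name__ == "__main__":
    # Illustrative structure parameters (all offsets zero = an ideal scanner).
    ideal = dict(d1=0.0, a1=0.0, alpha1=np.pi / 2, d2=0.0, a2=0.0, alpha2=0.0)
    print(scanner_point(np.deg2rad(30), np.deg2rad(10), 25.0, ideal))
```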


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 317
Author(s):  
Mehrdad Eslami ◽  
Mohammad Saadatseresht

Cameras and laser scanners are complementary tools for 2D/3D information generation. Systematic and random errors cause misalignment between the multi-sensor imagery and the point cloud data. In this paper, a novel feature-based approach is proposed for fine registration of imagery and point clouds. Each tie point and its two neighboring pixels are matched in the overlapping images and intersected in object space to create a differential tie plane. A preprocessing step is applied to the corresponding tie points, and non-robust ones are removed. Initial coarse Exterior Orientation Parameters (EOPs), Interior Orientation Parameters (IOPs), and Additional Parameters (APs) are used to transform the tie plane points to object space. Then, the points of the point cloud nearest to the transformed tie plane points are estimated, and these points are used to calculate the Directional Vectors (DVs) of the differential planes. As a constraint equation alongside the collinearity equation, each object-space tie point is forced to lie on the point cloud differential plane. Two experimental data sets, one indoor and one outdoor, are used to assess the proposed approach. The achieved results show errors of about 2.5 pixels on checkpoints, demonstrating the robustness and practicality of the proposed approach.
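The point-on-plane constraint described here can be sketched as follows: given the three tie-plane points transformed to object space, the nearest point-cloud points define a local differential plane, and the signed distance of the tie point to that plane is the residual the adjustment drives toward zero. The k-nearest-neighbour search and SVD plane fit below are generic stand-ins, and the full adjustment with the collinearity equations is omitted.

```python
# Minimal sketch: build a differential plane from the point-cloud neighbours of
# the tie-plane points and evaluate the point-on-plane residual. Generic stand-in.
import numpy as np
from scipy.spatial import cKDTree

def differential_plane(cloud_tree, cloud_points, tie_plane_points, k=10):
    """Fit a local plane to the neighbours of the three tie-plane points and
    return its unit normal (the Directional Vector) and centroid."""
    _, idx = cloud_tree.query(tie_plane_points, k=k)
    neighbours = cloud_points[np.unique(idx.ravel())]
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    return vt[-1], centroid                       # smallest singular vector = normal

def point_on_plane_residual(tie_point, normal, centroid):
    """Signed distance of the object-space tie point to the differential plane."""
    return float(np.dot(tie_point - centroid, normal))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = np.column_stack([rng.uniform(0, 10, 2000),
                             rng.uniform(0, 10, 2000),
                             rng.normal(scale=0.005, size=2000)])  # near z = 0
    tree = cKDTree(cloud)
    tie_plane = np.array([[5.0, 5.0, 0.02], [5.1, 5.0, 0.02], [5.0, 5.1, 0.02]])
    n, c = differential_plane(tree, cloud, tie_plane)
    print("residual:", point_on_plane_residual(tie_plane[0], n, c))
```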


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into the ULS must be small and lightweight, which reduces the density of the collected scanning points and, in turn, affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the higher registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
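The image-matching core of this pipeline can be sketched as below: the point cloud is rasterized into an intensity image and feature points are matched against the optical image, after which the matches would feed the collinearity equations to solve the exterior orientation parameters. The nadir XY rasterization and the ORB/Hamming matcher are illustrative assumptions rather than the authors' detector, and the GNSS-time-indexed fusion step is omitted.

```python
# Minimal sketch: rasterize a LiDAR intensity image, then match ORB features
# between it and an optical image. Rasterization scheme and detector are assumed.
import numpy as np
import cv2

def rasterize_intensity(points_xyz, intensity, pixel_size=0.1):
    """Project points to a nadir XY grid and keep the mean intensity per cell."""
    xy = points_xyz[:, :2]
    mins = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - mins) / pixel_size).astype(int) + 1
    img_sum = np.zeros((rows, cols))
    img_cnt = np.zeros((rows, cols))
    c, r = ((xy - mins) / pixel_size).astype(int).T
    np.add.at(img_sum, (r, c), intensity)
    np.add.at(img_cnt, (r, c), 1)
    img = np.divide(img_sum, img_cnt, out=np.zeros_like(img_sum), where=img_cnt > 0)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_features(intensity_img, optical_img, max_matches=100):
    """ORB keypoints + brute-force Hamming matching between the two images."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(intensity_img, None)
    k2, d2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    return k1, k2, matches[:max_matches]
```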


Author(s):  
Keisuke YOSHIDA ◽  
Shiro MAENO ◽  
Syuhei OGAWA ◽  
Sadayuki ISEKI ◽  
Ryosuke AKOH
