Point cloud and visual feature-based tracking method for an augmented reality-aided mechanical assembly system

2018 ◽  
Vol 99 (9-12) ◽  
pp. 2341-2352 ◽  
Author(s):  
Yue Wang ◽  
Shusheng Zhang ◽  
Bile Wan ◽  
Weiping He ◽  
Xiaoliang Bai

To improve the robustness and applicability of 3D tracking and registration for augmented reality (AR)-aided mechanical assembly systems, a 3D registration and tracking method based on point cloud and visual features is proposed. First, the reference model point cloud is used to define the absolute tracking coordinate system, which determines the locating datum of the virtual assembly guidance information. Then, visual feature matching is added to the iterative closest point (ICP) registration process to improve the robustness of tracking and registration. To obtain a sufficient number of visual feature matching points in this process, a visual feature matching strategy based on orientation vector consistency is proposed. Finally, loop closure detection and global pose optimization over key frames are added to the tracking and registration process. The experimental results show that the proposed method has good real-time performance and accuracy, with a running speed of up to 30 frames per second. It also remains robust when the camera moves fast and the depth information is inaccurate, and its overall performance is better than that of the KinectFusion method.
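As a rough illustration of the core idea, the sketch below shows a single rigid-alignment step that combines geometric closest-point correspondences with precomputed visual-feature correspondences. It is not the authors' implementation: the orientation-vector-consistency filter, loop closure detection, and pose graph optimization are omitted, and names such as icp_step_with_features and feat_weight are assumptions.

```python
# Illustrative sketch only: one ICP-style alignment step that mixes geometric
# nearest-neighbour correspondences with visual-feature correspondences
# (assumed to be already triangulated into 3D point pairs).
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_step_with_features(src, dst, feat_src, feat_dst, feat_weight=2, max_dist=0.05):
    """One alignment step: k-d tree closest points plus visual-feature matches,
    the latter repeated feat_weight times so they carry more weight in the fit."""
    d, idx = cKDTree(dst).query(src, distance_upper_bound=max_dist)
    ok = np.isfinite(d)               # drop points with no neighbour within max_dist
    A = np.vstack([src[ok]] + [feat_src] * feat_weight)
    B = np.vstack([dst[idx[ok]]] + [feat_dst] * feat_weight)
    return rigid_transform(A, B)
```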


Author(s):  
Rafael Radkowski

This paper introduces a 3D object tracking method for an augmented reality (AR) assembly assistance application. The tracking method relies on point clouds; it uses 3D feature descriptors and point cloud matching with the iterative closest point (ICP) algorithm. The feature descriptors identify an object in a point cloud; ICP aligns a reference object with this point cloud. The challenge is to achieve high fidelity while maintaining camera frame rates. The sampling density of the point cloud and the reference object is one of the key factors in meeting this challenge. In this research, three point-sampling methods and two point-cloud search algorithms were compared to assess their fidelity when tracking typical products of mechanical engineering. The results indicate that uniform sampling maintains the best fidelity at camera frame rates.
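As a hedged illustration of the sampling step the comparison centres on, the snippet below performs uniform (voxel-grid) downsampling of a point cloud before feature description and ICP. The voxel size, the per-voxel centroid rule, and the function name are assumptions, not the paper's code.

```python
# Sketch of uniform (voxel-grid) downsampling: keep one representative point
# per occupied voxel, here the centroid of the points falling in that voxel.
import numpy as np

def uniform_voxel_downsample(points, voxel_size=0.01):
    """Return one centroid per occupied voxel of edge length voxel_size."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)          # accumulate points per voxel
    return sums / counts[:, None]

# Example: thin a dense depth-camera cloud before descriptor extraction and ICP.
cloud = np.random.rand(100_000, 3)            # stand-in for a captured point cloud
sparse = uniform_voxel_downsample(cloud, 0.02)
```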


2016 ◽  
Vol 136 (8) ◽  
pp. 1078-1084
Author(s):  
Shoichi Takei ◽  
Shuichi Akizuki ◽  
Manabu Hashimoto

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3848
Author(s):  
Xinyue Zhang ◽  
Gang Liu ◽  
Ling Jing ◽  
Siyao Chen

The heart girth parameter is an important indicator of the growth and development of pigs and provides critical guidance for optimizing healthy pig breeding. To overcome the heavy workload and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to extract from the pig point cloud the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
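The girth measurement itself can be sketched roughly as follows: slice the pose-normalized cloud perpendicular to the body axis at the measurement point, mirror the visible half about the symmetry plane to close the ring, and sum an angle-binned contour. This is a simplified stand-in for the authors' shortest-path measurement, and all names (slice_girth, mirror_plane_y, the 72-bin resolution) are assumptions.

```python
# Rough illustration (not the authors' pipeline) of measuring a girth from a
# pose-normalized point cloud: x is the body axis, y the left-right axis,
# z the height; only one side of the animal is assumed to be visible.
import numpy as np

def slice_girth(points, x0, thickness=0.01, mirror_plane_y=0.0, n_bins=72):
    """Approximate circumference of the cross-section at x = x0."""
    band = points[np.abs(points[:, 0] - x0) < thickness / 2]
    yz = band[:, 1:3]
    # Complete the ring by mirroring the visible half about the plane y = mirror_plane_y.
    mirrored = yz * np.array([-1.0, 1.0]) + np.array([2.0 * mirror_plane_y, 0.0])
    ring = np.vstack([yz, mirrored])
    # Average the ring in angular bins around the centroid to get a clean closed contour.
    c = ring.mean(axis=0)
    ang = np.arctan2(ring[:, 1] - c[1], ring[:, 0] - c[0])
    bins = np.clip(((ang + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    contour = np.array([ring[bins == b].mean(axis=0)
                        for b in range(n_bins) if np.any(bins == b)])
    # Sum the closed polyline length as the girth estimate.
    seg = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    return np.linalg.norm(seg, axis=1).sum()
```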


2021 ◽  
pp. 1-1
Author(s):  
Masamichi Oka ◽  
Ryoichi Shinkuma ◽  
Takehiro Sato ◽  
Eiji Oki ◽  
Takanori Iwai ◽  
...  

2021 ◽  
Author(s):  
Rudieri Dietrich Bauer ◽  
Thiago Luiz Watambak ◽  
Salvador Sergi Agati ◽  
Marcelo da Silva Hounsell ◽  
Andre Tavares da Silva

Author(s):  
L. Zhang ◽  
P. van Oosterom ◽  
H. Liu

Abstract. Point clouds have become one of the most popular sources of data in geospatial fields due to their availability and flexibility. However, because of the large amount of data and the limited resources of mobile devices, the use of point clouds in mobile augmented reality (AR) applications is still quite limited, and many current mobile AR applications of point clouds lack fluent interaction with users. In this paper, a cLoD (continuous level-of-detail) method is introduced that considerably reduces the number of points to be rendered, together with an adaptive point-size rendering strategy, thus improving rendering performance and removing visual artifacts in mobile AR point cloud applications. Our method uses a cLoD model with an ideal distribution over LoDs, which removes unnecessary points without the sudden changes in density present in the commonly used discrete level-of-detail approaches. In addition, the camera position, the camera orientation, and the distance from the camera to the point cloud model are taken into consideration. With our method, good interactive visualization of point clouds can be realized in the mobile AR environment, with both good visual quality and proper resource consumption.
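A minimal sketch of the filtering idea is given below, assuming each point carries a continuous LoD value drawn from a quadtree-like distribution and that points are kept only when that value falls under a distance-dependent threshold, with the point size growing as the kept set thins out. It is not the paper's implementation; the distribution parameters, threshold schedule, and point-size rule are assumptions.

```python
# Sketch only: continuous-LoD filtering with a distance-dependent threshold and
# an adaptive per-point render size. All constants are illustrative.
import numpy as np

def assign_clod(n_points, max_level=16, rng=None):
    """Draw continuous LoD values so the cumulative point count grows ~4x per level."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(n_points)
    return np.log(u * (4.0 ** max_level - 1.0) + 1.0) / np.log(4.0)

def visible_points(points, clod, cam_pos, near_level=14.0, falloff=4.0):
    """Keep fewer points (lower LoD threshold) the farther they are from the camera."""
    dist = np.linalg.norm(points - cam_pos, axis=1)
    threshold = near_level - falloff * np.log2(1.0 + dist)
    keep = clod <= threshold
    # Coarser regions are drawn with bigger splats to avoid visible holes.
    point_size = 1.0 + np.clip(near_level - threshold, 0.0, 8.0)[keep]
    return points[keep], point_size
```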

