A Marker-Less Monocular Vision Point Positioning Method for Industrial Manual Operation Environments
Abstract Vision-assisted technologies such as Augmented Reality (AR) are increasingly popular in industry, and they require high positioning accuracy and robustness in industrial manual operation environments. However, the narrow workspace and moving hands or tools may occlude or obscure local visual features of the operation environment, degrading the accuracy and robustness of locating the operating position; the resulting misguidance may even cause operators to misoperate. This paper proposes a marker-less monocular vision point positioning method for vision-assisted manual operation in industrial environments. The proposed method uses constrained minimization to locate the target operation point accurately and robustly, even when the target area has no corresponding visual features due to occlusion or improper illumination. The method has three phases: intersection generation, intersection optimization, and target point solving. In the intersection generation phase, a number of intersections of epipolar lines are generated as candidate target points using fundamental matrices; here the solving constraint is converted from point-to-line to point-to-points. In the intersection optimization phase, the intersections are refined into two distinct sets through iterative linear fitting and geometric mean absolute error methods; here the solving constraint is converted from point-to-points to point-to-point sets. In the target point solving phase, the target point is obtained by solving a constrained minimization problem based on the distribution constraint of the two intersection sets; here the solving constraint is converted from point-to-point sets to point-to-point, and the unique optimal solution is taken as the target point. Experimental results show that this method achieves better accuracy and robustness than the traditional homography matrix method in practical industrial operation scenes.
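The core geometric idea of the first and last phases can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names and inputs are hypothetical, the pairwise intersection step is a simplified stand-in for the intersection generation phase, and an unconstrained least-squares point-to-lines fit stands in for the paper's constrained minimization.

```python
import numpy as np

def epipolar_lines(Fs, pts):
    """Epipolar lines l_i = F_i @ x_i in the target view.

    Fs  : iterable of 3x3 fundamental matrices (reference view -> target view)
    pts : iterable of (x, y) pixel coordinates of the target in each reference view
    Both inputs are illustrative; the paper's exact pipeline differs.
    """
    lines = []
    for F, (x, y) in zip(Fs, pts):
        l = F @ np.array([x, y, 1.0])
        lines.append(l / np.linalg.norm(l[:2]))  # scale so (a, b) is a unit normal
    return lines

def pairwise_intersections(lines):
    """Candidate target points: intersections of every pair of epipolar lines.

    In homogeneous coordinates, the intersection of lines l1 and l2 is l1 x l2.
    """
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = np.cross(lines[i], lines[j])
            if abs(p[2]) > 1e-9:          # skip (near-)parallel line pairs
                pts.append(p[:2] / p[2])  # dehomogenize to (x, y)
    return pts

def least_squares_point(lines):
    """Point minimizing the summed squared distance to all lines.

    A simplified, unconstrained stand-in for the paper's constrained
    minimization over the two optimized intersection sets.
    """
    A = np.array([l[:2] for l in lines])   # unit normals (a_i, b_i)
    b = -np.array([l[2] for l in lines])   # offsets: a_i x + b_i y = -c_i
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

With noise-free lines every pairwise intersection and the least-squares solution coincide; with noisy epipolar lines, the candidate intersections scatter and the least-squares point gives a single consensus estimate, which is the role the constrained minimization plays in the proposed method.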