Key point location method for pedestrians in depth images based on deep learning

2019 ◽  
Vol 36 (2) ◽  
pp. 1143-1151 ◽  
Author(s):  
Liu Jian ◽  
Jin Zequn ◽  
Zhang Rui ◽  
Liu Meiju ◽  
Gao Enyang

Author(s):  
Yi Liu ◽  
Ming Cong ◽  
Hang Dong ◽  
Dong Liu

Purpose
The purpose of this paper is to propose a new method, based on three-dimensional (3D) vision technologies and human-skill-integrated deep learning, to solve assembly positioning tasks such as peg-in-hole.

Design/methodology/approach
A hybrid camera configuration was used to provide global and local views. The eye-in-hand mode guided the peg into contact with the hole plate using 3D vision in the global view. Once the peg was in contact with the workpiece surface, the eye-to-hand mode provided the local view to accomplish peg-hole positioning based on a trained CNN.

Findings
The assembly positioning experiments showed that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion according to the depth images and human-skill guidelines, and the final positioning precision was sufficient for the robot to carry out force-controlled assembly.

Practical implications
The developed framework can have an important impact on the robotic assembly positioning process; combined with existing force-guided assembly technology, it forms a complete autonomous assembly pipeline.

Originality/value
This paper proposes a new approach to robotic assembly positioning based on 3D vision technologies and human-skill-integrated deep learning. A dual-camera swapping mode provides visual feedback for the entire assembly motion-planning process. The proposed workpiece positioning method offers effective disturbance rejection, autonomous motion planning and improved overall performance with depth-image feedback. The proposed peg-hole positioning method with integrated human skill avoids target perceptual aliasing and supports successive motion decisions during robotic assembly manipulation.
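The abstract does not specify the network used for peg-hole positioning, only that a trained CNN maps a local-view depth image to a motion decision. Below is a minimal PyTorch sketch of that idea; the input size (a 96×96 single-channel depth patch), the layer sizes and the five motion classes (e.g. up/down/left/right/insert) are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch: a small CNN mapping a local-view depth patch to a
# discrete motion decision for peg-hole positioning. Architecture and
# class set are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

class PegHoleNet(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # 96 -> 48
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 48 -> 24
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 24 -> 12
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),  # logits over motion decisions
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, 96, 96), depth values normalized to [0, 1]
        return self.head(self.features(depth))

if __name__ == "__main__":
    net = PegHoleNet()
    fake_patch = torch.rand(1, 1, 96, 96)  # stand-in for a local-view depth crop
    print(net(fake_patch).argmax(dim=1))   # index of the chosen motion
```

In a loop, the eye-to-hand camera would supply a fresh depth patch after each small robot move, so the network's successive classifications drive the peg toward the target hole step by step.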


2021 ◽  
Author(s):  
Kazuyuki Kaneda ◽  
Tatsuya Ooba ◽  
Hideki Shimada ◽  
Osamu Shiku ◽  
Yuji Teshima

2019 ◽  
Vol 1 (3) ◽  
pp. 883-903 ◽  
Author(s):  
Daulet Baimukashev ◽  
Alikhan Zhilisbayev ◽  
Askat Kuzdeuov ◽  
Artemiy Oleinikov ◽  
Denis Fadeyev ◽  
...  

Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, a robot needs the position and orientation of objects in 3D. The task becomes challenging in a cluttered environment with different types of objects. A popular approach to this problem is to use a deep neural network for object recognition. However, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data demands time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box and automatically labeled all objects with an occlusion rate below 70%. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was adopted for training on a dataset of 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset of 2000 samples. The deep object recognizer achieved 40.96% detection accuracy on the real depth images and 93.5% on the synthetic depth images. Training the deep learning model with noise-added synthetic images improved the recognition accuracy on real images to 46.3%. The object detection framework can thus be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
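The reported gain from noise-added synthetic images (40.96% to 46.3% on real depth data) comes from making the rendered depth maps look more like sensor output before training. A minimal NumPy sketch of that kind of augmentation is below; the specific noise model (additive Gaussian jitter plus random pixel dropout mimicking missing depth returns) is an assumption for illustration, as the abstract does not state the exact perturbation used.

```python
# Minimal sketch: perturb a clean synthetic depth map so it resembles
# real sensor output. Noise model (Gaussian jitter + dropout "holes")
# is assumed for illustration, not taken from the paper.
import numpy as np

def add_depth_noise(depth, sigma=0.005, dropout_p=0.02, rng=None):
    """Return a noisy copy of a synthetic depth map (values in meters)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = depth + rng.normal(0.0, sigma, size=depth.shape)  # sensor jitter
    holes = rng.random(depth.shape) < dropout_p               # missing returns
    noisy[holes] = 0.0                                        # 0 = invalid depth
    return noisy.astype(depth.dtype)

# Example: augment one synthetic frame before it enters the training set.
synthetic = np.full((480, 640), 0.75, dtype=np.float32)  # flat plane at 0.75 m
augmented = add_depth_noise(synthetic)
```

Applying such a perturbation to each of the 800,000 synthetic frames narrows the sim-to-real gap without any additional manual labeling, which is the core appeal of the synthetic-data pipeline.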


2021 ◽  
Vol 106 ◽  
pp. 104484 ◽  
Author(s):  
David Fuentes-Jimenez ◽  
Cristina Losada-Gutierrez ◽  
David Casillas-Perez ◽  
Javier Macias-Guarasa ◽  
Daniel Pizarro ◽  
...  

2018 ◽  
Vol 147 ◽  
pp. 51-63 ◽  
Author(s):  
Chan Zheng ◽  
Xunmu Zhu ◽  
Xiaofan Yang ◽  
Lina Wang ◽  
Shuqin Tu ◽  
...  
