VisuoTactile 6D Pose Estimation of an In-Hand Object using Vision and Tactile Sensor Data

Author(s):  
Snehal Dikhale ◽  
Karankumar Patel ◽  
Daksh Dhingra ◽  
Itoshi Naramura ◽  
Akinobu Hayashi ◽  
...


2019 ◽
Vol 4 (30) ◽  
pp. eaaw4523 ◽  
Author(s):  
Karthik Desingh ◽  
Shiyang Lu ◽  
Anthony Opipari ◽  
Odest Chadwicke Jenkins

Robots working in human environments often encounter a wide range of articulated objects, such as tools, cabinets, and other jointed objects. Such articulated objects can take an infinite number of possible poses, each a point in a potentially high-dimensional continuous space. A robot must perceive this continuous pose to manipulate the object into a desired pose. Perception and manipulation of articulated objects remain challenging because of this high dimensionality and multimodal uncertainty. Here, we describe a factored approach to estimating the poses of articulated objects using an efficient form of nonparametric belief propagation. The inputs are geometric models with articulation constraints and observed RGBD (red, green, blue, and depth) sensor data. The framework iteratively produces belief estimates over object-part poses. The problem is formulated as a pairwise Markov random field (MRF), in which each hidden node is the continuous pose variable of one object part and the edges encode the articulation constraints between parts. We perform articulated pose estimation with a “pull” message passing algorithm for nonparametric belief propagation (PMPNBP) and evaluate its convergence properties over scenes with articulated objects. Robot experiments demonstrate the necessity of maintaining beliefs to perform goal-driven manipulation tasks.
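The "pull" scheme described in the abstract can be illustrated on a minimal two-part chain: each particle of the receiving node pulls a weight by evaluating the articulation constraint against every weighted particle of the sending node, rather than having the sender push samples forward. The sketch below is a hedged illustration under strong simplifying assumptions (1-D poses, Gaussian unary likelihoods and pairwise potentials, a fixed offset constraint); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # particles per node (assumed value for the example)

def unary(x, obs, sigma=0.5):
    # likelihood of pose sample x given the observed part pose (assumed Gaussian)
    return np.exp(-0.5 * ((x - obs) / sigma) ** 2)

def pairwise(x_recv, x_send, offset, sigma=0.3):
    # articulation constraint between adjacent parts: x_recv ≈ x_send + offset
    return np.exp(-0.5 * ((x_recv - x_send - offset) / sigma) ** 2)

def pull_message(recv_particles, send_particles, send_weights, offset):
    # "pull": each receiver particle sums the pairwise potential over all
    # weighted sender particles to obtain its message weight
    w = np.empty(len(recv_particles))
    for i, xr in enumerate(recv_particles):
        w[i] = np.sum(send_weights * pairwise(xr, send_particles, offset))
    return w

# two parts observed near poses 0.0 and 1.0, joined by a rigid offset of 1.0
obs = [0.0, 1.0]
particles = [rng.normal(o, 1.0, N) for o in obs]
weights = [np.full(N, 1.0 / N) for _ in obs]

for _ in range(5):  # belief update iterations
    new_weights = []
    for i in (0, 1):
        j = 1 - i
        offset = 1.0 if i == 1 else -1.0  # part 1 sits at part 0 + 1.0
        msg = pull_message(particles[i], particles[j], weights[j], offset)
        b = unary(particles[i], obs[i]) * msg  # belief ∝ unary × incoming message
        new_weights.append(b / b.sum())
    weights = new_weights
    # importance resampling keeps particles in high-belief regions
    particles = [p[rng.choice(N, N, p=w)] for p, w in zip(particles, weights)]
    weights = [np.full(N, 1.0 / N) for _ in obs]

print(round(float(particles[0].mean()), 1), round(float(particles[1].mean()), 1))
```

After a few iterations the particle sets concentrate where the unary evidence and the articulation constraint agree, which is the behavior the belief-maintenance claim in the abstract relies on.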


Robotica ◽  
1988 ◽  
Vol 6 (1) ◽  
pp. 31-34 ◽  
Author(s):  
R. Andrew Russell

SUMMARY
This paper describes a novel tactile sensor array designed to provide information about the material constitution and shape of objects held by a robot manipulator. The sensor is modeled on the thermal touch sense, which enables humans to distinguish between different materials based on how warm or cold they feel. Some results are presented and methods of analysing the sensor data are discussed.
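The thermal touch principle can be sketched in a few lines: a heated sensing element cools faster against a good thermal conductor (metal) than against an insulator (plastic), so the decay rate of the contact temperature discriminates materials. The example below is an illustration of that idea only, not Russell's sensor design; the time constants and threshold are assumed values.

```python
import math

def decay_rate(temps, dt):
    # log-linear least-squares fit of T(t) = T0 * exp(-k t); returns estimated k
    logs = [math.log(t) for t in temps]
    n = len(logs)
    ts = [i * dt for i in range(n)]
    mean_t = sum(ts) / n
    mean_l = sum(logs) / n
    num = sum((t - mean_t) * (l - mean_l) for t, l in zip(ts, logs))
    den = sum((t - mean_t) ** 2 for t in ts)
    return -num / den

def classify(temps, dt=0.1, threshold=1.0):
    # metals drain heat quickly (large k); insulators slowly (small k)
    return "metal-like" if decay_rate(temps, dt) > threshold else "insulator-like"

# synthetic cooling curves: 10 samples at 10 Hz, element starts 10 K above ambient
metal = [10.0 * math.exp(-3.0 * 0.1 * i) for i in range(10)]
plastic = [10.0 * math.exp(-0.3 * 0.1 * i) for i in range(10)]
print(classify(metal), classify(plastic))  # metal-like insulator-like
```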


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 718 ◽  
Author(s):  
Baohua Qiang ◽  
Shihao Zhang ◽  
Yongsong Zhan ◽  
Wu Xie ◽  
Tian Zhao

In recent years, an increasing amount of human-activity data has come from image sensors. In this paper, a novel approach that combines convolutional pose machines (CPMs) with GoogLeNet is proposed for human pose estimation from image sensor data. The first stage of the CPMs directly generates a response map for each key point of the human skeleton from images; into this stage we introduce several layers from GoogLeNet. On the one hand, the improved model uses deeper network layers and a more complex network structure to enhance low-level feature extraction. On the other hand, it applies a fine-tuning strategy, which benefits estimation accuracy. Moreover, we introduce the inception structure to greatly reduce the number of model parameters, which shortens the convergence time significantly. Extensive experiments on several datasets show that the improved model outperforms most mainstream models in accuracy and training time: its prediction efficiency is 1.023 times that of the CPMs, and its training time is reduced by a factor of 3.414. This paper presents a new direction for future research.
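The parameter reduction the inception structure provides comes from 1×1 "bottleneck" convolutions that shrink the channel count before an expensive spatial convolution. The back-of-envelope arithmetic below illustrates the mechanism; the channel sizes (256 in/out, 64-channel bottleneck) are assumed for the example and are not taken from the paper's architecture.

```python
def conv_params(c_in, c_out, k):
    # weight count of a k x k convolution layer, ignoring biases
    return c_in * c_out * k * k

c_in, c_out = 256, 256

# naive: one 5x5 convolution straight from 256 to 256 channels
naive = conv_params(c_in, c_out, 5)

# inception-style: 1x1 reduction to 64 channels, then the 5x5 convolution
bottleneck = conv_params(c_in, 64, 1) + conv_params(64, c_out, 5)

print(naive, bottleneck, round(naive / bottleneck, 1))  # → 1638400 425984 3.8
```

A roughly 3.8× cut in weights for this one layer shows why stacking such modules shrinks the whole model and, in turn, its convergence time.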


2015 ◽  
Vol 109 ◽  
pp. 25-33 ◽  
Author(s):  
Tristan Tzschichholz ◽  
Toralf Boge ◽  
Klaus Schilling
