A Shape Description Model by Using Sensor Data From Touch

Author(s):  
Venturia Chiroiu ◽  
Ligia Munteanu ◽  
Cornel Mihai Nicolescu

In this paper we consider the problem of recognizing the shape of a 3D object using tactile sensing by a dexterous robot hand. Our approach uses multiple fingers that slide along the surface of the object. From the sensed contact points we extract a number of 3D points belonging to the surface of the object. The unknown surface Γ of the object is determined using an “n-ellipsoid” model (Bonnet [1]). The set of parameters that defines the surface Γ is chosen so that the n-ellipsoid best fits the set of data points.
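A minimal sketch of how such a fit might look, assuming an axis-aligned n-ellipsoid with semi-axes (a, b, c) and exponent n, estimated by nonlinear least squares; the parameterisation and the residual choice are illustrative assumptions, not necessarily the paper's exact model from Bonnet [1].

```python
# Fit an axis-aligned n-ellipsoid to tactile contact points by minimising
# the implicit-surface residual F(x) - 1 over the parameters (a, b, c, n).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    a, b, c, n = params
    # Implicit n-ellipsoid: (|x/a|^n + |y/b|^n + |z/c|^n)^(1/n) = 1 on Gamma.
    f = (np.abs(pts[:, 0] / a) ** n
         + np.abs(pts[:, 1] / b) ** n
         + np.abs(pts[:, 2] / c) ** n) ** (1.0 / n)
    return f - 1.0  # zero when a point lies exactly on the surface

def fit_n_ellipsoid(pts):
    # Initial guess: bounding-box half-extents and an ordinary ellipsoid (n=2).
    x0 = np.concatenate([np.abs(pts).max(axis=0), [2.0]])
    sol = least_squares(residuals, x0, args=(pts,),
                        bounds=([1e-6, 1e-6, 1e-6, 0.5], np.inf))
    return sol.x  # best-fit (a, b, c, n)

# Example: noisy "contact points" sampled from a box-like n-ellipsoid.
rng = np.random.default_rng(0)
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)      # directions on the sphere
scale = (np.abs(u[:, 0] / 2.0) ** 4 + np.abs(u[:, 1]) ** 4
         + np.abs(u[:, 2] / 1.5) ** 4) ** 0.25     # F is 1-homogeneous
pts = u / scale[:, None] + 0.01 * rng.normal(size=u.shape)
print(fit_n_ellipsoid(pts))  # ~ (2.0, 1.0, 1.5, 4.0)
```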

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8339
Author(s):  
Patrick Lynch ◽  
Michael F. Cullinan ◽  
Conor McGinn

A robot’s ability to grasp moving objects depends on the availability of real-time sensor data in both the far field and near field of the gripper. This research investigates the potential contribution of tactile sensing to the task of grasping an object in motion. It was hypothesised that combining tactile sensor data with a reactive grasping strategy could improve robustness to prediction errors, leading to better, more adaptive performance. Using a two-finger gripper, we evaluated the performance of two algorithms for grasping a ball rolling on a horizontal plane at a range of speeds and gripper contact points. The first approach was an adaptive grasping strategy initiated by tactile sensors in the fingers. The second initiated the grasp based on a prediction of the object’s position relative to the gripper, serving as a proxy for a vision-based object tracking system. It was found that integrating tactile sensor feedback resulted in higher observed grasp robustness, especially when the gripper–ball contact point was displaced from the centre of the gripper. These findings demonstrate the performance gains that can be attained by incorporating near-field sensor data into the grasp strategy and motivate further research into how this strategy might be extended to different manipulator designs and more complex grasp scenarios.
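The two trigger strategies can be contrasted in a short sketch; all device functions below (read_tactile, predict_arrival_time, close_gripper) and the contact threshold are hypothetical placeholders for whatever sensor and gripper API the real system exposes, not the authors' implementation.

```python
# Sketch of (1) a tactile-triggered grasp that closes on first finger contact
# and (2) a prediction-triggered grasp that closes at an estimated intercept
# time, as a stand-in for a vision-based tracker.
import time

CONTACT_THRESHOLD = 0.2  # assumed normalised tactile reading for "contact"

def tactile_triggered_grasp(read_tactile, close_gripper, timeout=2.0):
    """Close as soon as either finger's tactile array reports contact."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout:
        if max(read_tactile()) > CONTACT_THRESHOLD:
            close_gripper()
            return True
        time.sleep(0.001)  # poll at ~1 kHz
    return False  # the ball never touched the fingers

def prediction_triggered_grasp(predict_arrival_time, close_gripper,
                               actuation_delay=0.05):
    """Close when the ball is predicted to reach the gripper, compensating
    for the gripper's actuation delay."""
    t_arrive = predict_arrival_time()  # e.g. from an external tracker
    wait = t_arrive - time.monotonic() - actuation_delay
    if wait > 0:
        time.sleep(wait)
    close_gripper()
```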


2021 ◽  
pp. 2100038
Author(s):  
Zhen Pei ◽  
Qiang Zhang ◽  
Kun Yang ◽  
Zhongyun Yuan ◽  
Wendong Zhang ◽  
...  

2000 ◽  
Author(s):  
Michael L. Turner ◽  
Ryan P. Findley ◽  
Weston B. Griffin ◽  
Mark R. Cutkosky ◽  
Daniel H. Gomez

This paper describes the development of a system for dexterous telemanipulation and presents the results of tests involving simple manipulation tasks. The user wears an instrumented glove augmented with an arm-grounded haptic feedback apparatus. A linkage attached to the user’s wrist measures gross motions of the arm. The user’s movements are transferred to a two-fingered dexterous robot hand mounted on the end of a 4-DOF industrial robot arm. Forces measured at the robot fingers can be transmitted back to the user via the haptic feedback apparatus. The results obtained in block-stacking and object-rolling experiments indicate that the addition of force feedback did not improve the speed of task execution; in fact, in some cases incomplete force information was detrimental to performance speed compared with no force information. There are indications, however, that the presence of force feedback did aid task learning.
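The position-forward, force-back structure described above can be sketched as a simple teleoperation loop; the interface names and scaling gains are assumptions for illustration, not the authors' hardware API.

```python
# Sketch of one cycle of a telemanipulation loop: glove pose is mapped onto
# the robot fingers, and measured fingertip forces are reflected back to the
# arm-grounded haptic apparatus. All four interface callables are hypothetical.
import time

POSITION_SCALE = 1.0   # assumed master-to-slave motion scaling
FORCE_SCALE = 0.5      # assumed slave-to-master force scaling

def teleop_step(read_glove_joints, command_hand,
                read_finger_forces, drive_feedback):
    # Forward path: map the operator's glove pose onto the robot hand.
    q_master = read_glove_joints()
    command_hand([POSITION_SCALE * q for q in q_master])
    # Return path: reflect measured fingertip forces to the haptic device.
    f_slave = read_finger_forces()
    drive_feedback([FORCE_SCALE * f for f in f_slave])

def run_teleop(*interfaces, rate_hz=500.0):
    period = 1.0 / rate_hz  # haptic loops typically run at hundreds of Hz
    while True:
        teleop_step(*interfaces)
        time.sleep(period)
```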


Author(s):  
Eun-Hye Kim ◽  
Seok-Won Lee ◽  
Yong-Kwun Lee

Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1434 ◽  
Author(s):  
Minle Li ◽  
Yihua Hu ◽  
Nanxiang Zhao ◽  
Qishu Qian

Three-dimensional (3D) object detection has important applications in robotics, automatic loading, autonomous driving and other scenarios. With improvements in sensing devices, multi-sensor/multimodal data can be collected from a variety of sensors such as lidar and cameras. To make full use of this complementary information and improve object detection performance, we propose Complex-Retina, a convolutional neural network for 3D object detection based on multi-sensor data fusion. First, a unified architecture with two feature extraction networks is designed, so that features are extracted from the point clouds and images of the different sensors synchronously. Then, a series of 3D anchors is defined and projected onto the feature maps, where they are cropped into 2D anchors of the same size and fused together. Finally, object classification and 3D bounding box regression are carried out on multiple paths of fully connected layers. The proposed network is a one-stage convolutional neural network, striking a balance between detection accuracy and speed. Experiments on the KITTI dataset show that the proposed network outperforms the comparison algorithms in both average precision (AP) and time consumption, demonstrating its effectiveness.
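The fusion pattern the abstract describes, two backbones, shared 3D anchors projected into each view, equal-size 2D crops fused, then shared fully connected heads, might be sketched in PyTorch as follows; all layer sizes, strides and module names are illustrative assumptions, and the 3D-to-2D anchor projection itself is omitted.

```python
# Schematic two-branch fusion detector: one backbone for a BEV raster of the
# point cloud, one for the camera image; same-size ROI crops from both views
# are concatenated per anchor and fed to classification and 3D-box heads.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class SmallBackbone(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class FusionDetector(nn.Module):
    def __init__(self, num_classes=3, crop=7):
        super().__init__()
        self.lidar_net = SmallBackbone(in_ch=1)   # BEV raster of the points
        self.image_net = SmallBackbone(in_ch=3)   # RGB camera image
        self.crop = crop
        fused_dim = 2 * 64 * crop * crop
        self.head = nn.Sequential(nn.Linear(fused_dim, 256), nn.ReLU())
        self.cls = nn.Linear(256, num_classes)
        self.box = nn.Linear(256, 7)  # (x, y, z, w, l, h, yaw) per anchor

    def forward(self, bev, img, rois_bev, rois_img):
        # rois_*: (N, 5) boxes [batch_idx, x1, y1, x2, y2] obtained by
        # projecting the shared 3D anchors into each view (not shown here).
        f_bev = roi_align(self.lidar_net(bev), rois_bev,
                          output_size=self.crop, spatial_scale=0.25)
        f_img = roi_align(self.image_net(img), rois_img,
                          output_size=self.crop, spatial_scale=0.25)
        fused = torch.cat([f_bev, f_img], dim=1).flatten(1)  # per-anchor fusion
        h = self.head(fused)
        return self.cls(h), self.box(h)  # one stage: no second refinement pass
```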

