Grasping and Manipulation of Unknown Objects Based on Visual and Tactile Feedback

Author(s):  
Robert Haschke


2018 ◽  
Vol 15 (01) ◽  
pp. 1850012
Author(s):  
Eduardo Torres-Jara ◽  
Lorenzo Natale

Object grasping and manipulation in robotics has largely been approached using visual feedback. Human studies, on the other hand, have demonstrated the importance of tactile and force feedback in guiding the interaction between the fingers and objects. Inspired by these observations, we propose an approach in which a robot’s actions are guided mainly by tactile feedback, with remote sensing such as vision used only as a complement. Directly sensing the interaction forces between the object, the environment, and the robot’s hand provides task-relevant information that can be used to perform the task more reliably. This approach (which we call sensitive manipulation) requires important changes in the hardware and in the way the robot is programmed. At the hardware level, we exploit compliant actuators and novel sensors that allow the robot to safely interact with and sense the environment. We developed manipulation strategies that take advantage of these new sensing and actuation capabilities. In this paper, we demonstrate that, using these strategies, the humanoid robot Obrero can safely find, reach, and grab unknown objects that are neither held in place by a fixture nor placed in a specific orientation. The robot can also perform insertions by “feeling” the hole, without specialized mechanisms such as a remote center of compliance (RCC).
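
The paper describes this strategy at the system level rather than as code, but the contact-driven insertion idea can be sketched as a simple control loop. In the sketch below, read_fingertip_forces() and move_hand() are hypothetical stand-ins for Obrero’s sensing and actuation interfaces (here backed by a trivial simulation), and the thresholds are illustrative; this is not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def read_fingertip_forces():
    """Placeholder for Obrero's fingertip force sensing; here it just
    returns simulated noisy 3D contact forces in newtons."""
    return rng.normal(scale=0.1, size=3)

def move_hand(dx, dy, dz):
    """Placeholder for commanding a small Cartesian hand displacement (m)."""
    pass

CONTACT_THRESHOLD = 0.5   # N: vertical force that signals the peg is blocked
STEP = 0.002              # m: size of each exploratory motion

def insert_peg(max_steps=500):
    """Contact-driven insertion: descend until the surface blocks the peg,
    then slide laterally, yielding to the sensed tangential force, until
    the vertical resistance drops away over the hole."""
    for _ in range(max_steps):
        f = read_fingertip_forces()
        if abs(f[2]) < CONTACT_THRESHOLD:
            move_hand(0.0, 0.0, -STEP)   # free space or over the hole: descend
        else:
            # Blocked by the surface: slide along it, "feeling" for the hole.
            tangent = f[:2]
            scale = STEP / (np.linalg.norm(tangent) + 1e-9)
            move_hand(scale * tangent[0], scale * tangent[1], 0.0)

insert_peg()
```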


2021 ◽  
Vol 8 ◽  
Author(s):  
Muhammad Sami Siddiqui ◽  
Claudio Coppola ◽  
Gokhan Solak ◽  
Lorenzo Jamone

Grasp stability prediction for unknown objects is crucial to enable autonomous robotic manipulation in unstructured environments. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object-modelling inaccuracies. This paper presents an approach to predicting safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach does not assume any prior knowledge about the objects. First, an object pose estimate is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (standard and unscented Bayesian Optimisation) against random exploration (uniform grid search). Our experimental results demonstrate that the probabilistic methods provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation finds safer grasps by taking into account the uncertainty in robot sensing and grasp execution.
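
As a rough, self-contained illustration of the exploration strategies compared in the paper, the sketch below pits uniform grid search against standard Bayesian Optimisation (a Gaussian-process surrogate with an expected-improvement acquisition) over a one-dimensional grasp parameter. The grasp_quality function is a hypothetical stand-in for the tactile grasp metric measured on the real robot; this is not the authors’ implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def grasp_quality(x):
    """Hypothetical stand-in for the tactile grasp metric measured after
    executing a grasp parameterised by x (e.g. a wrist angle in [0, 1])."""
    return float(np.exp(-(x - 0.3) ** 2 / 0.02) + 0.05 * np.random.randn())

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

# Baseline: uniform grid search spends its 10-grasp budget evenly.
grid = np.linspace(0.0, 1.0, 10)
grid_best = max(grasp_quality(x) for x in grid)

# Standard Bayesian Optimisation: fit a GP to past observations and pick
# the next grasp by maximising expected improvement (EI).
X = [[0.0], [0.5], [1.0]]                  # initial exploratory grasps
y = [grasp_quality(x[0]) for x in X]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(7):                         # same total budget as the grid
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    best = max(y)
    z = (mu - best) / (sigma + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[int(np.argmax(ei))]
    X.append(list(x_next))
    y.append(grasp_quality(x_next[0]))

print(f"grid search best: {grid_best:.3f}, BO best: {max(y):.3f}")
```

Unscented Bayesian Optimisation extends this scheme by propagating input uncertainty (e.g., grasp-execution noise) through the surrogate via sigma points, which is what allows it to prefer grasps that remain safe under pose error.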


1999 ◽  
Vol 13 (4) ◽  
pp. 234-244
Author(s):  
Uwe Niederberger ◽  
Wolf-Dieter Gerber

In two experiments, with four and two groups of healthy subjects respectively, a novel motor task, the voluntary abduction of the right big toe, was trained. This task cannot usually be performed without training and is therefore ideal for studying elementary motor learning. Proprioceptive, tactile, visual, and EMG feedback were systematically varied. In addition to peripheral measurements, such as the voluntary range of motion and EMG output during training, a three-channel EEG was recorded over Cz, C3, and C4. The movement-related brain potential during distinct periods of training was analyzed as a central nervous parameter of the ongoing learning process. In experiment I, we randomized four groups of 12 subjects each (group P: proprioceptive feedback; group PT: proprioceptive and tactile feedback; group PTV: proprioceptive, tactile, and visual feedback; group PTEMG: proprioceptive, tactile, and EMG feedback). The best training results were obtained in the PTEMG and PTV groups. The movement-preceding cortical activity, measured as the amplitude of the readiness potential at EMG onset, was also greatest in these two groups. Experiment II revealed a similar effect, with greater training success and higher electrocortical activation under additional EMG feedback compared to proprioceptive feedback alone. Sensory EMG feedback, as evaluated by peripheral and central nervous measurements, thus appears useful in motor training and neuromuscular re-education.


Author(s):  
Hiroaki Nishino ◽  
Ryotaro Goto ◽  
Yuki Fukakusa ◽  
Jiaqing Lin ◽  
Tsuneo Kagawa ◽  
...  

2021 ◽  
Vol 11 (8) ◽  
pp. 991
Author(s):  
Christopher Copeland ◽  
Mukul Mukherjee ◽  
Yingying Wang ◽  
Kaitlin Fraser ◽  
Jorge M. Zuniga

This study aimed to examine the neural responses of children using prostheses and prosthetic simulators, to better elucidate how well the simulators emulate prosthesis use. We used functional near-infrared spectroscopy (fNIRS) to evaluate the neural response in five children with congenital upper limb reduction (ULR) using a body-powered prosthesis to complete a 60 s gross motor dexterity task. The ULR group was matched with five typically developing (TD) children using their non-preferred hand and a prosthetic simulator on the same hand. The ULR group had lower activation within the primary motor cortex (M1) and supplementary motor area (SMA) compared to the TD group, but nonsignificant differences in the primary somatosensory area (S1). Compared to using their non-preferred hand, the TD group exhibited significantly higher activation in S1 when using the simulator, but nonsignificant differences in M1 and SMA. The nonsignificant differences in S1 activation between groups, and the increased activation evoked by the simulator’s use, may suggest rapid changes in feedback prioritization during tool use. We suggest that prosthetic simulators may elicit increased reliance on proprioceptive and tactile feedback during motor tasks. This knowledge may help in developing future prosthetic rehabilitation training and improving tool-based skills.


Author(s):  
Wakana Ishihara ◽  
Karen Moxon ◽  
Sheryl Ehrman ◽  
Mark Yarborough ◽  
Tina L. Panontin ◽  
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain–computer interfaces (BCIs) and attempts to identify the best feedback modality on the basis of effectiveness or learning rate. Of the studies reviewed, 100% tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it is intrinsic to the response of the task (e.g., seeing a cursor move). When used alone, however, it was not very effective at improving accuracy or learning. Proprioceptive feedback was most successful at increasing the effectiveness of motor imagery BCI tasks involving neuroprosthetics. Auditory and tactile feedback yielded mixed results. The limitations of the present review and recommendations for further study are discussed.


2021 ◽  
pp. 1-16
Author(s):  
Wenbo Huang ◽  
Changyuan Wang ◽  
Hongbo Jia

Traditional intention-inference methods rely solely on EEG, eye movement, or tactile feedback, and their recognition rates are low. To improve the accuracy of pilot intention recognition, this paper proposes a human-computer interaction intention-inference method that fuses EEG, eye movement, and tactile feedback. Firstly, EEG signals are collected near the frontal lobe of the human brain using eight channels (AF7, F7, FT7, T7, AF8, F8, FT8, and T8) and features are extracted. Secondly, the signal data are preprocessed by baseline removal, normalization, and least-squares noise reduction. Thirdly, support vector machines (SVMs) are applied to carry out multiple binary classifications of the eye-movement direction. Finally, recognition of the eye-movement direction across eight directions is realized through data fusion. Experimental results show that the classification accuracy of the proposed method reaches 75.77%, 76.7%, 83.38%, 83.64%, 60.49%, 60.93%, 66.03%, and 64.49% for the eight directions, respectively. Compared with traditional methods, the proposed algorithm achieves higher classification accuracy with a simpler realization process. The feasibility and effectiveness of using EEG signals to identify eye-movement directions for intention recognition are further verified.
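
A minimal sketch of the classification-and-fusion stage, assuming per-trial feature vectors have already been extracted from the eight channels after preprocessing; it uses standard scikit-learn components (a one-vs-rest decomposition of binary SVMs) and synthetic data, and is not the authors’ implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic data: one feature vector per trial, built from the eight
# channels (AF7, F7, FT7, T7, AF8, F8, FT8, T8) after baseline removal
# and noise reduction; labels are the 8 eye-movement directions (0-7).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8 * 16))   # 400 trials x (8 channels x 16 features)
y = rng.integers(0, 8, size=400)

# The "multiple binary classifications" step maps naturally onto a
# one-vs-rest decomposition: one binary SVM per direction, fused by
# taking the direction with the highest decision score.
clf = make_pipeline(
    StandardScaler(),                       # normalization step
    OneVsRestClassifier(SVC(kernel="rbf", C=1.0)),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean 8-direction accuracy: {scores.mean():.2%}")
```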


2020 ◽  
pp. 1-12
Author(s):  
Marios Kiatos ◽  
Sotiris Malassiotis ◽  
Iason Sarantopoulos
