Visual grasp affordances from appearance-based cues

Author(s):  
Hyun Oh Song ◽  
Mario Fritz ◽  
Chunhui Gu ◽  
Trevor Darrell
NeuroImage ◽  
2014 ◽  
Vol 92 ◽  
pp. 69-73 ◽  
Author(s):  
Simone Kühn ◽  
Anika Werner ◽  
Ulman Lindenberger ◽  
Julius Verrel

2017 ◽  
Vol 17 (10) ◽  
pp. 232
Author(s):  
Robert McManus ◽  
Laura Thomas

Author(s):  
R. Detry ◽  
E. Başeski ◽  
M. Popović ◽  
Y. Touati ◽  
N. Krüger ◽  
...  
Author(s):  
Dirk Kraft ◽  
Renaud Detry ◽  
Nicolas Pugeault ◽  
Emre Başeski ◽  
Justus Piater ◽  
...  

Robotica ◽  
2014 ◽  
Vol 33 (5) ◽  
pp. 1163-1180 ◽  
Author(s):  
Emre Ugur ◽  
Yukie Nagai ◽  
Hande Celikkanat ◽  
Erhan Oztop

SUMMARY: Parental scaffolding is an important mechanism that speeds up infant sensorimotor development. Infants pay stronger attention to object features highlighted by parents, and with caregivers' support their manipulation skills develop earlier than they would in isolation. Parents are known to modify infant-directed actions, a phenomenon often called "motionese". Features associated with motionese include amplification, repetition, and simplification of caregivers' movements, often accompanied by increased social signalling. In this paper, we extend our previously developed affordance-learning framework so that our hand-arm robot, equipped with a range camera, can benefit from parental scaffolding and motionese. We first present results on how parental scaffolding can guide robot learning and modify the robot's crude action execution to speed up the acquisition of complex skills. For this purpose, an interactive caregiver-infant scenario was realized with our robotic setup, allowing the caregiver to modify the robot's ongoing reach-and-grasp movement through physical interaction. The caregiver could thus make the robot grasp the target object, which the robot in turn could use to learn the grasping skill. Beyond this, we also show how parental scaffolding can speed up imitation learning, presenting work that takes the robot beyond simple goal-level imitation and makes it a better imitator with the help of motionese.


2019 ◽  
Vol 72 (11) ◽  
pp. 2605-2613 ◽  
Author(s):  
Shaheed Azaad ◽  
Simon M Laham

Tucker and Ellis found that when participants made left/right button-presses to indicate whether objects were upright or inverted, responses were faster when the response hand aligned with the task-irrelevant handle orientation of the object. The effect of handle orientation on response times has been interpreted as evidence that individuals perceive grasp affordances when viewing briefly presented objects, which in turn activate grasp-related motor systems. Although the effect of handle alignment has since been replicated, there remains doubt regarding the extent to which the effect is indeed driven by affordance perception. Objects that feature in affordance-compatibility paradigms are asymmetrical and have laterally protruding handles (e.g., mugs), and thus confound spatial and affordance properties. Research has attempted to disentangle spatial-compatibility and affordance effects with varying results. In this study, we present a novel paradigm with which to study affordance perception while sidestepping spatial confounds. We use the Bimanual Affordance Task (BMAT) to test whether object affordances in symmetrical objects facilitate response times. Participants (N = 36) used one of three responses (left unimanual/right unimanual/bimanual) to indicate the colour of presented objects. Objects afforded either a unimanual (e.g., handbag) or a bimanual (e.g., laundry hamper) grasp. Responses were faster when the afforded grasp corresponded with the response type (unimanual vs. bimanual), suggesting that affordance effects exist independent of spatial compatibility.

