grip aperture
Recently Published Documents


TOTAL DOCUMENTS: 40 (FIVE YEARS: 11)
H-INDEX: 14 (FIVE YEARS: 2)

2021
Author(s): Yuqi Liu, James Caracoglia, Sriparna Sen, Ella Striem-Amit

While reaching and grasping are highly prevalent manual actions, neuroimaging studies provide evidence that their neural representations may be shared between different body parts, i.e., effectors. If these actions are guided by effector-independent mechanisms, similar kinematics should be observed when the action is performed by the hand or by a cortically remote and less experienced effector, such as the foot. We tested this hypothesis with two characteristic components of action: the initial ballistic stage of reaching, and the preshaping of the digits during grasping based on object size. We examined whether these kinematic features reflect effector-independent mechanisms by asking participants to reach toward and grasp objects of different widths with their hand and foot. First, during both reaching and grasping, the velocity profile up to peak velocity matched between the hand and the foot, indicating a shared ballistic acceleration phase. Second, maximum grip aperture and the time of maximum grip aperture increased with object size for both effectors, indicating encoding of object size during transport. Differences between the hand and foot were found in the deceleration phase and in the time of maximum grip aperture, likely due to biomechanical differences and the participants' inexperience with foot actions. These findings provide evidence for effector-independent visuomotor mechanisms of reaching and grasping that generalize across body parts.
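
For readers unfamiliar with these measures, here is a minimal sketch of how peak velocity, maximum grip aperture (MGA), and the time of MGA can be computed from motion-capture data. This is our own illustration, not the authors' pipeline; the function name, marker layout, and 200 Hz sampling rate are assumptions.

```python
import numpy as np

def grasp_kinematics(thumb_xyz, finger_xyz, fs=200.0):
    """Peak transport speed and maximum grip aperture (MGA)
    from two (n_samples, 3) fingertip trajectories sampled at fs Hz."""
    # Transport component: use the midpoint of the two digits as a proxy.
    transport = (thumb_xyz + finger_xyz) / 2.0
    speed = np.linalg.norm(np.diff(transport, axis=0), axis=1) * fs
    peak_speed = speed.max()
    t_peak_speed = speed.argmax() / fs

    # Grip component: aperture is the distance between the digits.
    aperture = np.linalg.norm(thumb_xyz - finger_xyz, axis=1)
    mga = aperture.max()
    t_mga = aperture.argmax() / fs
    return peak_speed, t_peak_speed, mga, t_mga
```

Regressing mga and t_mga on object width across trials would expose the size scaling reported above.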


2021
Author(s): Irene Caprara, Peter Janssen

Abstract: To perform tasks like grasping, the brain has to process visual object information so that the grip aperture can be adjusted before touching the object. Previous studies have demonstrated that the posterior subsector of the anterior intraparietal area (pAIP) is connected to area 45B, and its anterior counterpart (aAIP) to F5a. However, the role of areas 45B and F5a in visually-guided grasping is poorly understood. Here, we investigated the role of areas 45B, F5a, and F5p in object processing during visually-guided grasping in two monkeys. If the presentation of an object activates a motor command related to the preshaping of the hand, as in F5p, such neurons should prefer objects presented within reachable distance. Conversely, neurons encoding a purely visual representation of an object, possibly in areas 45B and F5a, should be less affected by viewing distance. Contrary to our expectations, we found that most neurons in area 45B were object- and viewing-distance-selective (mostly Near-preferring). Area F5a showed much weaker object selectivity than 45B, with a similar preference for objects presented at the Near position. Finally, F5p neurons were less object-selective and frequently Far-preferring. In sum, area 45B, but not F5p, prefers objects presented in peripersonal space.
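
As a rough illustration of how such selectivity could be quantified for a single neuron, the sketch below runs one one-way ANOVA per factor and labels the preferred distance. This is a simplified sketch of our own (it ignores interactions between object and distance) and not the authors' analysis; all names are illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

def neuron_selectivity(rates, objects, distances, alpha=0.05):
    """rates: firing rate per trial; objects/distances: trial labels."""
    # One-way ANOVA per factor (interactions ignored in this sketch).
    p_obj = f_oneway(*[rates[objects == o]
                       for o in np.unique(objects)]).pvalue
    p_dist = f_oneway(*[rates[distances == d]
                        for d in np.unique(distances)]).pvalue
    # Near- vs Far-preferring: the distance with the highest mean rate.
    preferred = max(np.unique(distances),
                    key=lambda d: rates[distances == d].mean())
    return p_obj < alpha, p_dist < alpha, preferred
```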


2021, Vol. 12
Author(s): Chuyang Sun, Juan Chen, Yuting Chen, Rixin Tang

Previous studies have shown that our perception of stimulus properties can be affected by the emotional nature of the stimulus. It is not clear, however, how emotions affect visually-guided actions toward objects. To address this question, we used toy rats, toy squirrels, and wooden blocks to induce negative, positive, and neutral emotions, respectively. Participants were asked to report the perceived distance and the perceived size of a target object resting on top of one of the three emotion-inducing objects, or to grasp the same target object either without visual feedback (open-loop) or with visual feedback (closed-loop) of both the target object and their grasping hand during the execution of grasping. We found that the target object was perceived as closer and larger, but was grasped with a smaller grip aperture, in the rat condition than in the squirrel and wooden-block conditions when no visual feedback was available. With visual feedback present, this difference in grip aperture disappeared. These results show that negative emotion influences both perceived size and grip aperture, but in opposite directions (larger perceived size but smaller grip aperture), and that its influence on grip aperture can be corrected by visual feedback, revealing distinct effects of emotion on perception and action. These findings bear on our understanding of the relationship between perception and action under emotional conditions and reveal a difference not anticipated by previous theories.


PLoS ONE, 2021, Vol. 16 (9), e0248084
Author(s): Vonne van Polanen

When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. In this study, it was investigated whether such corrections would influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated in two experiments. In experiment 1, vision was restored during the reach, and both visual and haptic information was available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement, and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared their sizes. Results showed that participants adjusted their grasp to a change in object size from preview to grasped object in both experiments. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small, but this bias was much smaller than the difference that could be discriminated and cannot be considered meaningful. Therefore, it can be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.
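
The distinction drawn here between a perceptual bias and discrimination performance is conventionally expressed as the point of subjective equality (PSE) and the just-noticeable difference (JND) of a fitted psychometric function. A minimal cumulative-Gaussian fit might look as follows; the names and units are illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, pse, sigma):
    # Probability of judging the comparison object as larger.
    return norm.cdf(x, loc=pse, scale=sigma)

def fit_psychometric(sizes, p_larger):
    """sizes: comparison sizes (mm); p_larger: proportion of
    'comparison larger' responses at each size."""
    (pse, sigma), _ = curve_fit(cum_gauss, sizes, p_larger,
                                p0=[np.mean(sizes), 1.0])
    jnd = sigma * norm.ppf(0.75)  # 50% -> 75% threshold step
    return pse, jnd
```

A grasp-induced bias would shift the PSE; if that shift is well below the JND, as reported above, it is too small to be meaningful.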


2021, Vol. 12
Author(s): Annabel Wing-Yan Fan, Lin Lawrence Guo, Adam Frost, Robert L. Whitwell, Matthias Niemeier, ...

The visual system is known to extract summary representations of visually similar objects, which bias the perception of individual objects toward the ensemble average. Although vision plays a large role in guiding action, less is known about whether ensemble representation is informative for action. Motor behavior is tuned to the veridical dimensions of objects and is generally considered resistant to perceptual biases. However, when the relevant grasp dimension is not available or is unconstrained, ensemble perception may inform behavior by providing gist information about surrounding objects. In the present study, we examined whether summary representations of a surrounding ensemble display influenced grip aperture and orientation when participants reached to grasp a central circular target, which had an explicit size but, importantly, no explicit orientation that the visuomotor system could selectively attend to. Maximum grip aperture and grip orientation were not biased by ensemble statistics during grasping, although participants were able to perceive and provide manual estimations of the average size and orientation of the ensemble display. Support vector machine classification of ensemble statistics achieved above-chance accuracy when trained on kinematic and electromyography data from the perceptual conditions but not the grasping conditions, supporting our univariate findings. These results suggest that even along unconstrained grasping dimensions, visually-guided behaviors toward real-world objects are not biased by ensemble processing.
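
The decoding analysis described above can be sketched with scikit-learn: a linear support vector machine trained on trial-wise kinematic/EMG feature vectors, with cross-validated accuracy compared against chance. The feature extraction, shapes, and names below are assumptions; this is not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_ensemble(X, y, n_folds=5):
    """X: (n_trials, n_features) kinematic/EMG features;
    y: ensemble-statistic label per trial (e.g., small vs. large mean)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=n_folds)  # held-out accuracy
    chance = 1.0 / len(np.unique(y))              # uniform-guessing baseline
    return acc.mean(), chance
```

Above-chance accuracy in the perceptual conditions but not the grasping conditions is the pattern the abstract reports.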


2020, Vol. 238 (4), pp. 969-979
Author(s): Jeroen B. J. Smeets, Erik Kleijn, Marlijn van der Meijden, Eli Brenner


2019
Author(s): Jeroen B. J. Smeets, Erik Kleijn, Marlijn van der Meijden, Eli Brenner

Abstract: There is an extensive literature debating whether perceived size is used to guide grasping. A possible reason for not using judged size is that using judged positions might lead to more precise movements. As this argument does not hold for small objects, and all studies showing an effect of the Ebbinghaus illusion on grasping used small objects, we hypothesized that size information is used for small objects but not for large ones. Using a modified diagonal illusion, we obtained an effect of about 10% on perceptual judgements, without an effect on grasping, irrespective of object size. We therefore reject our precision hypothesis. We discuss the results in the framework of grasping as moving digits to positions on an object. We conclude that the reported disagreement on the effect of illusions arises because the Ebbinghaus illusion affects not only perceived size but, unlike most size illusions, also perceived positions.


2019, Vol. 122 (4), pp. 1578-1597
Author(s): Jeroen B. J. Smeets, Katinka van der Kooij, Eli Brenner

It is tempting to describe human reach-to-grasp movements in terms of two more or less independent visuomotor channels, one relating hand transport to the object's location and the other relating grip aperture to the object's size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object's size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit's trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors that is applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit's tapping movements transfers to that digit's movements when grasping, leading to changes in grip aperture when the two digits are adapted in opposite directions. We conclude that although grip aperture and hand transport are convenient variables for describing grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
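
The digit-in-space idea can be made concrete with a toy simulation: move each digit independently along a minimum-jerk path toward its own contact point and read grip aperture out as the distance between the digits. This is our own illustration, not the review's model; all coordinates, the movement duration, and the mid-movement detour term are assumptions.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)[:, None]
s = 10 * t**3 - 15 * t**4 + 6 * t**5   # minimum-jerk time course, 0 -> 1

def digit_path(start, end, detour):
    """One digit's path: straight minimum-jerk interpolation plus a
    mid-movement detour, so the digit approaches its contact point
    roughly perpendicular to the object's surface."""
    start, end, detour = map(np.asarray, (start, end, detour))
    return start + s * (end - start) + (s * (1 - s)) * detour

# Each digit moves to its own contact point; nothing controls
# "grip aperture" explicitly.
thumb = digit_path([0, 0.00, 0], [0.30, -0.02, 0], [0, -0.10, 0])
index = digit_path([0, 0.03, 0], [0.30, 0.04, 0], [0, 0.10, 0])

# Aperture simply emerges as the distance between the digits,
# peaking mid-movement before the digits close onto the object.
aperture = np.linalg.norm(thumb - index, axis=1)
print(f"MGA = {aperture.max():.3f} m at "
      f"t = {t[aperture.argmax(), 0]:.2f} of movement time")
```

Nothing in this sketch controls aperture directly, yet a mid-movement maximum of grip aperture appears once each digit's path curves toward its contact point, which is the qualitative signature the review emphasizes.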


2019
Author(s): Evan Cesanek, Fulvio Domini

Abstract: To perform accurate movements, the sensorimotor system must maintain a delicate calibration of the mapping between visual inputs and motor outputs. Previous work has focused on the mapping between visual inputs and individual locations in egocentric space, but little attention has been paid to the mappings that support interactions with 3D objects. In this study, we investigated sensorimotor adaptation of grasping movements targeting the depth dimension of 3D paraboloid objects. Object depth was specified by separately manipulating binocular disparity (stereo) and texture gradients. At the end of each movement, the fingers closed down on a physical object consistent with one of the two cues, depending on the condition (haptic-for-texture or haptic-for-stereo). Unlike traditional adaptation paradigms, where relevant spatial properties are determined by a single dimension of visual information, this method enabled us to investigate whether adaptation processes can selectively adjust the influence of different sources of visual information depending on their relationship to physical depth. In two experiments, we found short-term changes in grasp performance consistent with a process of cue-selective adaptation: the slope of the grip aperture with respect to a reliable cue (correlated with physical reality) increased, whereas the slope with respect to the unreliable cue (uncorrelated with physical reality) decreased. In contrast, slope changes did not occur during exposure to a set of stimuli where both cues remained correlated with physical reality, but one was rendered with a constant bias of 10 mm; the grip aperture simply became uniformly larger or smaller, as in standard adaptation paradigms. Overall, these experiments support a model of cue-selective adaptation driven by correlations between error signals and input values (i.e., supervised learning), rather than mismatched haptic and visual signals.
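
The proposed learning rule, in which each cue's influence is adjusted according to the correlation between error signals and that cue's value, can be sketched as a delta rule on a weighted cue combination. This is a toy model under our own assumptions (function names, units, and data are illustrative), not the authors' implementation.

```python
import numpy as np

def adapt_cue_weights(stereo, texture, haptic, lr=0.1, epochs=100):
    """Toy delta rule: the depth estimate is a weighted sum of two
    cues; each weight is updated by the error x input product, so it
    tracks the correlation between the error and that cue's value."""
    X = np.column_stack([stereo, texture])
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score the cues
    y = haptic - haptic.mean()                 # haptic feedback as teacher
    w = np.array([0.5, 0.5])                   # initial stereo/texture weights
    for _ in range(epochs):
        err = y - X @ w
        w += lr * (X.T @ err) / len(err)
    return w

# Toy data: haptic depth follows the texture cue while stereo is
# uncorrelated, loosely mimicking a haptic-for-texture condition.
rng = np.random.default_rng(0)
texture = rng.uniform(20, 60, 200)           # cue-specified depths, mm
stereo = rng.uniform(20, 60, 200)
haptic = texture + rng.normal(0, 2, 200)     # physical depth at contact
print(adapt_cue_weights(stereo, texture, haptic))
# The texture (reliable) weight grows while the stereo weight decays
# toward zero, mirroring the slope changes reported above.
```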

