maximum grip aperture
Recently Published Documents


TOTAL DOCUMENTS: 11 (FIVE YEARS: 4)
H-INDEX: 6 (FIVE YEARS: 0)

2021 ◽ Yuqi Liu, James Caracoglia, Sriparna Sen, Ella Striem-Amit

While reaching and grasping are highly prevalent manual actions, neuroimaging studies provide evidence that their neural representations may be shared between different body parts, i.e., effectors. If these actions are guided by effector-independent mechanisms, similar kinematics should be observed when the action is performed by the hand or by a cortically remote and less experienced effector, such as the foot. We tested this hypothesis with two characteristic components of action: the initial ballistic stage of reaching, and the preshaping of the digits during grasping based on object size. We examined whether these kinematic features reflect effector-independent mechanisms by asking participants to reach toward and grasp objects of different widths with their hand and foot. First, during both reaching and grasping, the velocity profile up to peak velocity matched between the hand and the foot, indicating a shared ballistic acceleration phase. Second, maximum grip aperture and the time of maximum grip aperture increased with object size for both effectors, indicating encoding of object size during transport. Differences between the hand and foot were found in the deceleration phase and the time of maximum grip aperture, likely due to biomechanical differences and the participants' inexperience with foot actions. These findings provide evidence for effector-independent visuomotor mechanisms of reaching and grasping that generalize across body parts.
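The kinematic measures reported above (peak velocity of the transport phase and maximum grip aperture during preshaping) can be illustrated with a short sketch; the marker names, sampling rate, and synthetic trajectories below are assumptions for demonstration only, not data or code from the study.

```python
# Illustrative sketch: peak velocity and maximum grip aperture (MGA) from
# hypothetical 3-D marker trajectories sampled at a fixed rate.
import numpy as np

def peak_velocity(wrist_xyz, fs):
    """Return peak tangential speed (units/s) and its sample index."""
    vel = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) * fs
    return vel.max(), int(vel.argmax())

def maximum_grip_aperture(thumb_xyz, index_xyz):
    """Return MGA (peak thumb-index distance) and the sample where it occurs."""
    aperture = np.linalg.norm(thumb_xyz - index_xyz, axis=1)
    return aperture.max(), int(aperture.argmax())

# Example with synthetic data (positions in cm) sampled at 200 Hz.
fs = 200
t = np.linspace(0.0, 1.0, fs)
wrist = np.stack([30 * (1 - np.cos(np.pi * t)) / 2,
                  np.zeros_like(t), np.zeros_like(t)], axis=1)
opening = 4 + 4 * np.sin(np.pi * t)
thumb = wrist + np.stack([np.zeros_like(t), opening, np.zeros_like(t)], axis=1)
index = wrist - np.stack([np.zeros_like(t), opening, np.zeros_like(t)], axis=1)
pv, _ = peak_velocity(wrist, fs)
mga, mga_idx = maximum_grip_aperture(thumb, index)
print(f"peak velocity: {pv:.1f} cm/s, MGA: {mga:.1f} cm at sample {mga_idx}")
```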


2021 ◽ Vol 12 ◽ Annabel Wing-Yan Fan, Lin Lawrence Guo, Adam Frost, Robert L. Whitwell, Matthias Niemeier, ...

The visual system is known to extract summary representations of visually similar objects, which bias the perception of individual objects toward the ensemble average. Although vision plays a large role in guiding action, less is known about whether ensemble representation is informative for action. Motor behavior is tuned to the veridical dimensions of objects and is generally considered resistant to perceptual biases. However, when the relevant grasp dimension is not available or is unconstrained, ensemble perception may inform behavior by providing gist information about surrounding objects. In the present study, we examined whether summary representations of a surrounding ensemble display influenced grip aperture and orientation when participants reached to grasp a central circular target that had an explicit size but, importantly, no explicit orientation that the visuomotor system could selectively attend to. Maximum grip aperture and grip orientation were not biased by ensemble statistics during grasping, although participants were able to perceive and provide manual estimations of the average size and orientation of the ensemble display. Support vector machine classification of ensemble statistics achieved above-chance accuracy when trained on kinematic and electromyography data from the perceptual but not the grasping conditions, supporting our univariate findings. These results suggest that even along unconstrained grasping dimensions, visually guided behaviors toward real-world objects are not biased by ensemble processing.
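As a rough illustration of the decoding approach mentioned above, the following sketch runs a cross-validated linear support vector machine on trial-wise feature vectors; the feature layout, trial counts, and random data are assumptions, not the authors' pipeline.

```python
# Minimal sketch: decode an ensemble statistic (e.g., small vs. large average
# size) from concatenated kinematic and EMG features with a linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 12                 # hypothetical dataset size
X = rng.normal(size=(n_trials, n_features))    # kinematic + EMG features per trial
y = rng.integers(0, 2, size=n_trials)          # ensemble label per trial

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```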


PeerJ ◽ 2019 ◽ Vol 7 ◽ pp. e7796 ◽ Sonia Betti, Eris Chinellato, Silvia Guerra, Umberto Castiello, Luisa Sartori

Many daily activities involve synchronizing with other people's actions. Previous literature has revealed that performance slows down whenever the action to be carried out differs from the one observed (i.e., visuomotor interference). However, action execution can be facilitated by observing a different action if it calls for an interactive gesture (i.e., social motor priming). The aim of this study was to investigate the costs and benefits of spontaneously processing a social response and then executing the same or a different action. Participants performed two different types of grip, which could be congruent or incongruent with the socially appropriate response and with the observed action. In particular, participants performed a precision grip (PG; thumb-index finger opposition) or a whole-hand grasp (WHG; fingers-palm opposition) after observing videos of an actor performing a PG and either addressing them (interactive condition) or not (non-interactive condition). Crucially, in the interactive condition the most appropriate response was a WHG, but in 50 percent of trials participants were asked to perform a PG. This procedure allowed us to measure both the facilitation effect of performing an action appropriate to the social context (WHG), but different from the observed one (PG), and the cost of inhibiting it. These effects were measured by means of 3-D kinematic analysis of movement. Results show that, in terms of reaction time and movement time, the interactive request facilitated (i.e., speeded) the socially appropriate action (WHG), whereas it interfered with (i.e., delayed) a different action (PG), even though the observed action was always a PG. This interference also manifested as an increase in maximum grip aperture, which seemingly reflects the concurrent representation of the socially appropriate response. Overall, these findings extend previous research by revealing that physically incongruent action representations can be integrated into a single action plan even during an offline task and without any training.
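Reaction time and movement time in such 3-D kinematic analyses are commonly derived from a wrist speed profile using a velocity threshold; the sketch below shows one conventional way to do this, with the 5 cm/s threshold and the synthetic speed trace as assumptions rather than the study's actual criteria.

```python
# Illustrative sketch: reaction time (RT) and movement time (MT) from a wrist
# speed profile, using a fixed velocity threshold.
import numpy as np

def rt_mt_from_speed(speed, fs, go_idx=0, threshold=5.0):
    """speed: 1-D wrist speed (cm/s); fs: sampling rate (Hz).
    RT = go signal to first sample above threshold; MT = onset to offset."""
    above = np.flatnonzero(speed[go_idx:] > threshold) + go_idx
    onset, offset = above[0], above[-1]
    return (onset - go_idx) / fs, (offset - onset) / fs

# Synthetic bell-shaped speed profile sampled at 200 Hz.
fs = 200
t = np.arange(0.0, 1.5, 1 / fs)
speed = np.where((t > 0.3) & (t < 1.1),
                 60 * np.sin(np.pi * (t - 0.3) / 0.8), 0.0)
print(rt_mt_from_speed(speed, fs))   # -> (0.325, 0.75)
```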


2019 ◽ Jeroen B. J. Smeets, Eli Brenner

Illusions are characterized by inconsistencies. For instance, in the motion aftereffect, we see motion without an equivalent change in position. We used a simple pencil-and-paper experiment to determine whether illusions that influence an object’s apparent size give rise to equivalent changes in apparent positions along the object’s outline. We found different results for two equally strong size illusions. The Ebbinghaus illusion affected perceived positions in a way that was consistent with its influence on perceived size, but a modified diagonal illusion did not affect perceived positions. This difference between the illusions might explain why there are so many conflicting reports about the effects of size illusions on the maximum grip aperture during reach-to-grasp movements.


2014 ◽ Vol 112 (8) ◽ pp. 2019-2025 ◽ Jason W. Flindall, Claudia L. R. Gonzalez

Evidence from recent neurophysiological studies on nonhuman primates as well as from human behavioral studies suggests that actions with similar kinematic requirements but different end-state goals are supported by separate neural networks. It is unknown whether these different networks supporting seemingly similar reach-to-grasp actions are lateralized, or if they are equally represented in both hemispheres. Recently published behavioral evidence suggests certain networks are lateralized to the left hemisphere. Specifically, when participants used their right hand, their maximum grip aperture (MGA) was smaller when grasping to eat food items than when grasping to place the same items. Left-handed movements showed no difference between tasks. The present study investigates whether the differences between grasp-to-eat and grasp-to-place actions are driven by an intent to eat, or if placing an item into the mouth (sans ingestion) is sufficient to produce asymmetries. Twelve right-handed adults were asked to reach to grasp food items to 1) eat them, 2) place them in a bib, or 3) place them between their lips and then toss them into a nearby receptacle. Participants performed each task with large and small food items, using both their dominant and nondominant hands. The current study replicated the previous finding of smaller MGAs for the eat condition during right-handed but not left-handed grasps. MGAs in the eat and spit conditions did not significantly differ from each other, suggesting that eating and bringing a food item to the mouth both utilize similar motor plans, likely originating within the same neural network. Results are discussed in relation to neurophysiology and development.


2014 ◽ Vol 232 (11) ◽ pp. 3569-3578 ◽ Rebekka Verheij, Eli Brenner, Jeroen B. J. Smeets

2014 ◽ Vol 40 (2) ◽ pp. 889-896 ◽ Svenja Borchers, Rebekka Verheij, Jeroen B. J. Smeets, Marc Himmelbach

2007 ◽ Vol 97 (6) ◽ pp. 4203-4214 ◽ Erik J. Schlicht, Paul R. Schrater

Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
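A minimal sketch of the kind of Bayesian reasoning described above, assuming Gaussian estimates and an arbitrary growth of coordinate transformation uncertainty (CTU) with eye eccentricity (none of the numbers or function names come from the paper): remembered, eye-centered information is fused with visual information by precision weighting, and the grip aperture adds a safety margin that scales with the posterior standard deviation.

```python
# Hedged sketch of precision-weighted fusion with added CTU; all values assumed.
import numpy as np

def posterior(mu_vis, var_vis, mu_mem, var_mem):
    """Precision-weighted fusion of two Gaussian estimates of target position."""
    w = (1 / var_vis) / (1 / var_vis + 1 / var_mem)
    mu = w * mu_vis + (1 - w) * mu_mem
    var = 1 / (1 / var_vis + 1 / var_mem)
    return mu, var

def grip_aperture(object_width, pos_sd, margin_gain=2.0):
    """Open the hand wider when positional uncertainty is larger."""
    return object_width + margin_gain * pos_sd

for ecc_deg in (0.0, 15.0, 30.0):            # eye-in-head eccentricity
    var_ctu = (0.05 * ecc_deg) ** 2          # assumed CTU growth with eccentricity
    mu, var = posterior(mu_vis=10.0, var_vis=1.0,          # visual estimate (cm, cm^2)
                        mu_mem=10.0, var_mem=0.5 + var_ctu)  # remembered estimate
    print(f"eccentricity {ecc_deg:4.1f} deg -> aperture "
          f"{grip_aperture(6.0, np.sqrt(var)):.2f} cm")
```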


2004 ◽ Vol 91 (6) ◽ pp. 2598-2606 ◽ Raymond H. Cuijpers, Jeroen B. J. Smeets, Eli Brenner

Despite the many studies on the visual control of grasping, little is known about how and when small variations in shape affect grasping kinematics. In the present study we asked subjects to grasp elliptical cylinders that were placed 30 and 60 cm in front of them. The cylinders' aspect ratio was varied systematically between 0.4 and 1.6, and their orientation was varied in steps of 30°. Subjects picked up all noncircular cylinders with a hand orientation that approximately coincided with one of the principal axes. The probability of selecting a given principal axis was highest when its orientation was equal to the preferred orientation for picking up a circular cylinder at the same location. The maximum grip aperture was scaled to the length of the selected principal axis, but it was also larger when the axis orthogonal to the grip axis was longer than the grip axis itself. The correlation between the grip aperture (or the hand orientation) at a given instant and its final value increased monotonically with the traversed distance. The final hand orientation could already be inferred from its value after 30% of the movement distance with a reliability that explained 50% of the variance; for the final grip aperture, this was only the case after 80% of the movement distance. The results indicate that the perceived shape of the cylinder is used to select appropriate grasping locations before or early in the movement, and that the grip aperture and orientation are gradually attuned to these locations during the movement.
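The distance-resolved correlation analysis described above can be sketched as follows; the synthetic trajectories, trial counts, and convergence model are assumptions used only to show the computation, not the authors' data.

```python
# Illustrative sketch: correlate grip aperture at fixed fractions of the
# traversed distance with its final value across trials.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 50, 101
progress = np.linspace(0.0, 1.0, n_samples)       # fraction of distance traversed
final = rng.normal(8.0, 1.0, n_trials)            # final grip aperture per trial (cm)
# Synthetic apertures converge on their final value as the hand advances.
noise = rng.normal(0.0, 1.0, (n_trials, n_samples)) * (1.0 - progress)
aperture = final[:, None] * progress + 4.0 * (1.0 - progress) + noise

for frac in (0.3, 0.5, 0.8):
    idx = int(frac * (n_samples - 1))
    r = np.corrcoef(aperture[:, idx], final)[0, 1]
    print(f"{frac:.0%} of distance: r = {r:.2f}, variance explained = {r**2:.2f}")
```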


2002 ◽ Vol 13 (1-2) ◽ pp. 17-28 ◽ Monika Harvey, Stephen R. Jackson, Roger Newport, Tanja Krämer, D. Llewlyn Morris, ...

Patients with right unilateral cerebral stroke, four of whom showed acute hemispatial neglect, and healthy age-matched controls were tested for their ability to grasp objects located in either right or left space at near or far distances. Reaches were performed either in free vision or without visual feedback from the hand or target object. The patient group showed normal grasp kinematics with respect to maximum grip aperture, grip orientation, and the time taken to reach the maximum grip aperture. Analysis of hand path curvature showed that control subjects produced straighter right-hand reaches when vision was available than when it was not. By contrast, the right-hemisphere-lesioned patients showed similar levels of curvature in both conditions. No behavioural differences were found between right-hemisphere-lesioned patients with and without hemispatial neglect in grasp parameters, path deviation, or temporal kinematics.
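Hand path curvature of the kind analysed above is often summarised by the maximum perpendicular deviation of the path from the straight start-to-end line, divided by that line's length; the sketch below implements this common index under that assumption, since the paper's exact measure is not specified here.

```python
# Illustrative sketch: a common hand path curvature index.
import numpy as np

def curvature_index(path_xyz):
    """path_xyz: (n_samples, 3) hand positions from movement onset to offset."""
    start, end = path_xyz[0], path_xyz[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    # Perpendicular distance of every sample from the start-end line.
    rel = path_xyz - start
    proj = np.outer(rel @ chord / chord_len**2, chord)
    dev = np.linalg.norm(rel - proj, axis=1)
    return dev.max() / chord_len

# Example: a gently curved planar reach (positions in cm).
t = np.linspace(0.0, 1.0, 100)
path = np.stack([30 * t, 3 * np.sin(np.pi * t), np.zeros_like(t)], axis=1)
print(f"curvature index: {curvature_index(path):.3f}")   # ~0.10
```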

