Grip aperture corrections in reach-to-grasp movements do not reliably alter size perception

PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0248084
Author(s):  
Vonne van Polanen

When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. This study investigated whether such corrections would influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated in two experiments. In experiment 1, vision was restored during the reach and both visual and haptic information was available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared their sizes. Results showed that participants adjusted their grasp to a change in object size from preview to grasped object in both experiments. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small. However, this bias was much smaller than the difference that could be discriminated and could not be considered meaningful. Therefore, it can be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.
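
The bias and discrimination measures referred to above are conventionally obtained by fitting a psychometric function to the size-comparison judgements. Below is a minimal sketch of that analysis; the data and parameter values are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "second object judged larger" responses as a function of
# the size difference (mm) between the two objects; illustrative data only.
size_diff = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)
p_larger = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.90, 0.97])

def psychometric(x, pse, jnd):
    # Cumulative Gaussian: the PSE captures perceptual bias, the JND
    # (sigma) captures discrimination performance.
    return norm.cdf(x, loc=pse, scale=jnd)

(pse, jnd), _ = curve_fit(psychometric, size_diff, p_larger, p0=[0.0, 2.0])
print(f"bias (PSE): {pse:.2f} mm, threshold (JND): {jnd:.2f} mm")
# A bias much smaller than the JND, as reported above, is not meaningful.
```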


2013 ◽  
Vol 22 (3) ◽  
pp. 255-270 ◽  
Author(s):  
Yuki Ban ◽  
Takuji Narumi ◽  
Tomohiro Tanikawa ◽  
Michitaka Hirose

In this study, we aim to construct a perception-based shape display system that provides users with the sensation of touching virtual objects of varying shapes using only a simple mechanism. Previously, we showed that the perceived shape of curved surfaces or edge angles can be modified by displacing the visual representation of the user's hand. However, this method cannot emulate multi-finger touch because of spatial inconsistencies. To solve this problem, we focus on modifying the identification of shapes grasped with two fingers by deforming the visual representation of the user's hand. We devised a video see-through system that visually changes the perceived shape of an object the user is touching: the visual representation of the user's hand is deformed as if the user were handling the displayed object, while the user is actually handling an object of a different shape. Using this system, we conducted two experiments to investigate the effects of visuo-haptic interaction and evaluate its effectiveness. The first examined the modification of size perception when the fingers touched the object statically without stroking it; the second examined the modification of shape perception when the fingers dynamically stroked the object's surface. The results show that the perceived size of objects handled with the thumb and other finger(s) could be modified as long as the difference between the physical and visual stimuli was in the −40% to 35% range. In addition, the algorithm can modify shape perception when users stroke the object with multiple fingers.
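
As a back-of-the-envelope aid, the reported −40% to 35% range translates into a band of visual sizes that can plausibly be fused with a given physical size. The sketch below assumes the percentage expresses the visual size relative to the physical size, which the abstract leaves ambiguous.

```python
def fusable_visual_range(physical_mm, lower=-0.40, upper=0.35):
    # Band of visually displayed sizes that can still be fused with a
    # physically touched size, using the -40%..+35% limits reported above.
    # Treating the percentage as visual-relative-to-physical is an assumption.
    return physical_mm * (1 + lower), physical_mm * (1 + upper)

lo, hi = fusable_visual_range(50.0)  # a hypothetical 50 mm object
print(f"visual size may range from {lo:.1f} mm to {hi:.1f} mm")  # 30.0 .. 67.5
```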


Motor Control ◽  
1999 ◽  
Vol 3 (3) ◽  
pp. 237-271 ◽  
Author(s):  
Jeroen B.J. Smeets ◽  
Eli Brenner

Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object's position and orientation guides the hand to the object, while information about the object's shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description, which consists of determining suitable positions on the object—on the basis of its shape, surface roughness, and so on—and then moving one's thumb and fingers more or less independently to these positions. We modeled this description using a minimum-jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy will influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behavior.
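
The minimum-jerk description used here has a closed form: each digit follows a quintic polynomial with zero velocity at both ends, plus an approach-parameter term that directs the final deceleration along the surface normal, so the digit arrives orthogonally to the surface. The sketch below simulates thumb and finger paths for a hypothetical disk; the geometry and approach-parameter value are illustrative, not the paper's fitted numbers.

```python
import numpy as np

def min_jerk_digit(p0, pf, normal, ap, n=200):
    # Minimum-jerk path of one digit: a quintic with zero velocity at both
    # ends plus an approach-parameter term of size `ap` that directs the
    # final deceleration along the outward surface normal, so the digit
    # approaches its contact point orthogonally to the surface.
    tau = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # standard min-jerk shape
    c = (tau**3 - 2 * tau**4 + tau**5) / 2.0     # approach-parameter term
    return p0 + (pf - p0) * s + ap * normal * c

# Hypothetical geometry: grasp a 40 mm disk centred 300 mm away, thumb and
# finger starting together at the origin (units in mm).
r, centre = 20.0, np.array([300.0, 0.0])
start = np.array([0.0, 0.0])
thumb = min_jerk_digit(start, centre + [0, -r], np.array([0.0, -1.0]), ap=1500.0)
finger = min_jerk_digit(start, centre + [0, r], np.array([0.0, 1.0]), ap=1500.0)

# The maximum grip aperture exceeds the object size and peaks late in the
# movement, as the model predicts.
aperture = np.linalg.norm(finger - thumb, axis=1)
i = aperture.argmax()
print(f"max aperture {aperture[i]:.1f} mm at {i / len(aperture):.0%} of movement time")
```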


2019 ◽  
Author(s):  
Jeroen B.J. Smeets ◽  
Erik Kleijn ◽  
Marlijn van der Meijden ◽  
Eli Brenner

There is an extensive literature debating whether perceived size is used to guide grasping. A possible reason for not using judged size is that using judged positions might lead to more precise movements. As this argument does not hold for small objects, and all studies showing an effect of the Ebbinghaus illusion on grasping used small objects, we hypothesized that size information is used for small objects but not for large ones. Using a modified diagonal illusion, we obtained an effect of about 10% on perceptual judgements, without an effect on grasping, irrespective of object size. We therefore reject our precision hypothesis. We discuss the results in the framework of grasping as moving digits to positions on an object. We conclude that the reported disagreement on the effect of illusions arises because the Ebbinghaus illusion not only affects size but, unlike most size illusions, also affects perceived positions.


2020 ◽  
Author(s):  
Han Zhang ◽  
Nicola C Anderson ◽  
Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but they have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time, and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations, and it led fixations to concentrate on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations, and eye blinking increased numerically during both types of MW, though the difference was significant only for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
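
The central RQA measure here, the recurrence rate, is the percentage of fixation pairs that land within a fixed spatial radius of each other. A minimal sketch following Anderson et al. (2013); the 64-pixel radius and the random scanpath are illustrative choices.

```python
import numpy as np

def recurrence_rate(fixations, radius=64.0):
    # Percentage of fixation pairs falling within `radius` pixels of each
    # other (the global recurrence measure of Anderson et al., 2013).
    # `fixations` is an (n, 2) array of x, y coordinates.
    n = len(fixations)
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    pairs = np.triu(d < radius, k=1).sum()  # count each pair once
    return 100.0 * 2 * pairs / (n * (n - 1))

# Illustrative scanpath; repeated returns to one region raise the rate.
rng = np.random.default_rng(0)
scanpath = rng.uniform(0, 800, size=(40, 2))
print(f"recurrence rate: {recurrence_rate(scanpath):.1f}%")
```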


2019 ◽  
Author(s):  
Evan Cesanek ◽  
Fulvio Domini

To perform accurate movements, the sensorimotor system must maintain a delicate calibration of the mapping between visual inputs and motor outputs. Previous work has focused on the mapping between visual inputs and individual locations in egocentric space, but little attention has been paid to the mappings that support interactions with 3D objects. In this study, we investigated sensorimotor adaptation of grasping movements targeting the depth dimension of 3D paraboloid objects. Object depth was specified by separately manipulating binocular disparity (stereo) and texture gradients. At the end of each movement, the fingers closed down on a physical object consistent with one of the two cues, depending on the condition (haptic-for-texture or haptic-for-stereo). Unlike traditional adaptation paradigms, where relevant spatial properties are determined by a single dimension of visual information, this method enabled us to investigate whether adaptation processes can selectively adjust the influence of different sources of visual information depending on their relationship to physical depth. In two experiments, we found short-term changes in grasp performance consistent with a process of cue-selective adaptation: the slope of the grip aperture with respect to a reliable cue (correlated with physical reality) increased, whereas the slope with respect to the unreliable cue (uncorrelated with physical reality) decreased. In contrast, slope changes did not occur during exposure to a set of stimuli where both cues remained correlated with physical reality, but one was rendered with a constant bias of 10 mm; the grip aperture simply became uniformly larger or smaller, as in standard adaptation paradigms. Overall, these experiments support a model of cue-selective adaptation driven by correlations between error signals and input values (i.e., supervised learning), rather than mismatched haptic and visual signals.
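
The cue-specific slopes described above can be estimated by regressing grip aperture on both cue values across trials. The sketch below simulates such a regression; the data and coefficients are made up for illustration, and this is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative trials: depth specified by stereo and by texture (mm), and
# the resulting maximum grip aperture (mm). Values are made up.
stereo = rng.uniform(20, 60, 200)
texture = rng.uniform(20, 60, 200)
mga = 20 + 0.6 * stereo + 0.2 * texture + rng.normal(0, 2, 200)

# Regress grip aperture on both cues; the fitted coefficients are the
# cue-specific slopes whose increase or decrease indexes cue-selective
# adaptation in the paper's sense.
X = np.column_stack([np.ones_like(stereo), stereo, texture])
intercept, b_stereo, b_texture = np.linalg.lstsq(X, mga, rcond=None)[0]
print(f"stereo slope {b_stereo:.2f}, texture slope {b_texture:.2f}")
```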


2018 ◽  
Vol 119 (5) ◽  
pp. 1981-1992 ◽  
Author(s):  
Laura Mikula ◽  
Valérie Gaveau ◽  
Laure Pisella ◽  
Aarlenne Z. Khan ◽  
Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand’s specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
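
The Bayes-optimal rule referred to here weights each estimate by its inverse variance (reliability). A worked sketch follows; note that the study's point is that the measured weights did not track these per-hand reliabilities.

```python
def reliability_weights(sigma_vision, sigma_prop):
    # Inverse-variance (Bayes-optimal) weights for combining a visual and
    # a proprioceptive estimate of initial hand position.
    r_v, r_p = 1 / sigma_vision**2, 1 / sigma_prop**2
    w_v = r_v / (r_v + r_p)
    return w_v, 1 - w_v

# Example: vision twice as reliable (half the SD) as proprioception.
w_v, w_p = reliability_weights(sigma_vision=0.5, sigma_prop=1.0)
print(f"visual weight {w_v:.2f}, proprioceptive weight {w_p:.2f}")  # 0.80 / 0.20
# The combined position estimate is then w_v * x_vision + w_p * x_prop.
```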


Author(s):  
Pengxin Ding ◽  
Huan Zhou ◽  
Jinxia Shang ◽  
Xiang Zou ◽  
Minghui Wang

This paper proposes a novel, flexible method that generates anchors of various shapes for object detection frameworks. Unlike previous anchors, which are generated in a pre-defined manner, our anchors are generated dynamically by an anchor generator. The anchor generator is not fixed but is learned from hand-designed anchors, which allows it to work well across a variety of scenes. At inference time, the weights of the anchor generator are estimated by a simple network whose input is a set of hand-designed anchors. In addition, to reduce the imbalance between the numbers of positive and negative samples, we use an adaptive IoU threshold that depends on object size. We conducted extensive experiments on the COCO dataset; the results show that replacing the anchor-generation scheme of previous object detectors (such as SSD, Mask R-CNN, and RetinaNet) with our method substantially improves detection performance, demonstrating its effectiveness.
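
The abstract does not give the exact form of the size-dependent IoU threshold, so the sketch below is only a plausible illustration of the idea: smaller objects, which rarely overlap any anchor well, receive a lower positive-sample threshold than large ones. The interpolation bounds and threshold values are assumptions, not the paper's.

```python
import numpy as np

def adaptive_iou_threshold(box_area,
                           t_small=0.4, t_large=0.6,
                           a_small=32**2, a_large=96**2):
    # Size-dependent IoU threshold for positive-sample assignment:
    # linear interpolation on log-area between a "small" and a "large"
    # object size. All four constants are illustrative assumptions.
    return float(np.interp(np.log(box_area),
                           [np.log(a_small), np.log(a_large)],
                           [t_small, t_large]))

for area in (16**2, 32**2, 64**2, 128**2):
    print(f"object area {area:6d} px^2 -> IoU threshold "
          f"{adaptive_iou_threshold(area):.2f}")
```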


2017 ◽  
Vol 29 (6) ◽  
pp. 526-536 ◽  
Author(s):  
Stéphane Frayon ◽  
Yolande Cavaloc ◽  
Guillaume Wattelez ◽  
Sophie Cherrier ◽  
Yannick Lerrant ◽  
...  

We examined the accuracy of self-reported weight and height in New Caledonian school-going adolescents. Self-reported and measured height and weight data were collected from 665 adolescents in New Caledonia and then compared. Multivariable logistic regressions identified the factors associated with inaccurate self-reports, and the sensitivity and specificity of self-reported body mass index values for detecting overweight or obesity were evaluated. Self-reported weight was significantly lower than measured weight (boys, −3.56 kg; girls, −3.13 kg); similar results were found for height (boys, −2.51 cm; girls, −3.23 cm). Multiple regression analyses indicated that the difference between self-reported and measured height was significantly associated with ethnicity and pubertal status, whereas inaccurate self-reported weight was associated with socioeconomic status, place of residence, body-size perception, and weight status. The screening accuracy of self-reported body mass index was low, particularly in the Melanesian subgroup. These findings should be considered when overweight is estimated in the Melanesian adolescent population at the individual scale.
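
Sensitivity and specificity of the kind evaluated here come from a 2x2 table of self-reported versus measured weight status. A minimal sketch with hypothetical counts (not the study's data):

```python
def screening_accuracy(tp, fp, fn, tn):
    # Sensitivity: measured-overweight adolescents whose self-reported BMI
    # also classifies them as overweight. Specificity: the analogue for
    # non-overweight adolescents.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts for illustration only (not the study's data).
sens, spec = screening_accuracy(tp=60, fp=15, fn=40, tn=550)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")  # 0.60, 0.97
```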

