Differences in adults' spatial scaling based on visual or haptic information

Author(s):  
Magdalena Szubielska ◽  
Marta Szewczyk ◽  
Wenke Möhring

Abstract The present study examined differences in adults' spatial-scaling abilities across three perceptual conditions: (1) visual, (2) haptic, and (3) visual and haptic. Participants were instructed to encode the position of a convex target presented in a simple map, without a time limit. Immediately after encoding the map, participants were presented with a referent space and asked to place a disc at the same location from memory. All spaces were designed as tactile graphics. Target positions varied along the horizontal dimension. The referent space was constant in size, while the sizes of the maps were systematically varied, resulting in three scaling-factor conditions: 1:4, 1:2, and 1:1. Sixty adults participated in the study (M = 21.18; SD = 1.05). One-third of them were blindfolded throughout the entire experiment (haptic condition); the second group was allowed to see the graphics (visual condition); the third group was instructed to both see and touch the graphics (bimodal condition). An analysis of absolute errors showed that participants produced larger errors in the haptic condition than in the visual and bimodal conditions. There was also a significant interaction between scaling factor and perceptual condition. In the visual and bimodal conditions, errors increased linearly with higher scaling factors (which may suggest that adults adopted mental-transformation strategies during spatial scaling), whereas in the haptic condition this relation was quadratic. The findings imply that adults' spatial-scaling performance declines when visual information is unavailable.
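
To make the linear-versus-quadratic trend analysis concrete, the sketch below computes orthogonal polynomial contrasts over the three scaling-factor levels. The error values are invented for illustration only (they are not the study's data), and the contrast weights assume the three levels are treated as equally spaced ordinal steps.

```python
import numpy as np

# Invented mean absolute placement errors per scaling-factor level
# (1:1, 1:2, 1:4) -- illustrative only, not the study's data.
errors_visual = np.array([8.0, 11.0, 14.0])
errors_haptic = np.array([14.0, 13.0, 22.0])

# Orthogonal polynomial contrast weights for three equally spaced levels.
linear = np.array([-1.0, 0.0, 1.0])
quadratic = np.array([1.0, -2.0, 1.0])

for label, errs in (("visual", errors_visual), ("haptic", errors_haptic)):
    print(f"{label}: linear contrast = {linear @ errs:+.1f}, "
          f"quadratic contrast = {quadratic @ errs:+.1f}")
# A dominant linear contrast means errors rise steadily with scaling
# factor; a sizeable quadratic contrast indicates a curved (U-shaped)
# pattern, as reported for the haptic condition.
```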

2016 ◽  
Vol 116 (4) ◽  
pp. 1795-1806 ◽  
Author(s):  
K. Sathian

Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding of how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.


2018 ◽  
Vol 29 (7) ◽  
pp. 3023-3033 ◽  
Author(s):  
Johan N Lundström ◽  
Christina Regenbogen ◽  
Kathrin Ohla ◽  
Janina Seubert

Abstract While matched crossmodal information is known to facilitate object recognition, it is unclear how our perceptual systems encode the more gradual congruency variations that occur in our natural environment. Combining visual objects with odor mixtures to create a gradual increase in semantic object overlap, we demonstrate high behavioral acuity to linear variations of olfactory–visual overlap in a healthy adult population. This effect was paralleled by a linear increase in cortical activation at the intersection of the occipital fusiform and lingual gyri, indicating linear encoding of crossmodal semantic overlap in visual object-recognition networks. Effective-connectivity analyses revealed that this integration of olfactory and visual information was achieved by direct information exchange between olfactory and visual areas. In addition, a parallel pathway through the superior frontal gyrus was increasingly recruited for the most ambiguous stimuli. These findings demonstrate that cortical structures involved in object formation are inherently crossmodal and encode sensory overlap in a linear manner. The results further demonstrate that prefrontal control of these processes is likely required for ambiguous stimulus combinations, a fact of high ecological relevance that may be inappropriately captured by common task designs juxtaposing congruency and incongruency.
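
The linear-encoding claim reduces to a regression of a region's response on the degree of semantic overlap. A minimal sketch of that test with ordinary least squares follows; the overlap levels and ROI responses are invented placeholders, not values from the paper.

```python
import numpy as np

# Invented ROI responses at five levels of olfactory-visual semantic
# overlap (0 = no overlap, 1 = full congruency) -- illustrative only.
overlap = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
roi_response = np.array([0.10, 0.22, 0.35, 0.41, 0.55])  # arbitrary units

# Linear-encoding test: fit response = b0 + b1 * overlap by least squares.
design = np.column_stack([np.ones_like(overlap), overlap])
(b0, b1), *_ = np.linalg.lstsq(design, roi_response, rcond=None)
print(f"intercept = {b0:.3f}, linear slope = {b1:.3f}")
```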


2017 ◽  
Vol 41 (S1) ◽  
pp. S558-S559
Author(s):  
G. Risso ◽  
R.M. Martoni ◽  
M.C. Cavallini ◽  
S. Erzegovesi ◽  
G. Baud-Bovy

Introduction: Several studies have recently investigated how Anorexia Nervosa patients (ANp) process multimodal information. Longo (2015) hypothesized that ANp might be less reliant on visual perception of bodies than healthy controls (HC). Case et al. showed that processing of multimodal information might be disrupted in ANp. However, the literature lacks studies that precisely measure and directly compare the contributions of each sensory input.
Objective: To investigate the integration of visual and haptic inputs in ANp compared with HC and to measure the weight of each input.
Method: We used a visuo-haptic integration task, with a setup adapted from Gori et al. (2008), to measure the weight of each sensory input when judging the size of a cube, according to Maximum Likelihood Estimation theory, which describes optimal multimodal integration behaviour (Ernst and Banks, 2002). Fifteen ANp and 16 HC were recruited.
Results: Regardless of group, we found considerable individual variability in the integration process; moreover, many participants did not integrate optimally. Correlation analyses suggested that ANp rely less on visual information than HC.
Conclusions: Although the setup had previously been validated with children, the observation that many HC did not integrate optimally is not in line with the results of previous studies, making comparison with the AN group difficult; the setup may not be suited to adults and needs to be improved. Nevertheless, our study shows for the first time how the contributions of two different sensory modalities might be measured and compared directly, which could provide valuable information for investigating the pathology in depth.
Disclosure of interest: The authors have not supplied their declaration of competing interest.
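
The Maximum Likelihood Estimation model underlying the task (Ernst and Banks, 2002) has a compact closed form: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. A minimal sketch with invented numbers:

```python
def mle_fusion(est_v, var_v, est_h, var_h):
    """Optimal visuo-haptic fusion (Ernst & Banks, 2002): each cue is
    weighted by its inverse variance, i.e. by its reliability."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    fused = w_v * est_v + (1.0 - w_v) * est_h
    fused_var = (var_v * var_h) / (var_v + var_h)  # below either cue's variance
    return fused, fused_var, w_v

# Invented example: a cube judged 52 mm by vision (variance 4 mm^2)
# and 47 mm by touch (variance 16 mm^2).
fused, fused_var, w_v = mle_fusion(52.0, 4.0, 47.0, 16.0)
print(f"fused = {fused:.1f} mm, variance = {fused_var:.1f} mm^2, "
      f"visual weight = {w_v:.2f}")
# Comparing a participant's empirical weights and variances against
# these predictions is how 'optimal integration' is assessed.
```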


2018 ◽  
Author(s):  
Janna M. Gottwald ◽  
Gustaf Gredebäck

This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition); in another, they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern from that observed in the different color condition, expressed by a significant interaction between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that, in the absence of reliable visual information about object weight, infants rely more heavily on non-visual information (tactile cues, sensorimotor memory) to estimate weight and pre-adjust their lifting actions in a prospective manner.
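
Movement units are commonly delimited by successive minima in the velocity profile, with the amplitude of the first unit taken as the displacement reached at the end of that unit. The sketch below implements one such operationalization on a toy lift trajectory; the segmentation criteria used in the study itself may differ.

```python
import numpy as np

def first_unit_amplitude(z, dt):
    """Lift height at the end of the first movement unit, taken here as
    the first local minimum of vertical velocity after its first peak
    (a common operationalization; details may differ from the study)."""
    v = np.gradient(z, dt)                  # vertical velocity
    peak = int(np.argmax(v))                # main velocity peak
    for i in range(peak + 1, len(v) - 1):   # first local minimum after it
        if v[i] <= v[i - 1] and v[i] <= v[i + 1]:
            return z[i]
    return z[-1]

# Toy trajectory (mm): one main unit to ~55 mm, then a small correction.
t = np.linspace(0.0, 1.0, 201)
main = 55.0 * np.sin(0.5 * np.pi * np.clip(t / 0.6, 0.0, 1.0)) ** 2
correction = 8.0 * np.sin(0.5 * np.pi * np.clip((t - 0.6) / 0.4, 0.0, 1.0)) ** 2
print(f"first-unit amplitude = {first_unit_amplitude(main + correction, t[1] - t[0]):.1f} mm")
```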


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0248084
Author(s):  
Vonne van Polanen

When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. In this study, it was investigated whether such corrections would influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated in two experiments. In experiment 1, vision was restored during reach and both visual and haptic information was available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared these in size. Results showed that participants adjusted their grasp to a change in object size from preview to grasped object in both experiments. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small. However, this bias was much smaller than the difference that could be discriminated and could not be considered meaningful. Therefore, it can be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.
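
Whether a bias is meaningful is judged against discrimination performance: the point of subjective equality (PSE) of a psychometric fit quantifies bias, while the just-noticeable difference (JND) quantifies what can be discriminated. A minimal sketch of such a fit, using invented response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented size-discrimination data: proportion of trials on which the
# comparison object was judged larger than the reference.
size_diff = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])  # mm
p_larger = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: the PSE is the 50% point, sigma the slope."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, size_diff, p_larger, p0=(0.0, 2.0))
jnd = sigma * norm.ppf(0.75)  # 75%-correct threshold convention
print(f"PSE (bias) = {pse:.2f} mm, JND = {jnd:.2f} mm")
# In the paper's sense, a bias is meaningful only if it is large
# relative to the JND.
```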


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Martina Pirruccio ◽  
Simona Monaco ◽  
Chiara Della Libera ◽  
Luigi Cattaneo

Abstract Haptic exploration produces mental object representations that can be memorized for subsequent object-directed behaviour. Storage of haptically acquired object images (HOIs) engages, besides canonical somatosensory areas, the early visual cortex (EVC). However, clear evidence for a causal contribution of EVC to HOI representation is still lacking. The use of visual information by the grasping system necessarily undergoes a frame-of-reference shift that integrates eye position. We hypothesized that if the motor system uses HOIs stored in a retinotopic code in the visual cortex, then their use is likely to depend at least in part on eye position. We measured the kinematics of four fingers of the right hand in 15 healthy participants as they grasped different unseen objects behind an opaque panel, which they had previously explored haptically. The participants never saw the objects and operated exclusively on the basis of haptic information. The position of the object was fixed, in front of the participant, but gaze varied from trial to trial between three possible positions: towards the unseen object or away from it, on either side. Results showed that the kinematics of the middle and little fingers during reaching for the unseen object changed significantly with gaze position. A control experiment showed that intransitive hand movements were not modulated by gaze direction. Manipulating eye position thus produces small but significant configuration errors (behavioural errors due to shifts in frame of reference), possibly related to an eye-centred frame of reference, despite the absence of visual information, indicating that the haptic and visual/oculomotor systems share resources during delayed haptic grasping.
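
The retinotopic-coding hypothesis predicts exactly this gaze dependence, because an eye-centred representation of a fixed object changes whenever gaze moves. A toy illustration of the frame-of-reference shift, with gaze reduced to a single fixation point and all coordinates invented:

```python
import numpy as np

def to_eye_centered(object_xy, fixation_xy):
    """Re-express an object's location relative to the fixation point.
    In a retinotopic (eye-centred) code, the same object at a fixed
    external position is represented differently whenever gaze moves."""
    return np.asarray(object_xy) - np.asarray(fixation_xy)

# Object fixed straight ahead at 40 cm; fixation on the object or
# 20 cm to either side (all numbers invented for illustration).
object_pos = [0.0, 40.0]  # (x: left-right, y: depth), in cm
for fix_x in (-20.0, 0.0, 20.0):
    print(f"fixation x = {fix_x:+.0f} cm -> eye-centred object position "
          f"{to_eye_centered(object_pos, [fix_x, 40.0])}")
# If grasp kinematics are planned from such an eye-centred image, gaze
# shifts should leave a trace in the movement, as observed here.
```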


2017 ◽  
Vol 50 (3) ◽  
pp. 273-297 ◽  
Author(s):  
Cornelia Gerdenitsch ◽  
Christian Korunka ◽  
Guido Hertel

Combinations of concentrated work and interactions are facilitated by office environments such as activity-based flexible offices (A-FOs). A-FOs are characterized by activity-based workspaces, an open-plan layout, and desk sharing. Although there is a growing enthusiasm for replacing cellular offices with A-FOs, the effects of such changes on office workers are still unclear. Within this three-wave longitudinal study, we investigated the changes (time lag of 1 and 8 months after the redesign) in perceived need–supply fit, distraction, interaction across teams, and workspace satisfaction during relocation from a cellular office to an A-FO. Moreover, as previous case studies indicated individual differences in the use of A-FOs, we considered participants’ perceived need–supply fit as a moderator indicating an appropriate use of A-FO supplies. We found a linear increase of perceived need–supply fit, a decrease in distraction, and a significant interaction effect where workspace satisfaction and interaction across teams increased more strongly for participants reporting a better perceived need–supply fit.
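
Analytically, a three-wave design with a person-level moderator maps onto a growth model with a time-by-fit interaction. Below is a minimal sketch on simulated data, assuming a random-intercept specification; the study's actual modelling choices may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated three-wave data (illustrative, not the study's): workspace
# satisfaction grows faster for workers with a higher need-supply fit.
rng = np.random.default_rng(0)
n_persons, n_waves = 60, 3
fit_score = rng.normal(0.0, 1.0, n_persons)
rows = []
for i in range(n_persons):
    for t in range(n_waves):
        y = (3.0 + 0.2 * t + 0.3 * fit_score[i]
             + 0.25 * t * fit_score[i] + rng.normal(0.0, 0.5))
        rows.append({"id": i, "time": t, "ns_fit": fit_score[i],
                     "satisfaction": y})
data = pd.DataFrame(rows)

# Random-intercept growth model with a time x need-supply-fit interaction.
model = smf.mixedlm("satisfaction ~ time * ns_fit", data, groups=data["id"])
print(model.fit().summary())
```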


2019 ◽  
Vol 6 (3) ◽  
pp. 181563 ◽  
Author(s):  
Wouter M. Bergmann Tiest ◽  
Astrid M. L. Kappers

In this paper, we assess the importance of visual and haptic information about materials for scaling grasping force when picking up an object. We asked 12 participants to pick up and lift objects with six different textures, either blindfolded or with visual information present. We measured the grip force and estimated the load force from the object's weight and vertical acceleration. The coefficient of friction of each material was measured separately. Even at an early phase of the grasp (before lift-off), the grip force correlated highly with the textures' static coefficient of friction. However, no strong influence of the presence of visual information was found. We conclude that the main mechanism for modulating grip force in the early phase of grasping is the real-time sensation of the texture's friction.
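
The friction dependence has a simple mechanical rationale: in a two-digit precision grip, each of the two contacts can transmit at most the coefficient of friction times the grip force as tangential, load-bearing force, so slip is avoided only when grip force is at least the load force divided by 2μ. A sketch with illustrative numbers:

```python
# Minimal sketch of the physics linking friction to required grip force
# in a two-digit precision grip: each of the two contact surfaces can
# supply at most mu * F_grip of tangential force, so slipping is avoided
# when F_grip >= F_load / (2 * mu). All numbers are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def min_grip_force(mass_kg, accel=0.0, mu=0.5):
    """Smallest grip force (N) preventing slip for a given static
    coefficient of friction mu."""
    load = mass_kg * (G + accel)  # weight plus inertial load at lift-off
    return load / (2.0 * mu)

for mu in (0.4, 0.8, 1.2):  # roughly: slippery to sticky textures
    print(f"mu = {mu:.1f}: min grip = "
          f"{min_grip_force(0.3, accel=1.0, mu=mu):.2f} N")
```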

