Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism

2017, Vol 117 (4), pp. 1569-1580
Author(s): Nienke B. Debats, Marc O. Ernst, Herbert Heuer

Humans readily operate tools in which their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied.

NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account of this phenomenon, thereby showing that the process behind it is similar to the optimal integration of sensory information relating to single objects.
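The reliability-weighted scheme described in this abstract can be made concrete in a few lines. Below is a minimal sketch, assuming Gaussian unisensory estimates; the variable names (x_hand, var_hand, etc.) are illustrative and not the authors' notation.

```python
import numpy as np

def optimal_integration(x_hand, var_hand, x_cursor, var_cursor):
    """Reliability-weighted fusion of two position estimates.

    Weights are normalized inverse variances (reliabilities), and the
    integrated variance follows from summing the reliabilities.
    """
    r_hand, r_cursor = 1.0 / var_hand, 1.0 / var_cursor    # reliabilities
    w_hand = r_hand / (r_hand + r_cursor)                  # relative weight of the hand estimate
    w_cursor = 1.0 - w_hand
    x_integrated = w_hand * x_hand + w_cursor * x_cursor   # biased toward the more reliable cue
    var_integrated = 1.0 / (r_hand + r_cursor)             # never larger than either input variance
    return x_integrated, var_integrated

# Example: a reliable cursor estimate pulls the combined estimate toward the cursor.
print(optimal_integration(x_hand=0.0, var_hand=4.0, x_cursor=10.0, var_cursor=1.0))
# -> (8.0, 0.8)
```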

2020, Vol 7 (8), pp. 192056
Author(s): Nienke B. Debats, Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on the hand endpoint with the visual information on the cursor endpoint. Here we manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or the backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.
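The mutually biased judgements mentioned above are typically quantified as the proportion of the hand-cursor discrepancy by which the judged endpoint is pulled toward the other cue. The following is a hedged sketch of that computation; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def integration_strength(judged, true_own, true_other):
    """Proportion of the cue discrepancy by which judgements of one cue
    (e.g., hand endpoint) are biased toward the other cue (e.g., cursor).

    0 = no attraction, 1 = judgements fully drawn to the other cue.
    """
    judged, true_own, true_other = map(np.asarray, (judged, true_own, true_other))
    bias = judged - true_own                # shift of the judgement away from the judged cue
    discrepancy = true_other - true_own     # rotation-induced offset between cursor and hand
    return np.mean(bias / discrepancy)

# Example: hand judgements shifted roughly 40% of the way toward the cursor endpoints.
hand_true = np.array([0.0, 0.0, 0.0])
cursor_true = np.array([10.0, -10.0, 20.0])
hand_judged = np.array([4.0, -3.5, 8.5])
print(round(integration_strength(hand_judged, hand_true, cursor_true), 2))  # ~0.39
```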


2012, Vol 107 (11), pp. 3135-3143
Author(s): Verena N. Buchholz, Samanthi C. Goonetilleke, W. Pieter Medendorp, Brian D. Corneil

Multisensory integration enables rapid and accurate behavior. To orient in space, sensory information registered initially in different reference frames has to be integrated with the current postural information to produce an appropriate motor response. In some postures, multisensory integration requires sensory evidence to converge across the cerebral hemispheres, which would presumably weaken or hinder integration. Here, we examined orienting gaze shifts in humans to visual, tactile, or visuotactile stimuli when the hands were either in a default uncrossed posture or a crossed posture requiring convergence across hemispheres. Surprisingly, we observed the greatest benefits of multisensory integration in the crossed posture, as indexed by reaction time (RT) decreases. Moreover, such shortening of RTs to multisensory stimuli did not come at the cost of increased error propensity. To explain these results, we propose that two accepted principles of multisensory integration, the spatial principle and inverse effectiveness, dynamically interact to aid the rapid and accurate resolution of complex sensorimotor transformations. First, early mutual inhibition of initial visual and tactile responses registered in different hemispheres reduces error propensity. Second, inverse effectiveness in the integration of the weakened visual response with the remapped tactile representation expedites the generation of the correct motor response. Our results imply that the concept of inverse effectiveness, which is usually associated with external stimulus properties, might extend to internal spatial representations that are more complex given certain body postures.
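The multisensory benefit reported here is indexed by how much faster responses are to the combined visuotactile stimulus than to the faster of the two unisensory stimuli. A minimal sketch of that comparison follows, with made-up RT values purely for illustration.

```python
import numpy as np

def multisensory_rt_gain(rt_visual, rt_tactile, rt_visuotactile):
    """RT benefit of visuotactile stimulation relative to the faster
    unisensory condition, computed per posture or participant."""
    best_unisensory = min(np.median(rt_visual), np.median(rt_tactile))
    return best_unisensory - np.median(rt_visuotactile)   # positive = multisensory speed-up

# Example: a larger gain in the crossed posture than in the uncrossed posture.
uncrossed = multisensory_rt_gain([250, 260, 255], [270, 265, 275], [245, 250, 248])
crossed = multisensory_rt_gain([290, 300, 295], [310, 305, 315], [265, 270, 268])
print(uncrossed, crossed)  # 7.0 27.0
```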


2019
Author(s): David A. Tovar, Micah M. Murray, Mark T. Wallace

Abstract: Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet how multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, due to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
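A distance-to-bound analysis of the kind described above treats each exemplar's distance from a classifier's decision boundary as a neural measure and relates it to reaction times. The sketch below illustrates only the logic, using simulated data; the classifier choice, preprocessing, and variable names are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy EEG patterns: trials x channels, with a binary animacy label per trial.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)            # 0 = inanimate, 1 = animate
rt = rng.normal(450, 40, size=200)          # reaction times (ms) per trial

clf = LinearDiscriminantAnalysis().fit(X, y)
distance = np.abs(clf.decision_function(X))  # distance of each trial from the decision boundary

# Distance-to-bound logic: exemplars farther from the boundary should be categorized
# faster, i.e., a negative distance-RT correlation. With this random toy data no
# systematic relation is expected; real EEG patterns would carry the effect.
rho, p = spearmanr(distance, rt)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```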


2021, pp. 1-29
Author(s): Lisa Lorentz, Kaian Unwalla, David I. Shore

Abstract: Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations of the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing the conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing the information in the two reference frames into closer alignment. The imagery manipulation affected males and females differently, consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males, and females were also more strongly affected by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.


2015, Vol 114 (6), pp. 3211-3219
Author(s): J. J. Tramper, W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms could.
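The weighted combination of eye- and body-centered estimates assumed by such a model can be sketched briefly; the variances below are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

def combine_frames(x_eye, var_eye, x_body, var_body):
    """Reliability-weighted combination of eye- and body-centered
    estimates of the same remembered target location."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    x = w_eye * x_eye + (1.0 - w_eye) * x_body
    var = 1.0 / (1.0 / var_eye + 1.0 / var_body)   # smaller than either single-frame variance
    return x, var

# Example: after a sideways translation, updating noise inflates both single-frame
# variances, but the combined estimate stays more precise than either one alone.
print(combine_frames(x_eye=2.0, var_eye=3.0, x_body=1.0, var_body=6.0))  # ~(1.67, 2.0)
```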


2013, Vol 227 (4), pp. 497-507
Author(s): Ali Sengül, Giulio Rognini, Michiel van Elk, Jane Elizabeth Aspell, Hannes Bleuler, ...

2012, Vol 25 (0), pp. 122
Author(s): Michael Barnett-Cowan, Jody C. Culham, Jacqueline C. Snow

The orientation at which objects are most easily recognized, the perceptual upright (PU), is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol 'p' presented in various orientations was the letter 'p' or 'd' following active touch. The average of 'p-to-d' and 'd-to-p' transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by the body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.
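The equally weighted vectorial sum mentioned above can be sketched in a few lines: the haptic PU is predicted as the direction of a weighted sum of unit vectors pointing along the body axis and along gravity. The angles, weights, and coordinate convention here are illustrative assumptions rather than the authors' exact parameterization.

```python
import numpy as np

def predicted_pu(body_angle_deg, gravity_angle_deg, w_body=1.0, w_gravity=1.0):
    """Predict the perceptual upright as the direction of a weighted
    vector sum of the body and gravity cues (angles in degrees)."""
    angles = np.radians([body_angle_deg, gravity_angle_deg])
    weights = np.array([w_body, w_gravity])
    v = np.sum(weights[:, None] * np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)
    return np.degrees(np.arctan2(v[1], v[0]))

# Example: an observer lying on their left side (body axis at 90 deg from gravity).
# Equal weighting predicts a PU roughly halfway between the body and gravity directions.
print(predicted_pu(body_angle_deg=90.0, gravity_angle_deg=0.0))  # ~45.0
```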


2004, Vol 5 (3)
Author(s): Marie Avillac, Etienne Olivier, Sophie Denève, Suliann Ben Hamed, Jean-René Duhamel

2018
Author(s): Gareth Harris, Taihong Wu, Gaia Linfield, Myung-Kyu Choi, He Liu, ...

Abstract: In the natural environment, animals often encounter multiple sensory cues that are simultaneously present. The nervous system integrates the relevant sensory information to generate behavioral responses that have adaptive value. However, the signal transduction pathways and the molecules that regulate integrated behavioral responses to multiple sensory cues are not well defined. Here, we characterize a collective modulatory basis for a behavioral decision in C. elegans when the animal is presented with an attractive food source together with a repulsive odorant. We show that distributed neuronal components in the worm nervous system and several neuromodulators orchestrate the decision-making process, suggesting that various states and contexts may modulate the multisensory integration. Among these modulators, we identify a new function of a conserved TGF-β pathway that regulates the integrated decision by inhibiting the signaling from a set of central neurons. Interestingly, we find that a common set of modulators, including the TGF-β pathway, regulates the integrated response to the pairing of different foods and repellents. Together, our results provide insights into the modulatory signals regulating multisensory integration and reveal a potential mechanistic basis for the complex pathology underlying defects in multisensory processing shared by common neurological diseases.

Author Summary: The present study characterizes the modulation of a behavioral decision in C. elegans when the worm is presented with a food lawn that is paired with a repulsive smell. We show that multiple sensory neurons and interneurons play roles in making the decision. We also identify several modulatory molecules that are essential for the integrated decision when the animal faces a choice between cues of opposing valence. We further show that many of these factors, which often represent different states and contexts, are common to behavioral decisions that integrate sensory information from different types of foods and repellents. Overall, our results reveal a collective molecular and cellular basis for the integration of simultaneously present attractive and repulsive cues to fine-tune decision-making.


2021, Vol 15
Author(s): Patricia Cornelio, Carlos Velasco, Marianna Obrist

Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by a limited ability to deliver and control sensory stimuli, especially when going beyond audio–visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized as a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human–computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new experimental possibilities in the naturally occurring events of everyday life. Our review summarizes these multisensory technologies and discusses initial insights to build a bridge between the disciplines and advance the study of multisensory integration.

