The Crossmodal between the Visual and Tactile for Motion Perception

Author(s):  
Min Guo ◽  
Yinghua Yu ◽  
Jiajia Yang ◽  
Jinglong Wu

To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, which include five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously. Multisensory integration in the brain therefore plays an important role in perception and performance. This review focuses on crossmodal interaction between vision and touch. Many previous studies have indicated that visual information affects tactile perception and that, in turn, tactile motion perception also activates MT, the main visual motion-processing area. However, few studies have explored how crossmodal visual-tactile information is processed. Here, the authors highlight the processing mechanism of this crossmodal interaction in the brain and show that visual-tactile integration proceeds in two stages: combination and integration.

1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here was aimed at determining whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration, as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word or not (anagram task). These tasks allowed us to compare the level of brain activity obtained for correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only: the level of activity obtained with correct responses was lower than that obtained with incorrect responses. This suggests that good temporal performance could be the result of an efficient but economical information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether it is specific to the processing of sensory information.


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain as well. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are very well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown, and the resulting percepts are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
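
As a concrete illustration of the Bayesian integration described above, the following minimal Python sketch (an illustration of the general principle, not the authors' model) fuses a top-down Gaussian expectation with bottom-up Gaussian sensory evidence; the posterior is precision-weighted, so the more reliable source dominates the estimate.

```python
def fuse_gaussians(mu_prior, var_prior, mu_obs, var_obs):
    """Fuse a top-down Gaussian expectation with bottom-up Gaussian evidence.

    The posterior is again Gaussian; its precision is the sum of the two
    precisions, so the more reliable source dominates the estimate.
    """
    precision_prior = 1.0 / var_prior
    precision_obs = 1.0 / var_obs
    var_post = 1.0 / (precision_prior + precision_obs)
    mu_post = var_post * (precision_prior * mu_prior + precision_obs * mu_obs)
    return mu_post, var_post

# A confident expectation of an object at position 0.0, combined with
# noisier sensory evidence suggesting position 2.0 (made-up numbers).
mu, var = fuse_gaussians(mu_prior=0.0, var_prior=0.5, mu_obs=2.0, var_obs=2.0)
print(f"posterior mean = {mu:.2f}, posterior variance = {var:.2f}")
```

The same precision-weighting generalizes to the hierarchical, multi-cue case that graphical models handle by factorizing the interactions between sources.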


Author(s):  
Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.


Perception ◽  
2020 ◽  
Vol 49 (10) ◽  
pp. 1101-1114
Author(s):  
Laurens A. M. H. Kirkels ◽  
Reinder Dorman ◽  
Richard J. A. van Wezel

When an object is partially occluded, the different parts of the object have to be perceptually coupled. Cues that can be used for perceptual coupling are, for instance, depth ordering and visual motion information. In subjects with impaired stereovision, the brain is less able to use stereoscopic depth cues, making them more reliant on other cues. Therefore, our hypothesis is that stereovision-impaired subjects have stronger motion coupling than stereoscopic subjects. We compared perceptual coupling in 8 stereoscopic and 10 stereovision-impaired subjects, using random moving dot patterns that defined an ambiguous rotating cylinder and a coaxially presented nonambiguous half cylinder. Our results show that, whereas stereoscopic subjects exhibit significant coupling in the far plane, stereovision-impaired subjects show no such coupling and thus, under our conditions, no stronger motion coupling than stereoscopic subjects.


2020 ◽  
Vol 117 (13) ◽  
pp. 7510-7515 ◽  
Author(s):  
Tessel Blom ◽  
Daniel Feuerriegel ◽  
Philippa Johnson ◽  
Stefan Bode ◽  
Hinze Hogendoorn

The transmission of sensory information through the visual system takes time. As a result of these delays, the visual information available to the brain always lags behind the timing of events in the present moment. Compensating for these delays is crucial for functioning within dynamic environments, since interacting with a moving object (e.g., catching a ball) requires real-time localization of the object. One way the brain might achieve this is via prediction of anticipated events. Using time-resolved decoding of electroencephalographic (EEG) data, we demonstrate that the visual system represents the anticipated future position of a moving object, showing that predictive mechanisms activate the same neural representations as afferent sensory input. Importantly, this activation is evident before sensory input corresponding to the stimulus position is able to arrive. Finally, we demonstrate that, when predicted events do not eventuate, sensory information arrives too late to prevent the visual system from representing what was expected but never presented. Taken together, we demonstrate how the visual system can implement predictive mechanisms to preactivate sensory representations, and argue that this might allow it to compensate for its own temporal constraints, allowing us to interact with dynamic visual environments in real time.
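
The decoding approach can be sketched as follows; this is a generic time-resolved decoding recipe with simulated placeholder data, not the authors' actual analysis pipeline or stimulus set: a separate classifier is trained at each time point of the EEG epoch, and its cross-validated accuracy traces when information about stimulus position becomes available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: epochs x channels x time points, plus one position label per epoch.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 64, 50
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)          # e.g., two possible stimulus positions

# Time-resolved decoding: fit and score one classifier per time point.
accuracy = np.zeros(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# With real data, above-chance accuracy for a predicted position before sensory
# input from that position could arrive would indicate pre-activation of the
# corresponding representation; with this random placeholder data it stays at chance.
print(accuracy.round(2))
```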


2005 ◽  
Vol 94 (1) ◽  
pp. 119-135 ◽  
Author(s):  
E. S. Frechette ◽  
A. Sher ◽  
M. I. Grivich ◽  
D. Petrusca ◽  
A. M. Litke ◽  
...  

Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of ∼100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of ∼1%. The elementary motion signal was conveyed in ∼10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
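
The two readout schemes compared in the final simulation can be illustrated with a toy population of speed-tuned units (a hypothetical model of our own, not the authors' optimized estimator); it shows the mechanics of peak versus centroid readout, not the precision ranking reported in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
preferred = np.linspace(1.0, 30.0, 60)    # preferred speeds of the units (arbitrary units)
tuning_width = 4.0                         # width of the Gaussian speed tuning
peak_rate = 50.0                           # expected spike count at the preferred speed

def population_response(speed):
    """Poisson spike counts of the speed-tuned units for one stimulus presentation."""
    rates = peak_rate * np.exp(-0.5 * ((speed - preferred) / tuning_width) ** 2)
    return rng.poisson(rates)

def peak_readout(counts):
    """Winner-take-all: report the preferred speed of the most active unit."""
    return preferred[np.argmax(counts)]

def centroid_readout(counts):
    """Vector average: response-weighted mean of the preferred speeds."""
    return np.sum(counts * preferred) / np.sum(counts)

counts = population_response(12.0)
print("peak estimate:    ", peak_readout(counts))
print("centroid estimate:", round(centroid_readout(counts), 2))
```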


2020 ◽  
Vol 7 (8) ◽  
pp. 192056
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task, in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on hand endpoint with the visual information on cursor endpoint. We here manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.
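
One common way to formalize such mutually biased judgements is reliability-weighted fusion scaled by a coupling factor that stands in for the strength of the causality evidence. The sketch below is our own simplification with made-up numbers, not the authors' model or data.

```python
def coupled_estimates(hand, cursor, var_hand, var_cursor, coupling):
    """Partial integration of proprioceptive (hand) and visual (cursor) endpoints.

    coupling = 0 keeps the unimodal estimates; coupling = 1 gives full
    reliability-weighted fusion. Intermediate values yield the mutually
    biased judgements described in the abstract.
    """
    w_cursor = var_hand / (var_hand + var_cursor)   # weight of vision in the fused estimate
    fused = w_cursor * cursor + (1 - w_cursor) * hand
    hand_judged = hand + coupling * (fused - hand)
    cursor_judged = cursor + coupling * (fused - cursor)
    return hand_judged, cursor_judged

# Cursor endpoint rotated away from the hand endpoint (hypothetical units).
print(coupled_estimates(hand=0.0, cursor=10.0, var_hand=4.0, var_cursor=1.0, coupling=0.8))
```

On this reading, showing the cursor only during the backward movement would correspond to weaker causality evidence at judgement time, i.e. a smaller coupling value and smaller mutual biases.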


e-Neuroforum ◽  
2012 ◽  
Vol 18 (3) ◽  
Author(s):  
S. Treue ◽  
J.C. Martinez-Trujillo

Abstract In the visual system, receptive fields represent the spatial selectivity of neurons for a given set of visual inputs. Their invariance is thought to be caused by a hardwired input configuration, which ensures a stable 'labeled line' code for the spatial position of visual stimuli. On the other hand, changeable receptive fields can provide the visual system with flexibility for allocating processing resources in space. The allocation of spatial attention, often referred to as the spotlight of attention, is a behavioral equivalent of visual receptive fields. It dynamically modulates the spatial sensitivity to visual information as a function of the current attentional focus of the organism. Here we focus on the brain system for encoding visual motion information and review recent findings documenting interactions between spatial attention and receptive fields in the visual cortex of primates. Such interactions create a careful balance between the benefits of invariance and those derived from the attentional modulation of information processing according to the current behavioral goals.
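
A toy calculation (ours, not the authors') illustrates the kind of interaction the review addresses: multiplying a Gaussian receptive-field profile by a Gaussian attentional gain field centred near the attended location shifts the effective receptive field toward that location.

```python
import numpy as np

x = np.linspace(-20, 20, 401)                  # visual space (deg)

def gaussian(center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

rf = gaussian(center=0.0, width=5.0)                        # receptive-field profile
attention = 1.0 + 1.5 * gaussian(center=6.0, width=4.0)     # multiplicative attentional gain

effective_rf = rf * attention

# The peak of the effective receptive field moves toward the attended location.
print("unattended RF centre:", round(float(x[np.argmax(rf)]), 2))
print("attended RF centre:  ", round(float(x[np.argmax(effective_rf)]), 2))
```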


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Frédéric Crevecoeur ◽  
Konrad P Kording

Humans perform saccadic eye movements two to three times per second. When doing so, the nervous system strongly suppresses sensory feedback for extended periods of time in comparison to movement time. Why does the brain discard so much visual information? Here we suggest that perceptual suppression may arise from efficient sensorimotor computations, assuming that perception and control are fundamentally linked. More precisely, we show theoretically that a Bayesian estimator should reduce the weight of sensory information around the time of saccades, as a result of signal dependent noise and of sensorimotor delays. Such reduction parallels the behavioral suppression occurring prior to and during saccades, and the reduction in neural responses to visual stimuli observed across the visual hierarchy. We suggest that saccadic suppression originates from efficient sensorimotor processing, indicating that the brain shares neural resources for perception and control.
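
The core argument can be sketched with a scalar Kalman filter, a simplified stand-in for the Bayesian estimator the paper analyzes: if observation noise grows around the time of the saccade (signal-dependent noise driven by the large, fast motor command), the weight given to visual input drops, paralleling behavioural suppression. The time course and noise values below are hypothetical.

```python
import numpy as np

def kalman_gain(prior_var, obs_var):
    """Weight given to the sensory observation by a scalar Kalman filter."""
    return prior_var / (prior_var + obs_var)

prior_var = 1.0
baseline_obs_var = 0.5

# Hypothetical observation-noise time course: it peaks around saccade onset,
# standing in for signal-dependent noise from the large saccadic motor command.
time = np.arange(-100, 101, 10)                       # ms relative to saccade onset
obs_var = baseline_obs_var + 4.0 * np.exp(-0.5 * (time / 30.0) ** 2)

gains = kalman_gain(prior_var, obs_var)
for t, g in zip(time.tolist(), gains):
    print(f"{t:+4d} ms  weight on vision = {g:.2f}")
# The weight on visual input dips around the saccade, paralleling behavioural suppression.
```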


