Synaesthesia

Author(s):  
Bruno and

Synaesthesia is a curious anomaly of multisensory perception. When presented with stimulation in one sensory channel (the inducer), a true synaesthete experiences, in addition to the percept usually associated with that channel, a second percept in another perceptual modality (the concurrent). Although synaesthesia is not pathological, true synaesthetes are relatively rare and their synaesthetic associations tend to be quite idiosyncratic. For this reason, studying synaesthesia is difficult, but exciting new experimental results are beginning to clarify what makes the brain of synaesthetes special and what mechanisms may produce the condition. Even more importantly, the related phenomenon of ‘natural’ crossmodal associations is experienced by everyone, providing another useful domain for studying multisensory interactions, with important implications for understanding our preferences for products in terms of spontaneously evoked associations, as well as for choosing appropriate names, labels, and packaging in marketing applications.

Author(s):  
Bruno and

Multisensory interactions in perception are pervasive and fundamental, as we have documented throughout this book. In this final chapter, we propose that contemporary work on multisensory processing represents a paradigm shift in perception science, calling for a radical reconsideration of empirical and theoretical questions within an entirely new perspective. In making our case, we emphasize that multisensory perception is the norm, not the exception, and we note that multisensory interactions can occur early in sensory processing. We reiterate the key notions that multisensory interactions come in different kinds and that principles of multisensory processing must be considered when tackling multisensory problems of daily life. We discuss the role of unisensory processing in a multisensory world, and we conclude by suggesting future directions for the multisensory field.


2021 ◽  
Vol 11 (7) ◽  
pp. 2987
Author(s):  
Takumi Okumura ◽  
Yuichi Kurita

Image therapy, which creates illusions with a mirror and a head-mounted display, assists movement relearning in stroke patients. Mirror therapy presents the movement of the unaffected limb in a mirror, creating the illusion of movement of the affected limb. Because the visual information of images alone cannot create a fully immersive experience, we propose a cross-modal strategy that supplements the image with additional sensory information. When stimuli received from multiple sensory organs interact, the brain fills in the missing senses, and the patient experiences a different sense of motion. Our system generates the sense of stair-climbing in a subject walking on a level floor. The force sensation is presented by a pneumatic gel muscle (PGM). Based on motion analysis of a human lower-limb model and the characteristics of the force exerted by the PGM, we set the appropriate air pressure of the PGM. The effectiveness of the proposed system was evaluated by surface electromyography and a questionnaire. The experimental results showed that synchronizing the force sensation with visual information matched the motor and perceived sensations at the muscle-activity level, enhancing the sense of stair-climbing, and that the visual condition significantly improved the intensity of the illusion during stair-climbing.


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
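
For readers who want a concrete sense of the Bayesian causal inference computation referenced above, the following Python sketch illustrates the standard formulation (Körding et al., 2007): it computes the posterior probability that auditory and visual signals share a common cause and forms a model-averaged auditory location estimate. This is not the authors' implementation; the function name, parameter values, and the choice of model averaging as the decision rule are illustrative assumptions.

```python
import numpy as np

def bci_auditory_estimate(x_a, x_v, sigma_a=4.0, sigma_v=1.0,
                          sigma_p=15.0, mu_p=0.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian causal
    inference. All noise and prior parameters are illustrative."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the audiovisual pair given one common cause (C = 1)
    d1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + (x_v - mu_p)**2 * va +
                             (x_a - mu_p)**2 * vv) / d1) / (2 * np.pi * np.sqrt(d1))

    # Likelihood given two independent causes (C = 2)
    like_c2 = (np.exp(-0.5 * ((x_v - mu_p)**2 / (vv + vp) +
                              (x_a - mu_p)**2 / (va + vp))) /
               (2 * np.pi * np.sqrt((vv + vp) * (va + vp))))

    # Posterior probability of a common cause
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

    # Reliability-weighted estimates under each causal structure
    s_fused = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    s_aud = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Model averaging: weight each estimate by its posterior probability
    return post_c1 * s_fused + (1 - post_c1) * s_aud

# Small spatial disparity: the auditory estimate is pulled toward the visual signal.
print(bci_auditory_estimate(x_a=6.0, x_v=2.0))
```

In this framing, precueing attention to vision corresponds to a smaller sigma_v (higher visual reliability), while the postcue determines whether the auditory or the visual estimate is read out.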


2019 ◽  
Author(s):  
Brandon M. Sexton ◽  
Yang Liu ◽  
Hannah J. Block

Abstract Hand position can be encoded by vision, via an image on the retina, and proprioception (position sense), via sensors in the joints and muscles. The brain is thought to weight and combine available sensory estimates to form an integrated multisensory estimate of hand position with which to guide movement. Force field adaptation, a form of cerebellum-dependent motor learning in which reaches are systematically adjusted to compensate for a somatosensory perturbation, is associated with both motor and proprioceptive changes. The cerebellum has connections with parietal regions thought to be involved in multisensory integration; however, it is unknown if force adaptation is associated with changes in multisensory perception. One possibility is that force adaptation affects all relevant sensory modalities similarly, such that the brain’s weighting of vision vs. proprioception is maintained. Alternatively, the somatosensory perturbation might be interpreted as proprioceptive unreliability, resulting in vision being up-weighted relative to proprioception. We assessed visuo-proprioceptive weighting with a perceptual estimation task before and after subjects performed straight-ahead reaches grasping a robotic manipulandum. Each subject performed one session with a clockwise or counter-clockwise velocity-dependent force field, and one session in a null field to control for perceptual changes not specific to force adaptation. Subjects increased their weight of vision vs. proprioception in the force field session relative to the null field session, regardless of force field direction, in the straight-ahead dimension (F(1,44) = 5.13, p = 0.029). This suggests that force field adaptation is associated with an increase in the brain’s weighting of vision vs. proprioception.
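
The visuo-proprioceptive weighting measured here is conventionally formalized as minimum-variance (reliability-weighted) integration. The short Python sketch below illustrates that computation generically; it is not the study's analysis code, and the function name and noise values are made-up assumptions.

```python
def integrate_hand_position(x_vision, x_proprio, sigma_vision, sigma_proprio):
    """Minimum-variance (reliability-weighted) combination of visual and
    proprioceptive estimates of hand position along one dimension."""
    w_vision = sigma_proprio**2 / (sigma_vision**2 + sigma_proprio**2)
    estimate = w_vision * x_vision + (1 - w_vision) * x_proprio
    variance = (sigma_vision**2 * sigma_proprio**2) / (sigma_vision**2 + sigma_proprio**2)
    return estimate, w_vision, variance

# If force adaptation is interpreted as proprioceptive unreliability,
# sigma_proprio grows and w_vision increases, i.e., vision is up-weighted.
print(integrate_hand_position(x_vision=0.0, x_proprio=2.0, sigma_vision=0.5, sigma_proprio=1.0))
print(integrate_hand_position(x_vision=0.0, x_proprio=2.0, sigma_vision=0.5, sigma_proprio=2.0))
```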


2021 ◽  
Vol 14 ◽  
Author(s):  
Tawan T. A. Carvalho ◽  
Antonio J. Fontenele ◽  
Mauricio Girardi-Schappo ◽  
Thaís Feliciano ◽  
Leandro A. A. Aguiar ◽  
...  

Recent experimental results on spike avalanches measured in the urethane-anesthetized rat cortex have revealed scaling relations that indicate a phase transition at a specific level of cortical firing rate variability. The scaling relations point to critical exponents whose values differ from those of a branching process, which has been the canonical model employed to understand brain criticality. This suggested that a different model, with a different phase transition, might be required to explain the data. Here we show that this is not necessarily the case. By employing two different models belonging to the same universality class as the branching process (mean-field directed percolation) and treating the simulation data exactly like experimental data, we reproduce most of the experimental results. We find that subsampling the model and adjusting the time bin used to define avalanches (as done with experimental data) are sufficient ingredients to change the apparent exponents of the critical point. Moreover, experimental data is only reproduced within a very narrow range in parameter space around the phase transition.
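
As a rough illustration of the analysis pipeline described above, and not of the authors' specific models or code, the Python sketch below simulates a simple driven branching process, records activity from a subsampled fraction of units, bins it in time, and extracts avalanche sizes. The branching ratio, drive, subsampling fraction, and bin width are arbitrary assumptions; in practice the apparent avalanche exponents shift as the last two are varied.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching(n_units=10_000, n_steps=100_000, m=1.0, h=1e-4):
    """Driven branching process: each active unit triggers on average m
    units at the next step; h is a small external drive per unit."""
    activity = np.zeros(n_steps, dtype=int)
    active = rng.poisson(h * n_units)
    for t in range(n_steps):
        activity[t] = active
        active = min(rng.poisson(m * active + h * n_units), n_units)  # cap at network size
    return activity

def avalanche_sizes(activity, fraction=0.1, bin_width=4):
    """Subsample a fraction of the spikes, bin counts in time, and define
    avalanches as runs of non-empty bins bounded by empty bins."""
    sampled = rng.binomial(activity, fraction)
    n_bins = len(sampled) // bin_width
    binned = sampled[:n_bins * bin_width].reshape(n_bins, bin_width).sum(axis=1)
    sizes, current = [], 0
    for count in binned:
        if count > 0:
            current += count
        elif current > 0:
            sizes.append(current)
            current = 0
    return np.array(sizes)

sizes = avalanche_sizes(simulate_branching(), fraction=0.1, bin_width=4)
# A power-law fit to this size distribution gives the apparent exponent;
# repeating with different `fraction` and `bin_width` shows how it shifts.
print(len(sizes), sizes.mean())
```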


2020 ◽  
Author(s):  
Morfoisse Theo ◽  
Herrera Altamira Gabriela ◽  
Angelini Leonardo ◽  
Clément Gilles ◽  
Beraneck Mathieu ◽  
...  

Abstract Human visual 3D perception is flawed by distortions, which are influenced by non-visual factors such as gravitational vestibular signals. Distinct hypotheses about the sensory processing stage at which gravity acts may explain this influence: 1) a direct effect on the visual system, 2) a shaping of the internal representation of space that is used to interpret sensory signals, or 3) a role in the ability to build multiple, modality-specific, internal depictions of the perceived object. To test these hypotheses, we performed experiments comparing visual versus haptic 3D perception, and the effects of microgravity on these two senses. The results show that visual and haptic perceptual anisotropies reside in body-centered, not gravity-centered, planes, suggesting an ego-centric encoding of the information for both sensory modalities. Although coplanar, the perceptual distortions of the two sensory modalities are in opposite directions: depth is visually underestimated but haptically overestimated with respect to height and width. Interestingly, microgravity appears to amplify the ‘terrestrial’ distortions of both senses. Through computational modeling, we show that these findings are parsimoniously predicted only by a gravity facilitation of cross-modal sensory reconstructions, corresponding to Hypothesis 3. This theory explains not only how gravity can shape egocentric perceptions, but also the unexpected opposite effects of gravity on visual and haptic 3D perception. Overall, these results suggest that the brain uses gravity as a stable reference cue to reconstruct concurrent, modality-specific internal representations of 3D objects, even when they are sensed through only one sensory channel.


2019 ◽  
Author(s):  
Thomas Hörberg ◽  
Maria Larsson ◽  
Ingrid Ekström ◽  
Camilla Sandöy ◽  
Jonas Olofsson

Visual stimuli often dominate non-visual stimuli during multisensory perception, and evidence suggests higher cognitive processes prioritize visual over non-visual stimuli during divided attention. Visual stimuli may therefore have privileged access to higher mental processing resources, relative to other senses, and should be disproportionately distracting when processing incongruent cross-sensory stimuli. We tested this assumption by comparing visual processing with olfaction, a “primitive” sensory channel that detects potentially hazardous chemicals by alerting attention. Behavioral and event-related brain potentials (ERPs) were assessed in a bimodal object categorization task with congruent or incongruent odor-picture pairings and a delayed auditory response target. For congruent pairings, accuracy was higher for visual compared to olfactory decisions. However, for incongruent pairings, reaction times (RTs) were faster for olfactory decisions, suggesting incongruent odors interfered more with visual decisions, thereby showing an “olfactory dominance effect”. Categorization of incongruent pairings engendered a late “slow wave” ERP effect. Importantly, this effect had a later amplitude peak and longer latency during visual decisions, likely reflecting additional categorization effort for visual stimuli. In sum, contrary to what might be inferred from theories of “visual dominance”, incongruent odors may in fact uniquely attract mental processing resources during perceptual incongruence.


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Jeremie Gaveau ◽  
Bastien Berret ◽  
Dora E Angelaki ◽  
Charalambos Papaxanthis

The brain has evolved an internal model of gravity to cope with life in the Earth’s gravitational environment. How this internal model benefits the implementation of skilled movement has remained unresolved. One prevailing theory has assumed that this internal model is used to compensate for gravity’s mechanical effects on the body, such as to maintain invariant motor trajectories. Alternatively, gravitational force could be used purposefully and efficiently for the planning and execution of voluntary movements, thereby resulting in direction-dependent kinematics. Here we experimentally interrogate these two hypotheses by measuring arm kinematics while varying movement direction under normal gravity and zero-gravity conditions. By comparing experimental results with model predictions, we show that the brain uses the internal model to implement control policies that take advantage of gravity to minimize movement effort.


2020 ◽  
Vol 33 (4-5) ◽  
pp. 383-416 ◽  
Author(s):  
Arianna Zuanazzi ◽  
Uta Noppeney

Abstract Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.

