multisensory processing
Recently Published Documents

TOTAL DOCUMENTS: 207 (five years: 74)
H-INDEX: 32 (five years: 5)

Insects, 2022, Vol 13 (1), pp. 81
Author(s): Kenna D. S. Lehmann, Fiona G. Shogren, Mariah Fallick, James Colton Watts, Daniel Schoenberg, et al.

Comparative cognition aims to understand the evolutionary history and current function of cognitive abilities in a variety of species with diverse natural histories. One characteristic often attributed to higher cognitive abilities is higher-order conceptual learning, such as the ability to learn concepts independent of stimuli—e.g., ‘same’ or ‘different’. Conceptual learning has been documented in honeybees and a number of vertebrates. Amblypygids, enigmatic nocturnal arachnids, are good candidates for higher-order learning because they are excellent associative learners and exceptional navigators, and they have large, highly folded mushroom bodies, brain regions known to be involved in learning and memory in insects. In Experiment 1, we investigate whether the amblypygid Phrynus marginimaculatus can learn the concept of same with a delayed odor matching task. In Experiment 2, we test whether Paraphrynus laevifrons can learn same/different with delayed tactile matching and nonmatching tasks before testing whether they can transfer this learning to a novel cross-modal odor stimulus. Our data provide no evidence of conceptual learning in amblypygids, but firmer conclusions will require alternative experimental designs to ensure that our negative results are not simply a consequence of the designs we employed.


2021
Author(s): Guangyao Qi, Wen Fang, Shenghao Li, Junru Li, Liping Wang

Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure and corresponding sensory representations during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by the monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and the sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs on a trial-by-trial basis. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the inferred causal structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism that answers long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
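The core computation this abstract refers to, estimating the probability of a common source from the spatial disparity of two cues, follows the standard Bayesian causal-inference model. A minimal sketch, assuming Gaussian sensory noise, a zero-centered Gaussian prior over source location, and illustrative noise parameters (`sigma_v`, `sigma_p`, `sigma_0`, and `prior_c` are placeholders, not the study's fitted values):

```python
import math

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_p, sigma_v=1.0, sigma_p=1.0, sigma_0=10.0, prior_c=0.5):
    """Posterior probability that a visual cue (x_v) and a proprioceptive
    cue (x_p) share a common source, given their spatial disparity."""
    var_v, var_p, var_0 = sigma_v ** 2, sigma_p ** 2, sigma_0 ** 2
    # Likelihood under C = 1: both cues arise from one source s ~ N(0, var_0),
    # with s integrated out analytically.
    denom = var_v * var_p + var_v * var_0 + var_p * var_0
    like_c1 = math.exp(-0.5 * ((x_v - x_p) ** 2 * var_0
                               + x_v ** 2 * var_p
                               + x_p ** 2 * var_v) / denom) \
              / (2 * math.pi * math.sqrt(denom))
    # Likelihood under C = 2: independent sources, each drawn from the prior.
    like_c2 = gauss(x_v, 0.0, var_v + var_0) * gauss(x_p, 0.0, var_p + var_0)
    # Bayes' rule over the binary cause variable.
    return prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)
```

Small disparities yield a high common-source probability and favor integration; large disparities push the posterior toward independent sources, which is the quantity the premotor population is described as tracking trial by trial.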


2021
Author(s): Zhichao Xia, Ting Yang, Xin Cui, Fumiko Hoeft, Hong Liu, et al.

Conquering grapheme-phoneme correspondence is necessary for developing fluent reading in alphabetic orthographies. In neuroimaging research, this ability is associated with brain activation differences between audiovisual congruent and incongruent conditions, especially in the left superior temporal cortex. Studies have also shown that this neural audiovisual integration effect is reduced in individuals with dyslexia. However, existing evidence is almost entirely restricted to alphabetic languages. Whether and how multisensory processing of print and sound is impaired in Chinese dyslexia remains underexplored. Of note, semantic information is deeply involved in Chinese character processing. In this study, we applied a functional magnetic resonance imaging audiovisual integration paradigm to investigate possible dysfunctions in processing character-sound pairs and pinyin-sound pairs in Chinese dyslexic children compared with typically developing readers. Unexpectedly, no region displayed a significant group difference in the audiovisual integration effect in either the character or the pinyin experiment. However, the results revealed atypical correlations between the neurofunctional features accompanying audiovisual integration and reading abilities in Chinese children with dyslexia. Specifically, while the audiovisual integration effect in the left inferior cortex during character-sound processing correlated with silent reading comprehension proficiency in both the dyslexia and control groups, it was associated with morphological awareness in the control group but with rapid naming in the dyslexia group. As for the processing of pinyin-sound associations, while stronger activation in the congruent than the incongruent condition in the left occipito-temporal cortex and bilateral superior temporal cortices was associated with better oral word reading in the control group, the opposite pattern was found in children with dyslexia.
On the one hand, this pattern suggests that Chinese dyslexic children have yet to develop the efficient grapho-semantic processing system that typically developing children have. On the other hand, it indicates dysfunctional recruitment of the regions that process pinyin-sound pairs in dyslexia, which may impede character learning.


2021
Author(s): Tiziana Vercillo, Edward G. Freedman, Joshua B. Ewen, Sophie Molholm, John J. Foxe

Multisensory objects that are frequently encountered in the natural environment lead to strong associations across a distributed sensory cortical network, with the end result being the experience of a unitary percept. Remarkably little is known, however, about the cortical processes subserving multisensory object formation and recognition. To advance our understanding in this important domain, the present study investigated the brain processes involved in learning and identification of novel visual-auditory objects. Specifically, we introduce and test a rudimentary three-stage model of multisensory object formation and processing. Thirty adults were remotely trained for a week to recognize a novel class of multisensory objects (3D shapes paired with complex sounds), and high-density event-related potentials (ERPs) were recorded to the corresponding unisensory (shapes or sounds only) and multisensory (shapes and sounds) stimuli, before and after intensive training. We identified three major stages of multisensory processing: 1) an early, multisensory, automatic effect (<100 ms) in occipital areas, related to the detection of simultaneous audiovisual signals and not related to multisensory learning; 2) an intermediate object-processing stage (100-200 ms) in occipital and parietal areas, sensitive to the learned multisensory associations; and 3) a late multisensory processing stage (>250 ms) that appears to be involved in both object recognition and possibly memory consolidation. Results from this study provide support for multiple stages of multisensory object learning and recognition that are subserved by an extended network of cortical areas.
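A common way to quantify multisensory interaction effects in ERP data like these is the additive-model contrast, comparing the response to the audiovisual stimulus against the sum of the unisensory responses. A minimal sketch (the function name and toy waveforms are ours, not the study's pipeline):

```python
def multisensory_effect(av, a, v):
    """Additive-model contrast for ERP waveforms: AV - (A + V) at each
    time point. Systematic deviations from zero suggest a multisensory
    interaction beyond the summed unisensory responses."""
    return [av_t - (a_t + v_t) for av_t, a_t, v_t in zip(av, a, v)]

# Toy amplitudes (microvolts) at two time points.
effect = multisensory_effect(av=[5.0, 3.0], a=[2.0, 1.0], v=[2.0, 1.0])
```

In practice the contrast is computed per channel and time point across subjects, and stages like the three described above are identified from when and where the effect diverges from zero.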


2021, Vol 15
Author(s): Seongmi Song, Andrew D. Nordin

Walking or running in real-world environments requires dynamic multisensory processing within the brain. Studying supraspinal neural pathways during human locomotion provides opportunities to better understand the complex neural circuitry that may become compromised due to aging, neurological disorder, or disease. Knowledge gained from studies examining human electrical brain dynamics during gait can also lay foundations for developing locomotor neurotechnologies for rehabilitation or human performance. Technical barriers have largely prohibited neuroimaging during gait, but the portability and precise temporal resolution of non-invasive electroencephalography (EEG) have expanded human neuromotor research into increasingly dynamic tasks. In this narrative mini-review, we provide (1) a brief introduction and overview of modern neuroimaging technologies, then identify considerations for (2) mobile EEG hardware and (3) data processing, including (4) technical challenges and possible solutions. Finally, we summarize (5) knowledge gained from human locomotor control studies that have used mobile EEG, and (6) discuss future directions for real-world neuroimaging research.
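One of the data-processing challenges mentioned here is that mobile EEG is contaminated by motion artifacts, and a first-pass screen is often a simple amplitude criterion on epoched data. A minimal sketch (the threshold value and function name are illustrative; real pipelines use richer criteria such as ICA-based rejection):

```python
def reject_artifacts(epochs, max_peak_to_peak=200.0):
    """Keep only EEG epochs whose peak-to-peak amplitude (microvolts)
    stays under a threshold -- a crude screen for the motion artifacts
    that plague mobile gait recordings."""
    clean = []
    for epoch in epochs:                      # each epoch: list of samples
        if max(epoch) - min(epoch) <= max_peak_to_peak:
            clean.append(epoch)
    return clean

# One plausible epoch and one with a large motion transient.
kept = reject_artifacts([[10.0, -10.0, 5.0], [300.0, -300.0, 0.0]])
```

Thresholds must be chosen per study; overly strict values discard genuine gait-related cortical activity along with the artifacts.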


2021, Vol 12
Author(s): Fabrice Damon, Nawel Mezrai, Logan Magnier, Arnaud Leleu, Karine Durand, et al.

A recent body of research has emerged regarding the interactions between olfaction and other sensory channels in processing social information. The current review examines the influence of body odors on face perception, a core component of human social cognition. First, we review studies reporting how body odors interact with the perception of invariant facial information (i.e., identity, sex, attractiveness, trustworthiness, and dominance). Although we mainly focus on the influence of axillary body odor, we also review findings about specific steroids present in axillary sweat (i.e., androstenone, androstenol, androstadienone, and estratetraenol). We next survey the literature showing body odor influences on the perception of transient face properties, notably discussing the role of body odors in facilitating or hindering the perception of emotional facial expressions, in relation to competing frameworks of emotion. Finally, we discuss the developmental origins of these olfaction-to-vision influences, as an emerging literature indicates that odor cues strongly influence face perception in infants. Body odors with high social relevance, such as the odor emanating from the mother, have a widespread influence on various aspects of face perception in infancy, including categorization of faces among other objects, face scanning behavior, and facial expression perception. We conclude by suggesting that the weight of olfaction may be especially strong in infancy, shaping social perception in slow-maturing senses such as vision, and that this early tutoring function of olfaction spans all developmental stages, conveying key information for social interactions that disambiguates a complex social environment until adulthood.


2021, Vol 39 (1), pp. 1-20
Author(s): Zachary Wallmark, Linh Nghiem, Lawrence E. Marks

Musical timbre is often described using terms from non-auditory senses, mainly vision and touch; but it is not clear whether crossmodality in timbre semantics reflects multisensory processing or simply linguistic convention. If multisensory processing is involved in timbre perception, the mechanism governing the interaction remains unknown. To investigate whether timbres commonly perceived as “bright-dark” facilitate or interfere with visual perception of brightness and darkness, we designed two speeded classification experiments. Participants were presented with consecutive images of slightly varying (or identical) brightness along with task-irrelevant auditory primes (“bright” or “dark” tones) and asked to quickly identify whether the second image was brighter or darker than the first. Incongruent prime-stimulus combinations produced significantly more response errors than congruent combinations, but choice reaction time was unaffected. Furthermore, responses in a deceptive identical-image condition indicated a subtle, semantically congruent response bias. Additionally, in Experiment 2 (which also incorporated a spatial texture task), measures of reaction time (RT) and accuracy were used to construct speed-accuracy tradeoff functions (SATFs) in order to critically compare two hypothesized mechanisms for timbre-based crossmodal interactions: sensory response change vs. a shift in response criterion. Results of the SATF analysis are largely consistent with the response criterion hypothesis, although without conclusively ruling out sensory change.
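An empirical SATF of the kind used here can be built by sorting trials by reaction time, splitting them into quantile bins, and plotting accuracy against mean RT per bin. A minimal sketch with synthetic data (the binning scheme and the simulated accuracy-over-time relationship are our assumptions, not the study's analysis):

```python
import random

def satf(trials, n_bins=5):
    """Empirical speed-accuracy tradeoff function: sort (rt, correct)
    trials by reaction time, split into quantile bins, and return
    (mean RT, proportion correct) per bin."""
    ordered = sorted(trials)
    size = len(ordered) // n_bins
    curve = []
    for i in range(n_bins):
        chunk = ordered[i * size:(i + 1) * size]
        mean_rt = sum(rt for rt, _ in chunk) / len(chunk)
        accuracy = sum(c for _, c in chunk) / len(chunk)
        curve.append((mean_rt, accuracy))
    return curve

# Synthetic observer: slower responses are more often correct.
random.seed(1)
trials = []
for _ in range(1000):
    rt = random.uniform(200, 800)                 # ms
    p_correct = 0.5 + 0.4 * (rt - 200) / 600      # accuracy rises with time
    trials.append((rt, 1 if random.random() < p_correct else 0))
curve = satf(trials)
```

Comparing how manipulations shift such curves is what distinguishes the two hypotheses: a sensory change alters the curve's asymptote or rate, whereas a criterion shift moves responses along the same curve.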


2021, Vol 11 (8), pp. 1111
Author(s): Brigitta Tele-Heri, Karoly Dobos, Szilvia Harsanyi, Judit Palinkas, Fanni Fenyosi, et al.

At birth, the vestibular system is fully mature, whilst higher-order sensory processing is yet to develop in the full-term neonate. The current paper lays out a theoretical framework to account for the role vestibular stimulation may have in driving multisensory and sensorimotor integration. Accordingly, vestibular stimulation, by activating the parieto-insular vestibular cortex and/or the posterior parietal cortex, may provide the cortical input for multisensory neurons in the superior colliculus that is needed for multisensory processing. Furthermore, we propose that motor development, by inducing changes of reference frames, may shape the receptive fields of multisensory neurons. This, by removing the spatial contingency between formerly contingent stimuli, may cause degradation of prior motor responses. Additionally, we offer a testable hypothesis explaining the beneficial effect of sensory integration therapies on attentional processes. Key concepts of a sensorimotor integration therapy (e.g., targeted sensorimotor therapy (TSMT)) are also put into a neurological context. TSMT utilizes specific tools and instruments. It is administered in successive 8-week-long treatment regimens, each gradually increasing vestibular and postural stimulation, so that sensorimotor integration is facilitated and muscle strength is increased. Empirically, TSMT is indicated for various diseases. The theoretical foundations of this sensorimotor therapy are discussed.


2021, pp. 1-12
Author(s): Anna Borgolte, Ahmad Bransi, Johanna Seifert, Sermin Toto, Gregor R. Szycik, et al.

Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
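Simultaneity-judgement data of this kind are typically summarized as the proportion of "simultaneous" responses at each SOA, from which a temporal binding window can be read off. A minimal sketch with a hypothetical deterministic observer (function names, the 100 ms cutoff, and the 0.5 criterion are illustrative choices, not the study's analysis):

```python
def simultaneity_curve(responses):
    """Aggregate (soa_ms, judged_simultaneous) responses into the
    proportion of 'simultaneous' answers per SOA. By convention here,
    negative SOA = sound first, positive SOA = vision first."""
    by_soa = {}
    for soa, judged_simultaneous in responses:
        by_soa.setdefault(soa, []).append(judged_simultaneous)
    return {soa: sum(v) / len(v) for soa, v in sorted(by_soa.items())}

def binding_window(curve, criterion=0.5):
    """Width of the temporal binding window: the SOA range over which
    the proportion of 'simultaneous' responses meets the criterion."""
    inside = [soa for soa, p in curve.items() if p >= criterion]
    return (max(inside) - min(inside)) if inside else 0

# Hypothetical observer who judges 'simultaneous' whenever |SOA| <= 100 ms.
responses = [(soa, 1 if abs(soa) <= 100 else 0)
             for soa in (-300, -200, -100, 0, 100, 200, 300)
             for _ in range(10)]
curve = simultaneity_curve(responses)
```

A narrower window on vision-leading SOAs, as reported here for synaesthetes, would show up as the criterion being crossed at smaller positive SOAs than in controls.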

