sensory modalities
Recently Published Documents

TOTAL DOCUMENTS: 798 (five years: 266)
H-INDEX: 51 (five years: 7)

2022, Vol 14
Author(s): Miguel Skirzewski, Stéphane Molotchnikoff, Luis F. Hernandez, José Fernando Maya-Vetencourt

In the mammalian brain, information processing in sensory modalities and global mechanisms of multisensory integration facilitate perception. Emerging experimental evidence suggests that the contribution of multisensory integration to sensory perception is far more complex than previously expected. Here we review how associative areas such as the prefrontal cortex, which receive and integrate inputs from diverse sensory modalities, can affect information processing in unisensory systems via downstream signaling. We focus our attention on the influence of the medial prefrontal cortex on the processing of information in the visual system and on whether this phenomenon can be used clinically to treat higher-order visual dysfunctions. We propose that non-invasive and multisensory stimulation strategies, such as environmental enrichment and/or attention-related tasks, could be clinically relevant for treating cerebral visual impairment.


Author(s): Mengxin He, Lin-Xuan Xu, Chiang-shan R. Li, Zihan Liu, Jiaqi Hu, ...

Objective: Do real-time strategy (RTS) video gamers have better attentional control? To examine this question, we tested experienced versus inexperienced RTS video gamers on a multiple-object tracking (MOT) task and on dual MOT tasks with visual or auditory secondary tasks (dMOT). To examine any generalized attentional advantage, we also employed a street-crossing task in a virtual reality (VR) environment with a visual working memory task as the secondary task. Background: Like action video games, RTS video games require players to switch attention between multiple visual objects and views. However, whether the attentional control advantage is limited to particular sensory modalities or generalizes to real-life tasks remains unclear. Method: In Study 1, 25 RTS video game players (SVGP) and 25 non-video game players (NVGP) completed the MOT task and two dMOT tasks. In Study 2, a different sample of 25 SVGP and 25 NVGP completed a simulated street-crossing task with the visual dual task in a VR environment. Results: After controlling for the speed-accuracy trade-off, SVGP performed better than NVGP in the MOT task and the visual dMOT task, but not in the auditory dMOT task or the street-crossing task. Conclusion: RTS video gamers showed better attentional control in visual computer tasks, but not in the auditory tasks or the VR task. Attentional control benefits associated with RTS video game experience may be limited by sensory modality and may not translate to performance benefits in real-life tasks.
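The abstract does not say how the speed-accuracy trade-off was controlled; one common approach in this literature is the inverse efficiency score (IES = mean correct RT divided by proportion correct). The sketch below is a minimal illustration of that approach, not the authors' analysis; the column names (`subject`, `rt_ms`, `correct`) and file name are assumptions.

```python
import pandas as pd

def inverse_efficiency(trials: pd.DataFrame) -> pd.Series:
    """Inverse efficiency score per participant: mean correct RT / proportion correct.

    Higher IES means worse performance once speed and accuracy are combined.
    Expects one row per trial with hypothetical columns `subject`, `rt_ms`, `correct`.
    """
    # Mean reaction time on correct trials, per participant
    mean_correct_rt = (
        trials.loc[trials["correct"] == 1]
        .groupby("subject")["rt_ms"]
        .mean()
    )
    # Proportion correct per participant
    accuracy = trials.groupby("subject")["correct"].mean()
    return mean_correct_rt / accuracy

# Example usage (made-up data file):
# trials = pd.read_csv("mot_trials.csv")
# ies = inverse_efficiency(trials)
# SVGP and NVGP could then be compared on IES, e.g. with an independent-samples t-test.
```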


2022
Author(s): Sebastian Korb, Nace Mikus, Claudia Massaccesi, Jack Grey, Suvarnalata Xanthate Duggirala, ...

Appraisals can be influenced by cultural beliefs and stereotypes. In line with this, past research has shown that judgments about the emotional expression of a face are influenced by the face's sex and, vice versa, that judgments about a person's sex depend to some extent on the person's facial expression. For example, participants associate male faces with anger, and female faces with happiness or sadness. However, the strength and the bidirectionality of these effects remain debated. Moreover, the interplay of a stimulus's emotion and sex remains mostly unknown in the auditory domain. To investigate these questions, we created a novel stimulus set of 121 avatar faces and 121 human voices (available at https://bit.ly/2JkXrpy) with matched, fine-scale changes along the emotional (happy to angry) and sexual (male to female) dimensions. In a first experiment (N=76), we found clear evidence for the mutual influence of facial emotion and sex cues on ratings, and moreover for larger implicit (task-irrelevant) effects of stimulus emotion than of stimulus sex. These findings were replicated and extended in two preregistered studies – one laboratory categorisation study using the same face stimuli (N=108; https://osf.io/ve9an), and one online study with vocalisations (N=72; https://osf.io/vhc9g). Overall, the results show that the associations of maleness-anger and femaleness-happiness exist across sensory modalities, and suggest that emotions expressed in the face and voice cannot be entirely disregarded, even when attention is mainly focused on determining the stimulus's sex. We discuss the relevance of these findings for cognitive and neural models of face and voice processing.
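The count of 121 stimuli per modality is consistent with an 11 × 11 grid crossing fine-scale emotion and sex morph levels, although the abstract does not state the exact design. The snippet below is a speculative reconstruction of such a grid, with illustrative step values; nothing in it is taken from the paper.

```python
import numpy as np

# Hypothetical stimulus grid: 121 stimuli could arise from crossing
# 11 emotion levels (happy -> angry) with 11 sex levels (male -> female).
emotion_levels = np.linspace(0.0, 1.0, 11)  # 0 = fully happy, 1 = fully angry
sex_levels = np.linspace(0.0, 1.0, 11)      # 0 = fully male,  1 = fully female

grid = [(e, s) for e in emotion_levels for s in sex_levels]
assert len(grid) == 121  # matches the reported stimulus count

# Each (emotion, sex) pair would then parameterize one avatar face or one voice morph,
# giving matched, fine-scale changes along both dimensions.
```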


2021, pp. 1-15
Author(s): Yen-Han Chang, Mingxue Zhao, Yi-Chuan Chen, Pi-Chun Huang

Crossmodal correspondences refer to systematic mappings between specific feature domains in different sensory modalities. We investigated how vowels and lexical tones drive sound–shape (rounded or angular) and sound–size (large or small) mappings among native Mandarin Chinese speakers. We used three vowels (/i/, /u/, and /a/), each articulated in the four lexical tones. In the sound–shape matching, the tendency to match a vowel to the rounded shape decreased in the order /u/, /i/, /a/. Tone 2 was more likely to be matched to the rounded shape, whereas Tone 4 was more likely to be matched to the angular shape. In the sound–size matching, /a/ was matched to the larger object more often than /u/ and /i/, and Tone 2 and Tone 4 corresponded to the large–small contrast. The results demonstrate that both vowels and tones play prominent roles in crossmodal correspondences, and that sound–shape and sound–size mappings are heterogeneous phenomena.


Author(s): Malcolm A. MacIver, Barbara L. Finlay

The water-to-land transition in vertebrate evolution offers an unusual opportunity to consider computational affordances of a new ecology for the brain. All sensory modalities are changed, particularly a greatly enlarged visual sensorium owing to air versus water as a medium, and expanded by mobile eyes and neck. The multiplication of limbs, as evolved to exploit aspects of life on land, is a comparable computational challenge. As the total mass of living organisms on land is a hundredfold larger than the mass underwater, computational improvements promise great rewards. In water, the midbrain tectum coordinates approach/avoid decisions, contextualized by water flow and by the animal’s body state and learning. On land, the relative motions of sensory surfaces and effectors must be resolved, adding on computational architectures from the dorsal pallium, such as the parietal cortex. For the large-brained and long-living denizens of land, making the right decision when the wrong one means death may be the basis of planning, which allows animals to learn from hypothetical experience before enactment. Integration of value-weighted, memorized panoramas in basal ganglia/frontal cortex circuitry, with allocentric cognitive maps of the hippocampus and its associated cortices becomes a cognitive habit-to-plan transition as substantial as the change in ecology. This article is part of the theme issue ‘Systems neuroscience through the lens of evolutionary theory’.


2021, pp. 214-234
Author(s): Renee Timmers

This chapter explores the insights that research into cross-modal correspondences and multisensory integration offers to our understanding and investigation of tempo and timing in music performance. As tempo and timing are generated through action, actions and sensory modalities are coupled in performance and form a multimodal unit of intention. This coupled intention is likely to show characteristics of cross-modal correspondences linking movement and sound. Research into cross-modal correspondences offers testable predictions that have so far mainly found confirmation in controlled perceptual experiments; for example, fast tempo is predicted to be linked to smaller movement that is higher in space. Confirmation in the context of performance is complicated by interacting associations with intentions related to, for example, dynamics and energy, which can be addressed through appropriate experimental manipulation. This avenue of research highlights the close association between action and cross-modality, conceiving of action as a source of cross-modal correspondences as well as indicating the cross-modal basis of actions. For concepts of timing and tempo, action and cross-modality offer concrete and embodied modalities of expression.


Author(s): Aleena R. Garner, Georg B. Keller

Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli. However, it is not well understood how these interactions are mediated or at what level of the processing hierarchy they occur. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices in mice. We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons that are responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be communicated by long-range cortical connections and that, with learning, these cross-modal connections function to suppress responses to predictable input.


Author(s): Yongsuk Seo, Jung-Hyun Kim

Introduction: The method of limits (MLI) and the method of levels (MLE) are commonly employed for the quantitative assessment of cutaneous thermal sensitivity. Thermal sensation and thermal comfort are closely related, and thermal sensations evoked by peripheral thermoreceptors play an important role in the thermoregulatory responses that maintain normal body temperature. The purpose of this study was to compare the regional distribution of cutaneous warm and cold sensitivity between the MLI and the method of sensation magnitude (MSM). Method: Twenty healthy men completed the MLI and MSM to compare the regional distribution of cutaneous warm and cold sensitivity in a thermoneutral condition. The subjects rested on a bed in a supine position for 20 min. Next, the cutaneous thermal sensitivity of ten body sites was assessed by means of the MLI and MSM for both warm and cold stimuli. Results: The absolute mean heat flux in the MLI and the thermal sensation magnitude in the MSM showed significantly greater sensitivity to cold than to warm stimulation (p < 0.01), together with a similar pattern of regional differences across the ten body sites. Both sensory modalities showed acceptable reliability (SRD%: 6.29–8.66) and excellent reproducibility (ICC: 0.826–0.906; p < 0.01). However, the Z-score distribution in the MSM was much narrower than in the MLI, which may limit the test's sensitivity for detecting sensory disorders and/or comparing individuals. Conclusion: The present results show that both the MLI and MSM are effective means of evaluating regional cutaneous thermal sensitivity to innocuous warm and cold stimulation, with a strong degree of reliability and reproducibility.
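For readers unfamiliar with the reliability statistics quoted above (ICC and SRD%), the sketch below shows one conventional way of computing them for test-retest data. It is an illustration under stated assumptions, not the authors' analysis: it assumes the pingouin package for the intraclass correlation, a two-session long-format table, and hypothetical column names (`subject`, `session`, `value`).

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed dependency; provides intraclass_corr

def icc_and_srd_percent(df: pd.DataFrame) -> tuple[float, float]:
    """Test-retest reliability of a thermal-sensitivity measure.

    Expects long-format data with hypothetical columns `subject`,
    `session` (e.g. 'test' / 'retest'), and `value`.
    Returns (ICC(2,1), SRD%), the two statistics reported in the abstract.
    """
    icc_table = pg.intraclass_corr(
        data=df, targets="subject", raters="session", ratings="value"
    )
    icc = float(icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].iloc[0])

    # Smallest real difference as a percentage of the grand mean:
    # SEM = SD * sqrt(1 - ICC);  SRD = 1.96 * sqrt(2) * SEM;  SRD% = 100 * SRD / mean
    sem = df["value"].std(ddof=1) * np.sqrt(1.0 - icc)
    srd = 1.96 * np.sqrt(2.0) * sem
    srd_percent = 100.0 * srd / df["value"].mean()
    return icc, srd_percent

# Example usage (made-up data):
# df = pd.DataFrame({"subject": [...], "session": [...], "value": [...]})
# icc, srd_pct = icc_and_srd_percent(df)
```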


2021
Author(s): Anouk Keizer, Manja Engel

Anorexia nervosa (AN) is an eating disorder that mainly affects young women. One of the most striking symptoms of this disorder is the distorted experience of body size and shape. Patients are by definition underweight, but experience and perceive their body as bigger than it really is. This body representation disturbance has fascinated scientists for many decades, leading to a rich and diverse body of literature on the topic. Research shows that patients with AN not only think that their body is bigger than it is and visually perceive it as such, but also that other sensory modalities play an important role in oversized body experiences. Patients, for example, have an altered (enlarged) size perception of tactile stimuli and move their body as if it were larger than it actually is. Moreover, patients with AN appear to process and integrate multisensory information differently than healthy individuals, especially in relation to body size. This leads to the conclusion that the representation of body size in the brain is enlarged. This conclusion has important implications for the treatment of body representation disturbances in AN. Traditionally, treatment of AN is very cognitive in nature; it is possible, however, that changed cognitions about body size do not lead to actual changes in the metric representations of body size stored in the brain. Recently, a few studies have found a multisensory approach to be effective in treating body representation disturbance in AN.


2021
Author(s): Paige Badart

Failures of attention can be hazardous, especially in the workplace, where sustaining attention has become an increasingly important skill. This has created a need for methods to improve attention. One such method is the practice of meditation. Previous research has shown that meditation can produce beneficial changes in attention and in associated brain regions; in particular, sustained attention has been shown to improve significantly with meditation. While this effect has been shown to occur in the visual modality, there is less research on the effects of meditation on auditory sustained attention, and there is currently no research examining the effect of meditation on crossmodal sustained attention. This is relevant not only because visual and auditory stimuli are perceived simultaneously in everyday life, but also because it may inform the debate as to whether sustained attention is managed by modality-specific systems or by a single overarching supramodal system. The current research examined the effects of meditation on visual, auditory, and audiovisual crossmodal sustained attention using variants of the Sustained Attention to Response Task. In these tasks, subjects were presented with visual, auditory, or combined visual and auditory stimuli and were required to respond to infrequent targets over an extended period of time. On all tasks, meditators differed significantly in accuracy from non-meditating control groups: they made fewer errors without sacrificing response speed, with the exception of the auditory-target crossmodal task. This demonstrates the benefit of meditation for improving sustained attention across sensory modalities and lends support to the argument that sustained attention is governed by a supramodal system rather than by modality-specific systems.

