Cross-modal Emotional Attention: Emotional Voices Modulate Early Stages of Visual Processing

2009 ◽  
Vol 21 (9) ◽  
pp. 1670-1679 ◽  
Author(s):  
Tobias Brosch ◽  
Didier Grandjean ◽  
David Sander ◽  
Klaus R. Scherer

Emotional attention, the boosting of the processing of emotionally relevant stimuli, has so far mainly been investigated within a single sensory modality, for instance, by using emotional pictures to modulate visual attention. In real-life environments, however, humans typically encounter simultaneous input to several different senses, such as vision and audition. As multiple signals entering different channels might originate from a common, emotionally relevant source, the prioritization of emotional stimuli should be able to operate across modalities. In this study, we explored cross-modal emotional attention. Spatially localized utterances with emotional and neutral prosody served as cues for a visually presented target in a cross-modal dot-probe task. Participants were faster to respond to targets that appeared at the spatial location of emotional compared to neutral prosody. Event-related brain potentials revealed emotional modulation of early visual target processing at the level of the P1 component, with neural sources in the striate visual cortex being more active for targets that appeared at the spatial location of emotional compared to neutral prosody. These effects were not found with synthesized control sounds matched for mean fundamental frequency and amplitude envelope. These results show that emotional attention can operate across sensory modalities by boosting early sensory stages of processing, thus facilitating the multimodal assessment of emotionally relevant stimuli in the environment.
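The behavioural effect in a dot-probe design of this kind is usually summarized as the mean reaction-time difference between targets at the emotionally cued location and targets at the neutral location. Below is a minimal sketch of that contrast in C++; the Trial record, the sample values, and the 100 ms anticipation cutoff are illustrative assumptions, not the authors' analysis pipeline.

```cpp
#include <iostream>
#include <vector>

// One trial of a cross-modal dot-probe task (hypothetical record layout).
struct Trial {
    bool targetAtEmotionalSide; // target appeared where the emotional voice was cued
    double rtMs;                // response time in milliseconds
};

// Mean RT for trials in one cue condition; skips anticipations (< 100 ms).
double meanRt(const std::vector<Trial>& trials, bool emotionalSide) {
    double sum = 0.0;
    int n = 0;
    for (const Trial& t : trials) {
        if (t.targetAtEmotionalSide == emotionalSide && t.rtMs >= 100.0) {
            sum += t.rtMs;
            ++n;
        }
    }
    return n > 0 ? sum / n : 0.0;
}

int main() {
    // Invented sample data, for illustration only.
    std::vector<Trial> trials = {
        {true, 312.0}, {true, 298.5}, {false, 334.2}, {false, 341.0},
    };
    double emotional = meanRt(trials, true);
    double neutral   = meanRt(trials, false);
    // A positive difference indicates faster responses at the emotional location.
    std::cout << "Attention benefit: " << (neutral - emotional) << " ms\n";
}
```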

2021 ◽  
pp. 214-220
Author(s):  
Wei Lin Toh ◽  
Neil Thomas ◽  
Susan L. Rossell

There has been burgeoning interest in studying hallucinations in psychosis occurring across multiple sensory modalities. The current study aimed to characterize the auditory hallucination and delusion profiles in patients with auditory hallucinations only versus those with multisensory hallucinations. Participants with psychosis were partitioned into groups with voices only (AVH; n = 50) versus voices plus hallucinations in at least one other sensory modality (AVH+; n = 50), based on their responses on the Scale for the Assessment of Positive Symptoms (SAPS). Basic demographic and clinical information was collected, and the Questionnaire for Psychotic Experiences (QPE) was used to assess psychosis phenomenology. Relative to the AVH group, the AVH+ group showed significantly elevated compliance with perceived commands, auditory illusions, and sensed presences. The AVH+ group also reported greater delusion-related distress and functional impairment and was more likely to endorse delusions of reference and misidentification. This preliminary study uncovered important phenomenological differences in those with multisensory hallucinations. Future hallucination research extending beyond the auditory modality is needed.


2016 ◽  
Vol 14 (3) ◽  
pp. 21-31 ◽  
Author(s):  
O.B. Bogdashina

Synaesthesia is a perceptual phenomenon in which stimulation of one sensory modality triggers a perception in one or more other sensory modalities. Synaesthesia is not uniform and can manifest itself in different ways. Because the sensations and their interpretation vary over time, the phenomenon is hard to study. The article presents a classification of different forms of synaesthesia, including sensory and cognitive, and bimodal and multimodal synaesthesia. Some synaesthetes have several forms and variants of synaesthesia, while others have just one. Although synaesthesia is not specific to autism spectrum disorders, it is quite common among autistic individuals. The article deals with the most common forms of synaesthesia in autism and the advantages and problems of synaesthetic perception in children with autism spectrum disorders, and provides some advice to parents on how to recognise synaesthesia in their children.


Author(s):  
Drew McRacken ◽  
Maddie Dyson ◽  
Kevin Hu

Over the past few decades, a significant number of reports have suggested that reaction times differ across sensory modalities, for example, that visual reaction time is slower than tactile reaction time. A recent report by Holden and colleagues stated that (1) there has been a significant historic upward drift in reaction times reported in the literature, (2) this drift or degradation in reaction times could be accounted for by inaccuracies in the methods used, and (3) these inaccurate methods led to inaccurate reporting of differences between visual and tactile reaction time testing. The Holden study used robotics (i.e., no human factors) to test visual and tactile reaction time methods but did not assess how individuals would perform on different sensory modalities. This study used three sensory modalities (visual, auditory, and tactile) to test reaction time. By changing the way subjects were prompted and measuring the subsequent reaction time, the impact of sensory modality could be analyzed. Reaction time testing for the auditory and visual modalities was administered through an Arduino Uno microcontroller, while tactile reaction time testing was administered with the Brain Gauge. A range of stimulus intensities was delivered for each sensory modality. Average reaction time and reaction time variability were assessed, and a trend could be identified in the reaction time measurements for each sensory modality. Switching the sensory modality did not result in a difference in reaction time, which was attributed to the accurate circuitry used to deliver each test. Increasing stimulus intensity in each sensory modality resulted in faster reaction times. The results of this study confirm the findings of Holden and colleagues and contradict the many studies concluding that (1) reaction times are slower now than they were 50 years ago and (2) reaction times differ across sensory modalities (vision, hearing, touch). The implication is that accurate reaction time methods could have a significant impact on clinical outcomes, whereas many methods in current clinical use perpetuate poor measurement and waste the time and money of countless subjects or patients.
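As a rough illustration of how a microcontroller can time a visual reaction-time trial with millisecond precision, here is a minimal Arduino-style sketch; the pin assignments, the variable foreperiod, and the serial output format are assumptions made for illustration, not the apparatus described in the study.

```cpp
// Minimal visual reaction-time trial on an Arduino Uno (illustrative only).
// Assumed wiring: LED on pin 13, momentary button from pin 2 to ground.
const int LED_PIN = 13;
const int BUTTON_PIN = 2;

void setup() {
    pinMode(LED_PIN, OUTPUT);
    pinMode(BUTTON_PIN, INPUT_PULLUP); // button reads LOW when pressed
    Serial.begin(9600);
}

void loop() {
    digitalWrite(LED_PIN, LOW);
    delay(2000 + random(2000));   // variable foreperiod to prevent anticipation

    digitalWrite(LED_PIN, HIGH);  // stimulus onset
    unsigned long onset = millis();

    while (digitalRead(BUTTON_PIN) == HIGH) {
        // busy-wait for the button press
    }
    unsigned long rt = millis() - onset;

    Serial.print("Reaction time (ms): ");
    Serial.println(rt);
    delay(1000);                  // inter-trial interval
}
```

For an auditory condition, the LED could be swapped for a piezo buzzer driven with tone(); the timing logic stays identical, which is the substance of the argument that instrumentation, not modality, drives apparent reaction-time differences.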


Author(s):  
Marcella Montagnese ◽  
Pantelis Leptourgos ◽  
Charles Fernyhough ◽  
Flavie Waters ◽  
Frank Larøi ◽  
...  

Hallucinations can occur in different sensory modalities, both simultaneously and serially in time. They have typically been studied in clinical populations as phenomena occurring in a single sensory modality. Hallucinatory experiences occurring in multiple sensory systems, multimodal hallucinations (MMHs), are more prevalent than previously thought and may have greater adverse impact than unimodal ones, but they remain relatively under-researched. Here, we review and discuss: (1) the definition and categorization of both serial and simultaneous MMHs, (2) available assessment tools and how they can be improved, and (3) the explanatory power that current hallucination theories have for MMHs. Overall, we suggest that current models need to be updated or developed to account for MMHs and to inform research into the underlying processes of such hallucinatory phenomena. We make recommendations for future research and for clinical practice, including the need for service user involvement and for better assessment tools that can reliably measure MMHs and distinguish them from other related phenomena.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Lucilla Cardinali ◽  
Andrea Serino ◽  
Monica Gori

Cortical body size representations are distorted in adults, from low-level motor and sensory maps to higher-level multisensory and cognitive representations. Little is known about how such representations are built and how they evolve during infancy and childhood. Here we investigated how hand size is represented in typically developing children aged 6 to 10. Participants were asked to estimate their hand size using two different sensory modalities (visual or haptic). We found a distortion (underestimation) already present in the youngest children. Crucially, this distortion increases with age, regardless of the sensory modality used to access the representation. Finally, the underestimation is specific to the body, as no bias was found for object estimation. This study suggests that the brain does not keep up with natural body growth. However, since neither motor behavior nor perception was impaired, the distortion seems to be functional and/or compensated for, allowing proper interaction with the external environment.
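One conventional way to express an under- or overestimation of this kind is as a signed percentage bias; this is a standard index, not necessarily the exact measure used in the study:

```latex
\text{bias} = \frac{\hat{s} - s}{s} \times 100\%
```

where $\hat{s}$ is the estimated hand size and $s$ the measured size; negative values indicate underestimation.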


2012 ◽  
Vol 25 (0) ◽  
pp. 43 ◽  
Author(s):  
Brenda Malcolm ◽  
Karen Reilly ◽  
Jérémie Mattout ◽  
Roméo Salemme ◽  
Olivier Bertrand ◽  
...  

Our ability to accurately discriminate information from one sensory modality is often influenced by information from the other senses. Previous research indicates that tactile perception on the hand may be enhanced if participants look at a hand (compared to a neutral object) and if visual information about the origin of touch conveys temporal and/or spatial congruency. The current experiment further assessed the effects of non-informative vision on tactile perception. Participants made speeded discrimination responses (digit 2 or digit 5 of their right hand) to supra-threshold electro-cutaneous stimulation while viewing a video of a pointer, either static or moving (dynamic) towards the same or a different digit of a hand, or towards the corresponding spatial location on a non-corporeal object (engine). Thus, besides manipulating whether the visual contact was spatially congruent with the simultaneously felt touch, we also manipulated the nature of the recipient object (hand vs. engine). Behaviourally, the temporal cues provided by dynamic visual information about an upcoming touch decreased reaction times. In addition, tactile discrimination was enhanced more when participants viewed a spatially congruent contact than a spatially incongruent one. Most importantly, this visually driven improvement was greater in the view-hand condition than in the view-object condition. Spatially congruent, hand-specific visual events also produced the greatest amplitude in the P50 somatosensory evoked potential (SEP). We conclude that tactile perception is enhanced when vision provides non-predictive spatio-temporal cues and that these effects are specifically enhanced when viewing a hand.


1962 ◽  
Vol 203 (5) ◽  
pp. 799-802 ◽  
Author(s):  
S. T. Kitai ◽  
F. Morin

The dorsal spinocerebellar tract (DSCT) at C-1, C-2, and the lower medulla level was studied with microelectrodes in lightly anesthetized cats. All responses were obtained from stimulation of the ipsilateral side of the body. The sensory modalities activating the 242 fibers studied were touch (53%), pressure (31%), touch and pressure (2%), and joint movement (14%). Responses to touch were more numerous for the forelimb, while responses to pressure and to joint movement were more numerous for the hind limb. Regardless of modality, the trunk was significantly less represented in the DSCT than the limbs. Tactile and pressure peripheral fields were either restricted (e.g., a few hairs of a paw) or large (e.g., more than one segment of a limb). The ratio of restricted to large fields was 7 to 1 for touch and 5 to 1 for pressure. Fibers activated by joint movements adjusted their firing frequency to the degree of displacement and to the rate of movement. There was no evidence for a separate anatomical segregation of fibers responding to a single sensory modality.
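For concreteness, those percentages correspond to roughly the following fiber counts out of the 242 studied (rounded; the abstract reports only percentages, so the exact counts are inferred):

```latex
\begin{aligned}
\text{touch: } & 0.53 \times 242 \approx 128 \\
\text{pressure: } & 0.31 \times 242 \approx 75 \\
\text{touch and pressure: } & 0.02 \times 242 \approx 5 \\
\text{joint movement: } & 0.14 \times 242 \approx 34
\end{aligned}
```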


2020 ◽  
Vol 287 (1928) ◽  
pp. 20200944 ◽  
Author(s):  
Nicholas M. Michalak ◽  
Oliver Sng ◽  
Iris M. Wang ◽  
Joshua Ackerman

Cough, cough. Is that person sick, or do they just have a throat tickle? A growing body of research suggests pathogen threats shape key aspects of human sociality. However, less research has investigated specific processes involved in pathogen threat detection. Here, we examine whether perceivers can accurately detect pathogen threats using an understudied sensory modality—sound. Participants in four studies judged whether cough and sneeze sounds were produced by people infected with a communicable disease or not. We found no evidence that participants could accurately identify the origins of these sounds. Instead, the more disgusting they perceived a sound to be, the more likely they were to judge that it came from an infected person (regardless of whether it did). Thus, unlike research indicating perceivers can accurately diagnose infection using other sensory modalities (e.g. sight, smell), we find people overperceive pathogen threat in subjectively disgusting sounds.


1998 ◽  
Vol 86 (3_suppl) ◽  
pp. 1375-1391 ◽  
Author(s):  
Y. Laufer ◽  
S. Hocherman

The study investigated the contribution of kinesthetic and visual input to the performance of reaching movements and identified rules governing the transformation of information between these two sensory modalities. The study examined the accuracy with which 39 subjects reproduced the locations of five targets in a horizontal plane. The mode of target presentation and of feedback during reproduction of a target's location was either visual, kinesthetic, or a combination of both modalities. Thus, it was possible to examine performance when target presentation and reproduction involved feedback from the same sensory modality (intramodal) as well as from different sensory modalities (intermodal). Errors in target reproduction were calculated in terms of distance and of systematic biases in movement extent. The major findings are: (1) intramodal reproduction of a target's location on the basis of kinesthetic feedback is somewhat less accurate than intramodal reproduction on the basis of visual feedback; (2) intermodal performance is significantly less accurate than intramodal performance; (3) accuracy of performance does not depend on the direction of information transfer between sensory modalities; (4) intermodal performance is characterized by systematic biases in movement extent which depend on the direction of information transfer between modalities; (5) when presentation of the target's location is bimodal, reproduction is adversely affected by the conflicting input. The results suggest that the transformation rules used to combine input from various sensory modalities depend on environmental conditions and attention.
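The two error measures described, distance error and systematic bias in movement extent, map onto the standard notions of absolute and constant error in motor-control work; the formulas below are the conventional definitions, not necessarily the authors' exact computations:

```latex
\overline{|e|} = \frac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{p}_i - \mathbf{t}_i \rVert,
\qquad
CE = \frac{1}{N}\sum_{i=1}^{N} \left( d_i - d_i^{\ast} \right)
```

where $\mathbf{p}_i$ is the reproduced location, $\mathbf{t}_i$ the target location, $d_i$ the produced movement extent, and $d_i^{\ast}$ the target extent; a nonzero $CE$ reflects a systematic overshoot or undershoot.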

