inverse effectiveness
Recently Published Documents

TOTAL DOCUMENTS: 26 (FIVE YEARS: 0)
H-INDEX: 11 (FIVE YEARS: 0)

2020 ◽  
Author(s):  
Guochun Yang ◽  
Di Fu ◽  
Li Zhenghan ◽  
Haiyan Wu ◽  
Honghui Xu ◽  
...  

Multisensory integration and crossmodal attention are two of the basic mechanisms for processing multisensory inputs, and the two often co-occur. Whether these two processes are dependent or independent remains controversial. To examine the relationship between multisensory integration and crossmodal attention, we adopted modified multilevel audiovisual gender judgment paradigms and evaluated congruency effects in reaction time (RT) and inverse effectiveness (IE) effects. If the two processes were dependent, the occurrence of one effect would be accompanied by that of the other. Using both morphed faces and voices, we first administered a speeded classification task in which participants were asked to attend either to faces (experiment 1a) or to voices (experiment 1b); we then administered an unspeeded rating task with faces as the targets (experiment 2). We observed both a congruency effect in RT and an IE effect in experiment 1a, a congruency effect in RT alone in experiment 1b, and an IE effect alone in experiment 2. These results indicate that the two processes are independent of each other.
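The two diagnostic effects in this design can be computed straightforwardly. The sketch below, using synthetic data with illustrative names and values (not the study's data), shows a congruency effect as the mean RT difference between incongruent and congruent trials, and an IE effect as a negative slope of audiovisual gain against unisensory accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial reaction times (ms); values are illustrative only.
rt_congruent = rng.normal(620, 60, 200)    # face and voice signal the same gender
rt_incongruent = rng.normal(680, 60, 200)  # face and voice conflict

# Congruency effect in RT: incongruent minus congruent mean RT.
congruency_effect = rt_incongruent.mean() - rt_congruent.mean()

# Inverse effectiveness: multisensory gain should shrink as unisensory
# effectiveness grows. Here: audiovisual gain per level vs. unisensory accuracy.
unisensory_acc = np.array([0.55, 0.65, 0.75, 0.85, 0.95])  # weak -> strong
av_gain = np.array([0.20, 0.15, 0.11, 0.06, 0.02])         # AV minus best unisensory

ie_slope = np.polyfit(unisensory_acc, av_gain, 1)[0]  # negative slope => IE effect
print(round(congruency_effect), round(ie_slope, 2))
```

With this logic, the paper's dissociation corresponds to one of the two quantities being reliably different from zero while the other is not.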





2019 ◽  
Author(s):  
Luuk P.H. van de Rijt ◽  
Anja Roye ◽  
Emmanuel A.M. Mylanus ◽  
A. John van Opstal ◽  
Marc M. van Wanrooij

Abstract: We assessed how synchronous speech listening and lip reading affect speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed: the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words.

To that end, we presented audiovisual sentences to 18 native Dutch normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for auditory-only, visual-only (lipreading), and audiovisual conditions. To modulate acoustic task difficulty, we systematically varied the auditory signal-to-noise ratio. In line with a commonly observed multisensory enhancement of speech recognition, audiovisual words were more easily recognized than auditory-only words (recognition thresholds of −15 dB and −12 dB, respectively).

We show that the difficulty of recognizing a particular word, either acoustically or visually, determines the occurrence of inverse effectiveness in audiovisual word integration: words that are better heard or recognized through lipreading benefit less from bimodal presentation. Audiovisual performance at the lowest acoustic signal-to-noise ratios (45%) fell below the visual-only recognition rate (60%), reflecting an actual deterioration of lipreading in the presence of excessive acoustic noise. This suggests that the brain may adopt a strategy in which attention has to be divided between listening and lip reading.
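The recognition thresholds quoted above (−15 dB audiovisual vs. −12 dB auditory-only) are the SNRs at which a psychometric function crosses 50% correct. A minimal sketch of that threshold estimation, on invented data points (not the study's), fits a logistic function per condition:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical proportions of words recognized per SNR (dB); illustrative only.
snr = np.array([-21, -18, -15, -12, -9, -6], dtype=float)
p_auditory = np.array([0.05, 0.15, 0.35, 0.50, 0.75, 0.90])
p_audiovisual = np.array([0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

def logistic(x, x0, k):
    # Psychometric function: x0 is the 50%-correct threshold, k the slope.
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(th_a, _), _ = curve_fit(logistic, snr, p_auditory, p0=(-12.0, 0.5))
(th_av, _), _ = curve_fit(logistic, snr, p_audiovisual, p0=(-15.0, 0.5))

# A lower (more negative) threshold for AV reflects the multisensory benefit.
print(round(th_a, 1), round(th_av, 1))
```

The multisensory benefit is then simply the threshold shift between the two fitted curves.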



2018 ◽  
Vol 72 (4) ◽  
pp. 922-929 ◽  
Author(s):  
Katsumi Minakata ◽  
Matthias Gondan

When participants respond to stimuli from two sources, response times (RTs) are often faster when both stimuli are presented together than when either is presented alone (the redundant signals effect, RSE). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster of the two processing times determines the overall RT. In audiovisual experiments, the RSE is often larger than race models predict, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli; in the other session, these roles were reversed. Interestingly, coactivation was observed only in the session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. This pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
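The standard way to decide between race and coactivation accounts is Miller's race-model inequality, which bounds the redundant-condition CDF by the sum of the unimodal CDFs: F_AV(t) ≤ F_A(t) + F_V(t). A sketch of that test on synthetic RT samples (values are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical RT samples (ms) for unimodal and redundant (audiovisual) trials.
rt_a = rng.normal(480, 50, 1000)
rt_v = rng.normal(500, 50, 1000)
rt_av = rng.normal(420, 45, 1000)  # faster than a race alone would allow

def ecdf(sample, t):
    """Empirical cumulative distribution function evaluated at times t."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

t = np.linspace(300, 600, 61)
# Miller's race-model inequality: F_AV(t) <= F_A(t) + F_V(t).
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = np.max(ecdf(rt_av, t) - bound)

# A positive maximal violation rejects the race model, suggesting coactivation.
print(round(violation, 2))
```

In the study's terms, a violation appears when strong stimuli are targets, while RT distributions for weak targets stay within the race-model bound.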



2017 ◽  
Author(s):  
Nicholas Paul Holmes

This brief commentary criticises the use of flawed statistical methods in the paper by Kayser and colleagues (2010), published in Current Biology. The commentary was not accepted for publication, despite broad agreement that the statistical methods were indeed flawed. Reviews and responses are included with the main text.



eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Torrey LS Truszkowski ◽  
Oscar A Carrillo ◽  
Julia Bleier ◽  
Carolina M Ramirez-Vizcarrondo ◽  
Daniel L Felch ◽  
...  

To build a coherent view of the external world, an organism needs to integrate multiple types of sensory information from different sources, a process known as multisensory integration (MSI). Previously, we showed that the temporal dependence of MSI in the optic tectum of Xenopus laevis tadpoles is mediated by the network dynamics of the recruitment of local inhibition by sensory input (Felch et al., 2016). This was one of the first cellular-level mechanisms described for MSI. Here, we expand this cellular-level view of MSI by focusing on the principle of inverse effectiveness, another central feature of MSI, which states that the amount of multisensory enhancement observed depends inversely on the size of the unisensory responses. We show that non-linear summation of crossmodal synaptic responses, mediated by NMDA-type glutamate receptor (NMDAR) activation, forms the basis for inverse effectiveness at both the cellular and behavioral levels.
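The mechanism proposed here, crossmodal inputs summing before a saturating nonlinearity, produces inverse effectiveness on its own. A toy sketch with a hypothetical sigmoidal input-output function (parameters are invented, not measured tectal values) shows enhancement shrinking as the unisensory response grows:

```python
import numpy as np

def response(drive):
    # Hypothetical sigmoidal input-output function of a tectal neuron:
    # supralinear at low drive, saturating at high drive.
    return 1.0 / (1.0 + np.exp(-4.0 * (drive - 1.0)))

unisensory_drive = np.array([0.4, 0.7, 1.0, 1.3, 1.6])
r_uni = response(unisensory_drive)
r_multi = response(2 * unisensory_drive)  # crossmodal inputs sum before the nonlinearity

# Multisensory enhancement relative to the unisensory response.
enhancement = (r_multi - r_uni) / r_uni

# Enhancement falls as the unisensory response grows: inverse effectiveness.
print(np.round(enhancement, 2))
```

The same qualitative behavior would follow from any accelerating-then-saturating nonlinearity, which is why NMDAR-dependent summation is a plausible cellular substrate.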





Author(s):  
Fengxia Wu ◽  
Xiaoyu Tang ◽  
Yanna Ren ◽  
Weiping Yang ◽  
Satoshi Takahashi ◽  
...  

Bimodal audiovisual signals can be detected more quickly and accurately than unimodal visual or auditory signals, a benefit known as audiovisual integration. Audiovisual integration is commonly described in terms of the spatial principle, the temporal principle, and the inverse effectiveness principle. Inverse effectiveness holds that the largest audiovisual enhancements are inversely correlated with the strength of the responses to the unisensory component stimuli; thus, weaker stimuli generate greater enhancement when presented together. In addition, some studies have suggested that visual contrast can modulate audiovisual integration, reporting an inverse relationship between visual contrast and audiovisual integration. This review summarizes these studies and describes the relationship between visual contrast and inverse effectiveness across behavioral, ERP, and fMRI experiments, and outlines directions for future work.
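A common way this literature quantifies the enhancements described above is an index comparing the multisensory response to the strongest unisensory response. The sketch below implements that index on invented response magnitudes (the contrast values are illustrative, not drawn from any reviewed study):

```python
# Multisensory enhancement index: (AV - max(A, V)) / max(A, V) * 100.
def enhancement_index(av, a, v):
    """Percent gain of the audiovisual response over the best unisensory one."""
    best_uni = max(a, v)
    return 100.0 * (av - best_uni) / best_uni

# Illustrative response magnitudes for low- vs high-contrast visual stimuli.
low_contrast = enhancement_index(av=0.36, a=0.20, v=0.25)   # weak stimuli
high_contrast = enhancement_index(av=0.92, a=0.70, v=0.85)  # strong stimuli

print(round(low_contrast), round(high_contrast))
```

Under inverse effectiveness, the index is larger for the low-contrast (weaker) stimuli, as in this sketch.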




