Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG

2020 ◽  
Vol 20 (11) ◽  
pp. 434
Author(s):  
Brian A. Metzger ◽  
John F. Magnotti ◽  
Elizabeth Nesbitt ◽  
Daniel Yoshor ◽  
Michael S. Beauchamp
2012 ◽  
Vol 25 (0) ◽  
pp. 148
Author(s):  
Marcia Grabowecky ◽  
Emmanuel Guzman-Martinez ◽  
Laura Ortega ◽  
Satoru Suzuki

Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display presenting a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials) or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.


2016 ◽  
Vol 21 (03) ◽  
pp. 206-212 ◽  
Author(s):  
Grace Ciscare ◽  
Erika Mantello ◽  
Carla Fortunato-Queiroz ◽  
Miguel Hyppolito ◽  
Ana Reis

Introduction Cochlear implantation in adolescent patients with pre-lingual deafness is still a debatable issue. Objective The objective of this study is to analyze and compare the development of auditory speech perception in children with pre-lingual auditory impairment who underwent cochlear implantation, across different age groups in the first year after implantation. Method This is a retrospective, documentary study in which we analyzed the records of 78 children with severe bilateral sensorineural hearing loss, all unilateral cochlear implant users, of both sexes. They were divided into three groups: G1, 22 children younger than 42 months; G2, 28 children aged 43 to 83 months; and G3, 28 children older than 84 months. We collected medical record data to characterize the patients, auditory thresholds with the cochlear implant, and assessments of speech perception and auditory skills. Results There was no statistically significant association between groups G1, G2, and G3 and sex, caregiver education level, city of residence, or speech perception level. There was a moderate correlation between age and hearing aid use time, and between age and cochlear implant use time. There was a strong correlation between age and the age at which cochlear implantation was performed, and between hearing aid use time and the age at implantation. Conclusion There was no statistical difference in speech perception in relation to the patient's age at cochlear implantation. There were statistically significant differences in auditory deprivation time between G3 and G1 and between G2 and G1, and in hearing aid use time between G3 and G2 and between G3 and G1.


2019 ◽  
Vol 128 ◽  
pp. 290-296 ◽  
Author(s):  
Judith Schmitz ◽  
Eleonora Bartoli ◽  
Laura Maffongelli ◽  
Luciano Fadiga ◽  
Nuria Sebastian-Galles ◽  
...  

2000 ◽  
Vol 23 (3) ◽  
pp. 327-328 ◽  
Author(s):  
Lawrence Brancazio ◽  
Carol A. Fowler

The present description of the Merge model addresses only auditory, not audiovisual, speech perception. However, recent findings in the audiovisual domain are relevant to the model. We outline a test that we are conducting of the adequacy of Merge, modified to accept visual information about articulation.


2016 ◽  
Vol 44 (1) ◽  
pp. 185-215 ◽  
Author(s):  
Susan Jerger ◽  
Markus F. Damian ◽  
Nancy Tye-Murray ◽  
Hervé Abdi

Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see dynamic face articulate consonant/rhyme b/ag; hear non-intact onset/rhyme: –b/ag) vs. auditorily (see still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. These findings are critical for incorporating visual speech into developmental theories of speech perception.


2012 ◽  
Vol 132 (3) ◽  
pp. 2050-2050
Author(s):  
Qudsia Tahmina ◽  
Moulesh Bhandary ◽  
Behnam Azimi ◽  
Yi Hu ◽  
Rene L. Utianski ◽  
...  
