Lip-Reading Enables the Brain to Synthesize Auditory Features of Unknown Silent Speech

2019 ◽ Vol 40 (5) ◽ pp. 1053-1065
Author(s): Mathieu Bourguignon, Martijn Baart, Efthymia C. Kapnoula, Nicola Molinaro

2020
Author(s): Sreejan Kumar, Cameron T. Ellis, Thomas O’Connell, Marvin M. Chun, Nicholas B. Turk-Browne

Abstract: The extent to which brain functions are localized or distributed is a foundational question in neuroscience. In the human brain, common fMRI methods such as cluster correction, atlas parcellation, and anatomical searchlight are biased by design toward finding localized representations. Here we introduce the functional searchlight approach as an alternative to anatomical searchlight analysis, the most commonly used exploratory multivariate fMRI technique. Functional searchlight removes any anatomical bias by grouping voxels based only on functional similarity and ignoring anatomical proximity. We report evidence that visual and auditory features from deep neural networks and semantic features from a natural language processing model are more widely distributed across the brain than previously acknowledged. This approach provides a new way to evaluate and constrain computational models with brain activity and pushes our understanding of human brain function further along the spectrum from strict modularity toward distributed representation.
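The core idea can be illustrated compactly. The sketch below is not the authors' implementation; it simply groups each voxel with its k most functionally similar voxels (correlation of time courses, ignoring anatomical position) and decodes condition labels within that neighborhood. The array shapes, the neighborhood size k, and the toy data are assumptions made for the example.

```python
# A minimal sketch of a functional searchlight, assuming a voxel-by-time fMRI
# matrix and one condition label per timepoint (all names here are hypothetical).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def functional_searchlight(data, labels, k=100):
    """data: (n_voxels, n_timepoints); labels: (n_timepoints,) condition codes."""
    # Functional similarity between voxels: correlation of their time courses,
    # with anatomical position ignored entirely.
    similarity = np.corrcoef(data)
    scores = np.empty(data.shape[0])
    for v in range(data.shape[0]):
        # The searchlight for voxel v is the set of k most functionally similar voxels.
        neighbors = np.argsort(similarity[v])[::-1][:k]
        X = data[neighbors].T                            # timepoints x k features
        # Decode the condition labels from this functional neighborhood.
        scores[v] = cross_val_score(LinearSVC(dual=False), X, labels, cv=3).mean()
    return scores                                        # one accuracy per voxel

# Toy example: 500 voxels, 120 timepoints, two conditions.
rng = np.random.default_rng(0)
toy_data = rng.standard_normal((500, 120))
toy_labels = np.repeat([0, 1], 60)
accuracies = functional_searchlight(toy_data, toy_labels, k=50)
```

An anatomical searchlight would instead define `neighbors` as the voxels within a small sphere around voxel v; replacing that spatial criterion with functional similarity is what removes the built-in bias toward localized representations.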


Recent evidence from experiments on immediate memory indicates unambiguously that silent speech perception can produce typically ‘auditory’ effects while there is either active or passive mouthing of the relevant articulatory gestures. This result falsifies previous theories of auditory sensory memory (pre-categorical acoustic store) that insisted on external auditory stimulation as indispensable for access to the system. A resolution is proposed that leaves the properties of pre-categorical acoustic store much as they were assumed to be before, but adds the possibility that visual information can affect the selection of auditory features in a pre-categorical stage of speech perception. In common terms, a speaker’s facial gestures (or one’s own) can influence auditory experience independently of determining what was said. Some results in word perception that encourage this view are discussed.


2007 ◽ Vol 7 (4) ◽ pp. 110-111
Author(s): Nicholas P. Poolos

Epilepsy-Related Ligand/Receptor Complex LGI1 and ADAM22 Regulate Synaptic Transmission. Fukata Y, Adesnik H, Iwanaga T, Bredt DS, Nicoll RA, Fukata M. Science 2006;313(5794):1792–1795.
Abnormally synchronized synaptic transmission in the brain causes epilepsy. Most inherited forms of epilepsy result from mutations in ion channels. However, one form of epilepsy, autosomal dominant partial epilepsy with auditory features (ADPEAF), is characterized by mutations in a secreted neuronal protein, LGI1. We show that ADAM22, a transmembrane protein that when mutated itself causes seizure, serves as a receptor for LGI1. LGI1 enhances AMPA receptor-mediated synaptic transmission in hippocampal slices. The mutated form of LGI1 fails to bind to ADAM22. ADAM22 is anchored to the postsynaptic density by cytoskeletal scaffolds containing stargazin. These studies in rat brain indicate possible avenues for understanding human epilepsy.


2018
Author(s): Mathieu Bourguignon, Martijn Baart, Efthymia C. Kapnoula, Nicola Molinaro

Abstract: Lip-reading is crucial to understand speech in challenging conditions. Neuroimaging investigations have revealed that lip-reading activates auditory cortices in individuals covertly repeating absent—but known—speech. However, in real-life, one usually has no detailed information about the content of upcoming speech. Here we show that during silent lip-reading of unknown speech, activity in auditory cortices entrains more to absent speech than to seen lip movements at frequencies below 1 Hz. This entrainment to absent speech was characterized by a speech-to-brain delay of 50–100 ms, as when actually listening to speech. We also observed entrainment to lip movements at the same low frequency in the right angular gyrus, an area involved in processing biological motion. These findings demonstrate that the brain can synthesize high-level features of absent unknown speech sounds from lip-reading that can facilitate the processing of the auditory input. Such a synthesis process may help explain well-documented bottom-up perceptual effects.
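The reported sub-1 Hz entrainment with a 50–100 ms speech-to-brain delay can be illustrated with a simple lag estimate. The sketch below is not the authors' coherence pipeline; it only low-pass filters a speech envelope and a cortical signal and reads the delay off the cross-correlation peak. The sampling rate, filter settings, and toy signals are assumptions made for the example.

```python
# A minimal sketch, assuming a hypothetical speech envelope and cortical signal
# sampled at a common rate; not the authors' analysis pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def speech_to_brain_delay(speech_env, brain_sig, fs=200.0):
    """Estimate the lag (in s) at which the cortical signal best tracks the envelope."""
    b, a = butter(4, 1.0, btype="low", fs=fs)            # keep only the <1 Hz band
    x = filtfilt(b, a, speech_env - np.mean(speech_env))
    y = filtfilt(b, a, brain_sig - np.mean(brain_sig))
    xc = correlate(y, x, mode="full")
    lags = np.arange(-len(x) + 1, len(x)) / fs
    # A positive lag means cortical activity follows the (absent) speech envelope.
    return lags[np.argmax(xc)]

# Toy example: a cortical signal that lags a slow envelope by ~75 ms.
fs = 200                                                 # Hz, assumed sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)                             # 60 s of toy data
b, a = butter(4, 1.0, btype="low", fs=fs)
env = filtfilt(b, a, rng.standard_normal(t.size))        # slow "speech envelope"
brain = np.roll(env, int(0.075 * fs)) + 0.05 * rng.standard_normal(t.size)
print(speech_to_brain_delay(env, brain, fs=fs))          # ~0.075 s
```

In the study itself the striking point is that the envelope being tracked belongs to speech the listener never hears; the lag estimate above only illustrates what a 50–100 ms speech-to-brain delay means operationally.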


2008 ◽ Vol 17 (6) ◽ pp. 405-409
Author(s): Lawrence D. Rosenblum

Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual speech integration. However, recent findings suggest that speech integration can be influenced by higher cognitive properties such as lexical status and semantic context. Proponents of amodal accounts will need to explain these results.


2020
Author(s): Evan K. Noch, Isaiah Yim, Teresa A. Milner, Lewis C. Cantley
