Recognition of amodal language identity emerges in infancy

2013 · Vol 37 (2) · pp. 90-94 · Author(s): David J. Lewkowicz, Ferran Pons

Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic auditory-visual (A-V) relations, but no studies have investigated whether and when infants begin to perceive the higher-order A-V relations inherent in speech. Here, we asked whether and when infants become capable of recognizing amodal language identity, a critical perceptual skill that is necessary for the development of multisensory communication. Because, at a minimum, such a skill requires the ability to perceive suprasegmental auditory and visual linguistic information, we predicted that it would not emerge before higher-level speech processing and multisensory perceptual skills emerge. Consistent with this prediction, we found that recognition of the amodal identity of language emerges at 10–12 months of age, but that when it emerges it is restricted to infants’ native language.

2006 · Vol 98 (1) · pp. 66-73 · Author(s): Roy H. Hamilton, Jeffrey T. Shenton, H. Branch Coslett

2015 · Vol 19 (2) · pp. 77-100 · Author(s): Przemysław Tomalski

Abstract Apart from their remarkable phonological skills, young infants show, prior to their first birthday, the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect audiovisual conflict in speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge but also to language production during subsequent years. This article reviews recent experimental work delineating the complex developmental trajectory of audiovisual mismatch detection. The central issue is the role of age-related changes in visual scanning of audiovisual speech and the corresponding changes in neural signatures of audiovisual speech processing in the second half of the first year of life. This phenomenon is discussed in the context of recent theories of perceptual development and existing data on the neural organisation of the infant ‘social brain’.


Author(s): Masha Etkind, Ron S. Kenett, Uri Shafrir

In this chapter we describe a novel pedagogy for conceptual thinking and peer cooperation with Meaning Equivalence Reusable Learning Objects (MERLO) that enhances higher-order thinking, deepens comprehension of conceptual content, and improves learning outcomes. The evolution of this instructional methodology follows insights from four recent developments: analysis of patterns of content and structure of labeled patterns in human experience, which led to the emergence of concept science; development of the digital cyber-infrastructure of networked information; research in neuroscience and brain imaging showing that exposure of learners to multi-semiotic inductive problems enhances cognitive control of inter-hemispheric attentional processing in the lateral brain and increases higher-order thinking; and research in evolutionary dynamics on peer cooperation and indirect reciprocity, which documents the motivational effect of the knowledge of being observed, a psychological imperative that motivates individuals to cooperate and to contribute to the common good.


2021 · Vol 33 (1) · pp. 8-27 · Author(s): Mylène Barbaroux, Arnaud Norena, Maud Rasamimanana, Eric Castet, Mireille Besson

Musical expertise has been shown to positively influence high-level speech abilities such as novel word learning. This study addresses the question of whether enhanced low-level perceptual skills causally drive successful novel word learning. We used a longitudinal approach with psychoacoustic procedures to train two groups of nonmusicians on either pitch discrimination or intensity discrimination, using harmonic complex sounds. After a short (approximately 3 hr) psychoacoustic training, discrimination thresholds were lower for the specific feature (pitch or intensity) that was trained. Moreover, compared to the intensity group, participants trained on pitch were faster to categorize words varying in pitch. Finally, although the N400 components in both the word-learning phase and the semantic task were larger in the pitch group than in the intensity group, no between-group differences were found at the behavioral level in the semantic task. Thus, these results provide mixed evidence that enhanced perception of relevant features through a few hours of acoustic training with harmonic sounds causally impacts the categorization of speech sounds as well as novel word learning. These results are discussed within the framework of near and far transfer effects from music training to speech processing.
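The abstract does not spell out the training procedure, so the following is a sketch only: a standard 2-down/1-up adaptive staircase of the kind widely used in psychoacoustic discrimination training, with the step rule, starting value, and simulated listener all assumed for illustration.

    import random

    def two_down_one_up(trial_fn, start_delta=50.0, step=2.0,
                        min_delta=0.1, n_reversals=12):
        """Adaptive 2-down/1-up staircase: the stimulus difference 'delta'
        (e.g., a pitch difference in cents) shrinks after two consecutive
        correct trials and grows after each error, converging on the
        ~70.7%-correct point of the psychometric function."""
        delta, streak, direction = start_delta, 0, None
        reversals = []
        while len(reversals) < n_reversals:
            if trial_fn(delta):              # True if the listener was correct
                streak += 1
                if streak == 2:              # two in a row -> make task harder
                    streak = 0
                    if direction == 'up':    # downward turn = a reversal
                        reversals.append(delta)
                    direction = 'down'
                    delta = max(min_delta, delta / step)
            else:                            # an error -> make task easier
                streak = 0
                if direction == 'down':      # upward turn = a reversal
                    reversals.append(delta)
                direction = 'up'
                delta *= step
        return sum(reversals[-8:]) / len(reversals[-8:])   # threshold estimate

    # toy simulated listener whose true threshold is around 5 cents
    est = two_down_one_up(lambda d: random.random() < 0.5 + 0.5 * min(1.0, d / 10.0))
    print(f"estimated discrimination threshold: {est:.2f} cents")

Falling thresholds across such runs, for the trained feature only, would correspond to the feature-specific perceptual learning the study reports.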


2001 · Vol 18 (1) · pp. 9-21 · Author(s): Tsuhan Chen

2018 · Author(s): Bohan Dai, James M. McQueen, René Terporten, Peter Hagoort, Anne Kösem

Abstract Listening to speech is difficult in noisy environments, and it is even harder when the interfering noise consists of intelligible speech rather than non-intelligible sounds. This suggests that the ignored speech is not fully ignored and that competing linguistic information interferes with the neural processing of target speech. We tested this hypothesis using magnetoencephalography (MEG) while participants listened to target clear speech in the presence of distracting noise-vocoded signals. Crucially, the noise-vocoded distractors were initially unintelligible but were perceived as intelligible speech after a short training session. We compared participants’ performance in the speech-in-noise task before and after training, as well as neural entrainment to both target and distracting speech. Comprehension of the target clear speech was reduced in the presence of intelligible distractors compared to when they were unintelligible. Neural entrainment to target speech in the delta range (1–4 Hz) was reduced in strength in the presence of an intelligible distractor. In contrast, neural entrainment to distracting signals was not significantly modulated by intelligibility. These results support and extend previous findings, showing, first, that the masking effects of distracting speech originate from the degradation of the linguistic representation of target speech, and second, that delta entrainment reflects linguistic processing of speech.

Significance Statement Comprehension of speech in noisy environments is impaired due to interference from background sounds. The magnitude of the interference depends on the intelligibility of the distracting speech signals. In a magnetoencephalography experiment with a highly controlled training paradigm, we show that the linguistic information of distracting speech imposes higher-order interference on the processing of the target speech, as indexed by a decline in comprehension of the target speech and a reduction of delta entrainment to the target speech. This work demonstrates the importance of neural oscillations for speech processing: it shows that delta oscillations reflect linguistic analysis during speech comprehension, which can be critically affected by the presence of other speech.
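The abstract does not state how entrainment was quantified. One common operationalization, sketched here purely as an assumption, is magnitude-squared coherence between the speech amplitude envelope and a neural channel, averaged over the 1–4 Hz delta band.

    import numpy as np
    from scipy.signal import hilbert, coherence

    def delta_entrainment(speech, neural, fs, band=(1.0, 4.0)):
        """Mean speech-brain coherence in the delta band.
        speech : 1-D waveform, already resampled to the neural rate
        neural : 1-D time series from one MEG sensor
        fs     : common sampling rate in Hz"""
        envelope = np.abs(hilbert(speech))          # speech amplitude envelope
        f, coh = coherence(envelope, neural, fs=fs, nperseg=int(8 * fs))
        in_band = (f >= band[0]) & (f <= band[1])   # 1-4 Hz frequency bins
        return coh[in_band].mean()

    # toy example: a sensor that partly follows a 2 Hz speech envelope
    fs = 200
    t = np.arange(0, 60, 1 / fs)
    env = 1 + 0.5 * np.sin(2 * np.pi * 2 * t)
    speech = env * np.random.randn(t.size)          # noise carrier, 2 Hz envelope
    meg = 0.6 * env + np.random.randn(t.size)       # partly entrained sensor
    print(f"delta-band entrainment: {delta_entrainment(speech, meg, fs):.2f}")

Under a measure like this, the reported effect would appear as lower coherence with the target speech when the distractor is intelligible than when it is not.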


Rhema · 2019 · pp. 100-117 · Author(s): O. Cherepanova

This article discusses the possibilities of using linguistic information in CAPT (computer-assisted pronunciation training) systems. Information about the user's native language makes it possible to partially predict the pronunciation errors that he or she will make in the target language. These data can be used in phonetic simulators to improve error localization at the level of phonemes. In the first part of the article we summarize the results of two experiments in which we conducted an acoustic contrastive analysis of the Russian and German vowel subsystems and predicted three pronunciation error types that Russian speakers are most likely to make in their German speech. The second part of the article discusses the possibilities of correcting typical pronunciation errors by means of special exercises.
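A CAPT error-localization module can encode such contrastive predictions as a simple substitution table. The sketch below is illustrative only; the specific substitutions listed are hypothetical stand-ins, not the three error types established in the article's experiments.

    # Hypothetical L1-aware error model for a phonetic trainer.
    # Keys are target German phonemes; values are substitutions a Russian
    # L1 speaker might plausibly produce (illustrative, not the article's data).
    PREDICTED_SUBSTITUTIONS = {
        "y:": ["u:", "i:"],   # front rounded vowels unrounded or backed
        "ø:": ["o:", "e:"],
        "ɛ:": ["e:"],         # a length/quality contrast collapsed
    }

    def localize_errors(target_phonemes, recognized_phonemes):
        """Flag positions where the recognizer's output matches a predicted
        L1-driven substitution for the target phoneme."""
        errors = []
        for i, (target, heard) in enumerate(zip(target_phonemes, recognized_phonemes)):
            if heard != target and heard in PREDICTED_SUBSTITUTIONS.get(target, []):
                errors.append((i, target, heard))
        return errors

    # 'über' /y: b ɐ/ pronounced with an unrounded first vowel
    print(localize_errors(["y:", "b", "ɐ"], ["u:", "b", "ɐ"]))   # [(0, 'y:', 'u:')]

Restricting the search to predicted confusions is what lets such a system flag errors at the phoneme level rather than scoring whole words.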


2019 · Author(s): Violet Aurora Brown, Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
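In a dual-task paradigm of this kind, listening effort is inferred from how much performance on a concurrent secondary task degrades in each listening condition. A minimal way to express that cost, sketched here with made-up numbers rather than the study's data, is the proportional slowing of secondary-task response times relative to a single-task baseline.

    def dual_task_cost(dual_rt_ms, baseline_rt_ms):
        """Proportional slowing of the secondary task: 0.0 means no added
        load; 0.20 means responses were 20% slower than baseline."""
        return (dual_rt_ms - baseline_rt_ms) / baseline_rt_ms

    baseline = 450.0                                  # secondary task alone (ms)
    for condition, rt in {"audio-only": 495.0, "audiovisual": 540.0}.items():
        print(f"{condition}: cost = {dual_task_cost(rt, baseline):.2f}")
    # a larger cost in the audiovisual condition would mirror the pattern
    # the authors report under easy listening conditions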


2021 · pp. 1-59 · Author(s): Nikolay Novitskiy, Akshay R Maggu, Ching Man Lai, Peggy H Y Chan, Kay H Y Wong, ...

Abstract We investigated the development of early-latency and long-latency brain responses to native and non-native speech to shed light on the neurophysiological underpinnings of perceptual narrowing and early language development. Specifically, we postulated a two-level process to explain the decrease in sensitivity to non-native phonemes towards the end of infancy. Neurons at the earlier stages of the ascending auditory pathway mature rapidly during infancy, facilitating the encoding of both native and non-native sounds. This growth enables neurons at the later stages of the auditory pathway to assign phonological status to speech according to the infant’s native language environment. To test this hypothesis, we collected early-latency and long-latency neural responses to native and non-native lexical tones from 85 Cantonese-learning children aged between 23 days and 24 months and 16 days. As expected, a broad range of presumably subcortical early-latency neural encoding measures grew rapidly and substantially during the first two years for both native and non-native tones. By contrast, long-latency cortical electrophysiological changes occurred on a much slower scale and showed sensitivity to nativeness at around six months. Our study provides a comprehensive understanding of early language development by revealing the complementary roles of earlier and later stages of speech processing in the developing brain.

