Visual speech fills in both discrimination and identification of non-intact auditory speech in children

2017
Vol 45 (2)
pp. 392-414
Author(s):
Susan Jerger
Markus F. Damian
Rachel P. McAlpine
Hervé Abdi

Abstract: To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /–b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/–b/æz) and identified non-intact nonwords (/–b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more "same" responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than in the auditory mode. Visual speech for the easy-to-speechread /b/, but not for the difficult-to-speechread /g/, boosted discrimination and identification (by about 35–45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and with receptive vocabulary skills.

2003
Vol 12 (4)
pp. 463-471
Author(s):
Susan Rvachew
Alyssa Ohberg
Meghann Grawburg
Joan Heyding

The purpose of this study was to compare the phonological awareness abilities of 2 groups of 4-year-old children: one with normally developing speech and language skills and the other with moderately or severely delayed expressive phonological skills but age-appropriate receptive vocabulary skills. Each group received tests of articulation, receptive vocabulary, phonemic perception, early literacy, and phonological awareness skills. The groups were matched for receptive language skills, age, socioeconomic status, and emergent literacy knowledge. The children with expressive phonological delays demonstrated significantly poorer phonemic perception and phonological awareness skills than their normally developing peers. The results suggest that preschool children with delayed expressive phonological abilities should be screened for their phonological awareness skills even when their language skills are otherwise normally developing.
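
A minimal sketch of the kind of two-group comparison reported above, using scipy's independent-samples t test; the scores are invented for illustration and are not the study's data:

```python
from scipy.stats import ttest_ind

typical = [14, 16, 12, 15, 17, 13, 16, 14]  # invented phonological awareness scores
delayed = [9, 11, 8, 12, 10, 9, 11, 10]     # invented scores for the delayed group

t_stat, p_value = ttest_ind(typical, delayed)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```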


2020
Vol 63 (5)
pp. 1340-1351
Author(s):
Françoise Brosseau-Lapré
Wan Hee Kim

Purpose: The aim of this study was to investigate the ability of preschoolers with speech sound disorder (SSD) and with typical speech and language development (TD) to understand foreign-accented words, providing a window into the quality of their underlying phonological representations. We also investigated the relationship between vocabulary skills and the ability to identify words that are frequent and have few neighbors (lexically easy words) and words that are less frequent and have many neighbors (lexically hard words). Method: Thirty-two monolingual English-speaking children (16 with SSD, 16 with TD), ages 4 and 5 years, completed standardized speech and language tests and a two-alternative forced-choice word identification task of English words produced by a native English speaker and a native Korean speaker. Results: Children with SSD had more difficulty identifying words produced by both talkers than children with TD and showed greater difficulty identifying Korean-accented words. Both groups of children identified lexically easy words more accurately than lexically hard words, although this difference was not significant when receptive vocabulary skills were included in the analysis. Identification of lexically hard words, whether produced by the native English speaker or the nonnative English speaker, increased with vocabulary size. Conclusion: Given the performance of the children with SSD under the ideal listening conditions of this study, children with SSD may, as a group, experience greater difficulty identifying foreign-accented words in environments with background noise.
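
A minimal sketch of how a two-alternative forced-choice task like this one can be scored per group and talker with pandas; the trial data and column names are hypothetical, not the study's:

```python
import pandas as pd

# Placeholder trial-level responses; 1 = correct identification.
trials = pd.DataFrame({
    "group":   ["SSD", "SSD", "TD", "TD", "SSD", "TD"],
    "talker":  ["native", "korean", "native", "korean", "korean", "native"],
    "correct": [1, 0, 1, 1, 0, 1],
})

# Proportion correct in each group x talker cell.
accuracy = trials.groupby(["group", "talker"])["correct"].mean()
print(accuracy)
```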


Author(s):  
Si-Wei Ma
Li Lu
Ting-Ting Zhang
Dan-Tong Zhao
Bin-Ting Yang
...  

Background: Vocabulary skills in infants with cleft lip and/or palate (CL/P) are related to various factors, yet they remain underexplored among Mandarin-speaking infants with CL/P. This study identified receptive and expressive vocabulary skills among Mandarin-speaking infants with unrepaired CL/P prior to cleft palate surgery, along with their associated factors. Methods: This was a cross-sectional study of patients at the Cleft Lip and Palate Center of the Stomatological Hospital of Xi’an Jiaotong University between July 2017 and December 2018. The Putonghua Communicative Development Inventories-Short Form (PCDI-SF) was used to assess early vocabulary skills. Results: A total of 134 children aged 9–16 months prior to cleft palate surgery were included in the study. The prevalences of delays in receptive and expressive vocabulary skills were 72.39% (95% CI: 64.00–79.76%) and 85.07% (95% CI: 77.89–90.64%), respectively. Multiple logistic regression identified that children aged 11–13 months (OR = 6.46, 95% CI: 1.76–23.76) and 14–16 months (OR = 24.32, 95% CI: 3.86–153.05), and those with clefts of the hard and soft palate or of the soft palate only (HSCP/SCP) (OR = 5.63, 95% CI: 1.02–31.01), were more likely to be delayed in receptive vocabulary skills. Conclusions: Delays in vocabulary skills were common among Mandarin-speaking infants with CL/P, and older age was associated with greater delay. The findings underscore the importance of early identification of CL/P, and early intervention programs and effective treatment are recommended for Chinese infants with CL/P.
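
A minimal sketch of a logistic regression yielding odds ratios and 95% CIs of the kind reported above, using statsmodels; this is not the study's analysis code, and the data and variable names are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data shaped like the study's design (n = 134); the
# "09-10" label sorts first, so the youngest group is the reference.
rng = np.random.default_rng(0)
n = 134
df = pd.DataFrame({
    "age_group": rng.choice(["09-10", "11-13", "14-16"], n),
    "cleft_type": rng.choice(["other", "HSCP_SCP"], n),
})
xb = (-0.5
      + 1.5 * (df["age_group"] == "11-13")
      + 2.5 * (df["age_group"] == "14-16")
      + 1.5 * (df["cleft_type"] == "HSCP_SCP"))
df["receptive_delay"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

model = smf.logit(
    'receptive_delay ~ C(age_group) + C(cleft_type, Treatment("other"))',
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # odds ratios: OR = exp(coefficient)
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```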


2019
Vol 35 (1)
pp. 55-74
Author(s):
Katrina Nicholas
Mary Alt
Ella Hauwiller

The purpose of this study was to investigate the role of variability in teaching prepositions to preschoolers with typical development (TD) and developmental language disorder (DLD). Input variability during teaching can enhance learning, but the effect depends on the target. We hypothesized that high variability of objects would improve preposition learning. We also examined other characteristics (e.g. vocabulary skills) of children who responded to treatment. We used a case series design, repeated across children (n = 18), to contrast how preschoolers learned prepositions in conditions that manipulated variability of objects and labels across three treatment sessions. We contrasted a high versus low variability condition for objects and labels for one group of typically developing (TD) children (n = 6). In the other groups (TD, n = 6; DLD, n = 6), we contrasted high versus low object variability only. Visual inspection and descriptive statistics were used to characterize gains. Half (n = 3) of the TD participants showed a low variability advantage in the condition that combined object and label variability. In the condition that contrasted only object variability, the majority (n = 4) of TD participants showed a high variability advantage, compared to only two participants with DLD. In the high object variability condition, higher receptive vocabulary scores were significantly correlated with better preposition learning (rs = 0.71, p < 0.05). Combining high variability for objects and labels when teaching prepositions was not effective. However, high variability for objects can create an advantage for learning prepositions for children with typically developing language, though not for all learners. Characteristics of different learners (e.g. receptive vocabulary scores) and language status (impaired or unimpaired) should be taken into consideration in future studies.
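
A minimal sketch of the Spearman rank correlation reported above (rs = 0.71), computed with scipy; the scores are invented for illustration and are not the study's data:

```python
from scipy.stats import spearmanr

vocab = [92, 105, 88, 110, 97, 101]              # invented vocabulary scores
learning = [0.40, 0.75, 0.30, 0.80, 0.55, 0.70]  # invented proportions correct

rs, p = spearmanr(vocab, learning)
print(f"rs = {rs:.2f}, p = {p:.3f}")
```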


2016
Vol 44 (1)
pp. 185-215
Author(s):
Susan Jerger
Markus F. Damian
Nancy Tye-Murray
Hervé Abdi

Abstract: Adults use vision to perceive low-fidelity speech, yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not a lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulate the consonant/rhyme b/ag; hear the non-intact onset/rhyme –b/ag) vs. auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. These findings are critical for incorporating visual speech into developmental theories of speech perception.


2002
Vol 112 (5)
pp. 2245-2245
Author(s):
Paul Bertelson
Jean Vroomen
Beatrice de Gelder

2014
Vol 57 (5)
pp. 1804-1816
Author(s):
Milijana Buac
Megan Gross
Margarita Kaushanskaya

Purpose: The present study examined the impact of environmental factors (socioeconomic status [SES], the percent of language exposure to English and to Spanish, and primary caregivers' vocabulary knowledge) on bilingual children's vocabulary skills. Method: Vocabulary skills were measured in 58 bilingual children between the ages of 5 and 7 who spoke Spanish as their native language and English as their second language. Data related to language environment in the home, specifically the percent of exposure to each language and SES, were obtained from primary caregiver interviews. Primary caregivers' vocabulary knowledge was measured directly using expressive and receptive vocabulary assessments in both languages. Results: Multiple regression analyses indicated that primary caregivers' vocabulary knowledge, the child's percent exposure to each language, and SES were robust predictors of children's English, but not Spanish, vocabulary skills. Conclusion: These findings indicate that in the early school ages, primary caregivers' vocabulary skills have a stronger impact on bilingual children's second-language than native-language vocabulary.
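
A minimal sketch of a multiple regression of the kind described above, predicting child English vocabulary from caregiver vocabulary, percent English exposure, and SES with statsmodels; the data and variable names are synthetic, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 58  # matches the sample size above; the values themselves are invented
df = pd.DataFrame({
    "caregiver_vocab": rng.normal(100, 15, n),
    "pct_english": rng.uniform(10, 90, n),
    "ses": rng.normal(0, 1, n),
})
df["child_english_vocab"] = (
    0.4 * df["caregiver_vocab"] + 0.3 * df["pct_english"]
    + 5.0 * df["ses"] + rng.normal(0, 10, n)
)

model = smf.ols(
    "child_english_vocab ~ caregiver_vocab + pct_english + ses", data=df
).fit()
print(model.summary())  # each coefficient estimates a predictor's unique contribution
```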


2021
pp. 014272372110495
Author(s):
Katariina Rantalainen
Leila Paavola-Ruotsalainen
Sari Kunnari

This study investigated responsive and directive speech from 60 Finnish mothers to their 2-year-old children, as well as correlations with concurrent and later vocabulary. Possible gender differences with regard to both maternal speech and children’s vocabulary skills were considered. There were no gender differences in maternal utterance frequencies or in maternal utterance types. Girls scored statistically significantly higher in receptive and expressive vocabulary tests at 24, 30 and 36 months. The effect sizes were large. Maternal Other Utterances (fillers like yes, oh, umm) were correlated with children’s concurrent receptive vocabulary. However, there was no relationship between Other Utterances and children’s later vocabulary after controlling for vocabulary size at 24 months. This association may reflect an attempt by mothers to elicit speech from more linguistically advanced children. Furthermore, mothers’ Intrusive Directives towards 2-year-olds correlated negatively with receptive vocabulary at 30 months, particularly for boys. Surprisingly, Intrusive Attentional Directives correlated positively with expressive vocabulary in the group of 30-month-old girls. The results of this study demonstrate relationships between maternal verbal interactional style and both concurrent and future child vocabulary.
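
A minimal sketch of a partial correlation of the kind used above (Other Utterances vs. later vocabulary, controlling for vocabulary size at 24 months), implemented by residualizing both variables on the covariate; all values are invented for illustration:

```python
import numpy as np

def partial_corr(x, y, covar):
    """Correlation of x and y after regressing each on covar."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    covar = np.asarray(covar, dtype=float)
    design = np.column_stack([np.ones_like(covar), covar])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

other_utts = [12, 30, 22, 8, 17, 25]        # invented maternal utterance counts
vocab_36m = [310, 480, 420, 250, 365, 455]  # invented vocabulary at 36 months
vocab_24m = [120, 260, 210, 90, 160, 240]   # covariate: vocabulary at 24 months

print(f"partial r = {partial_corr(other_utts, vocab_36m, vocab_24m):.2f}")
```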


2019
Author(s):
Franziska Knolle
Michael Schwartze
Erich Schröger
Sonja A. Kotz

Abstract: It has been suggested that speech production is accomplished by an internal forward model, reducing processing activity directed to self-produced speech in the auditory cortex. The current study uses an established N1-suppression paradigm comparing self- and externally initiated natural speech sounds to answer two questions: (1) Are forward predictions generated to process complex speech sounds, such as vowels, initiated via a button press? (2) Are prediction errors regarding self-initiated deviant vowels reflected in the corresponding ERP components? Results confirm an N1-suppression in response to self-initiated speech sounds. Furthermore, our results suggest that the predictions underlying the N1-suppression effect are specific, as self-initiated deviant vowels do not elicit an N1-suppression effect. Rather, self-initiated deviant vowels elicit an enhanced N2b and P3a compared to externally generated deviants, externally generated standards, or self-initiated standards, again confirming prediction specificity. The results show that prediction errors are salient in self-initiated auditory speech sounds, which may lead to more efficient error correction in speech production.
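
A minimal sketch of how N1 suppression can be quantified as the difference in mean ERP amplitude between externally initiated and self-initiated conditions within an N1 time window; the epochs here are random placeholders, and the sampling rate, window, and trial counts are assumptions, not the study's parameters:

```python
import numpy as np

fs = 500  # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch time axis (s), stimulus at t = 0

rng = np.random.default_rng(0)
# Placeholder single-channel epochs, shape (n_trials, n_samples).
self_initiated = rng.normal(0.0, 1.0, (60, t.size))
external = rng.normal(0.0, 1.0, (60, t.size))

# Mean amplitude in an assumed N1 window (80-120 ms post-stimulus).
n1_window = (t >= 0.08) & (t <= 0.12)
n1_self = self_initiated.mean(axis=0)[n1_window].mean()
n1_ext = external.mean(axis=0)[n1_window].mean()

# N1 is a negativity: suppression means the self-initiated N1 is less
# negative, so (external - self) is negative when suppression occurs.
print(f"N1 suppression index = {n1_ext - n1_self:.2f} (arbitrary units)")
```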


2020
Author(s):
Johannes Rennig
Michael S. Beauchamp

Abstract: Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and to the auditory vocalizations that constitute auditory speech. We hypothesized that these multisensory responses in pSTG/S underlie the observation that comprehension of noisy auditory speech is improved when it is accompanied by visual speech. To test this idea, we presented audiovisual sentences that contained either a clear auditory component or a noisy auditory component while measuring brain activity using BOLD fMRI. Participants reported the intelligibility of the speech on each trial with a button press. Perceptually, adding visual speech to noisy auditory sentences rendered them much more intelligible. Post hoc trial sorting was used to examine brain activations during noisy sentences that were more or less intelligible, focusing on multisensory speech regions in the pSTG/S identified with an independent visual speech localizer. Univariate analysis showed that less intelligible noisy audiovisual sentences evoked a weaker BOLD response, while more intelligible sentences evoked a stronger BOLD response that was indistinguishable from that to clear sentences. To better understand these differences, we conducted a multivariate representational similarity analysis. The pattern of response for intelligible noisy audiovisual sentences was more similar to the pattern for clear sentences, while the response pattern for unintelligible noisy sentences was less similar. These results show that, for both univariate and multivariate analyses, successful integration of visual and noisy auditory speech normalizes responses in pSTG/S, providing evidence that multisensory subregions of pSTG/S are responsible for the perceptual benefit of visual speech. Significance Statement: Enabling social interactions, including the production and perception of speech, is a key function of the human brain. Speech perception is a complex computational problem that the brain solves using both visual information from the talker’s facial movements and auditory information from the talker’s voice. Visual speech information is particularly important under noisy listening conditions, when auditory speech is difficult or impossible to understand alone. Regions of the human cortex in the posterior superior temporal lobe respond to the visual mouth movements that constitute visual speech and to the auditory vocalizations that constitute auditory speech. We show that the pattern of activity in this cortex reflects the successful multisensory integration of auditory and visual speech information in the service of perception.
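
A minimal sketch of the representational-similarity logic described above, correlating the voxel response pattern for each noisy-sentence condition with the clear-sentence pattern; the patterns are synthetic and the ROI size is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200  # assumed pSTG/S ROI size; patterns are synthetic

clear = rng.normal(0, 1, n_voxels)
noisy_intelligible = clear + rng.normal(0, 0.5, n_voxels)  # resembles clear
noisy_unintelligible = rng.normal(0, 1, n_voxels)          # unrelated pattern

def pattern_similarity(a, b):
    """Pearson correlation between two voxel response patterns."""
    return np.corrcoef(a, b)[0, 1]

print("intelligible vs. clear:  ", pattern_similarity(noisy_intelligible, clear))
print("unintelligible vs. clear:", pattern_similarity(noisy_unintelligible, clear))
```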

