Slow speech rate effects on stuttering preschoolers with disordered phonology

2015, Vol 29 (5), pp. 354-377. Author(s): Lisa R. LaSalle
2020, Vol 73 (10), pp. 1523-1536. Author(s): Hans Rutger Bosker, David Peeters, Judith Holler

Spoken words are highly variable, and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two “Go Fish”–like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (silent videos of a talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorisation responses. These findings contribute to a better understanding of how what we see influences what we hear.
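
As a rough, hypothetical illustration of the rate-normalisation effect described above (not the authors' model or stimuli), the sketch below classifies an ambiguous vowel duration relative to the syllable rate of the preceding context, so that the same duration is judged long /a:/ after a fast context and short /ɑ/ after a slow one. The threshold proportion and rate values are made up.

```python
# Toy illustration of rate normalisation (hypothetical values, not the authors' model):
# an ambiguous vowel duration is judged relative to the preceding context's speech rate.

def classify_vowel(vowel_duration_ms: float, context_syllable_rate: float) -> str:
    """Return 'a:' (long) or 'ɑ' (short) for a Dutch-like duration contrast.

    context_syllable_rate: syllables per second of the preceding sentence.
    The decision criterion scales with the average syllable duration of the context,
    so the same vowel sounds longer after fast speech and shorter after slow speech.
    """
    avg_syllable_ms = 1000.0 / context_syllable_rate
    criterion_ms = 0.55 * avg_syllable_ms   # arbitrary proportion, for illustration only
    return "a:" if vowel_duration_ms > criterion_ms else "ɑ"

ambiguous = 110.0  # ms, midway between typical short and long realisations (made up)
print(classify_vowel(ambiguous, context_syllable_rate=6.5))  # fast context -> 'a:'
print(classify_vowel(ambiguous, context_syllable_rate=3.5))  # slow context -> 'ɑ'
```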


1986, Vol 29 (4), pp. 462-470. Author(s): Linda E. Nicholas, Robert H. Brookshire

An experiment was carried out to assess the effects of slow and fast speech rate on comprehension of narrative discourse by aphasic, right-hemisphere-damaged, and non-brain-damaged adults. Aphasic subjects were divided into a high-comprehension group and a low-comprehension group based on their performance on the auditory comprehension subtests from the Boston Diagnostic Aphasia Examination. Subjects listened to 10 narrative stories. Half the stories were presented at a slow speech rate (110–130 wpm) and half were presented at a fast speech rate (190–210 wpm). After each story, subjects' comprehension and retention of stated and implied main ideas and details were tested. Brain-damaged subjects were tested twice, with at least 2 weeks intervening between sessions. Results demonstrated that salience had strong effects on comprehension for all groups of subjects: main ideas consistently were comprehended better than details. Directness affected subjects' comprehension of details, but not their comprehension of main ideas: stated details consistently were comprehended better than implied details. Non-brain-damaged subjects' comprehension was unaffected by differences in speech rate. Brain-damaged subjects comprehended details better in the slow-rate than in the fast-rate condition in the first test session, but the effects of rate on brain-damaged subjects' comprehension essentially disappeared by the second test. Furthermore, there were many instances in which individual subjects failed to demonstrate the rate effects exhibited by their group.


1989, Vol 32 (4), pp. 837-848. Author(s): Therese M. Brancewicz, Alan R. Reich

This study explored the effects of reduced speech rate on nasal/voice accelerometric measures and nasality ratings. Nasal/voice accelerometric measures were obtained from normal adults for various speech stimuli and speaking rates. Stimuli included three sentences (one obstruent-loaded, one semivowel-loaded, and one containing a single nasal), and /p/ syllable trains. Speakers read the stimuli at their normal rate, half their normal rate, and as slowly as possible. In addition, a computer program paced each speaker at rates of 1, 2, and 3 syllables per second. The nasal/voice accelerometric values revealed significant stimulus effects but no rate effects. The nasality ratings of experienced listeners, evaluated as a function of stimulus and speaking rate, were compared to the accelerometric measures. The nasality scale values demonstrated small, but statistically significant, stimulus and rate effects. However, the nasality percepts were poorly correlated with the nasal/voice accelerometric measures.
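
The nasal/voice accelerometric measures mentioned above are, in general terms, amplitude ratios between a nasal accelerometer (on the nose) and a voice accelerometer (on the throat). A minimal sketch of one such ratio, assuming two time-aligned NumPy arrays and not reproducing the study's exact metric, might look like this:

```python
import numpy as np

def nasal_voice_ratio_db(nasal: np.ndarray, voice: np.ndarray) -> float:
    """RMS amplitude of the nasal accelerometer relative to the voice signal, in dB.

    Assumes both signals are time-aligned segments of voiced speech recorded at the
    same sampling rate; this is a generic ratio, not the study's exact computation.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x.astype(float))))
    return 20.0 * np.log10(rms(nasal) / rms(voice))

# Example with synthetic signals: the 'nasal' channel is a scaled copy of 'voice'.
rng = np.random.default_rng(0)
voice = rng.standard_normal(16000)
nasal = 0.1 * voice
print(round(nasal_voice_ratio_db(nasal, voice), 1))  # -20.0 dB
```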


2020, Vol 46 (10), pp. 1148-1163. Author(s): Merel Maslowski, Antje S. Meyer, Hans Rutger Bosker

Author(s): Catarina Oliveira, Paula Martins, António Teixeira

2006, Vol 20 (2-3), pp. 141-148. Author(s): Paul A. Dagenais, Gidget R. Brown, Robert E. Moore

2008, Vol 49, pp. 1-21. Author(s): Claire Brutel-Vuilmet, Susanne Fuchs

This paper is a first attempt towards a better understanding of aerodynamic properties during speech production and their potential control. In recent years, studies of intraoral pressure in speech have been rather rare; most work has concerned airflow. However, intraoral pressure is a crucial factor in analysing the production of various sounds. In this paper, we focus on the development of intraoral pressure during the production of intervocalic stops. Two experimental methodologies are presented and compared: real speech data recorded from four native German speakers, and model data obtained from a mechanical replica that reproduces the main physical mechanisms occurring during phonation. Both methods are applied to a study of the influence of speech rate on aerodynamic properties.
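
To make the role of intraoral pressure concrete, here is a deliberately oversimplified numerical sketch (not the paper's mechanical replica or its model): air flows through the glottis into a rigid, fixed-volume oral cavity sealed by a stop closure, and pressure rises according to the isothermal ideal-gas relation dP/dt ≈ P_atm·U/V. All parameter values are rough assumptions chosen only for illustration.

```python
# Deliberately oversimplified sketch (not the paper's model): pressure build-up behind
# a stop closure, treating the supraglottal cavity as a rigid, fixed volume filled
# through the glottis. All parameter values below are rough, illustrative assumptions.

import math

P_ATM = 101_325.0   # Pa, ambient (absolute) pressure
RHO   = 1.2         # kg/m^3, air density
P_SUB = 800.0       # Pa above ambient, assumed subglottal pressure (~8 cm H2O)
A_GLOTTIS = 5e-6    # m^2, assumed mean glottal opening area
V_CAVITY  = 1e-4    # m^3 (100 cm^3), assumed supraglottal cavity volume

def simulate_closure(duration_s: float = 0.06, dt: float = 1e-4) -> list[float]:
    """Return intraoral pressure (Pa above ambient) sampled over a stop closure."""
    p_oral, trace = 0.0, []
    for _ in range(int(duration_s / dt)):
        # Orifice flow through the glottis, driven by the trans-glottal pressure drop.
        dp = max(P_SUB - p_oral, 0.0)
        flow = A_GLOTTIS * math.sqrt(2.0 * dp / RHO)      # m^3/s
        # Isothermal ideal gas in a rigid cavity: dP/dt ≈ P_atm * U / V.
        p_oral += P_ATM * flow / V_CAVITY * dt
        trace.append(p_oral)
    return trace

print(f"peak intraoral pressure ≈ {simulate_closure()[-1]:.0f} Pa above ambient")
# With rigid walls the pressure approaches the assumed subglottal pressure within
# milliseconds; real rise times are slower because cavity walls yield and the
# cavity keeps expanding during the closure.
```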


2020, Vol 63 (5), pp. 1352-1360. Author(s): Camille J. Wynn, Stephanie A. Borrie

Purpose Conversational entrainment describes the tendency for individuals to alter their communicative behaviors to more closely align with those of their conversation partner. This communication phenomenon has been widely studied, and thus, the methodologies used to examine it are diverse. Here, we summarize key differences in research design and present a test case to examine the effect of methodology on entrainment outcomes. Method Sixty neurotypical adults were randomly assigned to experimental groups formed by a 2 × 2 factorial combination of two independent variables: stimuli organization (blocked vs. random presentation) and stimuli modality (auditory-only vs. audiovisual stimuli). Individuals participated in a quasiconversational design in which the speech of a virtual interlocutor was manipulated to produce fast and slow speech rate conditions. Results There was a significant effect of stimuli organization on entrainment outcomes. Individuals in the blocked, but not the random, groups altered their speech rate to align with the speech rate of the virtual interlocutor. There was no effect of stimuli modality and no interaction between modality and organization on entrainment outcomes. Conclusion Findings highlight the importance of methodological decisions on entrainment outcomes. This underscores the need for more comprehensive research regarding entrainment methodology.
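
As a hypothetical illustration of how speech-rate entrainment can be quantified in a design like this (not the authors' statistical analysis), one simple outcome measure is whether a participant's syllable rate shifts in the direction of the virtual interlocutor's rate across the fast and slow conditions:

```python
# Hypothetical illustration of a simple speech-rate entrainment measure
# (not the authors' analysis). Rates are in syllables per second.

from statistics import mean

def entrainment_shift(rates_after_fast: list[float], rates_after_slow: list[float]) -> float:
    """Positive values indicate the participant spoke faster after the fast-rate
    interlocutor than after the slow-rate interlocutor, i.e. rate alignment."""
    return mean(rates_after_fast) - mean(rates_after_slow)

# Made-up data for one participant in a blocked condition.
fast_block = [4.9, 5.1, 5.0, 5.2]
slow_block = [4.3, 4.4, 4.2, 4.5]
print(f"entrainment shift: {entrainment_shift(fast_block, slow_block):+.2f} syll/s")
```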


2018, Vol 26 (4), pp. 1551. Author(s): Philippe Martin

Abstract: Whether we read aloud or silently, we segment speech not into words, but into accent phrases, i.e. sequences containing only one stressed syllable (excluding emphatic stress). In lexically stressed languages such as Italian or English, the location of stress in a noun, an adverb, a verb or an adjective (content words) is defined in the lexicon, and accent phrases include one single content word together with its associated grammatical words. In French, a language deprived of lexical stress, accent phrases are defined by the time it takes to read or pronounce them. Therefore, actual phrasing, i.e. the segmentation into accent phrases, depends strongly on the speech rate chosen by the speaker or the reader, whether in oral or silent reading mode. With a slow speech rate, all content words form accent phrases whose final syllables are stressed, whereas a fast speech rate could merge up to 10 or 11 syllables into a single accent phrase containing more than one content word. Based on this observation, and on other properties of stressed syllables, a computer algorithm for automatic phrasing, operating in a top-down fashion, is presented and applied to two examples of read and spontaneous speech.
Keywords: accent phrase; French; phrasing; stress location; boundary detection.
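
The following is a minimal sketch of the rate-dependent phrasing idea described in the abstract, assuming a toy French sentence with hand-assigned syllable counts. It illustrates the principle (content words project accent phrases, which merge under a rate-dependent syllable budget) rather than the author's actual top-down algorithm:

```python
# Illustration of rate-dependent accent phrasing (not the author's algorithm):
# each content word, with the grammatical words preceding it, forms a candidate
# accent phrase; adjacent candidates merge left-to-right as long as they fit a
# syllable budget that grows with speech rate (slow rate -> one content word per
# phrase, fast rate -> up to ~10-11 syllables per phrase).

def candidate_phrases(words):
    """words: list of (word, syllable_count, is_content_word) tuples."""
    phrases, buffer = [], []
    for word, syll, is_content in words:
        buffer.append((word, syll))
        if is_content:
            phrases.append(buffer)
            buffer = []
    if buffer:  # trailing grammatical words join the last phrase
        (phrases[-1].extend(buffer) if phrases else phrases.append(buffer))
    return phrases

def phrase(words, syllable_budget):
    """Merge adjacent candidate phrases while they fit the syllable budget."""
    merged = []
    for cand in candidate_phrases(words):
        syll = sum(s for _, s in cand)
        if merged and merged[-1][1] + syll <= syllable_budget:
            merged[-1] = (merged[-1][0] + cand, merged[-1][1] + syll)
        else:
            merged.append((cand, syll))
    return [" ".join(w for w, _ in p) for p, _ in merged]

# "le petit garçon a mangé la pomme" with made-up syllable counts.
sentence = [("le", 1, False), ("petit", 2, True), ("garçon", 2, True),
            ("a", 1, False), ("mangé", 2, True), ("la", 1, False), ("pomme", 1, True)]
print(phrase(sentence, syllable_budget=3))   # slow rate: short accent phrases
print(phrase(sentence, syllable_budget=11))  # fast rate: one long accent phrase
```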

