Speech Acoustics
Recently Published Documents


TOTAL DOCUMENTS: 86 (last five years: 17)

H-INDEX: 13 (last five years: 1)

Author(s):  
Stephanie A. Borrie ◽  
Camille J. Wynn ◽  
Visar Berisha ◽  
Tyson S. Barrett

Purpose: We proposed and tested a causal instantiation of the World Health Organization's International Classification of Functioning, Disability and Health (ICF) framework, linking acoustics, intelligibility, and communicative participation in the context of dysarthria. Method: Speech samples and communicative participation scores were collected from individuals with dysarthria ( n = 32). Speech was analyzed for two acoustic metrics (i.e., articulatory precision and speech rate), and an objective measure of intelligibility was generated from listener transcripts. Mediation analysis was used to evaluate pathways of effect between acoustics, intelligibility, and communicative participation. Results: We observed a strong relationship between articulatory precision and intelligibility and a moderate relationship between intelligibility and communicative participation. Collectively, data supported a significant relationship between articulatory precision and communicative participation, which was almost entirely mediated through intelligibility. These relationships were not significant when speech rate was specified as the acoustic variable of interest. Conclusion: The statistical corroboration of our causal instantiation of the ICF framework with articulatory acoustics affords important support toward the development of a comprehensive causal framework to understand and, ultimately, address restricted communicative participation in dysarthria.
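The mediation logic described in this abstract (total effect of articulatory precision on participation decomposed into a direct path and an indirect path through intelligibility) can be illustrated with a toy sketch. This is not the authors' analysis: the data below are simulated, the variable names are hypothetical, and the decomposition is the classic product-of-coefficients approach fit with ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # matches the study's sample size, data are simulated

# Hypothetical data: precision drives intelligibility, which drives
# participation (i.e., a fully mediated causal chain).
precision = rng.normal(size=n)
intelligibility = 0.8 * precision + rng.normal(scale=0.3, size=n)
participation = 0.6 * intelligibility + rng.normal(scale=0.3, size=n)

def ols_coefs(X, y):
    """Least-squares coefficients of y on X (intercept prepended)."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

a = ols_coefs(precision, intelligibility)[1]        # path a: X -> M
_, b, c_prime = ols_coefs(
    np.column_stack([intelligibility, precision]), participation
)                                                   # path b and direct path c'
c = ols_coefs(precision, participation)[1]          # total effect c

indirect = a * b  # effect mediated through intelligibility
print(f"total={c:.2f}  direct={c_prime:.2f}  indirect={indirect:.2f}")
```

For linear OLS models the decomposition is exact: the total effect equals the direct effect plus the indirect (mediated) effect. "Almost entirely mediated," as reported in the abstract, corresponds to the direct path shrinking toward zero once the mediator is included.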


2021 ◽  
pp. JN-RM-0812-21
Author(s):  
Marlies Gillis ◽  
Jonas Vanthornhout ◽  
Jonathan Z. Simon ◽  
Tom Francart ◽  
Christian Brodbeck

2021 ◽  
Vol 150 (4) ◽  
pp. A270-A271
Author(s):  
Kirsten Dixon ◽  
Christopher Dromey ◽  
Tyson Harmon ◽  
Tracianne B. Neilsen

Author(s):  
Jun Ma ◽  
Hongzhi Yu ◽  
Yan Xu ◽  
Kaiying Deng

Following the relevant specifications, this article segments, annotates, and extracts features from recorded Salar speech signals and establishes an acoustic parameter database for the Salar language. The vowels of Salar are then analyzed using this database, yielding vowel charts for word-initial, word-medial, and word-final positions, as well as an overall average chart. Across word positions, the charts show that the vowel space forms an obtuse triangle whose vertices are occupied by the high vowels [i] and [o] and the low vowel [a]. Of the three sides, [i] to [o] is the longest, [i] to [a] the second longest, and [a] to [o] the shortest; the [a] to [o] and [a] to [i] sides are asymmetric. Based on the vowel charts, the vowels were discretized and plotted with the second formant (F2) frequency on the X axis and the first formant (F1) frequency on the Y axis to delimit the region occupied by each vowel, forming the vowel pattern. These results provide basic data and parameters for future work in modern phonetics on Salar, including a speech database, speech recognition, and speech synthesis, and supply baseline acoustic parameters for research on under-documented minority languages within the national language project.
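The triangle geometry described above can be sketched numerically: place each vowel in the F2 (x) / F1 (y) plane and compare the side lengths. The formant values below are generic textbook-style illustrations, not the measured Salar data, so only the qualitative ordering of the sides is meaningful.

```python
import math

# Illustrative formant values in Hz -- NOT the measured Salar values.
formants = {
    "i": {"F1": 300, "F2": 2300},  # high front vowel
    "a": {"F1": 850, "F2": 1300},  # low vowel
    "o": {"F1": 450, "F2": 850},   # high back vowel
}

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in the F2/F1 plane."""
    dx = formants[v1]["F2"] - formants[v2]["F2"]
    dy = formants[v1]["F1"] - formants[v2]["F1"]
    return math.hypot(dx, dy)

sides = {pair: vowel_distance(*pair)
         for pair in [("i", "o"), ("i", "a"), ("a", "o")]}
for (v1, v2), d in sorted(sides.items(), key=lambda kv: -kv[1]):
    print(f"[{v1}] to [{v2}]: {d:.0f} Hz")
```

With these illustrative values the ordering matches the abstract: [i] to [o] is the longest side, [i] to [a] the second longest, and [a] to [o] the shortest.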


2021 ◽  
Vol 22 (sup1) ◽  
pp. 14-21
Author(s):  
Gabriela M. Stegmann ◽  
Shira Hahn ◽  
Cayla J. Duncan ◽  
Seward B. Rutkove ◽  
Julie Liss ◽  
...  

2021 ◽  
Author(s):  
Victoria S. McKenna ◽  
Courtney L. Kendall ◽  
Tulsi H. Patel ◽  
Rebecca J. Howell ◽  
Renee L. Gustin

2021 ◽  
Author(s):  
Marlies Gillis ◽  
Jonas Vanthornhout ◽  
Jonathan Z Simon ◽  
Tom Francart ◽  
Christian Brodbeck

When listening to speech, brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties, so the response attributed to linguistic representations might instead reflect unaccounted-for acoustic processing rather than language processing. Here we tested several recently proposed linguistic representations, using audiobook speech, while controlling for acoustic and other linguistic representations. Indeed, some of these linguistic representations were no longer significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were significantly tracked over and beyond acoustic properties. Additionally, these linguistic representations were tracked similarly across different stories spoken by different readers. Together, this suggests that these representations characterize processing of the linguistic content of speech and might allow a behavior-free evaluation of speech intelligibility.
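The "over and beyond" logic in this abstract (a linguistic feature counts as tracked only if it improves the fit of a model that already contains the acoustic features) can be illustrated with a toy regression. This is a simplification of the authors' actual encoding-model analysis: the data are simulated, and the feature names are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # simulated time samples

# Hypothetical regressors: an acoustic feature (e.g. the envelope) and
# a linguistic feature (e.g. word surprisal) partly correlated with it.
acoustic = rng.normal(size=n)
linguistic = 0.5 * acoustic + rng.normal(size=n)
response = 1.0 * acoustic + 0.4 * linguistic + rng.normal(size=n)

def r_squared(X, y):
    """Explained variance of an OLS fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

r2_acoustic = r_squared(acoustic, response)
r2_full = r_squared(np.column_stack([acoustic, linguistic]), response)
print(f"acoustic-only R2={r2_acoustic:.3f}, +linguistic R2={r2_full:.3f}")
```

Because the linguistic feature is correlated with the acoustic one, the acoustic-only model already absorbs part of its contribution; the evidence for genuine linguistic tracking is the residual improvement of the full model over the acoustic baseline.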


Author(s):  
Tyson G. Harmon ◽  
Christopher Dromey ◽  
Brenna Nelson ◽  
Kacy Chapman

Purpose: The aim of this study was to investigate how types of background noise that differ in linguistic content affect speech acoustics, speech fluency, and language production when young adult speakers perform a monologue discourse task. Method: Forty young adults produced monologues in response to open-ended questions in a silent baseline and five background noise conditions (debate, movie dialogue, contemporary music, classical music, and pink noise). Measures of speech acoustics (intensity and frequency), speech fluency (speech rate, pausing, and disfluencies), and language production (lexical, morphosyntactic, and macrolinguistic structure) were analyzed and compared across conditions. Participants also reported which conditions they perceived as more distracting. Results: All noise conditions produced some change in spoken language relative to the silent baseline. Effects on speech acoustics were consistent with the Lombard effect (e.g., increased intensity and fundamental frequency). Effects on speech fluency included decreased pausing and increased disfluencies. Several background noise conditions also appeared to interfere with language production. Conclusions: Findings suggest that young adults show both compensatory and interference effects when speaking in noise. Some adjustments may facilitate intelligibility when noise is present and help both speaker and listener maintain attention on the production. Other adjustments provide evidence that background noise eliciting linguistic interference can degrade spoken language even in healthy young adults, because of increased cognitive demands.

