Teaching speech acoustics with adaptable Praat labs

2021 ◽  
Vol 149 (4) ◽  
pp. A147-A147
Author(s):  
Kathleen A. Siren
2019 ◽  
Vol 62 (7) ◽  
pp. 2099-2117 ◽  
Author(s):  
Jason A. Whitfield ◽  
Zoe Kriegel ◽  
Adam M. Fullenkamp ◽  
Daryush D. Mehta

Purpose Prior investigations suggest that simultaneous performance of more than 1 motor-oriented task may exacerbate speech motor deficits in individuals with Parkinson disease (PD). The purpose of the current investigation was to examine the extent to which performing a low-demand manual task affected connected speech in individuals with and without PD. Method Individuals with PD and neurologically healthy controls performed speech tasks (reading and extemporaneous speech tasks) and an oscillatory manual task (a counterclockwise circle-drawing task) in isolation (single-task condition) and concurrently (dual-task condition). Results Relative to speech task performance, no changes in speech acoustics were observed for either group when the low-demand motor task was performed with the concurrent reading tasks. Speakers with PD exhibited a significant decrease in pause duration between the single-task (speech only) and dual-task conditions for the extemporaneous speech task, whereas control participants did not exhibit changes in any speech production variable between the single- and dual-task conditions. Conclusions Overall, there was little to no change in speech production when a low-demand oscillatory motor task was performed with concurrent reading. For the extemporaneous task, however, individuals with PD exhibited significant changes when the speech and manual tasks were performed concurrently, a pattern that was not observed for control speakers. Supplemental Material https://doi.org/10.23641/asha.8637008
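The pause-duration measure reported above can be approximated with a simple energy-threshold silence detector. This is an illustrative sketch only; the amplitude threshold, sample rate, and minimum-pause duration below are assumptions, not the authors' analysis parameters:

```python
def pause_durations(samples, sr, amp_thresh=0.01, min_pause_s=0.15):
    """Return durations (in seconds) of low-amplitude runs longer than
    min_pause_s, treating them as pauses in connected speech."""
    pauses, run = [], 0
    for x in samples:
        if abs(x) < amp_thresh:
            run += 1          # extend the current silent run
        else:
            if run / sr >= min_pause_s:
                pauses.append(run / sr)
            run = 0
    if run / sr >= min_pause_s:  # flush a trailing silent run
        pauses.append(run / sr)
    return pauses

# Example: 0.5 s of speech, 0.3 s of silence, 0.2 s of speech at 1 kHz.
signal = [0.5] * 500 + [0.0] * 300 + [0.5] * 200
print(pause_durations(signal, sr=1000))  # [0.3]
```

Comparing the summed or mean pause duration between single- and dual-task recordings would mirror the contrast the abstract describes.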


2021 ◽  
Vol 22 (sup1) ◽  
pp. 14-21
Author(s):  
Gabriela M. Stegmann ◽  
Shira Hahn ◽  
Cayla J. Duncan ◽  
Seward B. Rutkove ◽  
Julie Liss ◽  
...  

2009 ◽  
Author(s):  
Prasanta Kumar Ghosh ◽  
Shrikanth S. Narayanan ◽  
Pierre Divenyi ◽  
Louis Goldstein ◽  
Elliot Saltzman

Author(s):  
Jun Ma ◽  
Hongzhi Yu ◽  
Yan Xu ◽  
Kaiying Deng

Following the relevant specifications, this article segments, annotates, and extracts the acquired speech signals of the Salar language and establishes a Salar speech acoustic parameter database. The Salar vowels are then analyzed using this database, yielding vowel bitmaps of the average values at the word-initial, word-medial, and word-final positions, along with an overall average. The bitmaps show that, across word positions, the vowels form an obtuse triangle: the high vowels [i] and [o] and the low vowel [a] occupy the three vertices. Of the three sides, [i] to [o] is the longest, [i] to [a] the second longest, and [a] to [o] the shortest; the sides from [a] to [o] and from [a] to [i] are asymmetric. Combining the vowel bitmaps, the vowels were discretized and plotted with the second formant (F2) frequency on the X axis and the first formant (F1) frequency on the Y axis to delineate the region each vowel occupies, forming the vowel pattern. These studies provide basic data and parameters for future work in modern phonetics, such as a Salar speech database, speech recognition, and speech synthesis. They also provide basic speech-acoustic parameters for acoustic research on rare minority languages within the national language project.
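The reported ordering of the vowel-triangle sides can be sketched as Euclidean distances in the F1/F2 plane. The formant values below are hypothetical placeholders chosen only to illustrate the geometry, not the Salar measurements:

```python
import math

# Hypothetical average (F1, F2) values in Hz -- NOT the Salar data.
formants = {
    "i": (300, 2300),  # high front vowel: low F1, high F2
    "a": (800, 1300),  # low vowel: high F1
    "o": (450, 800),   # high back vowel: low-ish F1, low F2
}

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in the F1/F2 plane."""
    (f1a, f2a), (f1b, f2b) = formants[v1], formants[v2]
    return math.hypot(f1a - f1b, f2a - f2b)

for pair in [("i", "o"), ("i", "a"), ("a", "o")]:
    print(pair, round(vowel_distance(*pair), 1))
```

With these placeholder values the sides come out in the same order the abstract reports: [i]-[o] longest, [i]-[a] second, [a]-[o] shortest.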


2021 ◽  
Author(s):  
Marlies Gillis ◽  
Jonas Vanthornhout ◽  
Jonathan Z Simon ◽  
Tom Francart ◽  
Christian Brodbeck

When listening to speech, brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties, so the response attributed to them might reflect unaccounted-for acoustic processing rather than language processing. Here we tested several recently proposed linguistic representations, using audiobook speech, while controlling for acoustic and other linguistic representations. Indeed, some of these linguistic representations were not significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were significantly tracked over and beyond acoustic properties. Additionally, these linguistic representations were tracked similarly across different stories spoken by different readers. Together, this suggests that these representations characterize processing of the linguistic content of speech and might allow a behaviour-free evaluation of speech intelligibility.
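Word frequency enters such analyses as unigram surprisal, -log2 p(word). A minimal sketch of that quantity is below; note that the phoneme surprisal and cohort entropy tested in the study come from context-sensitive models, which this unigram estimate does not implement:

```python
import math
from collections import Counter

def word_surprisal(words):
    """Unigram surprisal -log2 p(w) per word type, with probabilities
    estimated as relative frequencies in the given word list."""
    counts = Counter(words)
    total = len(words)
    return {w: -math.log2(c / total) for w, c in counts.items()}

# Frequent words carry low surprisal; rare words carry high surprisal.
s = word_surprisal(["the", "the", "cat", "sat"])
print(s["the"], s["cat"])  # 1.0 2.0
```

In the tracking analyses, a value like this is placed as an impulse at each word onset and regressed against the neural response alongside the acoustic predictors.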


2021 ◽  
Author(s):  
Victoria S. McKenna ◽  
Courtney L. Kendall ◽  
Tulsi H. Patel ◽  
Rebecca J. Howell ◽  
Renee L. Gustin

Author(s):  
Stephanie A. Borrie ◽  
Camille J. Wynn ◽  
Visar Berisha ◽  
Tyson S. Barrett

Purpose: We proposed and tested a causal instantiation of the World Health Organization's International Classification of Functioning, Disability and Health (ICF) framework, linking acoustics, intelligibility, and communicative participation in the context of dysarthria. Method: Speech samples and communicative participation scores were collected from individuals with dysarthria (n = 32). Speech was analyzed for two acoustic metrics (i.e., articulatory precision and speech rate), and an objective measure of intelligibility was generated from listener transcripts. Mediation analysis was used to evaluate pathways of effect between acoustics, intelligibility, and communicative participation. Results: We observed a strong relationship between articulatory precision and intelligibility and a moderate relationship between intelligibility and communicative participation. Collectively, data supported a significant relationship between articulatory precision and communicative participation, which was almost entirely mediated through intelligibility. These relationships were not significant when speech rate was specified as the acoustic variable of interest. Conclusion: The statistical corroboration of our causal instantiation of the ICF framework with articulatory acoustics affords important support toward the development of a comprehensive causal framework to understand and, ultimately, address restricted communicative participation in dysarthria.
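The product-of-coefficients mediation logic (path a: acoustics to intelligibility; path b: intelligibility to participation, controlling for the acoustic variable; direct path c') can be sketched with ordinary least squares. This is a generic illustration of the method, not the authors' statistical pipeline:

```python
def _center(v):
    mu = sum(v) / len(v)
    return [t - mu for t in v]

def slope(x, y):
    """Simple-regression slope of y on x."""
    xc, yc = _center(x), _center(y)
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

def mediation(x, m, y):
    """Return (a, b, c_prime, indirect) for the model x -> m -> y.

    a: effect of x on mediator m; b: effect of m on y controlling x;
    c_prime: direct effect of x on y; indirect = a * b."""
    a = slope(x, m)
    xc, mc, yc = _center(x), _center(m), _center(y)
    sxx = sum(t * t for t in xc)
    smm = sum(t * t for t in mc)
    sxm = sum(p * q for p, q in zip(xc, mc))
    sxy = sum(p * q for p, q in zip(xc, yc))
    smy = sum(p * q for p, q in zip(mc, yc))
    # Solve the 2x2 normal equations for y ~ m + x.
    det = smm * sxx - sxm * sxm
    b = (smy * sxx - sxy * sxm) / det
    c_prime = (sxy * smm - smy * sxm) / det
    return a, b, c_prime, a * b
```

For OLS, the total effect decomposes exactly as slope(x, y) = c' + a*b; "almost entirely mediated" corresponds to c' being near zero while a*b carries the effect.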
