syllable repetition
Recently Published Documents

TOTAL DOCUMENTS: 81 (five years: 24)
H-INDEX: 20 (five years: 3)

Author(s):  
Ray D. Kent ◽  
Yunjung Kim ◽  
Li-mei Chen

Purpose: The aim of this study was to conduct a scoping review of research on oral and laryngeal diadochokinesis (DDK) in children and adults, either typically developing/developed or with a clinical diagnosis. Method: Searches were conducted with PubMed/MEDLINE, Google Scholar, CINAHL, and legacy sources in retrieved articles. Search terms included the following: DDK, alternating motion rate, maximum repetition rate, sequential motion rate, and syllable repetition rate. Results: Three hundred sixty articles were retrieved and included in the review. Data source tables for children and adults list the number and ages of study participants, DDK task, and language(s) spoken. Cross-sectional data for typically developing children and typically developed adults are compiled for the monosyllables /pʌ/, /tʌ/, and /kʌ/; the trisyllable /pʌtʌkʌ/; and laryngeal DDK. In addition, DDK results are summarized for 26 disorders or conditions. Discussion: A growing number of multidisciplinary reports on DDK affirm its role in clinical practice and research across the world. Atypical DDK is not a well-defined singular entity but rather a label for a collection of disturbances associated with diverse etiologies, including motoric, structural, sensory, and cognitive. The clinical value of DDK can be optimized by consideration of task parameters, analysis method, and population of interest.


Zootaxa ◽  
2021 ◽  
Vol 5005 (2) ◽  
pp. 101-144
Author(s):  
KLAUS-GERHARD HELLER ◽  
ED BAKER ◽  
SIGFRID INGRISCH ◽  
OLGA KORSUNOVSKAYA ◽  
CHUN-XIANG LIU ◽  
...  

Bush-crickets (or katydids) of the genus Mecopoda are relatively large insects whose sounds have been well known for centuries. Bioacoustic studies in India and China revealed a surprisingly large diversity of sound patterns. We extend these studies into the tropics of South East Asia using integrative taxonomy, combining song analysis, morphology of sound-producing organs and male genitalia, and chromosomes, to get a better understanding of the phylogeny and evolution of this widespread group. Besides the closely related genus Eumecopoda, the genus Mecopoda contains some isolated species and a large group of species which we assign to the Mecopoda elongata group. Some species of this group have broad tegmina and stridulatory files with different tooth spacing patterns and produce continuous, often relatively complicated, trill-like songs. The species of another subgroup, which have narrower wings, all have similar files. Their songs consist of echemes (groups of syllables) which differ in syllable number and syllable repetition rate and also in echeme repetition rate. Our results show that South East Asia harbours a large and certainly not yet fully explored number of Mecopoda species, which are most easily and clearly identified by song. Based on the data, five new forms are described: Mecopoda mahindai Heller sp. nov., Mecopoda paucidens Ingrisch, Su & Heller sp. nov., Mecopoda sismondoi Heller sp. nov., Mecopoda niponensis vietnamica Heller & Korsunovskaya subsp. nov., Eumecopoda cyrtoscelis zhantievi Heller subsp. nov. In addition, some taxonomic changes are proposed: Eumecopoda Hebard, 1922 stat. rev., Paramecopoda Gorochov, 2020, syn. nov. of Eumecopoda Hebard, 1922, Mecopoda javana (Johansson, 1763) stat. nov. (neotype selected) with M. javana minahasa Gorochov, 2020 stat. nov., M. javana darevskyi Gorochov, 2020 stat. nov., M. javana buru Gorochov, 2020 stat. nov., Mecopoda macassariensis (Haan, 1843) stat. rev., Mecopoda ampla malayensis Gorochov, 2020 syn. nov., Mecopoda ampla javaensis Gorochov, 2020 syn. nov., and Mecopoda fallax aequatorialis Gorochov, 2020 syn. nov. (the last three are all synonyms of Mecopoda himalaya Liu, 2020), and Mecopoda yunnana Liu, 2020 stat. nov.


2021 ◽  
pp. 105566562110254
Author(s):  
Firas Alfwaress ◽  
Ann W. Kummer ◽  
Barbara Weinrich

Objective: To establish nasalance score norms for adolescent and young adult native speakers of American English and also determine age-group and gender differences using the Simplified Nasometric Assessment Procedures (SNAP) Test-R and Nasometer II. Design: Prospective study using a randomly selected sample of participants. Setting: Greater Cincinnati area and Miami University of Ohio. Participants: Participants had a history of normal speech and language development and no history of speech therapy. Participants in the adolescent group were recruited from schools in West Clermont and Hamilton County, whereas the young adults were recruited from Miami University of Ohio. The participants of both groups were residents of Cincinnati, Ohio, or Oxford, Ohio, and spoke the Midland American English dialect. Outcome Measures: Mean nasalance scores for the SNAP Test-R. Results: Normative nasalance scores were obtained for the Syllable Repetition/Prolonged Sounds, Picture-Cued, and Paragraph subtests. Results showed statistically significant nasalance score differences between adolescents and young adults in the Syllable Repetition, Picture-Cued, and Paragraph subtests, and between males and females in the Syllable Repetition and the Sound-Prolonged subtests. A significant univariate effect was found for the syllables and sentences containing nasal consonants and high vowels compared to syllables and sentences containing oral consonants and low vowels. Across all the SNAP Test-R subtests, the females' nasalance scores were higher than the males'. A significant univariate effect was also found across nasal syllables and high vowels, such that the females' nasalance scores were higher than the males'. Tables of normative data are provided that may be useful for clinical purposes. Conclusion: Norms obtained demonstrated nasalance score differences according to age and gender, particularly in the Syllable Repetition/Prolonged Sound subtest. These differences were discussed in light of potential reasons for their existence and implications for understanding velopharyngeal function. In addition, nasalance scores are affected by vowel type and the place of articulation of the consonant. These facts should be considered when nasometry is used clinically and for research purposes.
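The nasalance score reported by the Nasometer is defined as nasal acoustic energy as a percentage of total (nasal plus oral) acoustic energy. A minimal sketch of that definition, assuming two time-aligned microphone channels; the actual Nasometer II bandpass-filters each channel and averages frame by frame, which this omits:

```python
import numpy as np

def nasalance_percent(nasal, oral):
    """Nasalance score: nasal acoustic energy as a percentage of total
    (nasal + oral) acoustic energy, the quantity the Nasometer reports."""
    n = np.sqrt(np.mean(np.asarray(nasal, dtype=float) ** 2))  # nasal-channel RMS
    o = np.sqrt(np.mean(np.asarray(oral, dtype=float) ** 2))   # oral-channel RMS
    return 100.0 * n / (n + o) if (n + o) > 0 else 0.0
```

Under this definition, equal energy in the two channels yields a score of 50, and a purely oral utterance approaches 0, which is why nasal-loaded stimuli produce the higher norms reported above.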


2021 ◽  
Vol 12 ◽  
Author(s):  
Kaila L. Stipancic ◽  
Yana Yunusova ◽  
Thomas F. Campbell ◽  
Jun Wang ◽  
James D. Berry ◽  
...  

Objective: Understanding clinical variants of motor neuron diseases such as amyotrophic lateral sclerosis (ALS) is critical for discovering disease mechanisms and across-patient differences in therapeutic response. The current work describes two clinical subgroups of patients with ALS that, despite similar levels of bulbar motor involvement, have disparate clinical and functional speech presentations. Methods: Participants included 47 healthy control speakers and 126 speakers with ALS. Participants with ALS were stratified into three clinical subgroups (i.e., bulbar asymptomatic, bulbar symptomatic high speech function, and bulbar symptomatic low speech function) based on clinical metrics of bulbar motor impairment. Acoustic and lip kinematic analytics were derived from each participant's recordings of reading samples and a rapid syllable repetition task. Group differences were reported on clinical scales of ALS and bulbar motor severity and on multiple speech measures. Results: The high and low speech-function subgroups were found to be similar on many of the dependent measures explored. However, these two groups were differentiated on the basis of an acoustic measure used as a proxy for tongue movement. Conclusion: This study supports the hypothesis that high and low speech-function subgroups do not differ solely in overall severity, but rather, constitute two distinct bulbar motor phenotypes. The findings suggest that the low speech-function group exhibited more global involvement of the bulbar muscles than the high speech-function group that had relatively intact lingual function. This work has implications for clinical measures used to grade bulbar motor involvement, suggesting that a single bulbar measure is inadequate for capturing differences among phenotypes.


Stuttering is an involuntary disturbance in the fluent flow of speech characterized by disfluencies such as stop gaps and sound or syllable repetitions or prolongations. Stop gaps account for a high proportion of disfluencies in stuttered speech. This work presents automatic removal of stop gaps using a combination of spectral parameters: spectral energy, spectral centroid, spectral entropy, and zero-crossing rate. A threshold-based method for detecting and removing stop gaps is discussed in this paper.
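The pipeline described above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: it computes the four named spectral parameters per frame but, for brevity, bases the keep/drop decision on the energy threshold alone, whereas the paper combines all four:

```python
import numpy as np

def spectral_features(frames, sr):
    """Per-frame spectral energy, centroid, entropy, and zero-crossing
    rate -- the four parameters named in the abstract."""
    n = frames.shape[1]
    energy = np.mean(frames ** 2, axis=1)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    p = spec / (spec.sum(axis=1, keepdims=True) + 1e-12)
    entropy = -(p * np.log2(p + 1e-12)).sum(axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, centroid, entropy, zcr

def remove_stop_gaps(x, sr, frame_ms=20, rel_thresh=0.01):
    """Split the signal into non-overlapping frames and drop those whose
    energy falls below a relative threshold -- a stand-in for the
    combined four-parameter decision described in the text."""
    flen = int(sr * frame_ms / 1000)
    frames = x[: len(x) // flen * flen].reshape(-1, flen)
    energy, centroid, entropy, zcr = spectral_features(frames, sr)
    keep = energy > rel_thresh * energy.max()
    return frames[keep].ravel()
```

Concatenating only the frames above threshold returns the utterance with silent stop gaps excised; the remaining features would let a fuller implementation separate true stop gaps from low-energy fricatives.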


2021 ◽  
Author(s):  
Alan Bush ◽  
Anna Chrabaszcz ◽  
Victoria Peterson ◽  
Varun Saravanan ◽  
Christina Dastolfo-Hromack ◽  
...  

There is great interest in identifying the neurophysiological underpinnings of speech production. Deep brain stimulation (DBS) surgery is unique in that it allows intracranial recordings from both cortical and subcortical regions in patients who are awake and speaking. The quality of these recordings, however, may be affected to various degrees by mechanical forces resulting from speech itself. Here we describe the presence of speech-induced artifacts in local-field potential (LFP) recordings obtained from mapping electrodes, DBS leads, and cortical electrodes. In addition to expected physiological increases in high gamma (60-200 Hz) activity during speech production, time-frequency analysis in many channels revealed a narrowband gamma component that exhibited a pattern similar to that observed in the speech audio spectrogram. This component was present to different degrees in multiple types of neural recordings. We show that this component tracks the fundamental frequency of the participant’s voice, correlates with the power spectrum of speech and has coherence with the produced speech audio. A vibration sensor attached to the stereotactic frame recorded speech-induced vibrations with the same pattern observed in the LFPs. No corresponding component was identified in any neural channel during the listening epoch of a syllable repetition task. These observations demonstrate how speech-induced vibrations can create artifacts in the primary frequency band of interest. Identifying and accounting for these artifacts is crucial for establishing the validity and reproducibility of speech-related data obtained from intracranial recordings during DBS surgery.
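One way to screen a channel for the artifact described above is to measure LFP-audio coherence within the talker's F0 range, since the component tracks voice fundamental frequency. A sketch assuming scipy; the band limits and window length are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.signal import coherence

def speech_artifact_peak(neural, audio, fs, f0_band=(80.0, 300.0)):
    """Locate the frequency of maximum magnitude-squared coherence
    between a neural channel and the produced audio inside a plausible
    F0 band; strong coherence there flags a likely speech-vibration
    artifact rather than neural activity."""
    f, cxy = coherence(neural, audio, fs=fs, nperseg=1024)
    band = (f >= f0_band[0]) & (f <= f0_band[1])
    fb, cb = f[band], cxy[band]
    j = int(np.argmax(cb))
    return fb[j], cb[j]
```

In practice one would compare the coherence peak across the speaking and listening epochs of the task; per the abstract, only the former should show it.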


Author(s):  
Caroline Spencer ◽  
Jennifer Vannest ◽  
Edwin Maas ◽  
Jonathan L. Preston ◽  
Erin Redle ◽  
...  

Purpose This study investigated phonological and speech motor neural networks in children with residual speech sound disorder (RSSD) during an overt Syllable Repetition Task (SRT). Method Sixteen children with RSSD with /ɹ/ errors (6F [female]; ages 8;0–12;6 [years;months]) and 16 children with typically developing speech (TD; 8F; ages 8;5–13;7) completed a functional magnetic resonance imaging experiment. Children performed the SRT (“SRT-Early Sounds”) with the phonemes /b, d, m, n, ɑ/ and an adapted version (“SRT-Late Sounds”) with the phonemes /ɹ, s, l, tʃ, ɑ/. We compared the functional activation and transcribed production accuracy of the RSSD and TD groups during both conditions. Expected errors were not scored as inaccurate. Results No between-group or within-group differences in repetition accuracy were found on the SRT-Early Sounds or SRT-Late Sounds tasks at any syllable sequence length. On a first-level analysis of the tasks, the TD group showed expected patterns of activation for both the SRT-Early Sounds and SRT-Late Sounds, including activation in the left primary motor cortex, left premotor cortex, bilateral anterior cingulate, bilateral primary auditory cortex, bilateral superior temporal gyrus, and bilateral insula. The RSSD group showed similar activation when correcting for multiple comparisons. In further exploratory analyses, we observed the following subthreshold patterns: (a) On the SRT-Early Sounds, greater activation was found in the left premotor cortex for the RSSD group, while greater activation was found in the left cerebellum for the TD group; (b) on the SRT-Late Sounds, a small area of greater activation was found in the right cerebellum for the RSSD group. No within-group functional differences were observed (SRT-Early Sounds vs. SRT-Late Sounds) for either group. Conclusions Performance was similar between groups, and likewise, we found that functional activation did not differ. 
Observed functional differences in previous studies may reflect differences in task performance, rather than fundamental differences in neural mechanisms for syllable repetition.


Author(s):  
Jonathan L. Preston ◽  
Nina R. Benway ◽  
Megan C. Leece ◽  
Nicole F. Caballero

Purpose To assess the concurrent validity of two tasks used to inform diagnosis of childhood apraxia of speech (CAS), this study evaluated the agreement between the Syllable Repetition Task (SRT) and the Maximum Repetition Rate of Trisyllables (MRR-Tri). Method A retrospective analysis was conducted with 80 children 7–16 years of age who were referred for treatment studies. All children had a speech sound disorder, and all completed both the SRT and the MRR-Tri. On each task, children were classified as meeting or not meeting the tool's threshold for CAS based on the sound sequencing errors demonstrated. Results The two tasks were in agreement for 47 participants (59% of the sample); both tasks classified 13 children as meeting the threshold for CAS and 34 children as not meeting the threshold for CAS. However, the two tasks disagreed on CAS classification for 33 children (41% of the sample). Overall, the MRR-Tri identified more children as having sound sequencing errors indicative of CAS (n = 39) than did the SRT (n = 20). Conclusions These two tasks of sound sequencing differ in the children they identify with CAS, possibly due to aspects of the underlying task requirements (e.g., time pressure). The SRT and the MRR-Tri should not be used in isolation to identify CAS but may be useful as part of a balanced CAS assessment battery that includes additional tasks that inform the nature of the impairment and that aid treatment planning. Supplemental Material https://doi.org/10.23641/asha.14110280
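The abstract reports raw agreement (47/80). As an added illustration, chance-corrected agreement (Cohen's kappa) can be computed from the same 2x2 counts; the 26/7 split of the 33 disagreements follows arithmetically from the reported totals (39 − 13 MRR-Tri-only and 20 − 13 SRT-only):

```python
def agreement_stats(both_pos, both_neg, only_a, only_b):
    """Observed agreement and Cohen's kappa for two binary classifiers
    from a 2x2 table of counts."""
    n = both_pos + both_neg + only_a + only_b
    po = (both_pos + both_neg) / n                  # observed agreement
    a_pos = (both_pos + only_a) / n                 # task A positive rate
    b_pos = (both_pos + only_b) / n                 # task B positive rate
    pe = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)  # chance agreement
    return po, (po - pe) / (1 - pe)

# Counts implied by the abstract: 13 positive on both tasks, 34 negative
# on both, 26 MRR-Tri-only, 7 SRT-only.
po, kappa = agreement_stats(13, 34, 26, 7)
```

These counts give an observed agreement of about 0.59 and a kappa of roughly 0.16, i.e., only slight agreement beyond chance, which is consistent with the authors' caution against using either task in isolation.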


2020 ◽  
Vol 5 (5) ◽  
pp. 1324-1338
Author(s):  
Panying Rong

Purpose This study aimed to provide a preliminary examination of the articulatory control of speech and speechlike tasks based on a gestural framework and identify shared and task-specific articulatory factors in speech and speechlike tasks. Method Ten healthy participants performed two speechlike tasks (i.e., alternating motion rate [AMR] and sequential motion rate [SMR]) and three speech tasks (i.e., reading of “clever Kim called the cat clinic” at the regular, fast, and slow rates) that varied in phonological complexity and rate. Articulatory kinematics were recorded using an electromagnetic kinematic tracking system (Wave, Northern Digital Inc.). Based on the gestural framework for articulatory phonology, the gestures of tongue body and lips were derived from the kinematic data. These gestures were subjected to a fine-grained analysis, which extracted (a) four gestural features (i.e., range of magnitude [ROM], frequency [Freq], acceleration time, and maximum speed [maxSpd]) for the tongue body gesture; (b) three intergestural measures including the peak intergestural coherence (InterCOH), frequency at which the peak intergestural coherence occurs (Freq_InterCOH), and the mean absolute relative phase between the tongue body and lip gestures; and (c) three intragestural (i.e., interarticulator) measures including the peak intragestural coherence (IntraCOH), Freq_IntraCOH, and mean absolute relative phase between the tongue body and the jaw, which are the component articulators that underlie the tongue body gesture. In addition, the performance rate for each task was also derived. The effects of task and sex on all the articulatory and behavioral measures were examined using mixed-design analysis of variance followed by post hoc pairwise comparisons across tasks. Results Task had a significant effect on performance rate, ROM, Freq, maxSpd, InterCOH, Freq_InterCOH, IntraCOH, and Freq_IntraCOH. 
Compared to the speech tasks, the AMR task showed a decrease in ROM and increases in Freq, InterCOH, Freq_InterCOH, IntraCOH, and Freq_IntraCOH. The SMR task showed similar ROM, Freq, maxSpd, InterCOH, and IntraCOH as the fast and regular speech tasks. Conclusions The simple phonological structure and demand for rapid syllable rate for the AMR task may elicit a distinct articulatory control mechanism. Despite being a rapid nonsense syllable repetition task, the relatively complex phonological structure of the SMR task appeared to elicit a similar articulatory control mechanism as that of speech production. Based on these shared and task-specific articulatory features between speech and speechlike tasks, the clinical implications for articulatory assessment were discussed.
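The intergestural measures named above (peak coherence, the frequency at which it occurs, and mean absolute relative phase) can be sketched for a pair of gesture time series. This is a generic signal-processing reading of those definitions, assuming scipy, not the study's exact analysis pipeline:

```python
import numpy as np
from scipy.signal import coherence, hilbert

def intergestural_measures(g1, g2, fs):
    """Peak magnitude-squared coherence between two gesture signals
    (e.g., tongue body and lips), the frequency at which it occurs, and
    the mean absolute relative phase from analytic-signal phases."""
    f, cxy = coherence(g1, g2, fs=fs, nperseg=256)
    i = int(np.argmax(cxy[1:])) + 1              # skip the DC bin
    phi1 = np.angle(hilbert(g1 - np.mean(g1)))
    phi2 = np.angle(hilbert(g2 - np.mean(g2)))
    dphi = np.angle(np.exp(1j * (phi1 - phi2)))  # wrap to [-pi, pi]
    return cxy[i], f[i], float(np.mean(np.abs(dphi)))
```

The same machinery applies to the intragestural (tongue body vs. jaw) measures: only the input pair changes.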


2020 ◽  
Vol 63 (10) ◽  
pp. 3453-3460
Author(s):  
Michal Novotny ◽  
Jan Melechovsky ◽  
Kriss Rozenstoks ◽  
Tereza Tykalova ◽  
Petr Kryze ◽  
...  

Purpose The purpose of this research note is to provide a performance comparison of available algorithms for the automated evaluation of oral diadochokinesis (DDK) using speech samples from patients with amyotrophic lateral sclerosis (ALS). Method Four different algorithms based on a wide range of signal processing approaches were tested on a sequential motion rate /pa/-/ta/-/ka/ syllable repetition paradigm collected from 18 patients with ALS and 18 age- and gender-matched healthy controls (HCs). Results The best temporal detection of syllable position for a 10-ms tolerance value was achieved for ALS patients using a traditional signal processing approach based on a combination of filtering in the spectrogram, Bayesian detection, and polynomial thresholding with an accuracy rate of 74.4%, and for HCs using a deep learning approach with an accuracy rate of 87.6%. Compared to HCs, a slow diadochokinetic rate (p < .001) and diadochokinetic irregularity (p < .01) were detected in ALS patients. Conclusions The approaches using deep learning or multiple-step combinations of advanced signal processing methods provided a more robust solution to the estimation of oral DDK variables than did simpler approaches based on the rough segmentation of the signal envelope. The automated acoustic assessment of oral diadochokinesis shows excellent potential for monitoring bulbar disease progression in individuals with ALS.
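As a point of reference for the comparison above, the "rough segmentation of the signal envelope" baseline can be sketched as follows. The frame length, smoothing, and thresholds are illustrative choices, and coefficient of variation is one common irregularity index, not necessarily the tested algorithms' definitions:

```python
import numpy as np

def ddk_rate_and_irregularity(x, sr, frame_ms=10, min_gap_ms=100):
    """Envelope-based DDK analysis: find syllable peaks in a smoothed
    RMS energy envelope, then report rate (syllables/s) and an
    irregularity index (coefficient of variation of the inter-syllable
    intervals)."""
    flen = int(sr * frame_ms / 1000)
    env = np.sqrt(np.mean(
        x[: len(x) // flen * flen].reshape(-1, flen) ** 2, axis=1))
    env = np.convolve(env, np.ones(5) / 5, mode="same")  # light smoothing
    thr = 0.3 * env.max()
    min_gap = max(1, int(min_gap_ms / frame_ms))
    peaks, last = [], -min_gap
    for i in range(1, len(env) - 1):
        if (env[i] > thr and env[i] >= env[i - 1]
                and env[i] > env[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    times = np.array(peaks) * flen / sr
    if len(times) < 2:
        return 0.0, 0.0
    isi = np.diff(times)
    return float(1.0 / isi.mean()), float(isi.std() / isi.mean())
```

The note's point is precisely that detectors of this kind are fragile on dysarthric speech, where syllable energy peaks blur together, which is why the multi-step and deep learning approaches performed better.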

