speech output: Recently Published Documents

Total documents: 149 (five years: 28)
H-index: 24 (five years: 2)

Author(s): Adithya Chandregowda, Heather M. Clark

Purpose The purpose of this clinical focus article is to illustrate how speech-language pathologist (SLP) characterization of anarthria can contribute to neurological diagnosis and to highlight the challenges associated with such an endeavor. Method This study used a retrospective chart review and clinicians' experience-based reflections. Results A 65-year-old man who presented with near-complete loss of speech in the context of a neurodegenerative disease was referred by neurologists to SLPs for further characterization of his speech difficulty. Assessment of his limited speech output revealed anarthria with mixed (spastic and hypokinetic) features and superimposed apraxia of speech. Conclusions SLP characterization of anarthria to facilitate neurological diagnosis is challenging but possible. Clinical lessons learned from this unusual scenario are discussed.


2021, Vol. 2021, pp. 1-15
Author(s): Asad Khan, Muhammad Awais Ashraf, Muhammad Awais Javeed, Muhammad Shahzad Sarfraz, Asad Ullah, ...

Vision is one of the most important and precious human senses, yet a fraction of the population is visually impaired. Visually impaired people face many challenges in daily life, such as performing routine activities like shopping and walking. They also need to travel to familiar and unfamiliar places for various necessities, and hence often require an attendant. An attendant is rarely easy or inexpensive to afford, especially given that almost 2.5% of the population of Pakistan is visually impaired. Some assistive solutions exist, for example, navigation devices with speech output; however, these tend to be inaccurate, costly, or heavy, and none has performed well in both indoor and outdoor settings. The problem becomes even more severe when the user is partially deaf as well. In this paper, we present a proof of concept of an embedded prototype that not only navigates but also detects hurdles along the way and issues alerts, using a speech alarm and/or vibration for partially deaf users. The designed embedded system includes a cane, a microcontroller, a Global System for Mobile Communication (GSM) module, a Global Positioning System (GPS) module, an Arduino, a speech output speaker, a Light-Dependent Resistor (LDR), and ultrasonic sensors for hurdle detection with voice and vibrational feedback. Using the developed system, visually impaired people can reach their destinations safely and independently.
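The hurdle-alert behavior described above can be sketched in a few lines. The following Python snippet is a hypothetical illustration, not the authors' firmware: the distance thresholds, the Alert structure, and the classify_hurdle function are assumptions introduced here, and the real prototype runs this kind of logic on an Arduino fed by ultrasonic sensors.

from dataclasses import dataclass

@dataclass
class Alert:
    speak: bool      # trigger the speech output module
    vibrate: bool    # trigger the vibration motor (for partially deaf users)
    message: str

def classify_hurdle(distance_cm: float, user_partially_deaf: bool) -> Alert:
    """Map an ultrasonic sensor reading to an alert for the cane user."""
    if distance_cm < 50:   # assumed "immediate hurdle" threshold
        return Alert(speak=True, vibrate=True, message="Obstacle ahead, stop")
    if distance_cm < 150:  # assumed "approaching hurdle" threshold
        return Alert(speak=not user_partially_deaf,
                     vibrate=user_partially_deaf,
                     message="Obstacle approaching")
    return Alert(speak=False, vibrate=False, message="")

for reading in (30.0, 120.0, 400.0):
    print(reading, classify_hurdle(reading, user_partially_deaf=True))

The point of the sketch is the dual output path: the same detection event can be routed to speech, vibration, or both, depending on the user's hearing.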


Author(s): Caroline A. Niziolek, Benjamin Parrell

Purpose Speakers use auditory feedback to guide their speech output, although individuals differ in the magnitude of their compensatory responses to perceived errors in feedback. Little is known about the factors that contribute to the compensatory response or how fixed or flexible it is within an individual. Here, we test whether manipulating the perceived reliability of auditory feedback modulates speakers' compensation for auditory perturbations, as predicted by optimal models of sensorimotor control. Method Forty participants produced monosyllabic words in two separate sessions, which differed in the auditory feedback given during an initial exposure phase. In the veridical session's exposure phase, feedback was normal. In the noisy session's exposure phase, small, random formant perturbations were applied, reducing the reliability of auditory feedback. In each session, a subsequent test phase introduced larger, unpredictable formant perturbations. We assessed whether the magnitude of within-trial compensation for these larger perturbations differed across the two sessions. Results Compensatory responses to downward (though not upward) formant perturbations were larger in the veridical session than in the noisy session. However, in post hoc testing, we found that the magnitude of this effect was highly dependent on the choice of analysis procedure. Compensation magnitude was not predicted by other production measures, such as formant variability, and was not reliably correlated across sessions. Conclusions Our results, though mixed, provide tentative support for the idea that the feedback control system monitors the reliability of sensory feedback. These results must be interpreted cautiously, given the potentially limited stability of auditory feedback compensation measures across analysis choices and across sessions. Supplemental Material https://doi.org/10.23641/asha.14167136
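As a rough illustration of how within-trial compensation to a formant perturbation is commonly quantified (the abstract itself stresses that the outcome depends on such analysis choices), here is a minimal Python sketch. The function, the late-trial averaging window, and the synthetic formant tracks are assumptions for illustration, not the authors' analysis pipeline.

import numpy as np

def compensation_magnitude(f1_produced, f1_baseline, shift_hz,
                           window=slice(80, 120)):
    """Fractional compensation: the produced deviation from baseline,
    signed so that opposing the shift is positive, divided by the
    shift size and averaged over a late-trial window of frames."""
    deviation = f1_produced[window] - f1_baseline[window]
    return float(np.mean(-deviation / shift_hz))

rng = np.random.default_rng(0)
baseline = 550 + rng.normal(0, 5, 200)   # flat baseline F1 track (Hz)
produced = baseline.copy()
produced[60:] += 30                      # speaker raises F1 by 30 Hz mid-trial
print(compensation_magnitude(produced, baseline, shift_hz=-100))  # 0.3

Under this convention, a downward F1 shift (shift_hz = -100) answered by an upward production change yields a positive fraction; changing the window or the normalization can change the measured magnitude, which is exactly the sensitivity the authors report.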


2021
Author(s): Yuka Hatayama, Satoshi Yamaguchi, Keiichi Kumai, Junko Takada, Kyoko Akanuma, ...

2021, Vol. 11 (1)
Author(s): James D. Stefaniak, Matthew A. Lambon Ralph, Blanca De Dios Perez, Timothy D. Griffiths, Manon Grube

Aphasia affects at least one third of stroke survivors, and there is increasing awareness that more fundamental deficits in auditory processing might contribute to impaired language performance in such individuals. We administered a comprehensive battery of psychoacoustic tasks assessing the perception of tone pairs and sequences across the domains of pitch, rhythm, and timbre to 17 individuals with post-stroke aphasia and 17 controls. At the level of individual differences, we demonstrated a strong correlation between metrical pattern (beat) perception and speech output fluency (Spearman's rho = 0.72). This dissociated from more basic auditory timing perception, which did not correlate with output fluency. The effect was also specific with respect to language and cognitive measures: phonological, semantic, and executive function did not correlate with beat detection. We interpret the data in terms of a requirement for analyzing the metrical structure of sound in order to construct fluent output, with both being a function of higher-order "temporal scaffolding". The beat perception task used here measures timing analysis without any need to account for motor output deficits and could serve as a clinical tool for examining this. This work suggests strategies for improving fluency after stroke through training in metrical pattern perception.
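The headline individual-differences result is a rank correlation across the 17 participants with aphasia. A minimal Python sketch of that analysis, using synthetic placeholder scores rather than the study's data, might look as follows.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
beat_score = rng.uniform(0.5, 1.0, 17)            # 17 participants, as in the study
fluency = 20 * beat_score + rng.normal(0, 2, 17)  # fabricated monotone relation

rho, p = spearmanr(beat_score, fluency)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")

Spearman's rho is used rather than Pearson's r because it assumes only a monotone relation between beat perception and fluency, which is robust with a sample of 17.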


Author(s): Susannah Boyle, David McNaughton, Janice Light, Salena Babb, Shelley E. Chapin

Purpose This study investigated the effects of a new software feature, dynamic text with speech output, on the acquisition of single-word reading skills by six children with developmental disabilities during shared e-book reading experiences with six typically developing peers. Method A single-subject, multiple-probe design across participants was used to evaluate the effects of the software intervention. Six children with developmental delays were the primary focus of the intervention, while six children with typical development participated as peer partners in the intervention activities. e-Books were created with the new software feature: when a child selects a picture in the e-book, the corresponding written word is presented dynamically and then spoken aloud. These e-books were then used in shared reading activities by dyads consisting of a child with a disability and a peer with typical development. Participants engaged in the shared reading activity for an average of 13 sessions over a 6-week period, amounting to an average of 65 min of intervention per dyad. Results Participants with disabilities acquired an average of 73% of the words to which they were exposed, a gain of 4.3 words above the baseline average of 1.7 correct responses. The average effect size (Tau-U) was .94, evidence of a very large effect. Conclusion The results provide evidence that the use of e-books with the dynamic text and speech output feature during inclusive shared reading activities can be an effective and socially valid method for developing the single-word reading skills of young children with developmental disabilities.
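Tau-U, the effect size reported above, builds on pairwise nonoverlap between baseline and intervention probes. The sketch below computes only that basic nonoverlap component, omitting the baseline-trend correction of full Tau-U, with invented probe scores for illustration.

def tau_nonoverlap(baseline, intervention):
    """(improving pairs - deteriorating pairs) / all pairs, in [-1, 1]."""
    pos = sum(b < t for b in baseline for t in intervention)
    neg = sum(b > t for b in baseline for t in intervention)
    return (pos - neg) / (len(baseline) * len(intervention))

baseline_words = [1, 2, 2, 1]         # words read correctly per probe (hypothetical)
intervention_words = [3, 4, 5, 6, 6]
print(tau_nonoverlap(baseline_words, intervention_words))  # 1.0: complete nonoverlap

A value near 1.0, like the .94 reported, means nearly every intervention probe exceeded every baseline probe.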


2020, Vol. 63 (11), pp. 3571-3585
Author(s): Xiaotong Xi, Peng Li, Florence Baills, Pilar Prieto

Purpose Research has shown that observing hand gestures mimicking pitch movements or rhythmic patterns can improve the learning of second language (L2) suprasegmental features. However, less is known about the effects of hand gestures on the learning of novel phonemic contrasts. This study examines (a) whether hand gestures mimicking phonetic features can boost L2 segment learning by naive learners and (b) whether a mismatch between the hand gesture form and the target phonetic feature influences the learning effect. Method Fifty Catalan native speakers undertook a short multimodal training session on two types of Mandarin Chinese consonants (plosives and affricates) in one of two conditions: Gesture and No Gesture. In the Gesture condition, a fist-to-open-hand gesture was used to mimic the air burst, while the No Gesture condition made no use of gestures. Crucially, while the hand gesture appropriately mimicked the air burst produced in plosives, this was not the case for affricates. Before and after training, participants were tested on two tasks: an identification task and an imitation task. Participants' speech output was rated by five Chinese native speakers. Results The perception results showed that training with or without gestures yielded similar degrees of improvement in the identification of aspiration contrasts. By contrast, the production results showed that, while training without gestures did not improve L2 pronunciation, training with gestures did, but only when the given gestures appropriately mimicked the phonetic properties they represented. Conclusions The results reveal that the efficacy of observing hand gestures for the learning of nonnative phonemes depends on the appropriateness of the gesture form relative to the target phonetic features. That is, hand gestures seem to be more useful when they appropriately mimic phonetic features. Supplemental Material https://doi.org/10.23641/asha.13105442


2020, pp. 026461962096768
Author(s): Markus Lang, Ursula Hofer, Fabian Winter

This study investigates the literacy skills of Braille readers in the areas of reading fluency, reading and listening comprehension, and spelling. A total of 119 German-speaking Braille readers aged between 11.0 and 22.11 years were tested for this purpose. Data collection was carried out using a questionnaire, psychometric tests, and self-constructed assessments. Wherever possible, the results were compared with the norms for sighted peers. Regarding reading fluency, Braille readers read significantly more slowly than print readers. In terms of spelling, the Braille users performed within the average range of sighted peers. Furthermore, a positive correlation was obtained between Braille reading fluency and spelling, whereas the use of auditory aids (e.g., speech output) correlated negatively with both Braille reading fluency and spelling. In addition, a comparison between listening and reading within the study sample revealed that reading Braille yielded better comprehension, although listening was significantly faster. In conclusion, the findings provide evidence that Braille reading skills are important for the development of literacy skills in general. Nevertheless, listening skills are also important and need to be systematically promoted.


2020, Vol. 63 (10), pp. 3392-3407
Author(s): Ayoub Daliri, Sara-Ching Chao, Lacee C. Fitzgerald

Purpose We continuously monitor our speech output to detect potential errors in our productions. When we encounter errors, we rapidly change our speech output to compensate for them. However, it remains unclear whether we adjust the magnitude of our compensatory responses based on the characteristics of the errors. Method Participants (N = 30 adults) produced monosyllabic words containing /ɛ/ (/hɛp/, /hɛd/, /hɛk/) while receiving perturbed or unperturbed auditory feedback. In the perturbed trials, we applied two different types of formant perturbation: (a) the F1 shift, in which the first formant of /ɛ/ was increased, and (b) the F1–F2 shift, in which the first formant was increased and the second formant was decreased to make a participant's /ɛ/ sound like his or her /æ/. In each perturbation condition, we applied three participant-specific perturbation magnitudes (0.5, 1.0, and 1.5 times the ɛ–æ distance). Results Compensatory responses to perturbations with a magnitude of 1.5 ɛ–æ were proportionally smaller than responses to perturbations with a magnitude of 0.5 ɛ–æ. Responses to the F1–F2 shift were larger than responses to the F1 shift regardless of the perturbation magnitude. Additionally, compensatory responses for /hɛd/ were smaller than responses for /hɛp/ and /hɛk/. Conclusions Overall, these results suggest that the brain uses its evaluation of errors to determine the extent of compensatory responses. The brain may also consider categorical errors and phonemic environments (e.g., articulatory configurations of the following phoneme) when determining the magnitude of its compensatory responses to auditory errors.
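The participant-specific perturbations can be pictured as vectors in (F1, F2) space scaled by each speaker's own ɛ–æ distance. The following Python sketch shows one plausible construction; the formant values and the function are illustrative assumptions, not the authors' implementation.

import numpy as np

def make_perturbations(eh, ae, scales=(0.5, 1.0, 1.5)):
    """eh, ae: the speaker's mean (F1, F2) for /ɛ/ and /æ/, in Hz."""
    d = ae - eh   # vector from /ɛ/ toward /æ/: F1 up, F2 down
    shifts = {}
    for s in scales:
        shifts[(s, "F1")] = np.array([s * d[0], 0.0])          # F1 shift only
        shifts[(s, "F1-F2")] = np.array([s * d[0], s * d[1]])  # F1 up and F2 down
    return shifts

speaker_eh = np.array([580.0, 1800.0])  # hypothetical /ɛ/ (F1, F2)
speaker_ae = np.array([700.0, 1650.0])  # hypothetical /æ/ (F1, F2)
for condition, shift in make_perturbations(speaker_eh, speaker_ae).items():
    print(condition, shift)

At scale 1.0, the F1–F2 shift moves the feedback all the way from the speaker's /ɛ/ to his or her /æ/, which is why that condition makes /ɛ/ sound like /æ/.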

