fluent speech
Recently Published Documents

TOTAL DOCUMENTS: 211 (five years: 14)
H-INDEX: 35 (five years: 1)
Sensors, 2022, Vol. 22 (1), pp. 321
Author(s): Izabela Świetlicka, Wiesława Kuniszyk-Jóźkowiak, Michał Świetlicki

This paper applies principal component analysis (PCA) to reduce the dimensionality of the variables describing a speech signal, and examines how the results can be used to recognize disturbed versus fluent speech. A set of fluent speech signals and three types of speech disturbance (blocks before words starting with plosives, syllable repetitions, and sound-initial prolongations) was transformed using PCA. The result was a model of four principal components describing the analysed utterances. Distances between the standardised original variables and the elements of the observation matrix in the new coordinate system were calculated and then used in the recognition process. A multilayer perceptron network served as the classifier. The results were compared with those of previous experiments in which speech samples were parameterised using a Kohonen network. The classifying network achieved an overall accuracy of 76% (from 50% to 91%, depending on the dysfluency type).
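The pipeline described above (standardise, reduce to four principal components, classify with a multilayer perceptron) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature count, labels, and network size are assumptions, and the data here are synthetic.

```python
# Sketch of the described pipeline: PCA for dimensionality reduction,
# then a multilayer perceptron classifier. Shapes and hyperparameters
# are illustrative assumptions, not the paper's actual configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical data: 200 utterances x 12 acoustic variables,
# labelled 0 = fluent, 1-3 = the three dysfluency types.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)

clf = make_pipeline(
    StandardScaler(),          # PCA is applied to standardised variables
    PCA(n_components=4),       # four principal components, as in the paper
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))         # training accuracy on the synthetic data
```

On real data, one would of course report accuracy on held-out utterances rather than the training set.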


2021, Vol. 11 (5), pp. 521-527
Author(s): Sopuruchi Christian Aboh

This paper presents a psycholinguistic analysis of a neologistic jargon aphasic, Akala Gboo (a pseudonym), aged 52. Neologistic jargon aphasia is a language disorder that manifests as fluent speech, the production of series of meaningless sounds, and the formation of new words. The condition has not been explored extensively by researchers. Adopting a descriptive research design and using oral interviews as the instrument of data collection, the study finds that the jargon aphasic exhibits phonemic and morphemic paraphasias, as well as the production of new words that are meaningful to him but sound like gibberish to hearers, such as kwotekumakumakakununism and inianimous kalikwokaminolamkamkwuu. The paper finds that the symptoms are triggered by excitement and by excessive intake of alcohol and cigarettes. It recommends that government agencies and Non-Governmental Organisations (NGOs) set up an aphasia centre where the needs of aphasics can be catered for, and which would also make them more accessible to aphasia researchers.


2021, Vol. 12
Author(s): Erica M. Ellis, Arielle Borovsky, Jeffrey L. Elman, Julia L. Evans

Purpose: This study investigated whether the ability to use statistical regularities from fluent speech and map potential words to meaning at 18 months predicts vocabulary at 18 and again at 24 months.
Method: Eighteen-month-olds (N = 47) were exposed to an artificial language with statistical regularities within the speech stream, then participated in an object-label learning task. Learning was measured using a modified looking-while-listening eye-tracking design. Parents completed vocabulary questionnaires when their child was 18 and 24 months old.
Results: The ability to learn the object-label pairings for words after exposure to the artificial language predicted productive vocabulary at 24 months and the amount of vocabulary change from 18 to 24 months, independent of non-verbal cognitive ability, socio-economic status (SES), and object-label association performance.
Conclusion: Eighteen-month-olds’ ability to use statistical information derived from fluent speech to identify words within the stream of speech, and then to map those “words” to meaning, directly predicts vocabulary size at 24 months and vocabulary change from 18 to 24 months. The findings support the hypothesis that statistical word segmentation is an important aspect of word learning and vocabulary acquisition in toddlers.
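The statistical regularity that such artificial languages exploit can be illustrated with transitional probabilities between adjacent syllables, which are high within words and lower across word boundaries. The syllables and "words" below are made up for illustration; they are not the study's stimuli.

```python
# Transitional probability P(b follows a) = count(ab) / count(a),
# computed over a continuous stream of hypothetical trisyllabic words.
import random
from collections import Counter

random.seed(0)
words = ["bidaku", "padoti", "golabu"]              # hypothetical words
stream = "".join(random.choice(words) for _ in range(300))
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
syl_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    """P(b follows a) = count(ab) / count(a)."""
    return pair_counts[(a, b)] / syl_counts[a]

print(transitional_probability("bi", "da"))  # within-word: high
print(transitional_probability("ku", "pa"))  # across a word boundary: lower
```

A learner tracking these probabilities can posit word boundaries wherever the transitional probability dips, which is the segmentation ability the study measures.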


Author(s): Wim Pouw, Lisette Jonge‐Hoekstra, Steven J. Harrison, Alexandra Paxton, James A. Dixon

2020
Author(s): Charlie E E Wiltshire, Mark Chiew, Jennifer Chesters, Mairead Healy, Kate E Watkins

Purpose: People who stutter (PWS) have more unstable speech motor systems than people who are typically fluent (PWTF). Here, we used real-time MRI of the vocal tract to assess the variability and duration of movements of different articulators in PWS and PWTF during fluent speech production.
Method: The vocal tracts of 28 adults with moderate to severe stuttering and 20 PWTF were scanned using MRI while they repeated simple and complex pseudowords. Mid-sagittal images of the vocal tract from lips to larynx were reconstructed at 33.3 frames per second. For each participant, we measured the variability and duration of movements across multiple repetitions of the pseudowords in three selected articulators: the lips, tongue body, and velum.
Results: PWS showed significantly greater speech movement variability than PWTF during fluent repetitions of pseudowords. The group difference was most evident for measurements of lip aperture, as reported previously, but here we report that movements of the tongue body and velum were also affected during the same utterances. Variability was highest in both PWS and PWTF for repetitions of the monosyllabic pseudowords and was not affected by phonological complexity. Speech movement variability was unrelated to stuttering severity within the PWS group. PWS also showed longer speech movement durations than PWTF for fluent repetitions of multisyllabic pseudowords, and this group difference was even more evident when repeating the phonologically complex pseudowords.
Conclusions: Using real-time MRI of the vocal tract, we found that PWS produced more variable movements than PWTF even during fluent productions of simple pseudowords. This indicates general, trait-level differences in the control of the articulators between PWS and PWTF.
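The abstract does not name its exact variability metric, but a common way to quantify movement variability across repetitions is a spatiotemporal-index-style measure: time-normalise each repetition's trajectory, standardise it, and sum the across-repetition standard deviations over time. The sketch below uses synthetic trajectories and should be read as an illustration of the general technique, not the paper's pipeline.

```python
# Hedged sketch of a spatiotemporal-index-style variability measure over
# repeated articulator trajectories (e.g., lip aperture over time).
import numpy as np

rng = np.random.default_rng(1)

def variability_index(trajectories, n_points=50):
    """Time-normalise each repetition to n_points samples, z-score it,
    then sum the across-repetition standard deviations over time."""
    normalised = []
    for traj in trajectories:
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, traj)   # linear time-normalisation
        z = (resampled - resampled.mean()) / resampled.std()
        normalised.append(z)
    return np.std(np.stack(normalised), axis=0).sum()

# Ten synthetic repetitions of one movement cycle, with varying durations
# and a little amplitude noise, standing in for real articulator traces.
reps = []
for _ in range(10):
    n = int(rng.integers(40, 80))
    t = np.linspace(0.0, 2.0 * np.pi, n)
    reps.append(np.sin(t) + rng.normal(scale=0.05, size=n))

print(variability_index(reps))
```

Identical repetitions yield an index near zero; more variable repetitions yield a larger index, which is the direction of the group difference reported above.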


Author(s): Alexandra Korzeczek, Annika Primassin, Alexander Wolff von Gudenberg, Peter Dechent, Walter Paulus, ...

Developmental stuttering is a fluency disorder with anomalies in the neural speech motor system. Fluent speech requires multifunctional network formations, and it is currently unclear which functional domain is targeted by speech fluency interventions. Here, we tested the impact of fluency-shaping on resting-state fMRI connectivity of the speech planning, articulatory convergence, sensorimotor integration, and inhibitory control networks. Furthermore, we examined white matter metrics of major speech tracts. Improved fluency was accompanied by increased synchronization within the sensorimotor integration network. Specifically, two connections were strengthened: the left laryngeal motor cortex and the right superior temporal gyrus both showed increased connectivity with the left inferior frontal gyrus. Thus, the integration of the command-to-execution and auditory-motor pathways was strengthened. Since we investigated task-free brain activity, we assume that our findings are not biased toward network activity involved in compensation. No alterations were found in white matter microstructure, but brain-behavior relationships changed: we found a stronger negative correlation between stuttering severity and fractional anisotropy in the superior longitudinal fasciculus, and a stronger positive correlation between the psycho-social impact of stuttering and fractional anisotropy in the right frontal aslant tract. Taken together, structural and functional connectivity of the sensorimotor integration and inhibitory control networks shape speech motor learning.


2020
Author(s): Wim Pouw, Lisette de Jonge-Hoekstra, Steven A. Harrison, Alexandra Paxton, James A. Dixon

Communicative hand gestures are often coordinated with prosodic aspects of speech, and salient moments of gestural movement (e.g., quick changes in speed) often co-occur with salient moments in speech (e.g., near peaks in fundamental frequency and intensity). A common understanding is that such gesture-speech coordination is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical coupling of arm movements to speech movements has been identified as a potentially important factor in understanding the emergence of gesture-speech coordination. Specifically, in the case of steady-state vocalization and mono-syllabic utterances, forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (N = 37), we extend this line of work to show that gesture-speech physics affects fluent speech, too. Compared with a no-movement condition, participants producing fluent self-formulated speech while rhythmically moving their limbs showed heightened F0 and amplitude envelope, and these effects were more pronounced for higher-impulse arm movements than for lower-impulse wrist movements. We replicate the finding that acoustic peaks arise especially at moments of peak impulse (i.e., the beat) of the movement, namely around its deceleration phases. Finally, higher deceleration rates of higher-mass arm movements were related to higher acoustic peaks. These results confirm a role for the physical impulses of gesture in affecting the speech system. We discuss the implications of gesture-speech physics for understanding the emergence of communicative gesture, both ontogenetically and phylogenetically.

