speech token
Recently Published Documents


TOTAL DOCUMENTS: 10 (five years: 1)
H-INDEX: 3 (five years: 0)

2021 · Vol 11 (1)
Author(s): Darren Mao, Julia Wunderlich, Borislav Savkovic, Emily Jeffreys, Namita Nicholls, ...

Abstract: Speech detection and discrimination ability are important measures of hearing ability that may inform crucial audiological intervention decisions for individuals with a hearing impairment. However, behavioral assessment of speech discrimination can be difficult and inaccurate in infants, prompting the need for an objective measure of speech detection and discrimination ability. In this study, the authors used functional near-infrared spectroscopy (fNIRS) as the objective measure. Twenty-three infants, 2 to 10 months of age, participated, all of whom had passed newborn hearing screening or diagnostic audiology testing. They were presented with speech tokens at a comfortable listening level, while in a natural sleep state, using a habituation/dishabituation paradigm. The authors hypothesized that fNIRS responses to speech token detection as well as speech token contrast discrimination could be measured in individual infants. The authors found significant fNIRS responses to speech detection in 87% of tested infants (false positive rate 0%), as well as to speech discrimination in 35% of tested infants (false positive rate 9%). The results show initial promise for the use of fNIRS as an objective clinical tool for measuring infant speech detection and discrimination ability; the authors highlight the further optimizations of test procedures and analysis techniques that would be required to improve accuracy and reliability to levels needed for clinical decision-making.


2018 · Vol 16 (2) · pp. 494-518
Author(s): Gitte Kristiansen, Eline Zenner, Dirk Geeraerts

Abstract: While empirical research on attitudes towards languages and linguistic varieties has become increasingly popular from the 1960s onwards (e.g. Lambert, Hodgson, Gardner, & Fillenbaum, 1960), experimental investigations into the ability to correctly identify the origin of speakers are, in comparison, still relatively scarce. We know that the ability to correlate a stretch of uncategorised speech (token) with a series of models (types) is experientially acquired in early childhood (e.g. Kristiansen, 2010), but how similar are those abilities in adulthood and across European nations? English as a Lingua Franca (ELF) has become an integral part of the linguistic reality in Europe (and of the linguistic scenario in the entire world) (e.g. Jenkins, Baker, & Dewey, 2018). Whenever we communicate with anyone who is not a speaker of our own native language in any European country, most of the time we communicate in English. But does our L1 accent still shine through? Will we be recognised (and in most cases probably also stereotypically judged) on the basis of just a short stretch of speech when we communicate in ELF? In Part I of this paper we outline the design of the first large-scale pan-European project on L1 and L2 identifications of ELF in Europe, including 785 respondents from 8 countries. Exploratory analyses confirmed the hypothesis that statistically significant asymmetries show up across different European countries and regions. In Part II of this paper we then aim to explain these asymmetries through a multifactorial statistical analysis (Geeraerts, Grondelaers, & Speelman, 1999; Tagliamonte & Baayen, 2012; Speelman, Heylen, & Geeraerts, 2018).


2015 · Vol 2015 · pp. 1-12
Author(s): Ke Heng Chen, Susan A. Small

The acoustic change complex (ACC) is an auditory-evoked potential elicited by changes within an ongoing stimulus that indicates discrimination at the level of the auditory cortex. Only a few studies to date have attempted to record ACCs in young infants. The purpose of the present study was to investigate the elicitation of ACCs to long-duration speech stimuli in English-learning 4-month-old infants. ACCs were elicited to consonant contrasts made up of two concatenated speech tokens. The stimuli included native dental-dental /dada/ and dental-labial /daba/ contrasts and a nonnative Hindi dental-retroflex /daDa/ contrast. Each consonant-vowel speech token was 410 ms in duration. Slow cortical responses were recorded to the onset of the stimulus and to the acoustic change from /da/ to either /ba/ or /Da/ within the stimulus, with significantly prolonged latencies compared with adults. ACCs were reliably elicited for all stimulus conditions, with more robust morphology compared with our previous findings using stimuli that were shorter in duration. The P1 amplitudes elicited to the acoustic change in /daba/ and /daDa/ were significantly larger than those to /dada/, supporting the interpretation that the brain discriminated between the speech tokens. These findings provide further evidence for the use of ACCs as an index of discrimination ability.


2012 · Vol 25 (0) · pp. 29
Author(s): Argiro Vatakis, Charles Spence

Research has revealed different temporal integration windows between and within different speech-tokens. The limited set of speech-tokens tested to date has not allowed a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech-stream had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. The results demonstrated that for the case of place of articulation/roundedness, participants were more sensitive to the temporal order of highly-salient speech-signals with smaller visual-leads at the PSS. This was not the case when the manner of articulation/height was evaluated. These findings suggest that the visual-speech signal provides substantial cues to the auditory-signal that modulate the relative processing times required for the perception of the speech-stream. A subsequent experiment explored how the presentation of different sources of visual-information modulated such findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing parts, or the whole, of the face. Preliminary analysis revealed no differences in TOJ accuracy under different viewing conditions. However, the PSS data revealed significant differences in viewing conditions depending on the speech token uttered (e.g., larger visual-leads for PL-lip/teeth/tongue-only views).
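The PSS estimation described in this abstract is conventionally obtained by fitting a cumulative Gaussian to the proportion of one response type (e.g., "visual stream first") as a function of stimulus onset asynchrony (SOA); the PSS is the 50% point of the fitted curve. A minimal illustrative sketch of this standard procedure, not the authors' actual analysis code (function names and the grid-search ranges are assumptions):

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative Gaussian: proportion of 'visual-first' responses at SOA x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pss(soas_ms, p_visual_first):
    """Crude grid search for the cumulative-Gaussian mean (the PSS, in ms)
    and its spread (related to TOJ sensitivity / the JND)."""
    best_err, best_mu, best_sigma = float("inf"), 0, 1
    for mu in range(-200, 201, 5):          # candidate PSS values (ms)
        for sigma in range(10, 301, 10):    # candidate spreads (ms)
            err = sum((gauss_cdf(s, mu, sigma) - p) ** 2
                      for s, p in zip(soas_ms, p_visual_first))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    return best_mu, best_sigma

# Synthetic responses from a hypothetical observer with a 40 ms visual-lead PSS:
soas = [-200, -150, -100, -50, 0, 50, 100, 150, 200]
props = [gauss_cdf(s, -40, 80) for s in soas]
pss, spread = fit_pss(soas, props)  # recovers mu = -40, sigma = 80
```

A negative PSS here means the visual signal must lead the auditory signal for the two to be judged simultaneous, which is how "visual-leads at the PSS" in the abstract is read.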


2009 · Vol 20 (02) · pp. 119-127
Author(s): Marc Brennan, Pamela Souza

Background: Hearing aid expansion is intended to reduce the gain for low-level noise. However, expansion can also degrade low-intensity speech. Although it has been suggested that the poorer performance with expansion is due to reduced audibility, this has not been measured directly. Furthermore, previous studies used relatively high expansion kneepoints. Purpose: This study compared the effect of a 30 dB SPL and 50 dB SPL expansion kneepoint on consonant audibility and recognition. Research Design: Eight consonant-vowel syllables were presented at 50, 60, and 71 dB SPL. Recordings near the tympanic membrane were made of each speech token and used to calculate the Aided Audibility Index (AAI). Study Sample: Thirteen subjects with mild to moderate sensorineural hearing loss. Results: Expansion with a high kneepoint resulted in reduced consonant recognition. The AAI correlated significantly with consonant recognition across all conditions and subjects. Conclusion: If consonant recognition is the priority, audibility calculations could be used to determine an optimal expansion kneepoint for a given individual.
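The expansion scheme compared in this study can be illustrated with a toy gain rule, following the usual definition of expansion (this is a sketch; the parameter names and the 2:1 ratio are illustrative assumptions, not values from the study): above the kneepoint the hearing aid applies its normal linear gain, and below it the gain falls off as input level drops, which suppresses low-level noise but can also suppress low-intensity consonants.

```python
def expansion_gain_db(input_db, kneepoint_db, expansion_ratio=2.0, linear_gain_db=20.0):
    """Toy expansion rule: gain is linear at or above the kneepoint; below it,
    gain falls by (ratio - 1) dB for every dB the input drops below the kneepoint."""
    if input_db >= kneepoint_db:
        return linear_gain_db
    return linear_gain_db - (kneepoint_db - input_db) * (expansion_ratio - 1.0)

# A 45 dB SPL consonant under the two kneepoints tested in the study:
# with a 50 dB SPL kneepoint the consonant sits below the knee and loses gain,
# whereas with a 30 dB SPL kneepoint it keeps full linear gain.
low_knee = expansion_gain_db(45.0, 30.0)   # 20.0 dB (unaffected)
high_knee = expansion_gain_db(45.0, 50.0)  # 15.0 dB (5 dB of gain removed)
```

This illustrates why the higher kneepoint reduced consonant audibility and, per the study's results, consonant recognition.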


1989 · Vol 69 (2) · pp. 435-441
Author(s): Linda I. Shuster, Robert Allen Fox

This study investigated the relationship between speech perception and speech production. An experimental technique called motor-motor adaptation was devised. Subjects produced a speech token repeatedly (20 to 40 repetitions), then produced a second token one time. These tokens all contained stop consonants and were subsequently analyzed for voice onset time. The results paralleled previous findings obtained with the perceptuomotor adaptation procedure. The present study supports the notion of a perception-production link.


1987 · Vol 52 (3) · pp. 243-250
Author(s): James A. Till, Kathie E. England, Cindy B. Law-Till

Stomal noise intensity during esophageal speech was measured in 7 laryngectomized subjects during amplified monaural auditory feedback and during control conditions without feedback. A significant (5–10 dB) reduction in stomal noise was observed when auditory feedback was applied. The conditions without feedback were designed to provide additional information regarding the effects of the initial phonetic element in the esophageal speech token on stomal noise. During the control conditions, esophageal speech tokens beginning with voiceless consonants resulted in significantly more stomal noise than was present for the other speech tokens. Clinical implications of the findings are discussed.

