Phonetically Trained and Untrained Adults' Transcription of Place of Articulation for Intervocalic Lingual Stops With Intermediate Acoustic Cues

2013, Vol. 56(3), pp. 779-791
Author(s): Catherine Mayo, Fiona Gibbon, Robert A. J. Clark

Purpose In this study, the authors aimed to investigate how listener training and the presence of intermediate acoustic cues influence transcription variability for conflicting cue speech stimuli. Method Twenty listeners with training in transcribing disordered speech, and 26 untrained listeners, were asked to make forced-choice labeling decisions for synthetic vowel–consonant–vowel (VCV) sequences “a doe” (/ədo/) and “a go” (/əgo/). Both the VC and CV transitions in these stimuli ranged through intermediate positions, from appropriate for /d/ to appropriate for /g/. Results Both trained and untrained listeners gave more weight to the CV transitions than to the VC transitions. However, listener behavior was not uniform: The results showed a high level of inter- and intratranscriber inconsistency, with untrained listeners showing a nonsignificant tendency to be more influenced than trained listeners by CV transitions. Conclusions Listeners do not assign consistent categorical labels to the type of intermediate, conflicting transitional cues that were present in the stimuli used in the current study and that are also present in disordered articulations. Although listener inconsistency in assigning labels to intermediate productions is not increased as a result of phonetic training, neither is it reduced by such training.
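
To make the cue-weighting logic concrete, here is a minimal sketch, not the authors' procedure: it simulates forced-choice /d/–/g/ labelling over a grid of VC × CV transition steps for a hypothetical listener and recovers the relative cue weights with logistic regression. All step values, weights, and trial counts are invented for illustration.

```python
# Minimal sketch (hypothetical listener, invented parameters): cue weighting
# for conflicting VC and CV transition cues, recovered by logistic regression.
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 7-step continua for the VC and CV transitions, 0 = /d/-like, 1 = /g/-like
steps = np.linspace(0.0, 1.0, 7)
grid = np.array(list(product(steps, steps)))   # columns: VC step, CV step
stimuli = np.repeat(grid, 10, axis=0)          # 10 presentations per stimulus

# Hypothetical listener: CV transition weighted more heavily than VC transition
true_w_vc, true_w_cv, bias = 1.5, 4.0, -2.75
p_g = 1.0 / (1.0 + np.exp(-(bias + true_w_vc * stimuli[:, 0]
                                 + true_w_cv * stimuli[:, 1])))
responses = rng.binomial(1, p_g)               # 1 = labelled /g/

# Estimate the cue weights (default mild regularization is fine for illustration)
model = LogisticRegression().fit(stimuli, responses)
w_vc, w_cv = model.coef_[0]
print(f"estimated weights  VC: {w_vc:.2f}   CV: {w_cv:.2f}")
```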

Author(s): Luodi Yu, Jiajing Zeng, Suiping Wang, Yang Zhang

Purpose This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge. Method Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English, matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features. Results We observed a language-specific effect on the ERP: native stimuli elicited a larger late negative response (LNR) amplitude than nonnative stimuli in the prosodic phonology conditions. However, no such effect was observed in the phoneme-free prosodic acoustic control conditions. Conclusions The results support the integration view that word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology beyond auditory processing of the critical acoustic cues at the suprasyllabic level.
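
The following is a minimal sketch, under assumed data shapes rather than the authors' pipeline, of how a late-negative-response comparison like this can be quantified: mean amplitude in a late time window, compared across listeners with a paired test. The sampling rate, window bounds, and placeholder arrays are all assumptions.

```python
# Minimal sketch (assumed data layout, placeholder data): comparing LNR mean
# amplitude for native vs. nonnative prosody across 18 listeners.
import numpy as np
from scipy import stats

fs = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.2, 1.0, 1.0 / fs)     # epoch from -200 ms to 1000 ms

rng = np.random.default_rng(1)
erp_native = rng.normal(-2.0, 1.0, (18, times.size))     # placeholder ERPs
erp_nonnative = rng.normal(-1.0, 1.0, (18, times.size))  # placeholder ERPs

# Mean amplitude in an assumed late window (400-800 ms post-onset)
win = (times >= 0.4) & (times <= 0.8)
lnr_native = erp_native[:, win].mean(axis=1)
lnr_nonnative = erp_nonnative[:, win].mean(axis=1)

# Paired comparison across listeners
t, p = stats.ttest_rel(lnr_native, lnr_nonnative)
print(f"LNR native vs. nonnative: t(17) = {t:.2f}, p = {p:.3f}")
```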


Phonology, 2018, Vol. 35(1), pp. 79-114
Author(s): Alessandro Vietti, Birgit Alber, Barbara Vogt

In the Southern Bavarian variety of Tyrolean, laryngeal contrasts undergo a typologically interesting process of neutralisation in word-initial position. We undertake an acoustic analysis of Tyrolean stops in word-initial, word-medial intersonorant and word-final contexts, as well as in obstruent clusters, investigating the role of the acoustic parameters VOT, prevoicing, closure duration, and F0 and H1–H2* on the following vowels in implementing the contrast, if any. Results show that stops contrast word-medially via [voice] (supported by the acoustic cues of closure duration and F0), and are neutralised completely in word-final position and in obstruent clusters. Word-initially, neutralisation is subject to inter- and intraspeaker variability, and is sensitive to place of articulation. Aspiration plays no role in implementing laryngeal contrasts in Tyrolean.
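
As an illustration of the kinds of durational measures listed above, here is a minimal sketch assuming hand-labelled landmark times for each stop token; the field names, landmark definitions, and example values are hypothetical and not taken from the study.

```python
# Minimal sketch: VOT, closure duration, and prevoicing from hand-labelled
# landmark times (seconds). All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class StopToken:
    closure_onset: float        # start of the stop closure
    burst: float                # release burst
    voicing_onset: float        # onset of periodicity in the following vowel
    voicing_offset_prev: float  # end of voicing carried over from the preceding sonorant

    @property
    def vot(self) -> float:
        """Voice onset time: voicing onset relative to the release burst."""
        return self.voicing_onset - self.burst

    @property
    def closure_duration(self) -> float:
        return self.burst - self.closure_onset

    @property
    def prevoicing(self) -> float:
        """Voicing persisting into the closure (0 if voicing ends at closure onset)."""
        return max(0.0, self.voicing_offset_prev - self.closure_onset)

# Hypothetical word-medial token
tok = StopToken(closure_onset=0.210, burst=0.285, voicing_onset=0.297,
                voicing_offset_prev=0.240)
print(f"VOT {tok.vot*1000:.0f} ms, closure {tok.closure_duration*1000:.0f} ms, "
      f"prevoicing {tok.prevoicing*1000:.0f} ms")
```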


2004, Vol. 16(1), pp. 31-39
Author(s): Jonas Obleser, Aditi Lahiri, Carsten Eulitz

This study further elucidates determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features, which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain the surprising inverse correlation between immense variance in the acoustic signal and high accuracy of speech recognition. We investigated timing and mapping of the N100m elicited by 42 tokens of seven natural German vowels varying along the phonological features tongue height (corresponding to the frequency of the first formant) and place of articulation (corresponding to the frequency of the second and third formants). Auditory evoked fields were recorded using a 148-channel whole-head magnetometer while subjects performed target vowel detection tasks. Source location differences appeared to be driven by place of articulation: Vowels with mutually exclusive place of articulation features, namely coronal and dorsal, elicited separate centers of activation along the posterior-anterior axis. Additionally, the time course of activation as reflected in the N100m peak latency distinguished between vowel categories, especially when the spatial distinctiveness of cortical activation was low. In sum, the results suggest that N100m latency and source location, as well as their interaction, reflect properties of speech stimuli that correspond to abstract phonological features.
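
To show how an N100m peak latency of the kind analysed here can be read off an evoked response, the sketch below finds the peak of a placeholder global-field-power trace in an assumed 80-150 ms window; the sampling rate, window, and synthetic trace are illustrative only, not the authors' MEG pipeline.

```python
# Minimal sketch: N100m peak latency from a placeholder evoked-field trace.
import numpy as np

fs = 600                                    # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1.0 / fs)      # epoch from -100 ms to 400 ms

rng = np.random.default_rng(2)
# Placeholder global field power across channels: noise plus a peak near 105 ms
gfp = np.abs(rng.normal(0, 1, times.size)) + \
      40 * np.exp(-((times - 0.105) ** 2) / (2 * 0.015 ** 2))

# Search for the peak in an assumed 80-150 ms window
win = (times >= 0.08) & (times <= 0.15)
peak_latency = times[win][np.argmax(gfp[win])]
print(f"N100m peak latency: {peak_latency*1000:.1f} ms")
```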


2011, Vol. 15(2), pp. 255-274
Author(s): Erin M. Ingvalson, Lori L. Holt, James L. McClelland

Many attempts have been made to teach native Japanese listeners to perceptually differentiate English /r–l/ (e.g. rock–lock). Though improvement is evident, in no case is final performance native English-like. We focused our training on the third formant (F3) onset frequency, shown to be the most reliable indicator of /r–l/ category membership. We first presented listeners with instances of synthetic /r–l/ stimuli varying only in F3 onset frequency, in a forced-choice identification training task with feedback. Evidence of learning was limited. The second experiment utilized an adaptive paradigm beginning with non-speech stimuli consisting only of /r/ and /l/ F3 frequency trajectories and progressing to synthetic speech instances of /ra–la/; half of the trainees received feedback. Improvement was shown by some listeners, suggesting that some enhancement of /r–l/ identification is possible following training with only F3 onset frequency. However, only a subset of these listeners showed signs of generalization of the training effect beyond the trained synthetic context.
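
The sketch below illustrates, with made-up parameter values, the two ingredients of the training described above: an F3 onset-frequency continuum between /r/-like and /l/-like endpoints, and a simple 2-down/1-up adaptive rule that makes the contrast harder after consecutive correct responses. Neither the endpoint frequencies nor the adaptive rule are claimed to match the study's synthesis or staircase parameters.

```python
# Minimal sketch: an F3 onset continuum plus a 2-down/1-up adaptive rule.
import numpy as np

# (a) 11-step F3 onset-frequency continuum (Hz); endpoint values are illustrative.
f3_r, f3_l = 1600.0, 2600.0
continuum = np.linspace(f3_r, f3_l, 11)

# (b) Adaptive difficulty: higher level = stimulus pairs closer to the boundary.
#     Harder after two consecutive correct responses, easier after an error.
def update_level(level: int, streak: int, correct: bool, max_level: int = 5):
    if correct:
        streak += 1
        if streak == 2:
            return min(level + 1, max_level), 0
        return level, streak
    return max(level - 1, 0), 0

level, streak = 0, 0
for correct in [True, True, True, False, True, True]:   # mock response sequence
    level, streak = update_level(level, streak, correct)
print("continuum (Hz):", np.round(continuum).astype(int))
print("final difficulty level:", level)
```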


1983, Vol. 73(5), pp. 1779-1793
Author(s): Diane Kewley-Port, David B. Pisoni, Michael Studdert-Kennedy

Author(s): Inger Karin Almås, Dean P. Smith, Sigmund Eldevik, Svein Eikeseth

We evaluated whether intraverbal and reverse intraverbal behavior emerged following listener training in children with autism spectrum disorder (ASD). Six participants were each taught three sets of three "when?" questions in listener training. A multiple baseline design across behaviors (stimulus sets) was used to assess the effects of listener training. Results showed that intraverbal behavior emerged following listener training for five out of six participants. One participant received additional listener training and intraverbal training before intraverbal behavior emerged. Furthermore, reverse intraverbal responding occurred across all three sets of questions for three of the six participants. Establishing listener behavior may be a pathway for emergent intraverbal and reverse intraverbal responding in children with ASD. Future research could examine what skill repertoire may facilitate such transfer.


1998, Vol. 104(3), pp. 1777-1777
Author(s): Jeannette M. Denton, Yukari Hirata, Joanna H. Lowenstein, Candace V. Perez, Karen L. Landahl

Author(s): Daniel Recasens

The Discussion chapter summarizes the main findings of the book regarding those contextual, positional, and prosodic conditions which trigger velar and labial softening, and the acoustic cues which are responsible for the integration of (alveolo)palatal stops as affricates differing in place of articulation. The arguments in support of an articulation-based interpretation of these sound changes are also summarized and evaluated. The chapter also addresses some phonological issues, namely, why (alveolo)palatal stops are phonetically but not phonologically frequent, and the extent to which their realization is conditioned by the number of dorsal-stop phonemes in the language.


2007, Vol. 35(2), pp. 180-209
Author(s): Jennifer Cole, Heejin Kim, Hansook Choi, Mark Hasegawa-Johnson
