Effects of speaking rate and lexical status on phonetic perception.

Author(s): Joanne L. Miller, Emily R. Dexter
1984, Vol 76 (S1), pp. S89-S89
Author(s): Joanne L. Miller, Emily R. Dexter, Kimberly A. Pickard

2021, pp. 1-13
Author(s): Gavin M. Bidelman, Claire Pearson, Ashleigh Harrison

Categorical judgments of otherwise identical phonemes are biased toward hearing words (i.e., the “Ganong effect”), suggesting that lexical context influences perception of even basic speech primitives. Lexical biasing could manifest via late-stage, post-perceptual mechanisms related to decision making or, alternatively, via top-down linguistic inference that acts on early perceptual coding. Here, we exploited the temporal sensitivity of EEG to resolve the spatiotemporal dynamics of these context-related influences on speech categorization. Listeners rapidly classified sounds from a /gɪ/-/kɪ/ gradient presented in opposing word-nonword contexts (GIFT-kift vs. giss-KISS), designed to bias perception toward lexical items. Phonetic perception shifted in the direction of words, establishing a robust Ganong effect behaviorally. ERPs revealed a neural analog of lexical biasing emerging within ~200 msec. Source analyses uncovered a distributed neural network supporting the Ganong effect, including middle temporal gyrus, inferior parietal lobe, and middle frontal cortex. Yet, among Ganong-sensitive regions, only left middle temporal gyrus and inferior parietal lobe predicted behavioral susceptibility to lexical influence. Our findings confirm that lexical status rapidly constrains sublexical categorical representations for speech within several hundred milliseconds but likely does so outside the purview of canonical auditory-sensory brain areas.
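The behavioral Ganong effect described above is conventionally quantified as the shift in the category boundary (the 50% crossover of the identification function) between the two biasing contexts. The following is a minimal illustrative sketch, not the study's analysis code: the identification proportions are hypothetical, and the logistic psychometric fit is one standard choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: P('g' response) along the continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def category_boundary(steps, p_g):
    """Fit a logistic and return the 50% crossover (category boundary)."""
    (x0, _k), _ = curve_fit(logistic, steps, p_g, p0=[np.mean(steps), 1.0])
    return x0

# Hypothetical identification data along a 7-step /g/-/k/ gradient.
steps = np.arange(1, 8)
p_g_gift = np.array([0.98, 0.96, 0.90, 0.75, 0.45, 0.15, 0.05])  # GIFT-kift context
p_g_kiss = np.array([0.97, 0.92, 0.78, 0.50, 0.20, 0.08, 0.03])  # giss-KISS context

# A positive shift means more /g/ responses in the word-/g/ context,
# i.e., perception biased toward lexical items.
shift = category_boundary(steps, p_g_gift) - category_boundary(steps, p_g_kiss)
print(f"Ganong boundary shift: {shift:.2f} steps")
```

With these toy proportions the boundary sits later on the continuum in the GIFT-kift context than in the giss-KISS context, yielding a positive shift.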


Phonetica, 1981, Vol 38 (1-3), pp. 159-180
Author(s): Joanne L. Miller

2020
Author(s): Gavin M. Bidelman, Claire Pearson, Ashleigh Harrison



1983, Vol 73 (S1), pp. S67-S67
Author(s): Joanne L. Miller, Iona L. Aibel, Kerry Green

2020, Vol 63 (1), pp. 59-73
Author(s): Panying Rong

Purpose: The purpose of this article was to validate a novel acoustic analysis of oral diadochokinesis (DDK) in assessing bulbar motor involvement in amyotrophic lateral sclerosis (ALS).

Method: An automated acoustic DDK analysis was developed, which filtered out the voice features and extracted the envelope of the acoustic waveform, reflecting the temporal pattern of syllable repetitions during an oral DDK task (i.e., repetitions of /tɑ/ at the maximum rate on one breath). Cycle-to-cycle temporal variability (cTV) of envelope fluctuations and syllable repetition rate (sylRate) were derived from the envelope and validated against two kinematic measures, tongue movement jitter (movJitter) and alternating tongue movement rate (AMR) during the DDK task, in 16 individuals with bulbar ALS and 18 healthy controls. After the validation, cTV, sylRate, movJitter, and AMR, along with an established clinical speech measure, speaking rate (SR), were compared in their ability to (a) differentiate individuals with ALS from healthy controls and (b) detect early-stage bulbar declines in ALS.

Results: cTV and sylRate were significantly correlated with movJitter and AMR, respectively, across individuals with ALS and healthy controls, confirming the validity of the acoustic DDK analysis in extracting the temporal DDK pattern. Among all the acoustic and kinematic DDK measures, cTV showed the highest diagnostic accuracy (0.87), with 80% sensitivity and 94% specificity in differentiating individuals with ALS from healthy controls, outperforming the SR measure. Moreover, cTV showed a large increase during the early disease stage, preceding the decline of SR.

Conclusions: This study provided preliminary validation of a novel automated acoustic DDK analysis in extracting a useful measure, cTV, for early detection of bulbar ALS. This analysis overcomes a major barrier in existing acoustic DDK analyses, namely continuous voicing between syllables that interferes with syllable structures, and has potential clinical applications as a novel bulbar assessment.
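The envelope-based pipeline the abstract describes (extract the amplitude envelope, locate syllable cycles, then compute sylRate and cycle-to-cycle timing variability) can be sketched as follows. This is an illustrative reconstruction, not the author's implementation: the Hilbert-envelope approach, the 20 ms smoothing window, the peak-detection thresholds, the percentage definition of cTV, and the synthetic /tɑ/-like test signal are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def ddk_envelope_measures(signal, fs):
    """Estimate syllable repetition rate (syllables/s) and cycle-to-cycle
    temporal variability (cTV, %) from the amplitude envelope of a DDK
    recording."""
    env = np.abs(hilbert(signal))
    # Smooth the envelope to suppress voicing-rate fluctuations, keeping
    # only the slower syllable-repetition pattern.
    win = int(0.02 * fs)  # 20 ms moving average (assumed value)
    env = np.convolve(env, np.ones(win) / win, mode="same")
    # One envelope peak per syllable; thresholds are assumptions.
    peaks, _ = find_peaks(env, distance=int(0.1 * fs), height=0.3 * env.max())
    intervals = np.diff(peaks) / fs          # inter-syllable intervals (s)
    syl_rate = 1.0 / intervals.mean()
    # One plausible definition of cycle-to-cycle variability: mean absolute
    # difference between successive intervals, as a percent of the mean.
    ctv = 100 * np.abs(np.diff(intervals)).mean() / intervals.mean()
    return syl_rate, ctv

# Toy example: six /ta/-like bursts at ~5 Hz with slight timing jitter.
fs = 16000
t = np.arange(int(1.4 * fs)) / fs
sig = np.zeros_like(t)
rng = np.random.default_rng(0)
for onset in np.cumsum(0.2 + 0.01 * rng.standard_normal(6)):
    idx = (t > onset) & (t < onset + 0.08)
    sig[idx] += np.sin(2 * np.pi * 200 * t[idx]) * np.hanning(idx.sum())

rate, ctv = ddk_envelope_measures(sig, fs)
print(f"sylRate ~ {rate:.1f} syll/s, cTV ~ {ctv:.1f}%")
```

On this synthetic signal the recovered rate is close to the 5 Hz burst rate, and cTV grows with the injected timing jitter; on real recordings, working on the smoothed envelope rather than the raw waveform is what sidesteps the continuous-voicing problem the conclusion mentions.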

