Motor-Motor Adaptation to Speech: Further Investigations

1990 ◽  
Vol 71 (1) ◽  
pp. 275-280
Author(s):  
Linda I. Shuster

The two experiments described in this paper were designed to investigate further the phenomenon called motor-motor adaptation. In the first investigation, subjects were adapted while noise was presented through headphones, which prevented them from hearing themselves. In the second experiment, subjects repeated an isolated vowel, as well as a consonant-vowel syllable which contained a stop consonant. The findings indicated that motor-motor adaptation is not a product of perceptual adaptation, nor is it a result of subjects producing longer voice onset times after adaptation to a voiced consonant rather than shorter voice onset times after adaptation to a voiceless consonant.

2018 ◽  
Vol 4 (s2) ◽  
Author(s):  
Eleanor Chodroff ◽  
Colin Wilson

The present study investigates patterns of covariation among acoustic properties of stop consonants in a large multi-talker corpus of American English connected speech. Relations among talker means for different stops on the same dimension (between-category covariation) were considerably stronger than those for different dimensions of the same stop (within-category covariation). The existence of between-category covariation supports a uniformity principle that restricts the mapping from phonological features to phonetic targets in the sound system of each speaker. This principle was formalized with factor analysis, in which observed covariation derives from a lower-dimensional space of talker variation. Knowledge of between-category phonetic covariation could facilitate perceptual adaptation to novel talkers by providing a rational basis for generalizing idiosyncratic properties to several sounds on the basis of limited exposure.
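As a rough illustration of the factor-analytic formalization described above, the following Python sketch simulates talker mean VOTs for /p/, /t/, /k/ with a shared per-talker offset and fits a one-factor model. The population means, noise levels, and use of scikit-learn's FactorAnalysis are assumptions made for illustration, not the study's data or pipeline.

```python
# Minimal sketch (not the authors' pipeline): simulate talker mean VOTs for
# /p/, /t/, /k/ with a shared per-talker offset, then check that a one-factor
# model captures the between-category covariation.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_talkers = 200

base = np.array([60.0, 70.0, 80.0])               # assumed mean VOTs (ms) for /p/, /t/, /k/
talker_offset = rng.normal(0.0, 12.0, n_talkers)  # shared "uniformity" factor per talker
noise = rng.normal(0.0, 4.0, (n_talkers, 3))      # category-specific scatter

vot_means = base + talker_offset[:, None] + noise  # talkers x categories

# Between-category correlations are high when the offset is shared.
print(np.corrcoef(vot_means, rowvar=False).round(2))

# A single latent factor should account for most of the shared variance.
fa = FactorAnalysis(n_components=1).fit(vot_means)
print("loadings:", fa.components_.round(2))
print("unique variances:", fa.noise_variance_.round(2))
```

If the one-factor loadings are large and uniform across the three stops while the unique variances stay small, the simulated talkers behave as the uniformity principle predicts.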


2015 ◽  
Vol 47 (1) ◽  
pp. 1
Author(s):  
Linlin YAN ◽  
Zhe WANG ◽  
Yuanyuan LI ◽  
Ming ZHONG ◽  
Yuhao SUN ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daniel H. Blustein ◽  
Ahmed W. Shehata ◽  
Erin S. Kuylenstierna ◽  
Kevin B. Englehart ◽  
Jonathon W. Sensinger

When a person makes a movement, a motor error is typically observed that then drives motor planning corrections on subsequent movements. This error correction, quantified as a trial-by-trial adaptation rate, provides insight into how the nervous system is operating, particularly regarding how much confidence a person places in different sources of information such as sensory feedback or motor command reproducibility. Traditional analysis has required carefully controlled laboratory conditions such as the application of perturbations or error clamping, limiting the usefulness of motor analysis in clinical and everyday environments. Here we focus on error adaptation during unperturbed and naturalistic movements. With increasing motor noise, we show that the conventional estimation of trial-by-trial adaptation increases, a counterintuitive finding that is the consequence of systematic bias in the estimate due to noise masking the learner’s intention. We present an analytic solution relying on stochastic signal processing to reduce this effect of noise, producing an estimate of motor adaptation with reduced bias. The result is an improved estimate of trial-by-trial adaptation in a human learner compared to conventional methods. We demonstrate the effectiveness of the new method in analyzing simulated and empirical movement data under different noise conditions.
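The direction of the bias described above can be illustrated with a toy state-space learner: the conventional estimate (regressing the trial-to-trial change in error on the previous observed error) inflates as execution noise grows, because the same noise sample enters both the observed error and its apparent correction. The simulation below is a minimal sketch under assumed noise parameters, not the paper's model or its analytic correction.

```python
# Toy illustration of noise-induced bias in the conventional trial-by-trial
# adaptation estimate (not the paper's correction method).
import numpy as np

def simulate_and_estimate(true_rate=0.2, exec_sd=1.0, plan_sd=1.0,
                          n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    plan = 0.0
    observed = np.empty(n_trials)
    for t in range(n_trials):
        observed[t] = plan + rng.normal(0.0, exec_sd)            # error the experimenter sees
        plan = plan - true_rate * observed[t] + rng.normal(0.0, plan_sd)
    # Conventional estimate: slope of -(e[t+1] - e[t]) regressed on e[t].
    x, y = observed[:-1], -np.diff(observed)
    return np.polyfit(x, y, 1)[0]

for exec_sd in (0.5, 1.0, 2.0):
    est = simulate_and_estimate(exec_sd=exec_sd)
    print(f"execution noise SD {exec_sd}: estimated rate {est:.2f} (true 0.20)")
```

As the execution-noise standard deviation grows, the regression slope drifts further above the true adaptation rate, reproducing the counterintuitive pattern the abstract describes.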


2021 ◽  
pp. 026765832110089
Author(s):  
Daniel J Olson

Featural approaches to second language phonetic acquisition posit that the development of new phonetic norms relies on sub-phonemic features, expressed through a constellation of articulatory gestures and their corresponding acoustic cues, which may be shared across multiple phonemes. Within featural approaches, largely supported by research in speech perception, debate remains as to the fundamental scope or ‘size’ of featural units. The current study examines potential featural relationships between voiceless and voiced stop consonants, as expressed through the voice onset time (VOT) cue. Native English-speaking learners of Spanish received targeted training on Spanish voiceless stop consonant production through a visual feedback paradigm. Analysis focused on the change in VOT, for both voiceless (i.e. trained) and voiced (i.e. non-trained) phonemes, across the pretest, posttest, and delayed posttest. The results demonstrated a significant improvement (i.e. reduction) in VOT for voiceless stops, which were subject to the training paradigm. In contrast, there was no significant change in the non-trained voiced stop consonants. These results suggest a limited featural relationship, with independent VOT cues for voiceless and voiced phonemes. Possible underlying mechanisms that limit feature generalization in second language (L2) phonetic production, including gestural considerations and acoustic similarity, are discussed.
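A minimal sketch of the kind of summary such an analysis involves is shown below, assuming a hypothetical long-format data file with participant, phase, voicing, and vot_ms columns; the file name and column names are illustrative, not the study's materials.

```python
# Minimal sketch (hypothetical data layout, not the study's dataset): summarize
# VOT change from pretest to posttest separately for trained (voiceless) and
# non-trained (voiced) stops.
import pandas as pd

# Assumed columns: participant, phase (pretest/posttest/delayed), voicing
# (voiceless/voiced), vot_ms. The file name is hypothetical.
df = pd.read_csv("vot_productions.csv")

summary = (df.groupby(["voicing", "phase"])["vot_ms"]
             .agg(["mean", "std", "count"])
             .round(1))
print(summary)

# Per-participant pre-to-post change, the quantity of interest for a training effect.
wide = (df[df.phase.isin(["pretest", "posttest"])]
          .pivot_table(index=["participant", "voicing"],
                       columns="phase", values="vot_ms"))
wide["change_ms"] = wide["posttest"] - wide["pretest"]
print(wide.groupby("voicing")["change_ms"].describe().round(1))
```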


1999 ◽  
Vol 82 (5) ◽  
pp. 2346-2357 ◽  
Author(s):  
Mitchell Steinschneider ◽  
Igor O. Volkov ◽  
M. Daniel Noh ◽  
P. Charles Garell ◽  
Matthew A. Howard

Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20–40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20–25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
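A toy simulation can illustrate why a 20-ms VOT syllable would evoke an AEP resembling the 0-ms case: if the voicing-onset transient is strongly suppressed whenever the interval falls inside an assumed ~25-ms recovery window, the short-VOT waveform is dominated by the consonant-onset response alone. The kernel shape, suppression factor, and recovery window below are assumptions for illustration, not fits to the recorded potentials.

```python
# Toy illustration (not the recorded data): model the AEP as two transient
# deflections, one at consonant onset and one at voicing onset, with the
# second deflection suppressed when the VOT falls inside an assumed ~25 ms
# recovery window. Short-VOT syllables then resemble the 0-ms case.
import numpy as np

fs = 1000                        # 1 kHz sampling, for simplicity
t = np.arange(0, 0.3, 1 / fs)    # 300 ms epoch

def transient(onset_s, amp=1.0, tau=0.01):
    """Damped 40 Hz deflection starting at onset_s (stand-in for one AEP component)."""
    s = t - onset_s
    return amp * np.where(s >= 0, np.sin(2 * np.pi * 40 * s) * np.exp(-s / tau), 0.0)

def voicing_amp(vot_s, recovery_s=0.025):
    """Assumed recovery rule: the voicing-onset response is strongly suppressed
    when VOT is shorter than ~25 ms."""
    return 0.15 if vot_s < recovery_s else 1.0

reference = transient(0.0)       # 0-ms VOT: single merged response
for vot_ms in (0, 20, 40, 60, 80):
    vot_s = vot_ms / 1000
    aep = transient(0.0) + transient(vot_s, amp=voicing_amp(vot_s))
    r = np.corrcoef(aep, reference)[0, 1]
    print(f"VOT {vot_ms:2d} ms: similarity to 0-ms waveform r = {r:.2f}")
```

Under these assumptions the 20-ms waveform correlates almost perfectly with the 0-ms waveform, while the 40-, 60-, and 80-ms waveforms diverge because a full-amplitude voicing-onset component appears.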

