Interaction, function words, and the wider goals of speech perception

2000 ◽  
Vol 23 (3) ◽  
pp. 346-346
Author(s):  
Richard Shillcock

We urge caution in generalising from content words to function words, in which lexical-to-phonemic feedback might be more likely. Speech perception involves more than word recognition; feedback might be outside the narrow logic of word identification but still be present for other purposes. Finally, we raise the issue of evidence from imaging studies of auditory hallucination.

1997 ◽  
Vol 40 (6) ◽  
pp. 1395-1405 ◽  
Author(s):  
Karen Iler Kirk ◽  
David B. Pisoni ◽  
R. Christopher Miyamoto

Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss. The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed. Word-recognition performance was significantly higher for lexically easy words than lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. The pattern of results suggests that perceptually robust speech-discrimination tests are able to assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.
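
As an illustrative aside, the "easy"/"hard" distinction used above combines word frequency with phonological neighborhood density (the number of words that differ from a target by a single phoneme). The Python sketch below shows one way such a classification can be computed; the toy lexicon, frequencies, and thresholds are hypothetical and are not the materials used in the study.

```python
# Minimal sketch: classifying words as lexically "easy" or "hard" from
# word frequency and phonological neighborhood density. The toy lexicon,
# frequencies, and split thresholds are illustrative assumptions, not the
# materials used in the study.

def edit_distance_is_one(a, b):
    """True if phoneme strings a and b differ by exactly one substitution,
    insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):          # ensure `a` is the shorter string
        a, b = b, a
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def neighborhood_density(word, lexicon):
    """Number of lexicon entries one phoneme away from `word`."""
    return sum(1 for other in lexicon
               if other != word and edit_distance_is_one(word, other))

# Toy lexicon of phoneme strings with made-up frequencies (per million).
lexicon = {"kat": 120, "bat": 95, "kit": 60, "zog": 2, "zag": 3, "zug": 1}

for word, freq in lexicon.items():
    density = neighborhood_density(word, lexicon)
    # Hypothetical criterion: frequent words with few neighbors are "easy";
    # infrequent words with many neighbors are "hard".
    label = "easy" if freq >= 50 and density <= 2 else "hard"
    print(f"{word}: freq={freq}, neighbors={density} -> {label}")
```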


2021 ◽  
Author(s):  
Katrina Sue McClannahan ◽  
Amelia Mainardi ◽  
Austin Luor ◽  
Yi-Fang Chiu ◽  
Mitchell S. Sommers ◽  
...  

Background: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. In background noise, however, higher-level cognitive processes play a more substantial role in successful communication. Cognitive resources are often limited in adults with dementia, which may therefore hamper word recognition. Objective: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. Methods: Participants were adults aged 53–86 years with (n=16) or without (n=32) dementia symptoms as classified by a clinical dementia rating scale. Participants performed a word identification task with two levels of neighborhood density in quiet and in speech-shaped noise at two signal-to-noise ratios (SNRs), +6 dB and +3 dB. Our hypothesis was that listeners with mild dementia would have more difficulty with speech perception in noise under conditions that tax cognitive resources. Results: Listeners with mild dementia had poorer speech perception accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, phonological neighborhood density was not a factor in identification task performance for either group. Conclusion: These results affirm the difficulty that listeners with mild dementia have with spoken word recognition, both in quiet and in background noise, consistent with a key role of cognitive resources in spoken word identification. However, the impact of neighborhood density in these listeners is less clear.
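
As a sketch under stated assumptions, the snippet below shows one conventional way to present speech in speech-shaped noise at a fixed SNR such as +6 or +3 dB, by scaling the masker relative to the RMS level of the speech. The NumPy/SciPy calls and file names are illustrative and are not the study's actual stimulus pipeline.

```python
# Minimal sketch: mixing a speech token with speech-shaped noise at a target
# SNR (in dB), based on RMS levels. Assumes mono signals at the same sample
# rate; the file names below are placeholders, not the study's materials.
import numpy as np
from scipy.io import wavfile

def rms(x):
    return np.sqrt(np.mean(np.square(x, dtype=np.float64)))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that rms(speech)/rms(scaled noise) matches the
    requested SNR, then add it to the speech."""
    noise = noise[: len(speech)]                     # trim masker to token length
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

fs, speech = wavfile.read("word_token.wav")          # placeholder file names
fs_n, noise = wavfile.read("speech_shaped_noise.wav")
assert fs == fs_n, "speech and noise must share a sample rate"

for snr in (6, 3):                                   # the two SNRs used above
    mixed = mix_at_snr(speech.astype(np.float64), noise.astype(np.float64), snr)
    mixed /= np.max(np.abs(mixed))                   # normalize to avoid clipping
    wavfile.write(f"word_snr{snr:+d}dB.wav", fs, mixed.astype(np.float32))
```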


2004 ◽  
Vol 16 (3) ◽  
pp. 154-159 ◽  
Author(s):  
Seung-Hwan Lee ◽  
Young-Cho Chung ◽  
Jong-Chul Yang ◽  
Yong-Ku Kim ◽  
Kwang-Yoon Suh

Background: The neurobiological mechanism of auditory hallucination (AH) in schizophrenia remains elusive, but AH can be caused by an abnormality in the speech perception system, according to the speech perception neural network model. Objectives: The purpose of this study was to investigate whether schizophrenic patients with AH show speech processing impairments compared with schizophrenic patients without AH, and whether speech perception ability improves after AH has subsided. Methods: Twenty-four schizophrenic patients with AH were compared with 25 schizophrenic patients without AH. Narrative speech perception was assessed using a masked speech tracking (MST) task with three levels of superimposed phonetic noise. A sentence repetition task (SRT) and an auditory continuous performance task (CPT) were used to assess grammar-dependent verbal working memory and non-language attention, respectively. These tests were administered before and after treatment in both groups. Results: Before treatment, schizophrenic patients with AH showed significant impairments in MST compared with those without AH. There were no significant differences in SRT or CPT correct (CPT-C) rates between the two groups, but the CPT incorrect (CPT-I) rate differed significantly. When patients were split by CPT-I score, the low-scoring subgroup showed a significant between-group difference in MST performance, whereas the high-scoring subgroup did not. After treatment (after AH had subsided), the hallucinating schizophrenic patients still showed significant impairment in MST performance compared with non-hallucinating schizophrenic patients. Conclusions: Our results support the claim that schizophrenic patients with AH are likely to have a disturbance of the speech perception system. Moreover, our data suggest that non-language attention might be a key factor influencing speech perception ability and that speech perception dysfunction might be a trait marker in schizophrenia with AH.


Author(s):  
David B. Pisoni ◽  
Susannah V. Levi

This article examines how new approaches—coupled with previous insights—provide a new framework for questions that deal with the nature of phonological and lexical knowledge and representation, processing of stimulus variability, and perceptual learning and adaptation. First, it outlines the traditional view of speech perception and identifies some problems with assuming such a view, in which only abstract representations exist. The article then discusses some new approaches to speech perception that retain detailed information in the representations. It also considers a view which rejects abstraction altogether, but shows that such a view has difficulty dealing with a range of linguistic phenomena. After providing a brief discussion of some new directions in linguistics that encode both detailed information and abstraction, the article concludes by discussing the coupling of speech perception and spoken word recognition.


2015 ◽  
Vol 26 (10) ◽  
pp. 815-823 ◽  
Author(s):  
Jijo Pottackal Mathai ◽  
Sabarish Appu

Background: Auditory neuropathy spectrum disorder (ANSD) is a form of sensorineural hearing loss that causes severe deficits in speech perception. The perceptual problems of individuals with ANSD have been attributed to temporal processing impairment rather than to reduced audibility, which makes rehabilitation with hearing aids difficult. Although hearing aids can restore audibility, compression circuits in a hearing aid might distort the temporal modulations of speech, causing poor aided performance. Therefore, hearing aid settings that preserve the temporal modulations of speech might be an effective way to improve speech perception in ANSD. Purpose: The purpose of the study was to investigate the perception of hearing aid–processed speech in individuals with late-onset ANSD. Research Design: A repeated measures design was used to study the effect of various compression time settings on speech perception and perceived quality. Study Sample: Seventeen individuals with late-onset ANSD within the age range of 20–35 yr participated in the study. Data Collection and Analysis: The word recognition scores (WRSs) and quality judgments of phonemically balanced words, processed using four different compression settings of a hearing aid (slow, medium, fast, and linear), were evaluated. The modulation spectra of hearing aid–processed stimuli were estimated to probe the effect of amplification on the temporal envelope of speech. Repeated measures analysis of variance and post hoc Bonferroni pairwise comparisons were used to analyze word recognition performance and quality judgments. Results: Word recognition was significantly higher for unprocessed stimuli than for any of the four hearing aid–processed conditions. Although perception of words processed with the slow compression time setting was significantly higher than with the fast setting, the difference was only 4%, and there were no significant differences in perception between any other hearing aid–processed stimuli. Analysis of the temporal envelope of hearing aid–processed stimuli revealed minimal changes across the four hearing aid settings. In terms of quality, the largest number of individuals preferred stimuli processed with the slow compression time setting, followed by those who preferred the medium setting; none of the individuals preferred the fast compression time setting. Analysis of quality judgments showed that the slow, medium, and linear settings received significantly higher preference scores than the fast compression setting. Conclusions: Individuals with ANSD showed no marked difference in perception of speech processed using the four different hearing aid settings. However, significantly higher preference, in terms of quality, was found for stimuli processed using the slow, medium, and linear settings over the fast one. Therefore, whenever hearing aids are recommended for ANSD, those having slow compression time settings or linear amplification may be chosen over fast (syllabic) compression. In addition, WRSs obtained using hearing aid–processed stimuli were markedly poorer than for unprocessed stimuli, which suggests that processing speech through hearing aids might cause a large reduction in performance in individuals with ANSD. However, further evaluation is needed using individually programmed hearing aids rather than hearing aid–processed stimuli.
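
The modulation-spectrum analysis mentioned above can be approximated by extracting the temporal (Hilbert) envelope of each stimulus and inspecting its low-frequency spectral content. The Python sketch below illustrates that general approach with a synthetic amplitude-modulated signal; it assumes NumPy/SciPy and is not the authors' analysis code.

```python
# Minimal sketch: estimating the modulation spectrum of a speech token via
# the Hilbert envelope. Assumes a mono NumPy array `signal` at rate `fs`;
# this approximates the kind of analysis described, not the authors' code.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, periodogram

def modulation_spectrum(signal, fs, env_cutoff_hz=50.0):
    # 1. Temporal envelope: magnitude of the analytic signal.
    envelope = np.abs(hilbert(signal))
    # 2. Low-pass the envelope to keep only the slow modulations.
    b, a = butter(4, env_cutoff_hz / (fs / 2.0), btype="low")
    envelope = filtfilt(b, a, envelope)
    # 3. Spectrum of the mean-removed envelope = modulation spectrum.
    freqs, power = periodogram(envelope - envelope.mean(), fs=fs)
    return freqs, power

# Example with a synthetic amplitude-modulated tone (4 Hz modulation),
# standing in for a hearing aid-processed word.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
signal = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
freqs, power = modulation_spectrum(signal, fs)
peak = freqs[np.argmax(power[freqs < 20])]
print(f"dominant modulation frequency ~ {peak:.1f} Hz")  # expect ~4 Hz
```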


2021 ◽  
Vol 12 ◽  
Author(s):  
Ana Marcet ◽  
María Fernández-López ◽  
Melanie Labusch ◽  
Manuel Perea

Recent research has found that the omission of accent marks in Spanish does not produce slower word identification times in go/no-go lexical decision and semantic categorization tasks [e.g., cárcel (prison) = carcel], thus suggesting that vowels like á and a are represented by the same orthographic units during word recognition and reading. However, there is a discrepant finding with the yes/no lexical decision task, where the words with the omitted accent mark produced longer response times than the words with the accent mark. In Experiment 1, we examined this discrepant finding by running a yes/no lexical decision experiment comparing the effects for words and non-words. Results showed slower response times for the words with omitted accent mark than for those with the accent mark present (e.g., cárcel < carcel). Critically, we found the opposite pattern for non-words: response times were longer for the non-words with accent marks (e.g., cárdil > cardil), thus suggesting a bias toward a “word” response for accented items in the yes/no lexical decision task. To test this interpretation, Experiment 2 used the same stimuli with a blocked design (i.e., accent mark present vs. omitted in all items) and a go/no-go lexical decision task (i.e., respond only to “words”). Results showed similar response times to words regardless of whether the accent mark was omitted (e.g., cárcel = carcel). This pattern strongly suggests that the longer response times to words with an omitted accent mark in yes/no lexical decision experiments are a task-dependent effect rather than a genuine reading cost.
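
The bias account above rests on crossing lexicality (word vs. non-word) with accent-mark presence and comparing response times in each cell. The short pandas sketch below shows that 2 x 2 summary on hypothetical trial-level data; the column names and values are invented for illustration and are not taken from the experiments.

```python
# Minimal sketch: summarizing a 2 x 2 lexical decision design
# (lexicality x accent-mark presence) from trial-level data. The column
# names and toy trials are hypothetical, not the experiments' data.
import pandas as pd

trials = pd.DataFrame(
    {
        "lexicality": ["word", "word", "nonword", "nonword"] * 3,
        "accent":     ["present", "omitted", "present", "omitted"] * 3,
        "rt_ms":      [620, 655, 710, 680, 605, 660, 720, 690, 615, 650, 705, 685],
        "correct":    [1] * 12,
    }
)

# Mean correct-response RT per cell; the pattern reported above is
# words: present < omitted, non-words: present > omitted.
cell_means = (
    trials[trials["correct"] == 1]
    .groupby(["lexicality", "accent"])["rt_ms"]
    .mean()
    .unstack("accent")
)
print(cell_means)
print("accent-omission cost (words):",
      cell_means.loc["word", "omitted"] - cell_means.loc["word", "present"], "ms")
```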


Author(s):  
Samuel Evans ◽  
Stuart Rosen

Purpose: Many children have difficulties understanding speech. At present, there are few assessments that test for subtle impairments in speech perception with normative data from U.K. children. We present a new test that evaluates children's ability to identify target words in background noise by choosing between minimal pair alternatives that differ by a single articulatory phonetic feature. This task (a) is tailored to testing young children, but also readily applicable to adults; (b) has minimal memory demands; (c) adapts to the child's ability; and (d) does not require reading or verbal output. Method: We tested 155 children and young adults aged from 5 to 25 years on this new test of single word perception. Results: Speech-in-noise abilities in this particular task develop rapidly through childhood until they reach maturity at around 9 years of age. Conclusions: We make this test freely available and provide associated normative data. We hope that it will be useful to researchers and clinicians in the assessment of speech perception abilities in children who are hard of hearing or have developmental language disorder, dyslexia, or auditory processing disorder. Supplemental Material https://doi.org/10.23641/asha.17155934
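
The abstract notes that the task adapts to the child's ability but does not state the adaptive rule. Purely as an assumption about how such adaptation commonly works in speech-in-noise testing, the sketch below simulates a generic one-down/one-up SNR staircase against a simulated listener; it is not a description of this test's actual procedure.

```python
# Minimal sketch: a generic one-down/one-up adaptive SNR staircase of the
# kind often used in speech-in-noise testing. The step size, starting SNR,
# and simulated listener are illustrative assumptions, not this test's rule.
import random

def simulated_listener(snr_db, srt_db=-2.0, slope=0.5):
    """Probability of a correct response rises with SNR around a true SRT."""
    p_correct = 1.0 / (1.0 + 10 ** (-slope * (snr_db - srt_db)))
    return random.random() < p_correct

def run_staircase(n_trials=40, start_snr=10.0, step_db=2.0):
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = simulated_listener(snr)
        track.append((snr, correct))
        snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
    # Estimate threshold as the mean SNR over the last half of the track.
    tail = track[len(track) // 2:]
    return sum(s for s, _ in tail) / len(tail)

random.seed(1)
print(f"estimated speech reception threshold ~ {run_staircase():.1f} dB SNR")
```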


2021 ◽  
pp. 174702182110446
Author(s):  
Ana Marcet ◽  
Manuel Perea

Lexical stress in multisyllabic words is consistent in some languages (e.g., first syllable in Finnish), but it is variable in others (e.g., Spanish, English). To aid lexical processing in a transparent language like Spanish, scholars have proposed a set of rules specifying which words require a written accent mark indicating lexical stress. However, recent word recognition experiments using the lexical decision task showed that word identification times were not affected by the omission of a word's accent mark in Spanish. To examine this question in a paradigm with greater ecological validity, we tested whether omitting the accent mark of a Spanish word had a deleterious effect during silent sentence reading. Each target word was embedded in a sentence either with or without its accent mark. Results showed no reading cost of omitting the word's accent mark in first-pass eye fixation durations, but we found a cost in the total reading time spent on the target word (i.e., including re-reading). Thus, the omission of an accent mark delays late, but not early, lexical processing in Spanish. These findings help constrain the locus of accent mark information in models of visual word recognition and reading. Furthermore, they offer some clues on how to simplify the Spanish rules of accentuation.
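
The contrast above between first-pass fixation durations and total reading time can be made concrete with a fixation record. The sketch below computes both measures for a target word from a toy, chronologically ordered list of fixations; the data structure and values are hypothetical, not the study's eye-tracking records.

```python
# Minimal sketch: first-pass gaze duration vs. total reading time for a
# target word, computed from a chronologically ordered fixation record.
# The fixation list and region labels are hypothetical, not the study's data.

# Each fixation: (region index of the fixated word, duration in ms).
fixations = [(1, 210), (2, 185), (3, 240), (3, 160), (4, 220), (3, 190), (5, 230)]
target = 3  # region index of the target word (e.g., the accented word)

def first_pass_time(fixations, target):
    """Sum durations of fixations on the target before the eyes first
    leave it (i.e., up to the first fixation on another region that
    follows the first fixation on the target)."""
    total, entered = 0, False
    for region, dur in fixations:
        if region == target:
            total += dur
            entered = True
        elif entered:
            break  # first-pass reading ends when the target is exited
    return total

def total_reading_time(fixations, target):
    """Sum of all fixation durations on the target, including re-reading."""
    return sum(dur for region, dur in fixations if region == target)

print("first-pass time:", first_pass_time(fixations, target), "ms")        # 240 + 160 = 400
print("total reading time:", total_reading_time(fixations, target), "ms")  # 400 + 190 = 590
```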

