HEARINGS AND MISHEARINGS: DECRYPTING THE SPOKEN WORD

2020 ◽  
Vol 23 (03) ◽  
pp. 2050008
Author(s):  
Anita Mehta ◽  
Jean-Marc Luck

We propose a model of the speech perception of individual words in the presence of mishearings. This phenomenological approach is based on concepts used in linguistics, and provides a formalism that is universal across languages. We put forward an efficient two-parameter form for the word length distribution, and introduce a simple representation of mishearings, which we use in our subsequent modeling of word recognition. In a context-free scenario, word recognition often occurs via anticipation when, part-way into a word, we can correctly guess its full form. We give a quantitative estimate of this anticipation threshold when no mishearings occur, in terms of model parameters. As might be expected, the whole anticipation effect disappears when there are sufficiently many mishearings. Our global approach to the problem of speech perception is in the spirit of an optimization problem. We show for instance that speech perception is easy when the word length is less than a threshold, to be identified with a static transition, and hard otherwise. We extend this to the dynamics of word recognition, proposing an intuitive approach highlighting the distinction between individual, isolated mishearings and clusters of contiguous mishearings. At least in some parameter range, a dynamical transition is manifest well before the static transition is reached, as is the case for many other examples of complex systems.
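
The anticipation mechanism described here lends itself to a quick illustration. The sketch below is not the authors' formalism: it assumes a toy lexicon of letter strings standing in for phoneme sequences, models a mishearing as a random symbol substitution with probability p, and counts a word as anticipated when some proper prefix of the heard signal already matches it uniquely. As p grows, the anticipation rate drops, which is the qualitative effect the abstract describes.

```python
import random
import string

# Toy lexicon: letter strings stand in for phoneme sequences (an assumption).
LEXICON = ["cat", "catalog", "catalyst", "cabin", "candle", "dog", "dogma"]

def mishear(word, p, rng):
    """Corrupt each symbol independently with probability p."""
    return "".join(rng.choice(string.ascii_lowercase) if rng.random() < p else c
                   for c in word)

def anticipated(word, p, rng, lexicon=LEXICON):
    """True if some proper prefix of the (possibly misheard) word already
    singles out the intended word within the lexicon."""
    heard = mishear(word, p, rng)
    for k in range(1, len(word)):
        matches = [w for w in lexicon if w.startswith(heard[:k])]
        if matches == [word]:
            return True
    return False

rng = random.Random(0)
trials = 200
for p in (0.0, 0.1, 0.3):
    hits = sum(anticipated(w, p, rng) for w in LEXICON for _ in range(trials))
    rate = hits / (trials * len(LEXICON))
    print(f"mishearing probability {p:.1f}: anticipation rate {rate:.2f}")
```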

Author(s):  
David B. Pisoni ◽  
Susannah V. Levi

This article examines how new approaches—coupled with previous insights—provide a new framework for questions that deal with the nature of phonological and lexical knowledge and representation, processing of stimulus variability, and perceptual learning and adaptation. First, it outlines the traditional view of speech perception and identifies some problems with assuming such a view, in which only abstract representations exist. The article then discusses some new approaches to speech perception that retain detailed information in the representations. It also considers a view which rejects abstraction altogether, but shows that such a view has difficulty dealing with a range of linguistic phenomena. After providing a brief discussion of some new directions in linguistics that encode both detailed information and abstraction, the article concludes by discussing the coupling of speech perception and spoken word recognition.


2021 ◽  
Author(s):  
James Magnuson ◽  
Zhaobin Li ◽  
Anne Marie Crinnion

Language scientists often need to generate lists of related words, such as potential competitors. They may do this for purposes of experimental control (e.g., selecting items matched on lexical neighborhood but varying in word frequency), or to test theoretical predictions (e.g., hypothesizing that a novel type of competitor may impact word recognition). Several online tools are available, but most are constrained to a fixed lexicon and fixed sets of competitor definitions, and may not give the user full access to or control of source data. We present LexFindR, an open source R package that can be easily modified to include additional, novel competitor types. LexFindR is easy to use. Because it can leverage multiple CPU cores and uses vectorized code when possible, it is also extremely fast. In this article, we present an overview of LexFindR usage, illustrated with examples. We also explain the details of how we implemented several standard lexical competitor types used in spoken word recognition research (e.g., cohorts, neighbors, embeddings, rhymes), and show how “lexical dimensions” (e.g., word frequency, word length, uniqueness point) can be integrated into LexFindR workflows (for example, to calculate “frequency weighted competitor probabilities”), for both spoken and visual word recognition research.
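
The competitor types named above have compact formal definitions. As a hedged illustration only (Python rather than LexFindR's R interface, over a hypothetical toy lexicon with invented frequencies), one plausible rendering of cohorts, neighbors, rhymes, and a frequency-weighted competitor probability looks like this:

```python
from typing import Dict, List

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance (substitutions, additions, deletions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_neighbor(target: str, candidate: str) -> bool:
    """Neighbor: differs from the target by exactly one edit."""
    return candidate != target and edit_distance(target, candidate) == 1

def is_cohort(target: str, candidate: str, overlap: int = 2) -> bool:
    """Cohort: shares the first `overlap` segments with the target."""
    return candidate != target and candidate[:overlap] == target[:overlap]

def is_rhyme(target: str, candidate: str) -> bool:
    """Rhyme: same length, mismatching only in the initial segment."""
    return (candidate != target and len(candidate) == len(target)
            and candidate[1:] == target[1:])

def fwcp(target: str, freqs: Dict[str, float], competitors: List[str]) -> float:
    """One common form of a frequency-weighted competitor probability:
    target frequency over the target-plus-competitor frequency mass."""
    return freqs[target] / (freqs[target] + sum(freqs[c] for c in competitors))

# Hypothetical toy lexicon (strings stand in for phoneme transcriptions).
freqs = {"kat": 120.0, "kab": 30.0, "bat": 80.0, "katalog": 5.0, "sat": 40.0}
target = "kat"
neighbors = [w for w in freqs if is_neighbor(target, w)]   # kab, bat, sat
cohorts = [w for w in freqs if is_cohort(target, w)]       # kab, katalog
rhymes = [w for w in freqs if is_rhyme(target, w)]         # bat, sat
print(neighbors, cohorts, rhymes)
print("FWCP over neighbors:", round(fwcp(target, freqs, neighbors), 3))
```

LexFindR itself operates over user-supplied (typically phonologically transcribed) lexicons with vectorized R code; the sketch only mirrors the competitor definitions, not the package's implementation.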


2012 ◽  
Vol 16 (1) ◽  
pp. 1-19 ◽  
Author(s):  
Elżbieta Łukasiewicz

Perception, Processing and Storage of Subphonemic and Extralinguistic Features in Spoken Word Recognition - An Argument from Language Variation and Change

Recent research on speech perception and word recognition has shown that fine-grained sub-phonemic as well as speaker- and episode-specific characteristics of a speech signal are integrally connected with segmental (phonemic) information; they are all most probably processed in a non-distinct manner, and stored in the lexical memory. This view contrasts with the traditional approach holding that we operate on abstract phonemic representations extracted from a particular acoustic signal, without the need to process and store the multitude of its individual features. In the paper, I want to show that this turn towards the "particulars" of a speech event was in fact quite predictable, and the so-called traditional view would most probably have never been formulated if studies on language variation and language change-in-progress had been taken into account when constructing models of speech perception. In part one, I discuss briefly the traditional view ("abstract representations only"), its theoretical background, and outline some problems, internal to the speech perception theory, that the traditional view encounters. Part two demonstrates that what we know about the implementation of sound changes has long made it possible to answer, once and for all, the question of integrated processing and storage of extralinguistic, phonemic and subphonemic characteristics of the speech signal.


2004 ◽  
Vol 47 (3) ◽  
pp. 496-508 ◽  
Author(s):  
Elizabeth A. Collison ◽  
Benjamin Munson ◽  
Arlene Earley Carney

This study examined spoken word recognition in adults with cochlear implants (CIs) to determine the extent to which linguistic and cognitive abilities predict variability in speech-perception performance. Both a traditional consonant-vowel-consonant (CVC)-repetition measure and a gated-word recognition measure (F. Grosjean, 1996) were used. Stimuli in the gated-word-recognition task varied in neighborhood density. Adults with CIs repeated CVC words less accurately than did age-matched adults with normal hearing sensitivity (NH). In addition, adults with CIs required more acoustic information to recognize gated words than did adults with NH. Neighborhood density had a smaller influence on gated-word recognition by adults with CIs than on recognition by adults with NH. With the exception of 1 outlying participant, standardized, norm-referenced measures of cognitive and linguistic abilities were not correlated with word-recognition measures. Taken together, these results do not support the hypothesis that cognitive and linguistic abilities predict variability in speech-perception performance in a heterogeneous group of adults with CIs. Findings are discussed in light of the potential role of auditory perception in mediating relations among cognitive and linguistic skill and spoken word recognition.

