Assessing the Role of Hemispheric Specialisation, Serial-Position Processing, and Retinal Eccentricity in Lateralised Word Recognition

2003 ◽ Vol 20 (1) ◽ pp. 49-71
Author(s): Timothy R. Jordan, Geoffrey R. Patching, Sharon M. Thomas
2011
Author(s): S. J. Lupker, J. Acha, C. J. Davis, M. Perea

2019 ◽ Vol 72 (11) ◽ pp. 2574-2583
Author(s): Julie Gregg, Albrecht W. Inhoff, Cynthia M. Connine

Spoken word recognition models incorporate the temporal unfolding of word information by assuming that positional match constrains lexical activation. Recent findings challenge this linearity constraint. In the visual world paradigm, Toscano, Anderson, and McMurray observed that listeners preferentially viewed a picture of a target word's anadrome competitor (e.g., competitor bus for target sub) compared with phonologically unrelated distractors (e.g., well) or competitors sharing an overlapping vowel (e.g., sun). Toscano et al. concluded that spoken word recognition relies on coarse-grained spectral similarity for mapping spoken input onto a lexical representation. Our experiments aimed to replicate the anadrome effect and to test the coarse-grained similarity account using competitors without vowel-position overlap (e.g., competitor leaf for target flea). The results confirmed the original effect: anadrome-competitor fixation curves diverged from unrelated distractors approximately 275 ms after the onset of the target word. In contrast, the no-vowel-position-overlap competitors did not show an increase in fixations compared with the unrelated distractors. The contrasting results for the anadrome and no-vowel-position-overlap items are discussed in terms of the theoretical implications of sequential-match versus coarse-grained similarity accounts of spoken word recognition. We also discuss design issues (repetition of stimulus materials and display parameters) concerning the use of the visual world paradigm in making inferences about online spoken word recognition.


2018 ◽ Vol 61 (1) ◽ pp. 145-158
Author(s): Chhayakanta Patro, Lisa Lucks Mendel

Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate the facilitative effects of semantic context on the IPs.

Method: Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli, and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words.

Results: The results indicated that spectrotemporal degradations adversely impacted IPs for gated words, and CI users as well as participants with NH listening to vocoded speech had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups, regardless of the spectral composition of the target speech (full-spectrum or vocoded). Finally, we showed that CI users (and listeners with NH presented with vocoded speech) can overcome such word-processing difficulties with the help of semantic context and perform as well as listeners with NH.

Conclusion: Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation in the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.


2017 ◽ Vol 61 (3) ◽ pp. 430-465
Author(s): Miquel Llompart, Miquel Simonet

This study investigates the production and auditory lexical processing of words involved in a patterned phonological alternation in two dialects of Catalan spoken on the island of Majorca, Spain. One of these dialects, that of Palma, merges /ɔ/ and /o/ as [o] in unstressed position and maintains /u/ as an independent category, [u]. In the dialect of Sóller, a small village, speakers merge unstressed /ɔ/, /o/, and /u/ to [u]. First, a production study asks whether the discrete, rule-based descriptions of the vowel alternations provided in the dialectological literature can adequately account for these processes: are the mergers complete? Results show that the mergers are complete with regard to the main acoustic cue to these vowel contrasts, F1. However, minor differences are maintained for F2 and vowel duration. Second, a lexical decision task using cross-modal priming investigates the strength with which words produced in the phonetic form of the neighboring (versus one's own) dialect activate listeners' lexical representations during spoken word recognition: are words within and across dialects accessed efficiently? The study finds that listeners from one of these dialects, Sóller, process their own and the neighboring forms equally efficiently, while listeners from the other, Palma, process their own forms more efficiently than those of the neighboring dialect. This study has implications for our understanding of the role of lifelong linguistic experience in speech performance.

