Reading and Lexicalization in Opaque and Transparent Orthographies: Word Naming and Word Learning in English and Spanish

2017
Vol 70 (10)
pp. 2105-2129
Author(s):
Rosa Kit Wan Kwok
Fernando Cuetos
Rrezarta Avdyli
Andrew W. Ellis

Do skilled readers of opaque and transparent orthographies make differential use of lexical and sublexical processes when converting words from print to sound? Two experiments are reported which address that question, using effects of letter length on naming latencies as an index of the involvement of sublexical letter–sound conversion. Adult native speakers of English (Experiment 1) and Spanish (Experiment 2) read aloud four- and seven-letter high-frequency words, low-frequency words, and nonwords in their native language. The stimuli were interleaved and presented 10 times in a first testing session and 10 more times in a second session 28 days later. Effects of lexicality were observed in both languages, indicating the deployment of lexical representations in word naming. Naming latencies to both words and nonwords decreased across repetitions on Day 1, and those savings were retained at Day 28. Length effects were, however, greater for Spanish than for English word naming. Reaction times to long and short nonwords converged with repeated presentations in both languages, but less so in Spanish than in English. The results support the hypothesis that reading in opaque orthographies favours the rapid creation and use of lexical representations, while reading in transparent orthographies makes more use of a combination of lexical and sublexical processing.
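A letter-length effect of this kind is typically quantified as the difference in mean naming latency between long and short items, computed per language, condition, and session. Below is a minimal sketch of that computation on hypothetical trial-level data; all column names and values are illustrative and are not taken from the study.

```python
import pandas as pd

# Hypothetical trial-level naming latencies (ms); all values are made up.
trials = pd.DataFrame({
    "language": ["English"] * 4 + ["Spanish"] * 4,
    "length":   [4, 7, 4, 7, 4, 7, 4, 7],   # letters in the item
    "session":  [1, 1, 2, 2, 1, 1, 2, 2],   # e.g., Day 1 vs Day 28
    "rt_ms":    [520, 560, 505, 530, 540, 640, 525, 585],
})

# Length effect = mean RT for 7-letter items minus mean RT for 4-letter items,
# computed separately per language and session.
means = (trials.groupby(["language", "session", "length"])["rt_ms"]
               .mean()
               .unstack("length"))
means["length_effect_ms"] = means[7] - means[4]
print(means)
```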

2015
Vol 68 (2)
pp. 326-349
Author(s):
Rosa Kit Wan Kwok
Andrew W. Ellis

Three experiments are reported analysing the processes by which adult readers of English learn new written words. Visual word learning was simulated by presenting short (four-letter) and longer (seven-letter) nonwords repeatedly and observing the reduction in naming latencies and the convergence in reaction times (RTs) to shorter and longer items that are the hallmarks of visual word learning. Experiment 1 presented nonwords in ten consecutive blocks. Naming latencies decreased over the first four or five presentations. The effect of length on naming RTs was large in block 1 but non-significant after four or five presentations. Experiment 2 demonstrated some reduction in RTs to untrained nonwords following practice on a trained set, but the reduction was smaller than for the trained items, and RTs to shorter and longer nonwords did not converge. Experiment 3 included a retest after seven days, which showed some slowing of RTs compared with the end of the first session but also considerable retention of learning. We conclude that four to six exposures to novel words (nonwords) are sufficient to establish durable lexical representations that permit parallel processing of newly learned words. The results are discussed in terms of theoretical models of reading and word learning.


2020
Author(s):
Jon Catling
Mahmoud Medhat Elsherif

The Age of Acquisition (AoA) effect is such that words acquired early in life are processed more quickly than later-acquired words. One explanation for the AoA effects is the arbitrary mapping hypothesis (Ellis & Lambon-Ralph, 2000), stating that the AoA effects are more likely to occur in items that have an arbitrary, rather than a systematic, nature between input and output. Previous behavioural findings have shown that the AoA effects are larger in pictorial than word items. However, no behavioural studies have attempted to directly assess the AoA effects in relation to the connections between representations. In the first two experiments, 48 participants completed a word-picture verification task (Experiments 1A and 2A), together with a spoken (Experiment 1B) or written (Experiment 2B) picture naming task. In the third and fourth experiments, 48 participants complete a picture-word verification task (Experiments 3A and 4A), together with a spoken (Experiment 3B) or written (Experiment 4B) word naming task. For each pair of experiments the subtraction of the naming latencies from the verification tasks for each item per participant was calculated (Experiments 1-4C; e.g. Santiago, Mackay, Palma & Rho, 2000). Results showed that early-acquired items were responded to more quickly than late-acquired ones for all experiments, except for Experiment 3B (spoken word naming) where the AoA effect was shown for only low-frequency words. In addition, the subtraction results for pictorial stimuli demonstrated strong AoA effects. This strengthens the case for the AM hypothesis, also suggesting the AoA effect resides in the connections between representations.
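The per-item, per-participant subtraction described above amounts to a simple difference score between the two tasks. A minimal sketch of that computation, on hypothetical long-format data (the column names `participant`, `item`, `task`, and `rt_ms` are illustrative, not from the original study):

```python
import pandas as pd

# Hypothetical long-format latency data (ms); values are made up.
data = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "item":        ["dog", "dog", "axe", "axe"] * 2,
    "task":        ["verification", "naming"] * 4,
    "rt_ms":       [612, 540, 655, 590, 630, 575, 700, 618],
})

# Put verification and naming RTs for the same participant/item side by side.
wide = (data.pivot_table(index=["participant", "item"],
                         columns="task", values="rt_ms")
            .reset_index())

# Difference score: verification RT minus naming RT, per item per participant,
# intended to isolate the contribution of the connections between representations.
wide["difference_ms"] = wide["verification"] - wide["naming"]
print(wide)
```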


2005
Vol 93 (1)
pp. 519-534
Author(s):
Masayuki Watanabe
Yasushi Kobayashi
Yuka Inoue
Tadashi Isa

To examine the role of competitive and cooperative neural interactions within the intermediate layer of the superior colliculus (SC), we elevated basal SC neuronal activity by locally injecting the cholinergic agonist nicotine and analyzed its effects on saccade performance. After microinjection, spontaneous saccades were directed toward the movement field of neurons at the injection site (affected area). For visually guided saccades, reaction times were decreased when targets were presented close to the affected area. However, when visual targets were presented remote from the affected area, reaction times were not increased, regardless of the rostrocaudal level of the injection sites. The endpoints of visually guided saccades were biased toward the affected area when targets were presented close to it. After this endpoint effect diminished, the trajectories of visually guided saccades remained modestly curved toward the affected area. Compared with the effects on endpoints, the effects on reaction times were more localized to targets close to the affected area. These results are consistent with a model in which saccades are triggered by the activity of neurons within a restricted region, while the endpoints and trajectories of the saccades are determined by widespread population activity in the SC. However, because increased reaction times were not observed for saccades toward targets remote from the affected area, inhibitory interactions in the SC may not be strong enough to shape the spatial distribution of low-frequency preparatory activity in the SC.


1997
Vol 8 (6)
pp. 411-416
Author(s):
Daniel H. Spieler
David A. Balota

Early noncomputational models of word recognition have typically attempted to account for effects of categorical factors such as word frequency (high vs. low) and spelling-to-sound regularity (regular vs. irregular). More recent computational models that adhere to general connectionist principles hold the promise of being sensitive to underlying item differences that are only approximated by these categorical factors. In contrast to earlier models, these connectionist models provide predictions of performance for individual items. In the present study, we used the item-level estimates from two connectionist models (Plaut, McClelland, Seidenberg, & Patterson, 1996; Seidenberg & McClelland, 1989) to predict naming latencies for the individual items on which the models were trained. The results indicate that the models capture, at best, slightly more variance than simple log frequency and substantially less than the combined predictive power of log frequency, neighborhood density, and orthographic length. The discussion focuses on the importance of examining the item-level performance of word-naming models and possible approaches that may improve the models' sensitivity to such item differences.
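The comparison at issue is how much item-level variance in naming latency is captured by a model's item estimates versus by log frequency, neighborhood density, and orthographic length combined. A rough sketch of that variance comparison with statsmodels, on simulated data; none of the numbers, variable names, or estimates come from the actual models or item sets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical number of items

# Hypothetical item-level predictors and naming latencies (ms); all simulated.
items = pd.DataFrame({
    "log_freq":  rng.normal(8, 2, n),
    "n_density": rng.poisson(6, n),
    "length":    rng.integers(3, 9, n),
})
items["rt"] = (700 - 15 * items["log_freq"] - 4 * items["n_density"]
               + 12 * items["length"] + rng.normal(0, 30, n))
# Stand-in for a connectionist model's item-level estimates (e.g., output error).
items["model_estimate"] = -items["log_freq"] + rng.normal(0, 2.5, n)

# Variance in item naming latencies explained by the model estimates alone...
r2_model = smf.ols("rt ~ model_estimate", items).fit().rsquared
# ...versus by log frequency, neighborhood density, and orthographic length combined.
r2_factors = smf.ols("rt ~ log_freq + n_density + length", items).fit().rsquared
print(f"R^2 model estimates: {r2_model:.2f}   R^2 item-level factors: {r2_factors:.2f}")
```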


2003
Vol 19 (3)
pp. 209-223
Author(s):
Joe Pater

This article presents a follow-up to Curtin et al.’s study of the perceptual acquisition of Thai laryngeal contrasts by native speakers of English, which found that subjects performed better on contrasts in voice than aspiration. This finding - surprising in light of earlier cross-linguistic voice onset time (VOT) research - was attributed to the fact that the task tapped lexical representations, which are unspecified for aspiration according to standard assumptions in generative phonology. The present study further investigated possible task effects by examining the discrimination and categorization of the same stimuli in various experimental conditions. Stimulus effects were also investigated by performing token-based analyses of the results, and by comparing them to acoustic properties of the tokens. The outcome of the discrimination experiment was the opposite of the earlier study, with significantly better performance on contrasts in aspiration than voice, even on a lexical task. A second finding of this experiment is that place of articulation interacts with the perception of the laryngeal distinctions; the aspiration distinction is discriminated better on the labials, and voice on alveolars. A parallel effect of place of articulation was also found in a categorization experiment.


Author(s):
D. A. Chernova
S. V. Alexeeva
N. A. Slioussar
...

Even if we know how to spell, we often see words misspelled by other people, especially now that we constantly read unedited texts on social media and in personal messages. In this paper, we present two experiments showing that exposure to orthographic errors reduces the quality of lexical representations in the mental lexicon: even if one knows how to spell a word, repeated exposure to incorrect spellings blurs its orthographic representation and weakens the connection between form and meaning. As a result, it is more difficult to judge whether the word is spelled correctly, and, more surprisingly, it takes more time to read the word even when it contains no errors. We show that, when all other factors are balanced, the effect of misspellings is more pronounced for words of lower frequency. We compare our results with the only previous study addressing the influence of misspellings on the processing of correctly spelled words, which was conducted on English data. It is interesting to explore this issue from a cross-linguistic perspective, so in this study we turn to Russian, which differs from English in having a more transparent orthography. Much larger corpora of unedited texts are available for English than for Russian, but, using a different way to estimate the incidence of misspellings, we obtained similar results and could also make some novel generalizations. In Experiment 1 we selected 44 frequently misspelled words; they were presented in two conditions (with or without spelling errors) and distributed across two experimental lists. For every word, participants were asked to decide whether it was spelled correctly. The frequency of the word and the relative frequency of its misspelled occurrences significantly influenced the number of incorrect responses: not only does it take longer to read frequently misspelled words, it is also more difficult to decide whether they are spelled correctly. In Experiment 2 we selected 30 words from the materials of Experiment 1 and, for every selected word, found a pair matched for length and frequency that is rarely misspelled due to its orthographic transparency. We used a lexical decision task, presenting these 60 words in their correct spelling, as well as 60 nonwords. Linear mixed-effects models (LMMs) were used for the statistical analysis. Firstly, the word type factor was significant: it takes more time to recognize a frequently misspelled word, which replicates the results obtained for English. Secondly, the interaction between word type and frequency was significant: the effect of misspellings was more pronounced for words of lower frequency. We conclude that high-frequency words have more robust representations that resist blurring more efficiently than low-frequency ones. Finally, a separate analysis showed that the number of incorrect responses in Experiment 1 correlates with RTs in Experiment 2. Thus, whether we consciously search for an error or simply read words, orthographic representations blurred by exposure to frequent misspellings make the task more difficult.
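The abstract describes an LMM with a word type × frequency interaction on lexical decision latencies. A minimal, purely illustrative sketch of such an analysis in Python with statsmodels, fitted to simulated data with random intercepts for subjects only; the authors' actual model specification, software, and random-effects structure are not given here, so this is an approximation rather than their analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_item = 30, 60

# Hypothetical trial-level lexical decision data; all values are simulated.
rows = []
for s in range(n_subj):
    subj_shift = rng.normal(0, 40)                      # subject-specific speed
    for i in range(n_item):
        word_type = "often_misspelled" if i < n_item // 2 else "control"
        log_freq = rng.normal(3.0, 1.0)
        rt = (650 + subj_shift
              + (35 if word_type == "often_misspelled" else 0)   # misspelling penalty
              - 20 * log_freq                                    # frequency advantage
              # penalty shrinks with frequency: effect larger for low-frequency words
              - (10 * log_freq if word_type == "often_misspelled" else 0)
              + rng.normal(0, 60))
        rows.append((f"s{s}", f"w{i}", word_type, log_freq, rt))

data = pd.DataFrame(rows, columns=["subject", "item", "word_type", "log_freq", "rt"])

# Mixed model: fixed effects of word type, log frequency, and their interaction,
# with random intercepts for subjects (a simplification of a fully crossed design).
model = smf.mixedlm("rt ~ word_type * log_freq", data, groups=data["subject"]).fit()
print(model.summary())
```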


2021
pp. 1-34
Author(s):
Hyein Jeong
Emiel van den Hoven
Sylvain Madec
Audrey Bürki

Usage-based theories assume that all aspects of language processing are shaped by the distributional properties of the language. The frequency not only of words but also of larger chunks plays a major role in language processing. These theories predict that the frequency of phrases influences the time needed to prepare these phrases for production and their acoustic duration. By contrast, dominant psycholinguistic models of utterance production predict no such effects. In these models, the system keeps track of the frequency of individual words but not of co-occurrences. This study investigates the extent to which the frequency of phrases impacts naming latencies and acoustic duration with a balanced design, where the same words are recombined to build high- and low-frequency phrases. The brain signal of participants is recorded so as to obtain information on the electrophysiological bases and functional locus of frequency effects. Forty-seven participants named pictures using high- and low-frequency adjective–noun phrases. Naming latencies were shorter for high-frequency than low-frequency phrases. There was no evidence that phrase frequency impacted acoustic duration. The electrophysiological signal differed between high- and low-frequency phrases in time windows that do not overlap with conceptualization or articulation processes. These findings suggest that phrase frequency influences the preparation of phrases for production, irrespective of the lexical properties of the constituents, and that this effect originates at least partly when speakers access and encode linguistic representations. Moreover, this study provides information on how the brain signal recorded during the preparation of utterances changes with the frequency of word combinations.


1992
Vol 43
pp. 27-38
Author(s):
Ton Dijkstra

Two divided attention experiments investigated whether graphemes and phonemes can mutually activate each other during bimodal sublexical processing. Dutch subjects reacted to target letters and/or speech sounds in single-channel and bimodal stimuli. In some bimodal conditions, the visual and auditory targets were congruent (e.g., visual A, auditory /a:/); in others they were not (e.g., visual U, auditory /a:/). Temporal aspects of cross-modal activation were examined by varying the stimulus onset asynchrony (SOA) of the visual and auditory stimulus components. Processing differences among stimuli (e.g., the letters A and U) were accounted for by correcting the obtained bimodal reaction times by means of the predictions of an independent race model. Comparing the results of the adapted congruent and incongruent conditions for each SOA, it can be concluded that (a) cross-modal activation takes place in this task situation; (b) it is bidirectional, i.e., it spreads from grapheme to phoneme and vice versa; and (c) it occurs very rapidly.
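A common way to derive an independent race model's prediction, which is the general idea behind correcting bimodal RTs with single-channel data (the paper's exact correction procedure may differ), is to combine the single-channel RT distributions under the assumption that whichever channel finishes first triggers the response. A minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-channel reaction times (ms); values are simulated.
rt_visual   = rng.normal(480, 60, 2000)
rt_auditory = rng.normal(500, 70, 2000)

def ecdf(sample, t):
    """Empirical cumulative distribution function evaluated at times t."""
    sample = np.sort(sample)
    return np.searchsorted(sample, t, side="right") / sample.size

# Under an independent race, the faster channel determines the response, so
# F_bimodal(t) = F_V(t) + F_A(t) - F_V(t) * F_A(t).
t = np.arange(300, 801, 10)
f_v, f_a = ecdf(rt_visual, t), ecdf(rt_auditory, t)
f_race = f_v + f_a - f_v * f_a

# Observed bimodal RTs can then be compared against this predicted distribution;
# deviations from it are taken as evidence of genuine cross-modal interaction.
print(np.round(f_race, 2))
```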


1988
Vol 66 (3)
pp. 803-810
Author(s):
Michael P. Rastatter
Catherine Loren

The current study investigated the capacity of the right hemisphere to process verbs using a paradigm proven reliable for predicting differential, minor hemisphere lexical analysis in the normal, intact brain. Vocal reaction times of normal subjects were measured to unilaterally presented verbs of high and of low frequency. A significant interaction was noted between the stimulus items and visual fields. Post hoc tests showed that vocal reaction times to verbs of high frequency were significantly faster following right visual-field presentations (right hemisphere). No significant differences in vocal reaction time occurred between the two visual fields for the verbs of low frequency. Also, significant differences were observed between the two types of verbs following left visual-field presentation but not the right. These results were interpreted to suggest that right-hemispheric analysis was restricted to the verbs of high frequency in the presence of a dominant left hemisphere.


2009
Vol 62 (5)
pp. 858-867
Author(s):
Erin Maloney
Evan F. Risko
Shannon O'Malley
Derek Besner

Participants read aloud nonword letter strings, one at a time, which varied in the number of letters. The standard result is observed in two experiments: the time to begin reading aloud increases as letter length increases. This result is standardly understood as reflecting the operation of a serial, left-to-right translation of graphemes into phonemes. The novel result is that the effect of letter length is statistically eliminated by a small number of repetitions. This elimination suggests that these nonwords are no longer always being read aloud via a serial, left-to-right sublexical process. Instead, the data are taken as evidence that new orthographic and phonological lexical entries have been created for these nonwords, which are now read at least sometimes by recourse to the lexical route. Experiment 2 replicates the interaction between nonword letter length and repetition observed in Experiment 1 and also demonstrates that this interaction is not seen when participants merely classify the string as appearing in upper or lower case. Implications for existing dual-route models of reading aloud and Share's self-teaching hypothesis are discussed.

