Measuring reading ability in the web-browser with a lexical decision task

2020 ◽  
Author(s):  
Jason D. Yeatman ◽  
Kenny An Tang ◽  
Patrick M. Donnelly ◽  
Maya Yablonski

An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web-browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test administered in the lab (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 80 words (2-3 minutes) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered, Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
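The disattenuated correlation quoted above follows Spearman's correction for attenuation, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A minimal sketch; the abstract reports the LDT reliability (0.97) but not the Woodcock-Johnson reliability, so using 0.97 for both measures is an assumption that happens to reproduce the reported value:

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimates the correlation
    between two constructs if both measures were perfectly reliable."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Observed r = 0.91 and LDT reliability = 0.97 come from the abstract;
# the second reliability of 0.97 is an assumption for illustration.
r_true = disattenuate(0.91, 0.97, 0.97)
print(round(r_true, 2))  # → 0.94
```

Because reliability bounds how strongly any two measures can correlate, the corrected value is always at least as large as the observed one.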

2020 ◽  
Author(s):  
Jason D. Yeatman ◽  
Kenny An Tang ◽  
Patrick M. Donnelly ◽  
Maya Yablonski ◽  
Mahalakshmi Ramamurthy ◽  
...  

Abstract. An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web-browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test administered in the lab (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2-3 minutes) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered, Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jason D. Yeatman ◽  
Kenny An Tang ◽  
Patrick M. Donnelly ◽  
Maya Yablonski ◽  
Mahalakshmi Ramamurthy ◽  
...  

Abstract. An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web-browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered, Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
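The item-response-theory optimization described above amounts to scoring each word or pseudoword by how much information it contributes at a given ability level and keeping only the most informative items. A minimal sketch under the two-parameter logistic (2PL) model; the item bank, its parameter values, and the choice of selecting the top items at average ability are all hypothetical illustrations, not the paper's actual procedure:

```python
import math

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * p * (1 - p), where p is the 2PL probability
    of a correct response (a = discrimination, b = difficulty)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b) per item.
item_bank = [(1.2, -0.5), (0.4, 2.0), (1.8, 0.1), (0.9, -1.5), (1.5, 0.6)]

# Keep the items most informative near average ability (theta = 0),
# mimicking how a long word list could be trimmed to a short test.
ranked = sorted(item_bank, key=lambda ab: item_information(0.0, *ab),
                reverse=True)
short_form = ranked[:3]
```

Highly discriminating items whose difficulty sits near the target ability dominate the ranking; poorly discriminating or far-off-target items (like the `(0.4, 2.0)` item here) are dropped first.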


2011 ◽  
Vol 32 (3) ◽  
pp. 483-498 ◽  
Author(s):  
LUDO VERHOEVEN ◽  
ROB SCHREUDER

ABSTRACT. This study examined to what extent advanced and beginning readers, including dyslexic readers of Dutch, make use of morphological access units in the reading of polymorphemic words. To this end, experiments were carried out in which the role of singular root form frequency in reading plural word forms was investigated in a lexical decision task with both adults and children. Twenty-three adult readers, 37 8-year-old children from Grade 3, 43 11-year-old children from Grade 6, and 33 11-year-old dyslexic readers were presented with a lexical decision task in which we contrasted plural word forms with a high versus low frequency of the singular root form. For the adults, we found that the accuracy and speed of lexical decision are determined by the surface frequency of the plural word form. The frequency of the constituent root form played a role as well, but only in the low-frequency plural words. Furthermore, a strong developmental effect on the accuracy and speed of reading plural word forms was found. An effect of plural word form frequency on word identification was evidenced in all groups. The singular root form frequency also had an impact on the reading of the plural word forms. In both the typically reading and the dyslexic children, plurals with a high-frequency singular root form were read more accurately and faster than plurals with a low singular root frequency. It can be concluded that constituent morphemes have an impact on the reading of polymorphemic words. The results can be explained in light of a word-experience model that leaves room for morphological constituency to play a role in the lexical access of complex words as a function of reading skill and experience and of word and morpheme frequency.


2021 ◽  
Vol 12 ◽  
Author(s):  
Ana Marcet ◽  
María Fernández-López ◽  
Melanie Labusch ◽  
Manuel Perea

Recent research has found that the omission of accent marks in Spanish does not produce slower word identification times in go/no-go lexical decision and semantic categorization tasks [e.g., cárcel (prison) = carcel], thus suggesting that vowels like á and a are represented by the same orthographic units during word recognition and reading. However, there is a discrepant finding with the yes/no lexical decision task, where the words with the omitted accent mark produced longer response times than the words with the accent mark. In Experiment 1, we examined this discrepant finding by running a yes/no lexical decision experiment comparing the effects for words and non-words. Results showed slower response times for the words with omitted accent mark than for those with the accent mark present (e.g., cárcel < carcel). Critically, we found the opposite pattern for non-words: response times were longer for the non-words with accent marks (e.g., cárdil > cardil), thus suggesting a bias toward a “word” response for accented items in the yes/no lexical decision task. To test this interpretation, Experiment 2 used the same stimuli with a blocked design (i.e., accent mark present vs. omitted in all items) and a go/no-go lexical decision task (i.e., respond only to “words”). Results showed similar response times to words regardless of whether the accent mark was omitted (e.g., cárcel = carcel). This pattern strongly suggests that the longer response times to words with an omitted accent mark in yes/no lexical decision experiments are a task-dependent effect rather than a genuine reading cost.


Author(s):  
Erik D. Reichle

This chapter first describes the tasks that are used to study how readers identify printed words (e.g., the lexical-decision task) and then reviews the key empirical findings related to skilled and impaired word identification (i.e., dyslexia). As explained, these findings have both motivated the development of computer models of word identification and been used to evaluate the explanatory adequacy of those models. The chapter then reviews several precursor theories and models of word identification that provide recurring metaphors (e.g., generating word pronunciations via analogy vs. the application of rules) in the development of later, more formally implemented word-identification models. The chapter reviews a large representative sample of these models in the order of their development, to show how the models have evolved in response to empirical research and the need to accommodate new findings (e.g., how the letters in words are perceived in their correct order). The chapter concludes with an explicit comparative analysis of the word-identification models and discussion of the findings that each model can and cannot explain.


2017 ◽  
Vol 76 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Hélène Maire ◽  
Renaud Brochard ◽  
Jean-Luc Kop ◽  
Vivien Dioux ◽  
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
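The diffusion model referenced above (Ratcliff, 1978) decomposes each lexical decision into noisy evidence accumulation toward one of two response boundaries plus a non-decision component (encoding and motor time), and it is that non-decision/orienting side the study links to the emotion effects. A bare-bones random-walk sketch, not Ratcliff's full model (no inter-trial variability parameters); the parameter values are illustrative:

```python
import random

def diffusion_trial(drift, boundary, t_er, dt=0.001, noise=1.0, rng=None):
    """Simulate one two-choice trial as a noisy random walk.

    Evidence starts midway between 0 ("nonword") and `boundary` ("word");
    `drift` is the mean rate of evidence accumulation, and `t_er` is the
    non-decision time (encoding + motor) added on top of decision time.
    """
    rng = rng or random.Random()
    x, t = boundary / 2.0, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * math_sqrt_dt * rng.gauss(0.0, 1.0) if False else \
             drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    response = "word" if x >= boundary else "nonword"
    return response, t + t_er

# A strong positive drift (an easy word) mostly reaches the "word" boundary.
response, rt = diffusion_trial(drift=2.0, boundary=1.0, t_er=0.3,
                               rng=random.Random(42))
```

In this framing, an emotion effect confined to "physiological and/or attentional orienting" shows up as a shift in `t_er`, leaving `drift` (lexical) and `boundary` (strategic caution) untouched.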


Author(s):  
Xu Xu ◽  
Chunyan Kang ◽  
Kaia Sword ◽  
Taomei Guo

Abstract. The ability to identify and communicate emotions is essential to psychological well-being. Yet research focusing exclusively on emotion concepts has been limited. This study examined nouns that represent emotions (e.g., pleasure, guilt) in comparison to nouns that represent abstract (e.g., wisdom, failure) and concrete entities (e.g., flower, coffin). Twenty-five healthy participants completed a lexical decision task. Event-related potential (ERP) data showed that emotion nouns elicited a less pronounced N400 than both abstract and concrete nouns. Further, N400 amplitude differences between emotion and concrete nouns were evident in both hemispheres, whereas the differences between emotion and abstract nouns had a left-lateralized distribution. These findings suggest representational distinctions, possibly in both verbal and imagery systems, between emotion concepts and other concepts; their implications for theories of affect representation and for research on affective disorders merit further investigation.


1994 ◽  
Author(s):  
P. M. Pexman ◽  
C. I. Racicot ◽  
Stephen J. Lupker
