Experimental Study on a Two-string Lexical Decision Task: Non-words and Words

Authors: Shuyu Zhang, Sihong Zhang

Abstract—This paper introduces the general purposes, hypotheses, and designs of the lexical decision task and compares the results of several existing studies. Based on previous studies, three hypotheses are proposed. The paper then describes a two-string lexical decision task designed and completed at the Research School of Psychology, Australian National University. In comparison with the traditional lexical decision task, the two-string lexical decision task further tests participants' response times to non-words and words. The results of the current two-string lexical decision experiment corroborate previous findings on the one hand, while on the other they do not fully support the claim that participants respond faster to unrelated words than to unrelated non-words. The findings provide direct insight into the cognitive processes underlying English lexical differentiation and learning, which could inform English vocabulary teaching and acquisition.

2009, Vol 12 (6), pp. 699-714
Authors: Naira Delgado, Armando Rodríguez-Pérez, Jeroen Vaes, Jacques-Philippe Leyens, Verónica Betancor

Two experiments examine whether exposure to generic violence can elicit infrahumanization of out-groups. In Study 1, participants had to solve a lexical decision task after viewing animal or human violent scenes. In Study 2, participants were exposed to either human violent or human suffering pictures before doing a lexical decision task. In both studies, the infrahumanization bias appeared after viewing the human violent pictures but not in the other experimental conditions. These two experiments support the idea of contextual dependency of infrahumanization, and suggest that violence can prime an infrahuman perception of the out-group. Theoretical implications for infrahumanization and potential underlying mechanisms are discussed.


2013, Vol 34 (1), pp. 41-47
Authors: Patricia Lockwood, Abigail Millings, Erica Hepper, Angela C. Rowe

Crying is a powerful solicitation of caregiving, yet little is known about the cognitive processes underpinning caring responses to crying others. This study examined (1) whether crying (compared to sad and happy) faces differentially elicited semantic activation of caregiving, and (2) whether individual differences in cognitive and emotional empathy moderated this activation. Ninety participants completed a lexical decision task in which caregiving words, neutral words, and nonwords were presented after subliminal exposure (24 ms) to crying, sad, and happy faces. Individuals low in cognitive empathy had slower reaction times to caregiving (vs. neutral) words after exposure to crying faces, but not after sad or happy faces. Results are discussed with respect to the role of empathy in response to crying others.


2001, Vol 22 (3), pp. 343-361
Authors: Liz Nathan, Bill Wells

This study explores the hypothesis that children identified as having phonological processing problems may have particular difficulty in processing a different accent. Children with speech difficulties (n = 18) were compared with matched controls on four measures of auditory processing. First, an accent auditory lexical decision task was administered. In one condition, the children made lexical decisions about stimuli presented in their own accent (London). In the second condition, the stimuli were spoken in an unfamiliar accent (Glaswegian). The results showed that the children with speech difficulties had a specific deficit on the unfamiliar accent. Performance on the other auditory discrimination tasks revealed additional deficits at lower levels of input processing. The wider clinical implications of the findings are considered.


2008, Vol 2 (1-2), pp. 101-117
Authors: Leen Impe, Dirk Geeraerts, Dirk Speelman

In this experimental study, we aim to arrive at a global picture of the mutual intelligibility of various Dutch language varieties by carrying out a computer-controlled lexical decision task in which ten target varieties are evaluated – the Belgian and Netherlandic Dutch standard language as well as four regional varieties of both countries. We auditorily presented real as well as pseudo-words in various varieties of Dutch to Netherlandic and Belgian test subjects, who were asked to decide as quickly as possible whether the items were existing Dutch words or not. The experiment's working assumption is that the faster the subjects react, the better the intelligibility of (the language variety of) the word concerned.
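Under this working assumption, per-variety intelligibility can be summarized by averaging response latencies on correctly identified real words, with faster mean RTs indicating better intelligibility. The sketch below illustrates the scoring logic only; the trial records and variety names are invented for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical trial records: (variety, is_real_word, response_correct, rt_ms).
# Values are made up for illustration.
trials = [
    ("Netherlandic-standard", True, True, 612),
    ("Netherlandic-standard", True, True, 598),
    ("Belgian-standard", True, True, 655),
    ("Belgian-standard", True, False, 700),
    ("West-Flemish", True, True, 741),
    ("West-Flemish", True, True, 733),
]

def intelligibility_ranking(trials):
    """Rank varieties by mean RT on correct real-word trials (faster = more intelligible)."""
    by_variety = {}
    for variety, real, correct, rt in trials:
        if real and correct:  # keep only correct responses to existing words
            by_variety.setdefault(variety, []).append(rt)
    return sorted((mean(rts), v) for v, rts in by_variety.items())

for rt, variety in intelligibility_ranking(trials):
    print(f"{variety}: {rt:.0f} ms")
```

Incorrect responses are excluded before averaging so that error trials do not contaminate the latency-based intelligibility score.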


2015, Vol 10 (3), pp. 435-457
Authors: Laura Teddiman, Gary Libben

We present an auditory presentation technique called segmented binaural presentation. The technique builds on the dichotic listening paradigm (Shankweiler & Studdert-Kennedy, 1967; Studdert-Kennedy & Shankweiler, 1970) and segmented lexical presentation (Libben, 2003; Bertram, Kuperman, Baayen, & Hyönä, 2011). The technique allows the first part of a word to be presented to one ear and the second part of the word to be presented to the other ear. The experimenter may thus manipulate whether a stimulus is segmented in this binaural manner and, if it is segmented, the location of the binaural segmentation within the word. We discuss how the technique may be implemented on the Macintosh platform, using PsyScope and freely available software for audio file creation. We also report on a test implementation of the technique using suffixed and compound English words in a lexical decision task. Results suggest that the technique differentiates between segmentation that occurs within and between compound constituents.
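The core of the stimulus-preparation step is building a stereo audio file in which the first word segment occupies only the left channel and the second segment only the right. The authors describe a PsyScope/Macintosh implementation; the sketch below is a generic stand-in using Python's standard `wave` module, with sine tones substituting for recorded word segments (the filename, sample rate, and tone parameters are illustrative assumptions, not from the study).

```python
import math
import struct
import wave

RATE = 16000  # samples per second (illustrative choice)

def tone(freq, dur):
    """Mono 16-bit samples for a sine tone (stand-in for a recorded word segment)."""
    n = int(RATE * dur)
    return [int(12000 * math.sin(2 * math.pi * freq * i / RATE)) for i in range(n)]

def write_segmented_binaural(path, first_segment, second_segment):
    """Write a stereo WAV: first segment to the left ear only, then second to the right ear only."""
    frames = []
    for s in first_segment:
        frames.append(struct.pack("<hh", s, 0))   # left channel active, right silent
    for s in second_segment:
        frames.append(struct.pack("<hh", 0, s))   # right channel active, left silent
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(b"".join(frames))

# e.g. first constituent to the left ear, second constituent to the right ear
write_segmented_binaural("blackboard_seg.wav", tone(440, 0.4), tone(330, 0.4))
```

Swapping the channel order, or writing both segments to both channels, yields the unsegmented control condition under the same file format.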


2006, Vol 95 (4), pp. 2630-2637
Authors: Nicole Behne, Beate Wendt, Henning Scheich, André Brechmann

In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.


2017, Vol 76 (2), pp. 71-79
Authors: Hélène Maire, Renaud Brochard, Jean-Luc Kop, Vivien Dioux, Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
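The diffusion model decomposes RT distributions and accuracy into parameters such as drift rate (evidence accumulation speed, linked to the lexical component), boundary separation (response caution, linked to strategy), and non-decision time (encoding and motor components). A minimal forward simulation of one such trial, with hypothetical parameter values rather than the study's fitted ones, might look like:

```python
import random

def simulate_ddm_trial(v=0.25, a=1.0, ter=0.3, dt=0.001, noise=1.0):
    """Simulate one diffusion-model trial by a random walk between two boundaries.

    v:   drift rate (speed of evidence accumulation toward the 'word' boundary)
    a:   boundary separation (response caution); the walk starts unbiased at a/2
    ter: non-decision time in seconds (encoding + motor components)
    Returns (response, reaction_time_in_seconds).
    """
    x = a / 2.0              # unbiased starting point
    t = 0.0
    sd = noise * dt ** 0.5   # diffusion noise scaled to the step size
    while 0.0 < x < a:
        x += v * dt + random.gauss(0.0, sd)
        t += dt
    response = "word" if x >= a else "nonword"
    return response, ter + t

random.seed(1)
sims = [simulate_ddm_trial() for _ in range(500)]
accuracy = sum(r == "word" for r, _ in sims) / len(sims)
mean_rt = sum(rt for _, rt in sims) / len(sims)
print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.3f}s")
```

Fitting the model runs this logic in reverse: the parameters are adjusted until the simulated RT and accuracy distributions match the observed ones, which is how an emotion-induction effect can be localized to, say, non-decision time rather than drift rate.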


Authors: Xu Xu, Chunyan Kang, Kaia Sword, Taomei Guo

Abstract. The ability to identify and communicate emotions is essential to psychological well-being. Yet research focusing exclusively on emotion concepts has been limited. This study examined nouns that represent emotions (e.g., pleasure, guilt) in comparison to nouns that represent abstract (e.g., wisdom, failure) and concrete entities (e.g., flower, coffin). Twenty-five healthy participants completed a lexical decision task. Event-related potential (ERP) data showed that emotion nouns elicited a less pronounced N400 than both abstract and concrete nouns. Further, N400 amplitude differences between emotion and concrete nouns were evident in both hemispheres, whereas the differences between emotion and abstract nouns had a left-lateralized distribution. These findings suggest representational distinctions between emotion concepts and other concepts, possibly in both verbal and imagery systems; their implications for theories of affect representation and for research on affective disorders merit further investigation.

