Semantic priming in the dual task paradigm

2017
Author(s):  
Etienne P. LeBel

This study examined whether early effects of semantics could occur in the context of the PRP paradigm. Based on Reynolds and Besner (2004), it was predicted that early effects of semantics might be observed. The study found that the difference in reaction times between related and unrelated primes in the secondary lexical decision task was constant across the varying SOAs. By locus-of-slack logic, this additive effect indicates that semantic processing occurs at or after the central processing bottleneck. Thus, the results did not support our prediction, as semantic information did not feed back to letter representations. The results of this study therefore contradict Reynolds and Besner’s (2004) position that the letter level receives information from the semantic and lexical levels. Johnston, McCann, and Remington’s (1995) position, on the other hand, is fully consistent with the current data. Their theory that word and semantic processing occur at or after central processing accounts for the additive effects observed in this study. The current study found that the faster related-prime trials in the secondary lexical decision task were just as fast at the short SOA as at the long SOA. Thus, the additive effect lends support to the view that semantic analysis occurs at or after the central attentional bottleneck. Although early effects of semantics were not found in the current investigation, future research may yet show semantic processing occurring simultaneously with other cognitive tasks.
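As an illustration of the locus-of-slack (additive-factors) reasoning described above, the sketch below compares the size of the priming effect at a short versus a long SOA. The cell means are hypothetical placeholders, not the study's data, and the 10 ms additivity tolerance is an arbitrary choice for the example.

```python
import numpy as np

# Hypothetical cell means (ms) for the secondary lexical decision task:
# key = (SOA, prime type). Values are placeholders, not the study's data.
rt = {
    ("short", "related"): 980.0, ("short", "unrelated"): 1020.0,
    ("long", "related"): 620.0, ("long", "unrelated"): 660.0,
}

def priming_effect(soa):
    """Unrelated minus related RT (ms) at a given SOA."""
    return rt[(soa, "unrelated")] - rt[(soa, "related")]

short_effect = priming_effect("short")
long_effect = priming_effect("long")

# Locus-of-slack logic: an effect of equal size at short and long SOAs
# (additivity) places the primed stage at or after the central bottleneck;
# a smaller effect at the short SOA (underadditivity) places it before.
print(f"Priming at short SOA: {short_effect:.0f} ms")
print(f"Priming at long SOA:  {long_effect:.0f} ms")
if np.isclose(short_effect, long_effect, atol=10):
    print("Pattern: additive -> locus at or after the bottleneck")
elif short_effect < long_effect:
    print("Pattern: underadditive -> locus before the bottleneck")
else:
    print("Pattern: overadditive")
```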

1993
Vol 10 (1)
pp. 79-108
Author(s):
Regina McGlinchey-Berroth
William P. Milberg
Mieke Verfaellie
Michael Alexander
Patrick T. Kilduff

1992
Vol 45 (2)
pp. 299-322
Author(s):
Luis J. Fuentes
Pío Tudela

Using a lexical decision task in which two primes appeared simultaneously in the visual field for 150 msec followed by a target word, two experiments examined semantic priming from attended and unattended primes as a function of both the separation between the primes in the visual field and the prime-target stimulus-onset asynchrony (SOA). In the first experiment, significant priming effects were found for both the attended and unattended prime words, though the effect was much greater for the attended words. In addition, for both attention conditions, priming tended to increase with increasing eccentricity (2.3°, 3.3°, and 4.3°) between the prime words at the long (550 and 850 msec) but not at the short (250 msec) prime-target SOA. In the second experiment the prime stimuli were either two words (W-W) or one word and five Xs (W-X). We manipulated the degree of eccentricity (2° and 3.6°) between the prime stimuli and used a prime-target SOA of 850 msec. Again, significant priming was found for both the attended and unattended words, but only the W-W condition showed a decrement in priming as a function of the separation between the primes; this decrement resulted in negative priming for the unattended word at the narrow (2°) separation. These results are discussed in relation to the semantic processing of parafoveal words and the inhibitory effects of focused attention.
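To make the priming measure concrete, here is a minimal sketch that computes the priming effect (unrelated minus related RT) for each attention × separation cell and flags cells where the effect turns negative, as reported for the unattended word at the 2° separation. All RT values are invented for illustration.

```python
# Hypothetical mean RTs (ms) for a design like Experiment 2:
# (attention condition, prime separation in degrees) -> related/unrelated target RTs.
# The numbers are invented for illustration only.
mean_rt = {
    ("attended",   2.0): {"related": 560, "unrelated": 600},
    ("attended",   3.6): {"related": 575, "unrelated": 605},
    ("unattended", 2.0): {"related": 615, "unrelated": 600},  # related slower: negative priming
    ("unattended", 3.6): {"related": 590, "unrelated": 600},
}

for (attention, separation), rts in mean_rt.items():
    priming = rts["unrelated"] - rts["related"]  # positive = facilitation from the prime
    label = "negative priming" if priming < 0 else "positive priming"
    print(f"{attention:>10}, {separation:.1f} deg: {priming:+d} ms ({label})")
```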


1982
Vol 17 (2)
pp. 301-315
Author(s):
Sheila E. Blumstein
William Milberg
Robin Shrier

2021
pp. 174702182110308
Author(s):
Simone Sulpizio
Remo Job
Paolo Leoni
Michele Scaltritti

We investigated whether semantic interference occurring during visual word recognition is resolved by a domain-general control mechanism or by more specific mechanisms related to semantic processing. We asked participants to perform a lexical decision task with taboo stimuli, which induce semantic interference, as well as a semantic Stroop task and a Simon task, intended as benchmarks of linguistic-semantic and non-linguistic interference, respectively. Using a correlational approach, we investigated potential similarities between the effects produced in the three tasks, both at the level of overall means and as a function of response speed (delta-plot analysis). Correlations surfaced selectively between the lexical decision and the semantic Stroop task. These findings suggest that, during visual word recognition, semantic interference is controlled by semantic-specific mechanisms, which intervene to handle prepotent but task-irrelevant semantic information interfering with the accomplishment of the task's goal.
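A rough sketch of the two analysis ingredients named above, a delta-plot of the interference effect across RT quantiles and a between-task correlation of individual effect sizes, is given below. The RT distributions are simulated, the quantile grid and gamma noise are arbitrary choices, and nothing here reproduces the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_plot(rt_congruent, rt_incongruent, quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Interference effect (incongruent minus congruent RT) at matched RT quantiles."""
    q = np.asarray(quantiles)
    return np.quantile(rt_incongruent, q) - np.quantile(rt_congruent, q)

# Simulated per-participant RT distributions (seconds); purely illustrative.
n_participants, n_trials = 30, 200
effects = {"lexical_decision": [], "semantic_stroop": [], "simon": []}
for _ in range(n_participants):
    for task in effects:
        base = rng.normal(0.55, 0.02)
        congruent = rng.gamma(4, 0.05, n_trials) + base
        incongruent = rng.gamma(4, 0.05, n_trials) + base + rng.normal(0.03, 0.01)
        # Summarize each participant's delta plot by its mean effect.
        effects[task].append(delta_plot(congruent, incongruent).mean())

# Correlate individual interference effects across tasks, as in the study's
# correlational approach (here on simulated, uncorrelated data).
ld, st, si = (np.array(effects[t]) for t in ("lexical_decision", "semantic_stroop", "simon"))
print("Lexical decision vs. semantic Stroop r =", np.corrcoef(ld, st)[0, 1].round(2))
print("Lexical decision vs. Simon           r =", np.corrcoef(ld, si)[0, 1].round(2))
```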


2017
Vol 76 (2)
pp. 71-79
Author(s):
Hélène Maire
Renaud Brochard
Jean-Luc Kop
Vivien Dioux
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
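For readers unfamiliar with how diffusion-model parameters map onto processing components, below is a minimal sketch using the simplified EZ-diffusion equations (Wagenmakers, van der Maas, & Grasman, 2007) rather than the full Ratcliff model fit the authors performed. The summary statistics are invented, and treating non-decision time as the parameter tied to physiological/orienting components is an interpretive assumption made only for the example.

```python
import math

def ez_diffusion(prop_correct, rt_variance, rt_mean, s=0.1):
    """EZ-diffusion estimates: drift rate v, boundary separation a, and
    non-decision time Ter, from accuracy, correct-RT variance (s^2),
    and correct-RT mean (s). Scaling parameter s defaults to 0.1."""
    # Edge-correct accuracies of exactly 0, 0.5, or 1 before calling this.
    L = math.log(prop_correct / (1 - prop_correct))
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_variance
    v = math.copysign(1, prop_correct - 0.5) * s * x**0.25
    a = s**2 * L / v
    y = -v * a / s**2
    mean_decision_time = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))
    t_er = rt_mean - mean_decision_time
    return v, a, t_er

# Hypothetical pre- vs. post-induction lexical decision summaries.
v_pre, a_pre, ter_pre = ez_diffusion(0.94, 0.012, 0.62)
v_post, a_post, ter_post = ez_diffusion(0.94, 0.013, 0.65)
print(f"Non-decision time shift: {ter_post - ter_pre:+.3f} s")  # physiological/orienting (assumed mapping)
print(f"Drift rate shift:        {v_post - v_pre:+.3f}")        # lexical processing efficiency
```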


Author(s):  
Xu Xu
Chunyan Kang
Kaia Sword
Taomei Guo

Abstract. The ability to identify and communicate emotions is essential to psychological well-being. Yet research focusing exclusively on emotion concepts has been limited. This study examined nouns that represent emotions (e.g., pleasure, guilt) in comparison to nouns that represent abstract (e.g., wisdom, failure) and concrete entities (e.g., flower, coffin). Twenty-five healthy participants completed a lexical decision task. Event-related potential (ERP) data showed that emotion nouns elicited a less pronounced N400 than both abstract and concrete nouns. Further, N400 amplitude differences between emotion and concrete nouns were evident in both hemispheres, whereas the differences between emotion and abstract nouns had a left-lateralized distribution. These findings suggest representational distinctions, possibly in both verbal and imagery systems, between emotion concepts and other concepts; their implications for theories of affect representation and for research on affect disorders merit further investigation.
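The N400 comparison reported above comes down to a mean-amplitude measurement in a post-stimulus window for each noun type. The sketch below illustrates that computation on simulated single-channel epochs; the 300-500 ms window, sampling grid, and amplitude values are assumptions for the example, not the study's parameters.

```python
import numpy as np

def mean_amplitude(epochs, times, window=(0.30, 0.50)):
    """Mean voltage in a time window, averaged over trials and samples.
    epochs: array of shape (n_trials, n_times) for one electrode/cluster;
    times: 1-D array of sample times in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

# Simulated single-channel epochs for the three noun types (microvolts);
# the numbers are placeholders, not the study's data.
rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.8, 230)
conditions = ["emotion", "abstract", "concrete"]
n400 = {}
for cond, depth in zip(conditions, (-2.0, -4.0, -4.5)):
    epochs = rng.normal(0, 1, (40, times.size))
    epochs[:, (times >= 0.30) & (times <= 0.50)] += depth  # inject an N400-like negativity
    n400[cond] = mean_amplitude(epochs, times)

for cond in conditions:
    print(f"{cond:>8}: {n400[cond]:+.2f} µV mean amplitude, 300-500 ms")
```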


1994
Author(s):
P. M. Pexman
C. I. Racicot
Stephen J. Lupker
