Do morphemes matter when reading compound words with transposed letters? Evidence from eye-tracking and event-related potentials

2016 ◽  
Vol 31 (10) ◽  
pp. 1299-1319 ◽  
Author(s):  
Mallory C. Stites ◽  
Kara D. Federmeier ◽  
Kiel Christianson
2020 ◽  
Vol 11 ◽  
Author(s):  
Maria Richter ◽  
Mariella Paul ◽  
Barbara Höhle ◽  
Isabell Wartenburger

One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning). To successfully decode the speaker’s meaning, the listener has to take into account which information he or she shares with the speaker in their common ground (CG). Here, we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts predict that CG information is considered immediately and would hence not expect costs of CG integration. Late integration accounts predict a rather late and effortful integration of CG information during the parsing process, which might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%), yet slower RTs and enhanced late positivities in the ERPs showed that CG integration came at a cost. Moreover, eye-tracking data indicated early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in trials in which CG information had to be considered. Our data therefore support accounts that posit early anticipation of referents in CG but rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing, and we discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.
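
To make the conflict-trial logic concrete, the sketch below contrasts an egocentric (PG) interpretation with a common-ground (CG) interpretation of a size-contrast expression such as “the small candle.” The object names, sizes, and code are illustrative assumptions, not materials from the study.

```python
# Hypothetical illustration of reference resolution in a conflict trial of a
# referential communication (director) task. Object names and sizes are
# invented for the example; the study used object triplets of different
# sizes shown in common ground (CG) or privileged ground (PG).

from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    size: int               # arbitrary units; smaller number = smaller object
    in_common_ground: bool  # False = visible only to the listener (PG)

scene = [
    Obj("candle", 1, in_common_ground=False),  # smallest candle, PG only
    Obj("candle", 2, in_common_ground=True),
    Obj("candle", 3, in_common_ground=True),
]

def resolve_small(objects, use_common_ground: bool) -> Obj:
    """Pick the referent of 'the small candle' from the listener's view.

    use_common_ground=True restricts the candidate set to objects the
    speaker can also see (CG); False uses the listener's full (PG) view.
    """
    candidates = [o for o in objects
                  if o.in_common_ground or not use_common_ground]
    return min(candidates, key=lambda o: o.size)

# Egocentric (PG) interpretation picks the privileged competitor ...
print(resolve_small(scene, use_common_ground=False).size)  # -> 1
# ... while integrating CG picks the smallest mutually visible candle.
print(resolve_small(scene, use_common_ground=True).size)   # -> 2
```

The privileged competitor (size 1) is exactly what the eye-tracking data show listeners cannot help looking at, even though successful CG integration requires choosing the size-2 object.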


2014 ◽  
Vol 8 ◽  
pp. 144-152 ◽  
Author(s):  
Giulia Righi ◽  
Alissa Westerlund ◽  
Eliza L. Congdon ◽  
Sonya Troller-Renfree ◽  
Charles A. Nelson

1995 ◽  
Vol 16 (2) ◽  
pp. 145-156 ◽  
Author(s):  
Norbert Kathmann ◽  
Michael Wagner ◽  
Nicola Rendtorff ◽  
Claudia Schöchlin ◽  
Rolf R. Engel

Author(s):  
Juan-Carlos Rojas ◽  
Manuel Contero ◽  
Jorge D. Camba ◽  
M. Concepción Castellanos ◽  
Eva García-González ◽  
...  

The study of product visual attributes is usually performed through questionnaires, which provide information about the conscious subjective opinions of the consumer. This work complements that approach by combining event-related potentials (ERP) and eye-tracking (ET) techniques and using semantic priming to elicit user perception. Our study focuses on package design and follows the basic structure of classic ERP experiments, in which participants are presented with an ordered sequence of frames (stimuli) on a computer screen for set periods of time: an attention frame, a semantic priming frame (descriptive adjective), a neutral background, a target frame (product image), and a question about the coherence between the priming and target frames. The eye-tracking system operates in combination with the ERP experiment. The results of our study reveal the connection between adjectives (semantic priming) and package design attributes (based on the analysis of the N400 ERP component), and the connection between adjectives and the specific visual elements that attract the most attention (based on the information provided by eye-tracking analysis software).
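
The trial structure described in this abstract can be summarized as a simple frame table. The sketch below is an illustrative Python rendering of that sequence; the frame durations, example adjective, and image filename are assumptions, and an actual experiment would use dedicated stimulus-presentation software synchronized with the EEG and eye-tracking recordings rather than time.sleep().

```python
# Sketch of the trial sequence described above, driven by a frame table.
# Durations and contents are illustrative assumptions; only the order of
# frames comes from the abstract.

import time

TRIAL_FRAMES = [
    ("attention", "+",                                      0.5),   # fixation / attention frame
    ("prime",     "elegant",                                1.0),   # semantic prime (adjective)
    ("neutral",   "",                                       0.5),   # blank neutral background
    ("target",    "package_A.png",                          2.0),   # product (package) image
    ("question",  "Did the word match the package? (y/n)",  None),  # coherence judgment
]

def run_trial(frames):
    """Step through one trial, logging onset times for later ERP/ET alignment."""
    log = []
    for name, content, duration in frames:
        onset = time.monotonic()
        log.append((name, content, onset))   # event marker for segmentation
        if duration is not None:
            time.sleep(duration)             # fixed-duration frame
        else:
            input(content + " ")             # response-terminated frame
    return log

if __name__ == "__main__":
    for name, content, onset in run_trial(TRIAL_FRAMES):
        print(f"{name:<10} onset={onset:.3f}  {content}")
```

The logged onsets stand in for the event markers that would let the N400 epochs and the eye-tracking fixations be time-locked to the prime and target frames.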


2007 ◽  
Vol 19 (5) ◽  
pp. 843-854 ◽  
Author(s):  
A. J. Wills ◽  
A. Lavric ◽  
G. S. Croft ◽  
T. L. Hodgson

Prediction error (“surprise”) affects the rate of learning: We learn more rapidly about cues for which we initially make incorrect predictions than cues for which our initial predictions are correct. The current studies employ electrophysiological measures to reveal early attentional differentiation of events that differ in their previous involvement in errors of predictive judgment. Error-related events attract more attention, as evidenced by features of event-related scalp potentials previously implicated in selective visual attention (selection negativity, augmented anterior N1). The earliest differences detected occurred around 120 msec after stimulus onset, and distributed source localization (LORETA) indicated that the inferior temporal regions were one source of the earliest differences. In addition, stimuli associated with the production of prediction errors show higher dwell times in an eye-tracking procedure. Our data support the view that early attentional processes play a role in human associative learning.
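
The relationship between prediction error and learning rate that motivates this study is often formalized with an associability rule such as Pearce-Hall's. The sketch below is a minimal Pearce-Hall-style illustration, not the authors' model; the parameter values and trial sequence are assumed for the example.

```python
# Minimal Pearce-Hall-style illustration: a cue's associability (alpha) tracks
# the absolute prediction errors it was recently involved in, so cues with a
# history of surprising outcomes are learned about (and attended to) more.
# Parameter values are assumptions for the example, not from the paper.

GAMMA = 0.5      # weight on the most recent |prediction error|
SALIENCE = 0.3   # fixed stimulus salience scaling the strength update

def update(v, alpha, outcome):
    """One trial of learning for a single cue.

    v       current associative strength (prediction of the outcome)
    alpha   current associability (attention-like learning rate)
    outcome observed outcome on this trial (e.g. 1 = present, 0 = absent)
    """
    error = outcome - v
    v_new = v + SALIENCE * alpha * error                  # strength update
    alpha_new = GAMMA * abs(error) + (1 - GAMMA) * alpha  # associability update
    return v_new, alpha_new

# A cue whose early predictions are wrong keeps a high alpha, so it is
# updated (and, on the attentional reading, attended to) more strongly.
v, alpha = 0.0, 0.5
for t, outcome in enumerate([1, 1, 0, 1, 1, 1], start=1):
    v, alpha = update(v, alpha, outcome)
    print(f"trial {t}: V={v:.3f}  alpha={alpha:.3f}")
```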


2018 ◽  
Vol 26 (6) ◽  
pp. 339-351
Author(s):  
Lixiu Jia ◽  
Yan Tu ◽  
Lili Wang ◽  
Xuefei Zhong ◽  
Ying Wang

2014 ◽  
Vol 67 (3) ◽  
pp. 424-454 ◽  
Author(s):  
Megan A. Boudewyn ◽  
Megan Zirnstein ◽  
Tamara Y. Swaab ◽  
Matthew J. Traxler

2018 ◽  
Author(s):  
Bingcan Li ◽  
Meng Han ◽  
Chunyan Guo ◽  
Roni Tibon

Although it is often assumed that memory for episodic associations requires recollection, it has been suggested that when stimuli are experienced as a unit, familiarity processes might contribute to their subsequent associative recognition. We investigated the effects of associative relations and perceptual domain during episodic encoding on retrieval of associative information. During study, participants encoded compound and non-compound word pairs, presented either to the same sensory modality (visual presentation) or to different sensory modalities (audio-visual presentation). At test, they discriminated between old, rearranged, and new pairs while event-related potentials (ERPs) were recorded. In an early ERP component generally associated with familiarity processes, differences related to associative memory emerged only for compounds, regardless of their encoding modality. In contrast, in a later ERP component associated with recollection, differences related to associative memory emerged in all encoding conditions. These findings may indicate that episodic retrieval of compound words can be supported by familiarity-related processes, regardless of whether the two words were presented to the same or to different sensory modalities.

