Relational processing: Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 16)
H-INDEX: 16 (five years: 3)

SAGE Open, 2021, Vol. 11(4), pp. 215824402110566
Author(s): Elizabeth L. Wetzler, Aryn A. Pyke, Adam Werner

Subsequent recall is improved if students try to recall target material during study (self-testing) rather than simply re-reading it. This effect is consistent with the notion of “desirable difficulties”: if the learning experience involves difficulties that induce extra effort, retention may improve. Not all difficulties are desirable, however. Difficult-to-read (disfluent) typefaces yield inconsistent results. A new disfluent font, Sans Forgetica, was developed and alleged to promote deeper processing and improve learning. Although it would be invaluable if changing the font could enhance learning, the few studies on Sans Forgetica have been inconsistent and have focused on short retention intervals (0–5 minutes). We investigated a 1-week interval to increase practical relevance and because some benefits only manifest after a delay. A testing-effect manipulation was also included. Students (N = 120) learned two passages via different methods (study then re-study vs. study then self-test). Half the students saw the passages in Times New Roman and half in Sans Forgetica. Recall test scores were higher for passages learned via self-testing than via restudying, but the effect of font and the interaction were nonsignificant. We suggest that disfluency increases the local (orthographic) processing effort on each word, but that slowed reading might impair relational processing across words. In contrast, testing and generation-effect manipulations often engage relational processing (question: answer; cue: target), yielding subsequent benefits on cued-recall tests. We elaborate this suggestion to reconcile conflicting results across studies.


2021, Vol. 7(9), pp. 191
Author(s): Nurit Gronau

Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.


2021
Author(s): Nurit Gronau

Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli (either objects, scenes, or a scene and an object) were briefly presented on each trial, while participants were asked to detect a pre-defined category of stimuli (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of the two stimuli. Findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than to unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. The only exception to this pattern was observed with target stimuli that were prioritized by task demands: such stimuli continued to affect performance even when positioned at an unattended location, indicating that their relations with the attended items were well processed and analyzed. Our findings suggest that attention plays a critical role in processing visual–associative relations when these involve stimuli that are irrelevant to one's immediate goals.


2020, Vol. 46(8), pp. 1424-1441
Author(s): Kristina Wiebels, Donna Rose Addis, David Moreau, Valerie van Mulukom, Kelsey E. Onderdijk, ...

2020
Author(s): Matthew P. McCurdy, Allison Sklenar, Andrea N. Frankenstein, Eric D. Leshikar

Memory is often better for information that is self-generated rather than read (i.e., the generation effect). Theoretical work attributes the generation effect to two mechanisms: enhanced item-specific and relational processing (i.e., the two-factor theory). Recent work has demonstrated that the generation effect increases when generation tasks place lower, relative to higher, constraints on what participants can self-generate. This study examined whether the effects of generation constraint on memory can be attributed to either mechanism of the two-factor theory. Across three experiments, participants encoded word pairs in two generation conditions (lower- and higher-constraint) and a read control condition, followed by tests of item memory and of two context memory details (source and font color). The results support the idea that lower-constraint generation increases the generation effect via enhanced relational processing, as measured through both recognition and cued-recall tasks. Results further showed that lower-constraint generation improves context memory for conceptual context (source), but not perceptual context (color), suggesting that this enhanced relational processing may extend to conceptually related details of an item. Overall, these results provide further evidence that fewer generation constraints increase the generation effect and implicate enhanced relational processing as a mechanism for this improvement.


Memory, 2020, Vol. 28(5), pp. 598-616
Author(s): Matthew P. McCurdy, Allison M. Sklenar, Andrea N. Frankenstein, Eric D. Leshikar

2019, Vol. 14(3), pp. 416-430
Author(s): Margaret M. Keane, Kathryn Bousquet, Aubrey Wank, Mieke Verfaellie
