Context-based facilitation in visual word recognition: Evidence for visual and lexical but not pre-lexical contributions

2018 ◽  
Author(s):  
Susanne Eisenhauer ◽  
Christian J. Fiebach ◽  
Benjamin Gagl

Abstract
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels, were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N=38 human participants), while Experiment 2 assessed behavioral facilitation effects (N=24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in pseudowords familiarized through training) did not significantly enhance or reduce context effects. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented via a predictive coding mechanism.
Significance Statement
The goal of reading is to derive meaning from script. This highly automatized process benefits from facilitation depending on word familiarity and text context. Facilitation might occur exclusively within each level of word processing (i.e., visual, pre-lexical, and/or lexical-semantic) but could alternatively also propagate in a top-down manner from higher to lower levels. To test the relevance of these two alternative accounts at each processing level, we combined a pseudoword learning approach controlling for letter string familiarity with repetition priming. We found enhanced context-based facilitation at the lexical-semantic but not the pre-lexical processing stage, and no evidence of top-down facilitation from lexical-semantic to earlier word recognition processes. We also identified predictive coding as the most likely mechanism underlying within-level context-based facilitation.
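As a rough illustration of the predictive coding account invoked above, the Python sketch below (not the authors' analysis; the lexical feature vectors and the zero baseline prediction are invented for illustration) shows how a precision-weighted prediction error shrinks when a prime accurately predicts the target, mimicking within-level repetition suppression without touching earlier processing levels.

```python
# Minimal sketch (not the authors' model): a prediction-error account of
# repetition suppression under predictive coding. All quantities are
# illustrative; "lexical_input" and "prediction" are hypothetical.
import numpy as np

def prediction_error_response(lexical_input, prediction, precision=1.0):
    """Neural response proxy: precision-weighted absolute prediction error."""
    return precision * np.abs(lexical_input - prediction)

rng = np.random.default_rng(0)
target = rng.normal(size=8)  # hypothetical lexical-semantic feature vector of the target word

# Repetition (primed) context: the prime sets up an accurate prediction of the target.
primed_response = prediction_error_response(target, prediction=target)

# Unprimed context: no informative prediction is available.
unprimed_response = prediction_error_response(target, prediction=np.zeros_like(target))

print("mean response, primed:  ", primed_response.mean())   # near zero: facilitation
print("mean response, unprimed:", unprimed_response.mean())  # larger: no facilitation
```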

2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, the subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared, and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. This study provides a demonstration that top-down processing influences lower-level visual processing mechanisms. In another test condition, the subjects had all perceptual cues available, and the influence was even stronger.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura Bechtold ◽  
Christian Bellebaum ◽  
Paul Hoffman ◽  
Marta Ghio

Abstract
This study aimed to replicate and validate concreteness and context effects on semantic word processing. In Experiment 1, we replicated the behavioral findings of Hoffman et al. (2015, Cortex, 63, 250–266, https://doi.org/10.1016/j.cortex.2014.09.001) by applying their cueing paradigm with their original stimuli translated into German. We found that concreteness and contextual cues facilitate word processing in a semantic judgment task with 55 healthy adults. The two factors interacted in their effect on reaction times: abstract word processing profited more strongly from a contextual cue, while the concrete words' processing advantage was reduced but still present. For accuracy, the descriptive pattern of results suggested an interaction, which was, however, not significant. In Experiment 2, we reformulated the contextual cues to avoid repetition of the to-be-processed word. In 83 healthy adults, the same pattern of results emerged, further validating the findings. Our corroborating evidence supports theories that integrate representational richness and semantic control as complementary mechanisms in semantic word processing.


Cognition ◽  
1987 ◽  
Vol 25 (1-2) ◽  
pp. 213-234 ◽  
Author(s):  
Michael K. Tanenhaus ◽  
Margery M. Lucas

1983 ◽  
Vol 27 (5) ◽  
pp. 354-354
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this distinction is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two sets of stimuli we studied, one configuration stood apart from the others in its set: it yielded significantly faster response times, and conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We were also able to characterize the effects found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
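To make the flat-versus-linear distinction concrete, the Python sketch below fits a line to response time as a function of array size. The RT values are invented and the analysis is only an illustration of how feature conditions (near-zero slope, consistent with parallel search) and conjunction conditions (positive slope, consistent with serial search) would separate; it is not the authors' procedure.

```python
# Hedged illustration: estimating search slopes from response time (RT) as a
# function of array size. The RT values below are fabricated for illustration.
import numpy as np

array_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([520, 525, 523, 530])       # ms, roughly flat across array sizes
rt_conjunction = np.array([560, 640, 790, 1100])  # ms, increases with array size

for label, rt in [("feature", rt_feature), ("conjunction", rt_conjunction)]:
    slope, intercept = np.polyfit(array_sizes, rt, deg=1)
    print(f"{label:12s} slope = {slope:5.1f} ms/item, intercept = {intercept:6.1f} ms")
```

A slope near 0 ms/item suggests that adding distractors costs nothing (parallel search), while a clearly positive slope indicates a per-item processing cost (serial search).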


Vision ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 13
Author(s):  
Christian Valuch

Color can enhance the perception of relevant stimuli by increasing their salience and guiding visual search towards stimuli that match a task-relevant color. Using Continuous Flash Suppression (CFS), the current study investigated whether color facilitates the discrimination of targets that are difficult to perceive due to interocular suppression. Gabor patterns of two or four cycles per degree (cpd) were shown as targets to the non-dominant eye of human participants. CFS masks were presented at a rate of 10 Hz to the dominant eye, and participants were asked to report the target's orientation as soon as they could discriminate it. The 2-cpd targets were robustly suppressed and resulted in much longer response times than the 4-cpd targets. Moreover, two color-related effects were evident only for the 2-cpd targets. First, in trials where targets and CFS masks had different colors, targets were reported faster than in trials where targets and CFS masks had the same color. Second, targets with a known color, either cyan or yellow, were reported earlier than targets whose color was randomly cyan or yellow. The results suggest that the targets' entry to consciousness may have been speeded by color-mediated effects relating to increased (bottom-up) salience and (top-down) task relevance.
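For readers unfamiliar with the stimuli, the Python sketch below generates a Gabor patch of a given spatial frequency (a sinusoidal grating windowed by a Gaussian envelope). The pixels-per-degree value, envelope width, and orientations are illustrative assumptions, not parameters reported in the study.

```python
# Hedged sketch (not the study's stimulus code): a Gabor patch of a given
# spatial frequency, assuming a display resolution of 40 pixels per degree.
import numpy as np

def gabor(cycles_per_deg, orientation_deg, size_px=256, px_per_deg=40.0, sigma_deg=0.5):
    """Gabor = oriented sinusoidal grating multiplied by a Gaussian envelope."""
    half = size_px / 2.0
    x, y = np.meshgrid(np.arange(size_px) - half, np.arange(size_px) - half)
    x_deg, y_deg = x / px_per_deg, y / px_per_deg
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the grating's modulation direction.
    u = x_deg * np.cos(theta) + y_deg * np.sin(theta)
    grating = np.cos(2.0 * np.pi * cycles_per_deg * u)
    envelope = np.exp(-(x_deg**2 + y_deg**2) / (2.0 * sigma_deg**2))
    return grating * envelope

low_sf = gabor(cycles_per_deg=2, orientation_deg=45)    # 2-cpd target
high_sf = gabor(cycles_per_deg=4, orientation_deg=135)  # 4-cpd target
print(low_sf.shape, high_sf.shape)
```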


2019 ◽  
Author(s):  
Yuru Song ◽  
Mingchen Yao ◽  
Helen Kemprecos ◽  
Áine Byrne ◽  
Zhengdong Xiao ◽  
...  

Abstract
Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences have a high degree of variability depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we use a predictive coding paradigm to characterize both evoked and spontaneous pain. We record local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats, two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further propose a predictive coding framework to investigate the temporal coordination of oscillatory activity between S1 and ACC. Specifically, we develop a high-level, empirical and phenomenological model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a mechanistic mean-field model to describe the mesoscopic dynamics of the S1 and ACC neuronal populations, in both naive and chronic-pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings but also provide new mechanistic insight into the uncertainty of expectation, placebo and nocebo effects, and chronic pain.
Author Summary
Pain perception in the mammalian brain is encoded through multiple brain circuits. The experience of pain is often associated with brain rhythms or neuronal oscillations at different frequencies. Understanding the temporal coordination of neural oscillatory activity from different brain regions is important for dissecting pain circuit mechanisms and revealing differences between distinct pain conditions. Predictive coding is a general computational framework for understanding perceptual inference by integrating bottom-up sensory information and top-down expectation. Supported by experimental data, we propose a predictive coding framework for pain perception and develop empirical and biologically constrained computational models to characterize the oscillatory dynamics of neuronal populations in two cortical circuits, one for the sensory-discriminative and the other for the affective-emotional experience of pain, and we further characterize their temporal coordination under various pain conditions. Our computational study of a biologically constrained neuronal population model reveals important mechanistic insights into pain perception, placebo analgesia, and chronic pain.
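To give a feel for what a coupled bottom-up/top-down population model of this kind involves, the Python sketch below simulates two generic firing-rate units standing in for S1 and ACC. It is a minimal stand-in, not the authors' mean-field or predictive coding model; the time constants, coupling weights, sigmoid gain, and stimulus timing are all illustrative assumptions.

```python
# Minimal sketch: two coupled firing-rate units, "S1" (fast, receives sensory
# input) and "ACC" (slower, driven bottom-up by S1 and feeding back top-down).
# All parameter values are illustrative, not taken from the study.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dt, T = 1e-3, 2.0                  # time step and duration (s)
steps = int(T / dt)
tau_s1, tau_acc = 0.02, 0.10       # time constants (s): S1 faster than ACC
w_up, w_down = 1.5, 0.8            # bottom-up (S1->ACC) and top-down (ACC->S1) weights

r_s1, r_acc = 0.0, 0.0
rates = np.zeros((steps, 2))
for t in range(steps):
    stim = 1.0 if 0.5 <= t * dt < 1.0 else 0.0          # brief noxious input to S1
    dr_s1 = (-r_s1 + sigmoid(stim + w_down * r_acc)) / tau_s1
    dr_acc = (-r_acc + sigmoid(w_up * r_s1)) / tau_acc
    r_s1 += dt * dr_s1                                   # forward-Euler integration
    r_acc += dt * dr_acc
    rates[t] = (r_s1, r_acc)

print("peak S1 rate: %.2f, peak ACC rate: %.2f" % (rates[:, 0].max(), rates[:, 1].max()))
```

In this toy setup the slower ACC unit lags the S1 response, which is the kind of bottom-up/top-down temporal coordination the abstract describes; the paper's actual models are far richer and fitted to LFP data.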


2020 ◽  
Vol 15 ◽  
pp. 185-190
Author(s):  
Filiz Mergen ◽  
Gulmira Kuruoglu

A large body of research in the psycholinguistic literature has been dedicated to the hemispheric organization of words. Overwhelming evidence suggests that the left hemisphere is primarily responsible for lexical processing. However, non-words, which look similar to real words but lack meaningful associations, are underrepresented in the laterality literature. This study investigated the lateralization of Turkish non-words. Fifty-three Turkish monolinguals performed a lexical decision task in a visual hemifield paradigm. An analysis of their response times revealed left-hemispheric dominance for non-words, adding further support to the literature. The accuracy of their answers, however, was comparable regardless of the field of presentation. The results are discussed in light of psycholinguistic views of word processing.

