The influence of scene context on parafoveal processing of objects

2018, Vol. 71(1), pp. 229-240
Author(s): Monica S Castelhano, Effie J Pereira

Many studies in reading have shown the enhancing effect of context on the processing of a word before it is directly fixated (parafoveal processing of words). Here, we examined whether scene context influences the parafoveal processing of objects and enhances the extraction of object information. In a modified boundary paradigm, the Dot-Boundary paradigm, participants fixated a suddenly onsetting cue before the preview object appeared 4° away. The preview object could be identical to the target, visually similar, visually dissimilar, or a control (black rectangle). The preview changed to the target object once a saccade toward the object was made. Critically, the objects were presented on either a consistent or an inconsistent scene background. Results revealed a greater processing benefit for consistent than for inconsistent scene backgrounds, and identical and visually similar previews produced greater processing benefits than the other previews. In the second experiment, we added a context condition in which the target location was inconsistent but the scene semantics remained consistent. We found that changing the location of the target object disrupted the processing benefit derived from the consistent context. Most importantly, across both experiments, the effect of preview was not enhanced by scene context. Thus, preview information and scene context appear to boost the parafoveal processing of objects independently, without any interaction from object–scene congruency.
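For readers unfamiliar with gaze-contingent techniques, the core logic of a boundary-style display change can be sketched as follows. This is a minimal illustration, not the authors' experimental code: the gaze-sample stream, the invisible-boundary test, and the display.swap_stimulus call are all hypothetical.

```python
# Minimal sketch of a gaze-contingent boundary change (illustrative only).
# Gaze samples are assumed to arrive as (x, y) positions in degrees of
# visual angle; `display` is a hypothetical object with a swap method.

def run_boundary_trial(gaze_samples, display, boundary_x):
    """Swap the preview for the target once gaze crosses the boundary."""
    for x, y in gaze_samples:
        # A saccade toward the object is inferred when a gaze sample
        # lands beyond an invisible boundary between cue and preview.
        if x > boundary_x:
            display.swap_stimulus("preview", "target")  # hypothetical API
            return True  # change completed before the object is fixated
    return False
```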

2012, Vol. 24(12), pp. 2281-2291
Author(s): Eva Zita Patai, Sonia Doallo, Anna Christina Nobre

In everyday situations, we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location [Summerfield, J. J., Rao, A., Garside, N., & Nobre, A. C. Biasing perception by spatial long-term memory. The Journal of Neuroscience, 31, 14952–14960, 2011; Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M., & Nobre, A. C. Orienting attention based on long-term memory experience. Neuron, 49, 905–916, 2006]. This study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc ERP to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top–down sources of information.
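For context, the sensitivity measure d′ reported here is conventionally computed from hit and false-alarm rates as d′ = z(H) − z(FA). A minimal sketch; the example rates below are invented purely for illustration, not taken from the study.

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented example: higher sensitivity for remembered spatial contexts.
print(d_prime(0.85, 0.20))  # ~1.88 (remembered context)
print(d_prime(0.70, 0.20))  # ~1.37 (novel context)
```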


Author(s): Samia Hussein

The present study examined the effect of scene context on the guidance of attention during visual search in real-world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible attentional mechanisms underlying efficient search: enhancement of attention (facilitation) and suppression of attention (inhibition). Participants (N=20) were shown an object name and then searched scenes for the target while their eye movements were tracked. Scenes were divided into target-relevant contextual regions (upper, middle, lower), and participants searched repeatedly in the same scene for different targets located either in the same region or in different regions. Comparing repeated searches within the same scene, we expected search to be faster and more efficient (facilitation of attention) in regions of a scene where attention had previously been deployed, and slower and less efficient (inhibition of attention) in regions that had previously been ignored. Results from this study help to clarify how mechanisms of visual attention operate within scene contexts during visual search.
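Under the study's logic, the two hypothesized effects could be quantified roughly as follows. This is a hypothetical sketch: the variable names and the first-search baseline comparison are assumptions, not the study's analysis code.

```python
import numpy as np

def attention_effects(rt_same_region, rt_new_region, rt_first_search):
    """Facilitation: speed-up when re-searching a previously attended
    region. Inhibition: slow-down when searching a previously ignored
    region. Inputs are arrays of search times (illustrative sketch)."""
    facilitation = np.mean(rt_first_search) - np.mean(rt_same_region)
    inhibition = np.mean(rt_new_region) - np.mean(rt_first_search)
    return facilitation, inhibition
```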


2014, Vol. 112(8), pp. 1999-2005
Author(s): Oleg Spivak, Peter Thier, Shabtai Barash

During visual fixations, the eyes are directed so that the image of the target (the object of interest) falls on the fovea. An exception to this rule has been described in macaque monkeys (though not in humans): a dark background induces a gaze shift upwards, sometimes large enough to shift the target's image off the fovea. In this article, we address an aspect not previously studied rigorously: the time course of the upshift. The time course is critical for determining whether the upshift is indeed an attribute of visual fixation or, alternatively, of the saccades that precede fixation. These alternatives lead to contrasting predictions regarding the time course of the upshift (durable if the upshift is an attribute of fixation, transient if caused by saccades). We studied visual fixations on dark and bright backgrounds in three monkeys. We confined ourselves to a single upshift-inducing session in each monkey so as not to study changes in the upshift caused by training. All monkeys showed a clear upshift already in their first session. During the first 0.5 s after the eye reached the vicinity of the target, the upshift was on average larger, but also more variable, than later in the trial; this initial high value 1) strongly depended on target location, being maximal at locations high on the screen, and 2) appeared to reflect mostly the intervals between the primary and corrective saccades. Subsequently, the upshift stabilized and remained constant, well above zero, throughout the 2-s fixation interval. Thus there is a persistent, background-contingent upshift that is genuinely an attribute of visual fixation.
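The early-versus-late comparison described above amounts to splitting each fixation's vertical offset at 0.5 s. A minimal sketch, with invented variable names and per-sample input arrays assumed:

```python
import numpy as np

def upshift_time_course(t, vertical_offset, split=0.5):
    """Mean and SD of the upshift early (< split s) vs. later in the
    fixation interval (illustrative sketch, not the authors' analysis)."""
    early = vertical_offset[t < split]
    late = vertical_offset[t >= split]
    return (early.mean(), early.std()), (late.mean(), late.std())
```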


2020, pp. 174702182091329
Author(s): Daniele Nardi, Samantha E Carpenter, Somer R Johnson, Greg A Gilliland, Viveka L Melo, ...

A visuocentric bias has dominated the literature on spatial navigation and reorientation. Studies on visually accessed environments indicate that, during reorientation, human and non-human animals encode the geometric shape of the environment, even if this information is unnecessary and insufficient for the task. In an attempt to extend our limited knowledge on the similarities and differences between visual and non-visual navigation, here we examined whether the same phenomenon would be observed during auditory-guided reorientation. Provided with a rectangular array of four distinct auditory landmarks, blindfolded, sighted participants had to learn the location of a target object situated on a panel of an octagonal arena. Subsequent test trials were administered to understand how the task was acquired. Crucially, in a condition in which the auditory cues were indistinguishable (same sound sample), participants could still identify the correct target location, suggesting that the rectangular array of auditory landmarks was encoded as a geometric configuration. This is the first evidence of incidental encoding of geometric information with auditory cues and, consistent with the theory of functional equivalence, it supports the generalisation of mechanisms of spatial learning across encoding modalities.


2017, Vol. 2017, pp. 1-13
Author(s): Suryo Adhi Wibowo, Hansoo Lee, Eun Kyeong Kim, Sungshin Kim

The representation of the target object is an important factor in building a robust visual object tracking algorithm. To address this, complementary learners that use color-histogram- and correlation-filter-based representations can be used, since each has advantages that compensate for the other's drawbacks in visual tracking. Even so, a tracking algorithm can fail because of distractors, even when complementary learners are used for the target representation. In this study, we show that, to handle a distractor, it must first be detected by learning the responses from the color-histogram- and correlation-filter-based representations. Then, to determine the target location, we decide whether the responses from the two representations should be merged or only the response from the correlation filter should be used; this decision depends on the result of the distractor detection process. Experiments were performed on the widely used VOT2014 and VOT2015 benchmark datasets and verified that the proposed method performs favorably compared with several state-of-the-art visual tracking algorithms.
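The decision rule described in this abstract can be sketched as follows. This is a simplified illustration of the idea, not the authors' implementation: the merge weight and the secondary-peak distractor test are assumptions.

```python
import numpy as np

def fuse_responses(cf_response, hist_response, alpha=0.7, peak_ratio=0.8):
    """Estimate the target location from correlation-filter and
    color-histogram response maps, falling back to the correlation
    filter alone when a distractor is detected (illustrative sketch)."""
    # Crude distractor test: a secondary peak in the histogram response
    # nearly as strong as the global peak suggests a distractor.
    flat = np.sort(hist_response.ravel())
    distractor_detected = flat[-2] > peak_ratio * flat[-1]

    if distractor_detected:
        response = cf_response  # trust the correlation filter alone
    else:
        response = alpha * cf_response + (1 - alpha) * hist_response

    # Target location: coordinates of the maximum fused response.
    return np.unravel_index(np.argmax(response), response.shape)
```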


2009, Vol. 62(7), pp. 1356-1376
Author(s): Barbara J. Juhasz, Alexander Pollatsek, Jukka Hyönä, Denis Drieghe, Keith Rayner

Parafoveal preview was examined within and between words in two eye movement experiments. In Experiment 1, unspaced and spaced English compound words were used (e.g., basketball, tennis ball). Prior to fixating the second lexeme, either a correct or a partial parafoveal preview (e.g., ball or badk) was provided using the boundary paradigm (Rayner, 1975). There was a larger effect of parafoveal preview on unspaced compound words than on spaced compound words. However, the parafoveal preview effect on spaced compound words was larger than would be predicted on the basis of prior research. Experiment 2 examined whether this large effect was due to spaced compounds forming a larger linguistic unit by pairing spaced compounds with nonlexicalized adjective–noun pairs. There were no significant interactions between item type and parafoveal preview, suggesting that it is the syntactic predictability of the noun that is driving the large preview effect.


2020, Vol. 13(6)
Author(s): Leigh Breakell Fernandez, Christoph Scheepers, Shanley E.M. Allen

Much reading research has found that informative parafoveal masks lead to a reading benefit for native speakers (see Schotter et al., 2012). However, little research has tested the impact of uninformative parafoveal masks during reading. Additionally, parafoveal processing research has been primarily restricted to native speakers. In the current study, we manipulated the type of uninformative preview using a gaze-contingent boundary paradigm with a group of L1 English speakers and a group of late L2 English speakers (L1 German). We were interested in how different types of uninformative masks affect parafoveal processing, whether L1 and L2 speakers are similarly affected, and whether they are sensitive to parafoveally viewed language-specific sub-lexical orthographic information. To test these objectives, we manipulated six preview types: an identical preview, an English pseudo-word, a German pseudo-word, an illegal string of letters, a series of Xs, and a blank mask. We found that X masks affected reading the most, with slight graded differences across the other masks; that L1 and L2 speakers were affected similarly; and that neither group was sensitive to sub-lexical orthographic information. Overall, these data show that not all previews are equal, and researchers should be aware of how uninformative masks affect reading behavior. Additionally, we hope that future research approaches models of eye-movement behavior during reading not only from a monolingual but also from a multilingual perspective.


2018
Author(s): Xiaoli Zhang, Julie D. Golomb

The image on our retina changes every time we make an eye movement. To maintain visual stability across saccades, and specifically to locate visual targets, we may use nontarget objects as "landmarks". In the current study, we compared how the presence of nontargets affects target localization across saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials) or was displaced to trigger a saccade (saccade trials). After the target disappeared, participants reported its most recent location with a mouse click. We found that the presence of nontargets decreased response error magnitude and variability. However, this nontarget facilitation effect was not larger for saccade trials than for sustained-fixation trials, indicating that nontarget facilitation may be a general effect in target localization rather than of particular importance to saccadic stability. Additionally, participants' responses were biased toward the nontarget locations, particularly when the nontarget-target relationships were preserved in relative coordinates across the saccade. This nontarget bias interacted with biases from other spatial references, e.g., eye movement paths, possibly in a way that emphasized non-redundant information. In summary, the presence of nontargets is one of several sources of reference that combine to influence (both facilitate and bias) target localization.
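The dependent measures described here (error magnitude, variability, and bias toward the nontarget) can be expressed compactly. A minimal sketch with invented names, assuming 2-D click coordinates; this is not the authors' analysis code.

```python
import numpy as np

def localization_stats(clicks, target, nontarget):
    """Error magnitude/variability of mouse-click responses, plus bias
    along the target-to-nontarget axis (illustrative sketch).
    `clicks` is an (n, 2) array; `target`/`nontarget` are 2-D points."""
    errors = clicks - target                       # per-trial offsets
    magnitude = np.linalg.norm(errors, axis=1)     # distance per trial
    # Bias: projection of the mean error onto the unit vector pointing
    # from target to nontarget; positive = pulled toward the nontarget.
    axis = (nontarget - target) / np.linalg.norm(nontarget - target)
    bias = errors.mean(axis=0) @ axis
    return magnitude.mean(), magnitude.std(), bias
```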

