Central and peripheral vision loss differentially affects contextual cueing in visual search.

2015 ◽  
Vol 41 (5) ◽  
pp. 1485-1496 ◽  
Author(s):  
Franziska Geringswald ◽  
Stefan Pollmann

2020 ◽
Vol 9 (8) ◽  
pp. 15 ◽  
Author(s):  
Stefan Pollmann ◽  
Franziska Geringswald ◽  
Ping Wei ◽  
Eleonora Porracin

2020 ◽  
Vol 10 (11) ◽  
pp. 841
Author(s):  
Erwan David ◽  
Julia Beitner ◽  
Melissa Le-Hoa Võ

Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations in which the scene is omnidirectional and the entire field of view can be of use. In this study, participants searched for objects in simulated everyday rooms in virtual reality. Using a gaze-contingent protocol, we masked either central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative directions). An important novelty is that we separated eye, head, and overall gaze movements in our analyses. Additionally, we examined these measures after splitting trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades used to explore, and the verification phase by long fixations and short saccades used for closer analysis. One finding indicates that eye movements are strongly driven by visual stimulation, whereas head movements serve the higher-level behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision had a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings clarify how knowledge gathered on-screen may transfer to more natural conditions and attest to the experimental usefulness of eye tracking in virtual reality.
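As a rough illustration of how such a gaze-contingent mask could work (a minimal sketch, not the authors' implementation; the function names, coordinate conventions, and condition labels are assumptions), one might test on every frame whether a scene point lies within 6° of the current gaze direction and hide it accordingly:

```python
import numpy as np

# Radius of the simulated scotoma / tunnel, matching the 6 deg masks described above.
MASK_RADIUS_DEG = 6.0

def angular_eccentricity_deg(gaze_dir, point_dir):
    """Angle between the current gaze direction and a scene point, in degrees."""
    gaze = np.asarray(gaze_dir, dtype=float)
    point = np.asarray(point_dir, dtype=float)
    cos_angle = np.dot(gaze, point) / (np.linalg.norm(gaze) * np.linalg.norm(point))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def is_masked(gaze_dir, point_dir, condition):
    """Hide the central 6 deg ('central_loss') or everything beyond it ('peripheral_loss')."""
    ecc = angular_eccentricity_deg(gaze_dir, point_dir)
    if condition == "central_loss":
        return ecc <= MASK_RADIUS_DEG   # simulated central scotoma
    if condition == "peripheral_loss":
        return ecc > MASK_RADIUS_DEG    # simulated tunnel vision
    return False                        # unmasked control viewing

# Example: a point 10 deg to the right of gaze is visible under a central mask
# but hidden under a peripheral mask.
gaze = [0.0, 0.0, 1.0]
point = [np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))]
print(is_masked(gaze, point, "central_loss"))     # False
print(is_masked(gaze, point, "peripheral_loss"))  # True
```

In an actual VR setup a test of this kind would typically run per pixel or per object in a shader, driven by the eye tracker's most recent gaze sample for each rendered frame.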


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.
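To make the paradigm concrete, a hypothetical sketch of how repeated and novel search displays might be generated is given below; the grid size, item count, and block structure are illustrative assumptions, not the authors' design:

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # possible item locations
N_DISTRACTORS = 11

def make_display(rng):
    """One search display: a target location plus a distractor configuration."""
    cells = rng.sample(GRID, N_DISTRACTORS + 1)
    return {"target": cells[0], "distractors": tuple(sorted(cells[1:]))}

rng = random.Random(1)

# Repeated displays are created once and shown again in every block, so their
# distractor configuration consistently predicts the target location.
repeated = [make_display(rng) for _ in range(12)]

def make_block(rng):
    """One block mixes the 12 repeated displays with 12 freshly generated ones."""
    novel = [make_display(rng) for _ in range(12)]
    trials = repeated + novel
    rng.shuffle(trials)
    return trials

for block_index in range(3):
    trials = make_block(rng)  # present trials and record search times here
    # Contextual cueing shows up as faster search for repeated than for novel
    # displays in later blocks, even without explicit recognition of repeats.
```

A concurrent working memory load, as in the study above, could be added by inserting a to-be-remembered sample display before each search trial and a memory test after it.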


2020 ◽  
pp. bjophthalmol-2020-317034
Author(s):  
Meghal Gagrani ◽  
Jideofor Ndulue ◽  
David Anderson ◽  
Sachin Kedar ◽  
Vikas Gulati ◽  
...  

Purpose: Glaucoma patients with peripheral vision loss have in the past subjectively described their field loss as ‘blurred’ or as ‘no vision compromise’. We developed an iPad app with which patients can self-characterise perception within areas of glaucomatous visual field loss.

Methods: Twelve glaucoma patients with visual acuity ≥20/40 in each eye and stable, reliable Humphrey Visual Fields (HVF) over 2 years were enrolled. An iPad app (held at 33 cm) allowed subjects to adjust ‘blur’ or ‘dimness’ on a displayed image to match their perception of a 2×2 m wall-mounted poster at 1 m distance, while fixating the centre of the poster (which spanned 45° of field from centre). The output, the degree of blur/dimness (normal, mild or severe) noted on the iPad image at the 54 retinal loci tested by the HVF 24-2, was compared with the threshold sensitivity values at these loci. Monocular (right eye (OD), left eye (OS)) HVF responses were used to calculate an integrated binocular (OU) visual field index (VFI). All three data sets were analysed separately.

Results: 36 HVF and iPad responses from 12 subjects (mean age 71±8.2 years) were analysed. The mean VFI was 77% OD, 76% OS and 83% OU. The most common iPad response was normal, followed by blur; no subject reported dimness. The mean HVF sensitivity threshold was significantly associated with the iPad response at the corresponding retinal loci (OD, OS and OU, respectively, in dB: normal 23, 25, 27; mild blur 18, 16, 22; severe blur 9, 9, 11). On receiver operating characteristic (ROC) curve analysis, the HVF retinal sensitivity cut-off at which subjects reported blur was 23.4 dB OD, 23 dB OS and 23.3 dB OU.

Conclusions: Glaucoma subjects self-pictorialised their field defects as blurred, never as dim or black. Our innovation allows translation of HVF data to quantitatively characterise visual perception in patients with glaucomatous field defects.
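As an illustration of the kind of ROC cut-off computation described above (a sketch only, not the study's code; the data are invented, and the use of scikit-learn's roc_curve together with Youden's index is an assumption), the sensitivity below which loci tend to be reported as blurred could be estimated like this:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy example data, one value per tested 24-2 locus (NOT the study's data).
sensitivity_db = np.array([30, 28, 27, 25, 24, 23, 22, 20, 18, 15, 12, 9, 6, 3])
reported_blur = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1])  # 1 = blurred on iPad

# Lower sensitivity should predict blur, so score each locus by its negated sensitivity.
fpr, tpr, thresholds = roc_curve(reported_blur, -sensitivity_db)

# Youden's J picks the threshold that best separates blurred from normal loci.
j = tpr - fpr
best = np.argmax(j)
cutoff_db = -thresholds[best]
print(f"Estimated blur cut-off: {cutoff_db:.1f} dB")
```

The study reports cut-offs of roughly 23 dB per eye; the value printed here depends only on the invented toy data.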


NeuroImage ◽  
2016 ◽  
Vol 124 ◽  
pp. 887-897 ◽  
Author(s):  
Stefan Pollmann ◽  
Jana Eštočinová ◽  
Susanne Sommer ◽  
Leonardo Chelazzi ◽  
Wolf Zinke

2011 ◽  
Vol 19 (2) ◽  
pp. 203-233 ◽  
Author(s):  
Markus Conci ◽  
Adrian von Mühlenen

2001 ◽  
Vol 54 (4) ◽  
pp. 1105-1124 ◽  
Author(s):  
Yuhong Jiang ◽  
Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.

