Influences of luminance contrast and ambient lighting on visual context learning and retrieval

2020 ◽  
Vol 82 (8) ◽  
pp. 4007-4024
Author(s):  
Xuelian Zang ◽  
Lingyun Huang ◽  
Xiuna Zhu ◽  
Hermann J. Müller ◽  
Zhuanghua Shi

Invariant spatial context can guide attention and facilitate visual search, an effect referred to as “contextual cueing.” Most previous studies of contextual cueing were conducted under conditions of photopic vision and high search-item-to-background luminance contrast, leaving open the question of whether the learning and/or retrieval of context cues depends on luminance contrast and ambient lighting. Given this, we conducted three experiments (each comprising two subexperiments) to compare contextual cueing under different combinations of luminance contrast (high/low) and ambient lighting (photopic/mesopic). With high-contrast displays, we found robust contextual cueing in both photopic and mesopic environments, but the acquired contextual cueing did not transfer when the display contrast changed from high to low in the photopic environment. By contrast, with low-contrast displays, contextual facilitation manifested only in mesopic vision, and the acquired cues remained effective following a switch to high-contrast displays. This pattern suggests that, with low display contrast, contextual cueing benefited from a more global search mode, aided by activation of the peripheral rod system in mesopic vision, but was impeded by a more local, fovea-centered search mode in photopic vision.
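The contextual-cueing effect in studies like this one is typically quantified as the mean reaction-time advantage for repeated over novel displays. A minimal sketch, using made-up reaction-time values (not data from the paper):

```python
# Contextual cueing is typically quantified as the reaction-time (RT)
# advantage for repeated (invariant) over novel search displays.
# Per-trial RTs in milliseconds; the values below are illustrative only.
repeated_rts = [812, 790, 805, 778, 795]
novel_rts = [865, 880, 858, 872, 869]

def mean(xs):
    return sum(xs) / len(xs)

# A positive difference indicates a contextual-cueing benefit.
cueing_effect = mean(novel_rts) - mean(repeated_rts)
print(round(cueing_effect, 1))  # prints 72.8
```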

2008 ◽  
Vol 276 (1657) ◽  
pp. 781-786 ◽  
Author(s):  
Martin Stevens ◽  
Isabel S Winney ◽  
Abi Cantor ◽  
Julia Graham

Camouflage is an important strategy by which animals avoid predation. One form is disruptive coloration, in which high-contrast markings placed at an animal's edge break up the true body shape. Successful disruption may also involve non-marginal markings away from the body outline that create ‘false edges’ more salient than the true body form (‘surface disruption’). However, previous work has focused on breaking up the true body outline, not on surface disruption. Furthermore, while high contrast may enhance disruption, it is untested where on the body different contrasts should be placed for maximum effect. We used artificial prey presented to wild avian predators in the field to determine the effectiveness of surface disruption, and of different luminance contrasts placed at different locations on the prey. Disruptive coloration was no more effective when comprising high luminance contrast per se, but its effectiveness was dramatically increased by high-contrast markings placed away from the body outline, creating effective surface disruption. A model of avian visual edge processing showed that surface disruption does not hinder object detection simply by creating false edges away from the true body outline; its effect may also rest on a different visual mechanism. Our study has implications for whether animals can combine disruptive coloration with other ‘conspicuous’ signalling strategies.


2020 ◽  
Vol 11 ◽  
Author(s):  
Xiaowei Xie ◽  
Siyi Chen ◽  
Xuelian Zang

In contextual cueing, a previously encountered context facilitates detection of a target embedded in it, relative to when the target appears in a novel context. In this study, we investigated whether contextual cueing can develop early, when the search display is presented only briefly. In four experiments, participants searched for a target T in an array of distractor Ls. The results showed that even with a rather short presentation of the search display, participants were able to learn the spatial context and speed up their responses overall, with the learning effect lasting for a long period. Specifically, the contextual cueing effect was observed either with or without a mask following a 300-ms presentation of the search display. Such context learning under rapid presentation did not operate when only the local context information was repeated, suggesting that a global context is required to guide spatial attention when the viewing time of the search display is limited. Overall, these findings indicate that contextual cueing may arise at an “early,” target-selection stage and that global context is necessary for context learning under rapid presentation to function.


2010 ◽  
Vol 27 (1-2) ◽  
pp. 43-55 ◽  
Author(s):  
MICHAEL L. RISNER ◽  
FRANKLIN R. AMTHOR ◽  
TIMOTHY J. GAWNE

Retinal ganglion cells (RGCs) are highly sensitive to changes in contrast, which is crucial for the detection of edges in a visual scene. In the natural environment, however, edges do not just vary in contrast; they also vary in degree of blur, which can be caused by distance from the plane of fixation, motion, and shadows. Hence, blur is as much a characteristic of an edge as luminance contrast, yet its effects on the responses of RGCs are largely unexplored.

We examined the responses of rabbit RGCs to sharp edges varying in contrast and to high-contrast edges varying in blur. The width of the blur profile ranged from 0.73 to 13.05 deg of visual angle. For most RGCs, blurring a high-contrast edge produced the same pattern of reduced response strength and increased latency as decreasing the contrast of a sharp edge. In support of this, we found a significant correlation between the amount of blur required to reduce the response by 50% and the size of the receptive field, suggesting that blur may operate by reducing the range of luminance values within the receptive field. These RGCs cannot individually encode blur; blur could only be estimated by comparing the responses of populations of neurons with different receptive field sizes. However, some RGCs showed a different pattern of latency and magnitude changes with contrast and blur; these neurons could encode blur directly.

We also tested whether the response of an RGC to a blurred edge was linear, that is, whether the response to a sharp edge equaled the response to a blurred edge plus the response to the missing spatial components that constitute the difference between a sharp and a blurred edge. Brisk-sustained cells were more linear; brisk-transient cells exhibited both linear and nonlinear behavior.
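The suggestion that blur acts by reducing the range of luminance values within a receptive field can be illustrated numerically. A sketch with assumed parameters (an error-function edge profile and an arbitrary receptive-field radius, not the authors' stimuli):

```python
import math

def edge_luminance(x, blur_sigma):
    """Luminance profile of an edge centered at x = 0, scaled 0-1.
    blur_sigma = 0 gives a sharp step; larger values blur the edge
    (a step convolved with a Gaussian yields an error-function profile)."""
    if blur_sigma == 0:
        return 1.0 if x > 0 else 0.0
    return 0.5 * (1.0 + math.erf(x / (blur_sigma * math.sqrt(2))))

def luminance_range(blur_sigma, rf_radius=1.0, n=101):
    """Range of luminance values sampled inside a receptive field
    of the given radius centered on the edge."""
    xs = [-rf_radius + 2 * rf_radius * i / (n - 1) for i in range(n)]
    vals = [edge_luminance(x, blur_sigma) for x in xs]
    return max(vals) - min(vals)

# Blurring the edge shrinks the luminance range seen by a small
# receptive field, mimicking a reduction in contrast.
sharp = luminance_range(0.0)    # full 0-1 range
blurred = luminance_range(3.0)  # much smaller range
print(sharp, blurred)
```

A larger receptive field would sample more of the blurred ramp and recover a larger range, which is one way to read the reported correlation between receptive-field size and the blur needed to halve the response.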


2021 ◽  
Vol 12 ◽  
Author(s):  
Xuelian Zang ◽  
Leonardo Assumpção ◽  
Jiao Wu ◽  
Xiaowei Xie ◽  
Artyom Zinchenko

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, whereby salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to task-irrelevant subsets, but not the learning per se. The current work examined these two explanations. In two experiments, participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1, or under a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items within both old and new displays were presented in two colors, which defined the task-irrelevant and task-relevant items within each display. Participants were asked to search for the target in the relevant subset in the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, search of task-irrelevant subsets produced no cueing effect post-transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. Under a longer display presentation, by contrast, global attention is blocked, leading to the exclusive learning of the invariant relevant items in the learning session.


2019 ◽  
Vol 31 (3) ◽  
pp. 442-452 ◽  
Author(s):  
Artyom Zinchenko ◽  
Markus Conci ◽  
Paul C. J. Taylor ◽  
Hermann J. Müller ◽  
Thomas Geyer

This study investigates the causal contribution of the left frontopolar cortex (FPC) to the processing of violated expectations from learned target–distractor spatial contingencies during visual search. The experiment consisted of two phases: learning and test. Participants searched for targets presented either among repeated or nonrepeated target–distractor configurations. Prior research showed that repeated encounters of identically arranged displays lead to memory about these arrays, which then can come to guide search (contextual cueing effect). The crucial manipulation was a change of the target location, in a nevertheless constant distractor layout, at the transition from learning to test. In addition to this change, we applied repetitive transcranial magnetic stimulation (rTMS) over the left lateral FPC, over a posterior control site, or no rTMS at all (baseline; between-group manipulation) to see how FPC rTMS influences the ability of observers to adapt context-based memories acquired in the training phase. The learning phase showed expedited search in repeated relative to nonrepeated displays, with this context-based facilitation being comparable across all experimental groups. For the test phase, the recovery of cueing was critically dependent on the stimulation site: Although there was evidence of context adaptation toward the end of the experiment in the occipital and no-rTMS conditions, observers with FPC rTMS showed no evidence of relearning at all after target location changes. This finding shows that FPC plays an important role in the regulation of prediction errors in statistical context learning, thus contributing to an update of the spatial target–distractor contingencies after target position changes in learned spatial arrays.


2019 ◽  
Author(s):  
Steven Wiesner ◽  
Ian W. Baumgart ◽  
Xin Huang

Natural scenes often contain multiple objects and surfaces. However, how neurons in the visual cortex represent multiple visual stimuli is not well understood. Previous studies have shown that, when multiple stimuli compete in one feature domain, the evoked neuronal response is biased toward the stimulus with the stronger signal strength. Here we investigate how neurons in the middle temporal (MT) cortex of macaques represent multiple stimuli that compete in more than one feature domain. Visual stimuli were two random-dot patches moving in different directions. One stimulus had low luminance contrast and moved with high coherence, whereas the other had high contrast and moved with low coherence. We found that how MT neurons represent multiple stimuli depended on the spatial arrangement of the stimuli. When the two stimuli overlapped, MT responses were dominated by the high-contrast component. When the two stimuli were spatially separated within the receptive field, the contrast dominance was abolished. We found the same results when using contrast to compete with motion speed. Our neural data and computer simulations using a V1-MT model suggest that the contrast dominance found with overlapping stimuli is due to normalization occurring at an input stage feeding into MT, a bias that MT neurons cannot overturn based on their own feature selectivity. The interaction between spatially separated stimuli can largely be explained by normalization within MT. Our results reveal new rules of stimulus competition and highlight the impact of hierarchical processing on the representation of multiple stimuli in the visual cortex.

Significance Statement: Previous studies have shown that the neural representation of multiple visual stimuli can be accounted for by a divisive normalization model. By using multiple stimuli that compete in more than one feature domain, we found that luminance contrast has a dominant effect in determining competition between multiple stimuli when they overlap but not when they are spatially separated. Our results reveal that neuronal responses to multiple stimuli in a given cortical area cannot simply be predicted from the population responses elicited in that area by the individual stimulus components. To understand the neural representation of multiple stimuli, rather than considering response normalization only within the area of interest, one must consider the computations, including normalization, occurring along the hierarchical visual pathway.
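The divisive-normalization account invoked here can be sketched as a strength-weighted average of the responses to the individual components, with luminance contrast supplying the weights. All parameter values below are illustrative assumptions, not fitted to the study's data:

```python
def normalized_response(r1, r2, c1, c2, sigma=0.1):
    """Divisive normalization of two stimulus components.
    r1, r2: responses the neuron would give to each component alone.
    c1, c2: signal strengths (e.g., luminance contrasts) of the components.
    The paired response is a contrast-weighted average, so the
    higher-contrast component dominates."""
    return (c1 * r1 + c2 * r2) / (c1 + c2 + sigma)

# A preferred-direction component (strong response alone) paired with
# a null-direction component (weak response alone):
preferred, null = 50.0, 5.0

# When the null-direction stimulus has much higher contrast, it drags
# the paired response down toward its own, as with overlapping stimuli.
high_contrast_null = normalized_response(preferred, null, c1=0.1, c2=0.9)
equal_contrast = normalized_response(preferred, null, c1=0.5, c2=0.5)
print(round(high_contrast_null, 1), round(equal_contrast, 1))
```

Abolishing contrast dominance for spatially separated stimuli would correspond, in this toy picture, to the weights no longer tracking contrast once the components fall in different normalization pools.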


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Siyi Chen ◽  
Zhuanghua Shi ◽  
Hermann J. Müller ◽  
Thomas Geyer

Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target’s orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing ‘contextual cueing’. This effect was enhanced in the multisensory session; importantly, it was enhanced even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated, as well as decreasing the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
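Drift-diffusion modeling, as used in this study, decomposes response times into an evidence-accumulation rate (drift) and a decision boundary. A toy simulation with assumed parameter values (not the fitted model) shows the key qualitative point: a higher drift rate shortens decision times for a fixed boundary:

```python
import random

def simulate_decision_time(drift, boundary, dt=0.001, noise=1.0, seed=0):
    """Simulate one diffusion-to-bound decision.
    Evidence accumulates at rate `drift` plus Gaussian noise until it
    hits +boundary or -boundary; returns the time taken in seconds."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

def mean_decision_time(drift, boundary, n=200):
    return sum(simulate_decision_time(drift, boundary, seed=i)
               for i in range(n)) / n

# A higher drift rate (faster evidence accumulation) yields shorter
# mean decision times for the same boundary, mirroring the reported
# multisensory enhancement of the accumulation rate.
slow = mean_decision_time(drift=1.0, boundary=1.0)
fast = mean_decision_time(drift=3.0, boundary=1.0)
print(slow > fast)
```

In this framework, the reduced boundary attributed to contextual cueing and the increased drift attributed to multisensory learning both shorten response times, but via separable parameters.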

