Rhythmic modulation of visual contrast discrimination triggered by action

2016, Vol. 283 (1831), pp. 20160692
Author(s): Alessandro Benedetto, Donatella Spinelli, M. Concetta Morrone

Recent evidence suggests that ongoing brain oscillations may be instrumental in binding and integrating multisensory signals. In this experiment, we investigated the temporal dynamics of visual–motor integration processes. We show that action modulates sensitivity to visual contrast discrimination in a rhythmic fashion at frequencies of about 5 Hz (in the theta range), for up to 1 s after the execution of the action. To understand the origin of these oscillations, we measured oscillations in contrast sensitivity at different levels of luminance, which is known to affect endogenous brain rhythms by boosting the power of alpha frequencies. We found that the frequency of the oscillation in sensitivity increased at low luminance, probably reflecting the shift of the mean endogenous brain rhythm towards higher frequencies. Importantly, at both high and low luminance, contrast discrimination showed a rhythmic motor-induced suppression effect, with the suppression occurring earlier at low luminance. We suggest that oscillations play a key role in sensory–motor integration, and that the motor-induced suppression may reflect the first manifestation of a rhythmic oscillation.
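
The ~5 Hz modulation reported here is the kind of effect typically recovered by spectral analysis of a finely sampled behavioural time course. Below is a minimal Python sketch of that analysis, assuming accuracy has been measured at many delays after action execution; the data and all parameter values are simulated for illustration and are not taken from the paper.

```python
import numpy as np

# Simulated data: discrimination accuracy sampled every 10 ms
# for 1 s after the action, with an injected 5 Hz modulation.
rng = np.random.default_rng(0)
delays = np.arange(0.0, 1.0, 0.01)                  # seconds after action
accuracy = (0.75 + 0.05 * np.sin(2 * np.pi * 5 * delays)
            + rng.normal(0, 0.02, delays.size))

# Remove the mean and slow trend, then look for a spectral peak.
trend = np.polyval(np.polyfit(delays, accuracy, 2), delays)
spectrum = np.abs(np.fft.rfft(accuracy - trend))
freqs = np.fft.rfftfreq(delays.size, d=0.01)

peak = freqs[np.argmax(spectrum[1:]) + 1]           # skip the DC bin
print(f"Dominant behavioural oscillation: {peak:.1f} Hz")
```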


2020, Vol. 32 (2), pp. 201-211
Author(s): Qiaoli Huang, Huan Luo

Objects, shown explicitly or held in mind internally, compete for limited processing resources. Recent studies have demonstrated that attention samples locations and objects rhythmically. Interestingly, periodic sampling not only operates over objects in the same scene but also occurs for multiple perceptual predictions that are held in attention for incoming inputs. However, how the brain coordinates perceptual predictions that are endowed with different levels of bottom–up saliency information remains unclear. To address this issue, we used a fine-grained behavioral measurement to investigate the temporal dynamics of the processing of high- and low-salient visual stimuli, which were equally likely to occur within experimental blocks. We demonstrate that perceptual predictions associated with different levels of saliency are organized via a theta-band rhythmic course and are optimally processed in different phases within each theta-band cycle. Meanwhile, when the high- and low-salient stimuli are presented in separate blocks and thus do not compete with each other, the periodic behavioral profile is no longer present. In summary, our findings suggest that attention samples and coordinates multiple perceptual predictions through a theta-band rhythm according to their relative saliency. Our results, in combination with previous studies, support the rhythmic nature of attentional processing.
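
The claim that high- and low-salient predictions are optimally processed in different phases of the theta cycle can be illustrated by estimating the theta phase of each condition's behavioural time course and comparing them. The following Python sketch uses simulated data; the 4 Hz rhythm, noise levels and antiphase offset are assumptions for illustration, not values from the study.

```python
import numpy as np

# Simulated accuracy time courses for high- and low-salient targets,
# sampled at fine-grained cue-to-target delays, in antiphase at theta.
rng = np.random.default_rng(1)
t = np.arange(0.1, 1.1, 0.02)                       # seconds
theta = 4.0                                         # Hz (assumed)
high = 0.80 + 0.04 * np.sin(2 * np.pi * theta * t) + rng.normal(0, 0.01, t.size)
low = 0.70 - 0.04 * np.sin(2 * np.pi * theta * t) + rng.normal(0, 0.01, t.size)

def phase_at(freq, series, times):
    """Phase of one frequency component via a discrete Fourier projection."""
    series = series - series.mean()
    return np.angle(np.sum(series * np.exp(-2j * np.pi * freq * times)))

dphi = np.angle(np.exp(1j * (phase_at(theta, high, t) - phase_at(theta, low, t))))
print(f"Theta phase difference, high vs low salience: {np.degrees(dphi):.0f} deg")
```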


2018, Vol. 3 (1), pp. e000139
Author(s): Lee Lenton

Objective: To compare the performance of adults with multifocal intraocular lenses (MIOLs) in a realistic flight simulator with age-matched adults with monofocal intraocular lenses (IOLs). Methods and Analysis: Twenty-five adults ≥60 years with either bilateral MIOL or bilateral IOL implantation were enrolled. Visual function tests included visual acuity and contrast sensitivity under photopic and mesopic conditions, defocus curves and low luminance contrast sensitivity tests in the presence and absence of glare (Mesotest II), as well as halo size measurement using an app-based halometer (Aston halometer). Flight simulator performance was assessed in a fixed-based flight simulator (PS4.5). Subjects completed three simulated landing runs in both daytime and night-time conditions in a randomised order, including a series of visual tasks critical for safety. Results: Of the 25 age-matched enrolled subjects, 13 had bilateral MIOLs and 12 had bilateral IOLs. Photopic and mesopic visual acuity and contrast sensitivity were not significantly different between the groups. Larger halo areas were seen in the MIOL group, and Mesotest values were significantly worse in the MIOL group, both with and without glare. The defocus curves showed better uncorrected visual acuity at intermediate and near distances for the MIOL group. There were no significant differences in performance of the vision-related flight simulator tasks between the groups. Conclusions: The performance of visually related flight simulator tasks was not significantly impaired in older adults with MIOLs compared with age-matched adults with monofocal IOLs. These findings suggest that MIOLs do not impair visual performance in a flight simulator.


2020
Author(s): Cedric P. van den Berg, Michelle Hollenkamp, Laurie J. Mitchell, Erin J. Watson, Naomi F. Green, ...

Achromatic (luminance) vision is used by animals to perceive motion, pattern, space and texture. Luminance contrast sensitivity thresholds are often poorly characterised for individual species and are applied across a diverse range of perceptual contexts using over-simplified assumptions about an animal's visual system. Such thresholds are often estimated using the Receptor Noise Limited (RNL) model, based on quantum catch values and estimated noise levels of photoreceptors. However, the suitability of the RNL model for describing luminance contrast perception remains poorly tested. Here, we investigated context-dependent luminance discrimination using triggerfish (Rhinecanthus aculeatus) presented with large achromatic stimuli (spots) against uniform achromatic backgrounds of varying absolute and relative contrasts. ‘Dark’ and ‘bright’ spots were presented against relatively dark and bright backgrounds. We found significant differences in luminance discrimination thresholds across treatments. When measured using Michelson contrast, thresholds for bright spots on a bright background were significantly higher than for other scenarios, and the lowest threshold was found when dark spots were presented on dark backgrounds. Thresholds expressed in Weber contrast revealed increased contrast sensitivity for stimuli darker than their backgrounds, which is consistent with the literature. The RNL model was unable to estimate threshold scaling across scenarios as predicted by the Weber–Fechner law, highlighting limitations in the current use of the RNL model to quantify luminance contrast perception. Our study confirms that luminance contrast discrimination thresholds are context-dependent and should therefore be interpreted with caution.
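
The contrast measures compared in this abstract have simple closed forms, and the achromatic RNL distance used in such studies is commonly computed from the log ratio of quantum catches scaled by a Weber fraction (after Siddiqi et al. 2004). A minimal Python sketch follows; the luminance values and the 0.05 Weber fraction are illustrative assumptions, not estimates for triggerfish.

```python
import math

def michelson(l_spot, l_background):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), unsigned."""
    lmax, lmin = max(l_spot, l_background), min(l_spot, l_background)
    return (lmax - lmin) / (lmax + lmin)

def weber(l_spot, l_background):
    """Weber contrast: signed change relative to the background."""
    return (l_spot - l_background) / l_background

def rnl_luminance_distance(q_spot, q_background, weber_fraction=0.05):
    """Achromatic receptor-noise-limited distance:
    dS = |ln(q_spot / q_background)| / w, with w the luminance-channel
    Weber fraction (0.05 here is an assumed placeholder value)."""
    return abs(math.log(q_spot / q_background)) / weber_fraction

# Dark spot on a dark background vs bright spot on a bright background:
print(michelson(10, 20), weber(10, 20))       # dark-on-dark scenario
print(michelson(180, 120), weber(180, 120))   # bright-on-bright scenario
print(rnl_luminance_distance(10, 20), rnl_luminance_distance(180, 120))
```

Note that Michelson contrast is unsigned while Weber contrast preserves polarity, which is why the two measures can rank the same stimulus pairs differently, as the study reports.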


Author(s): Katherine L. Hermann, Shridhar R. Singh, Isabelle A. Rosenthal, Dimitrios Pantazis, Bevil R. Conway

Hue and luminance contrast are the most basic visual features, emerging in early layers of convolutional neural networks trained to perform object categorization. In human vision, the timing of the neural computations that extract these features, and the extent to which they are determined by the same or separate neural circuits, is unknown. We addressed these questions using multivariate analyses of human brain responses measured with magnetoencephalography. We report four discoveries. First, it was possible to decode hue tolerant to changes in luminance contrast, and luminance contrast tolerant to changes in hue, consistent with the existence of separable neural mechanisms for these features. Second, the decoding time course for luminance contrast peaked 16-24 ms before hue and showed a more prominent secondary peak corresponding to decoding of stimulus cessation. These results support the idea that the brain uses luminance contrast as an updating signal to separate events within the constant stream of visual information. Third, neural representations of hue generalized to a greater extent across time, providing a neural correlate of the preeminence of hue over luminance contrast in perceptual grouping and memory. Finally, decoding of luminance contrast was more variable across participants for hues associated with daylight (orange and blue) than for anti-daylight (green and pink), suggesting that color-constancy mechanisms reflect individual differences in assumptions about natural lighting.
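
The decoding analyses described here follow a standard recipe: train and cross-validate a classifier independently at each time point of the evoked response. Below is a minimal Python sketch of that recipe using scikit-learn on simulated data; the array shapes, labels and injected signal are illustrative assumptions, not the study's MEG data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Simulated MEG: trials x sensors x time points, with binary hue labels.
rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 200, 50, 120
X = rng.normal(size=(n_trials, n_sensors, n_times))
hue = rng.integers(0, 2, n_trials)               # e.g. orange vs blue
X[hue == 1, :10, 40:80] += 0.5                   # decodable signal, bins 40-80

# One cross-validated classifier per time point.
clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([cross_val_score(clf, X[:, :, t], hue, cv=5).mean()
                     for t in range(n_times)])
print("Peak decoding at time bin", accuracy.argmax())
```

The abstract's finding that hue representations generalize across time corresponds to the temporal-generalization extension of this recipe: train the classifier at one time point and test it at every other one.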


Information, 2019, Vol. 10 (11), pp. 346
Author(s): Birgitta Dresp-Langley, Marie Monfouga

Piéron's and Chocholle's seminal psychophysical work predicts that human response time to information relative to visual contrast and/or sound frequency decreases when contrast intensity or sound frequency increases. The goal of this study is to bring to the forefront the ability of individuals to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth (“nearer”) in planar (2D) object configurations based on physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative forced-choice) was to decide as quickly as possible which of the two patterns, the left or the right one, in a given image appeared to “stand out as if it were nearer” in terms of apparent (subjective) visual depth. The results showed that the combinations of varying relative visual contrast with sounds of varying frequency exploited here produced an additive facilitation effect on choice response times, where a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by audio-visual probability summation.
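
Probability summation of this kind is often modelled as a race between the two modalities: on each trial the response is triggered by whichever channel finishes first, so bimodal response times are statistically faster than either unimodal distribution. Here is a minimal Python sketch with Piéron-style response-time means; all parameter values are illustrative assumptions, not fits to these data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000                                   # simulated trials

def pieron_rt(intensity, r0=0.25, k=0.12, beta=0.5, sigma=0.04):
    """Piéron-style RTs: mean = r0 + k * intensity**(-beta), Gaussian noise.
    r0, k, beta and sigma are illustrative, not fitted values."""
    return r0 + k * intensity ** -beta + rng.normal(0, sigma, n)

rt_visual = pieron_rt(intensity=0.6)          # contrast-driven channel
rt_audio = pieron_rt(intensity=0.8)           # frequency-driven channel
rt_bimodal = np.minimum(rt_visual, rt_audio)  # race: first channel wins

print(f"visual alone : {rt_visual.mean() * 1000:.0f} ms")
print(f"audio alone  : {rt_audio.mean() * 1000:.0f} ms")
print(f"race model   : {rt_bimodal.mean() * 1000:.0f} ms (fastest)")
```

The min() operation is the whole model: even with no neural interaction, sampling the faster of two noisy finishing times yields a mean below either channel alone, which is the statistical facilitation the abstract invokes.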


2016, Vol. 108, pp. 106
Author(s): Rodrigo Ortega, Vladimir Lopez, Fabricio Baglivo, Mario Parra, Agustín Ibañez

1999, Vol. 16 (4), pp. 675-680
Author(s): Pauline Pearson, Brian Timney

It has been suggested that acetylcholine plays a role in contrast discrimination performance and the regulation of visual contrast gain (Smith, 1996). Since alcohol has been shown to reduce both levels of acetylcholine and contrast sensitivity, the present study measured the effects of alcohol on contrast discrimination and explored whether the deficits could be explained as a consequence of a reduction in contrast gain. Detection thresholds and contrast increment thresholds under placebo and alcohol (0.06% BAC) conditions were measured in six volunteers. Alcohol impaired both detection and discrimination, but only at high spatial frequencies. However, when the base contrasts used in the increment threshold task were equal multiples of the detection threshold, no alcohol-induced changes in increment thresholds were obtained at any spatial frequency. We conclude that alcohol impairs contrast discrimination performance, but that no change in contrast gain mechanisms need be postulated to account for the data.
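
The key control in this study, re-expressing pedestal contrasts as multiples of each condition's detection threshold, can be made concrete with a toy model: if alcohol merely raises the detection threshold without altering contrast gain, increment thresholds look impaired at a fixed physical pedestal but match once pedestals are equated in threshold units. The Weber-like rule and all constants in this Python sketch are illustrative assumptions, not the paper's model.

```python
def increment_threshold(pedestal, detection_threshold, k=0.3, p=0.6):
    """Toy Weber-like rule: dC = k * T * (C / T)**p for pedestal C >= T.
    k and p are illustrative constants, not fitted values."""
    return k * detection_threshold * (pedestal / detection_threshold) ** p

t_placebo, t_alcohol = 0.010, 0.015     # alcohol raises detection threshold

# Same physical pedestal (4% contrast): apparent discrimination deficit.
print(increment_threshold(0.04, t_placebo),
      increment_threshold(0.04, t_alcohol))

# Same pedestal in threshold units (4 x threshold): identical normalized
# behaviour, so no change in contrast gain needs to be postulated.
print(increment_threshold(4 * t_placebo, t_placebo) / t_placebo,
      increment_threshold(4 * t_alcohol, t_alcohol) / t_alcohol)
```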

