Interocular Differences in Spatial Frequency Influence the Pulfrich Effect

Vision ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 20
Author(s):  
Seung Hyun Min ◽  
Alexandre Reynaud ◽  
Robert F. Hess

The Pulfrich effect is a stereo-motion phenomenon. When the two eyes are presented with visual targets moving in fronto-parallel motion at different luminances or contrasts, the percept is of a target moving in depth. This percept of motion in depth is thought to occur because lower luminance or contrast slows visual processing. Spatial properties of an image, such as spatial frequency and size, have also been shown to influence the speed of visual processing. In this study, we use a Pulfrich-effect paradigm to measure interocular delay, in which a structure-from-motion-defined cylinder, composed of Gabor elements displayed at different interocular phases, rotates in depth. This allows us to measure any relative interocular processing delay while independently manipulating the spatial frequency and size of the micro-elements (i.e., Gabor patches). We show that interocular spatial frequency differences, but not interocular size differences of image features, produce interocular processing delays.
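The delay-to-disparity geometry underlying the Pulfrich effect can be sketched numerically: a constant interocular processing delay applied to a target moving at velocity v is equivalent to a binocular disparity of v times the delay. A minimal sketch (the values are illustrative, not taken from the study):

```python
# Classic geometric account of the Pulfrich effect: a fixed interocular
# processing delay turns frontoparallel motion into an effective binocular
# disparity, delta = v * dt (target velocity times the delay).

def effective_disparity(velocity_deg_per_s: float, delay_s: float) -> float:
    """Effective disparity (deg) induced by an interocular processing delay."""
    return velocity_deg_per_s * delay_s

# A target sweeping at 10 deg/s with a 5 ms interocular delay:
disparity = effective_disparity(10.0, 0.005)
print(f"{disparity:.3f} deg")  # 0.050 deg of effective disparity
```

The sign of the delay (which eye lags) determines whether the target appears displaced toward or away from the observer.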

The existence of multiple channels, or multiple receptive field sizes, in the visual system does not commit us to any particular theory of spatial encoding in vision. However, distortions of apparent spatial frequency and width in a wide variety of conditions favour the idea that each channel carries a width- or frequency-related code or ‘label’ rather than a ‘local sign’ or positional label. When distortions of spatial frequency occur without prior adaptation (e.g. at low contrast or low luminance) they are associated with lowered sensitivity, and may be due to a mismatch between the perceptual labels and the actual tuning of the channels. A low-level representation of retinal space could be constructed from the spatial information encoded by the channels, rather than being projected intact from the retina.


2016 ◽  
Vol 16 (12) ◽  
pp. 554
Author(s):  
Antoine Barbot ◽  
Krystel Huxlin ◽  
Duje Tadin ◽  
Geunyoung Yoon

Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we explored if, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence, and systematically occluding parts of the image. Results indicate that with an increase in network depth there is an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or "binding" features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.


2012 ◽  
Vol 25 (0) ◽  
pp. 40
Author(s):  
Alexis Pérez-Bellido ◽  
Joan López-Moliner ◽  
Salvador Soto-Faraco

Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases their standard deviation. This has often been regarded as evidence for multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported the existence of sensory interactions between auditory and visual signals. These interactions result in enhancements in visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials) and compared visual-only versus audio–visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed an impairment of selective monitoring of the SF channel in the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits of tuning into high-SF psychophysical channels.


1999 ◽  
Vol 16 (3) ◽  
pp. 527-540 ◽  
Author(s):  
ISABELLE MARESCHAL ◽  
CURTIS L. BAKER

Neurons in the mammalian visual cortex have been found to respond to second-order features that are not defined by changes in luminance over the retina (Albright, 1992; Zhou & Baker, 1993, 1994, 1996; Mareschal & Baker, 1998a,b). The detection of these stimuli is most often accounted for by a separate nonlinear processing stream, acting in parallel to the linear stream in the visual system. Here we examine the two-dimensional spatial properties of these nonlinear neurons in area 18 using envelope stimuli, which consist of a high spatial-frequency carrier whose contrast is modulated by a low spatial-frequency envelope. These stimuli would fail to elicit a response in a conventional linear neuron because they are designed to contain no spatial-frequency components overlapping the neuron's luminance-defined passband. We measured neurons' responses to these stimuli as a function of both the relative spatial frequencies and relative orientations of the carrier and envelope. Neurons' responses to envelope stimuli were narrowband for the carrier spatial frequency, with optimal values ranging from 8- to 30-fold higher than the envelope spatial frequencies. Neurons' responses to the envelope stimuli were strongly dependent on the orientation of the envelope and less so on the orientation of the carrier. Although the selectivity for the carrier orientation was broader, neurons' responses were clearly tuned, suggesting that the source of the nonlinear input is cortical. There was no fixed relationship between the optimal carrier and envelope spatial frequencies or orientations, suggesting that the nonlinear neurons responding to these stimuli could respond to a variety of stimuli defined by changes in scale or orientation.
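The defining property of such envelope stimuli, energy at the carrier frequency and its sidebands but none at the envelope frequency itself, can be verified directly. A minimal one-dimensional sketch (the frequencies are illustrative, chosen as integer cycles per window so FFT bins line up, and are not those used in the study):

```python
import numpy as np

# 1-D contrast-modulated ("envelope") stimulus: a high-SF carrier whose
# contrast is modulated by a low-SF envelope.
n = 1024                     # samples across the window
x = np.arange(n) / n         # normalized spatial coordinate
f_carrier, f_env = 64, 4     # cycles per window (a 16-fold ratio)

envelope = 0.5 * (1 + np.cos(2 * np.pi * f_env * x))   # 0..1 modulator
stimulus = envelope * np.cos(2 * np.pi * f_carrier * x)

spectrum = np.abs(np.fft.rfft(stimulus)) / n
# Energy sits at the carrier and its sidebands, not at the envelope frequency,
# so a linear neuron tuned to low SFs sees nothing in its passband:
print(spectrum[f_env])                                           # ~0
print(spectrum[f_carrier])                                       # carrier
print(spectrum[f_carrier - f_env], spectrum[f_carrier + f_env])  # sidebands
```

Algebraically, the product expands to components at f_carrier and f_carrier ± f_env only, which is why detection of the envelope requires a nonlinearity.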


2015 ◽  
Vol 45 (10) ◽  
pp. 2111-2122 ◽  
Author(s):  
W. Li ◽  
T. M. Lai ◽  
C. Bohon ◽  
S. K. Loo ◽  
D. McCurdy ◽  
...  

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling.

Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems.

Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces.

Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.


1998 ◽  
Vol 10 (2) ◽  
pp. 199-215 ◽  
Author(s):  
Alexander Grunewald ◽  
Stephen Grossberg

This article develops a neural model of how sharp disparity tuning can arise through experience-dependent development of cortical complex cells. This learning process clarifies how complex cells can binocularly match left and right eye image features with the same contrast polarity, yet also pool signals with opposite contrast polarities. Antagonistic rebounds between LGN ON and OFF cells and cortical simple cells sensitive to opposite contrast polarities enable anticorrelated simple cells to learn to activate a shared set of complex cells. Feedback from binocularly tuned cortical cells to monocular LGN cells is proposed to carry out a matching process that dynamically stabilizes the learning process. This feedback represents a type of matching process that is elaborated at higher visual processing areas into a volitionally controllable type of attention. We show stable learning when both of these properties hold. Learning adjusts the initially coarsely tuned disparity preference to match the disparities present in the environment, and the tuning width decreases to yield high disparity selectivity, which enables the model to quickly detect image disparities. Learning is impaired in the absence of either antagonistic rebounds or corticogeniculate feedback. The model also helps to explain psychophysical and neurobiological data about adult 3-D vision.


Author(s):  
Zhanshen Feng

With the progress of multimedia image processing technology and the rapid growth of image data, efficiently extracting interesting and valuable information from huge volumes of image data, while filtering out redundant data, has become an urgent problem in image processing and computer vision. In recent years, image detection, one of the important branches of computer vision, has been used to assist and improve a range of visual processing tasks, and it has been widely applied in fields such as scene classification, visual tracking, object redirection, and semantic segmentation. Intelligent algorithms have strong nonlinear mapping capability, data processing capacity, and generalization ability. The support vector machine (SVM) uses the structural risk minimization principle to construct the optimal classification hyperplane in the attribute space, so that the classifier attains the global optimum and its expected risk satisfies a certain upper bound with a certain probability over the entire sample space. This paper combines an SVM with the artificial fish swarm algorithm (AFSA) for parameter optimization, building an AFSA-SVM classification model to achieve intelligent identification of image features and to provide a reliable technological means of accelerating sensing technology. Experimental results show that AFSA-SVM achieves better classification accuracy, indicating that the proposed algorithm can effectively realize intelligent identification of image features.
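The optimize-then-classify pipeline described above can be sketched on toy data. AFSA itself is not reproduced here: a plain random search over the SVM regularization parameter stands in for the swarm update, the SVM is a simple linear one trained by sub-gradient descent, and the data are synthetic, so none of the specifics below are the paper's.

```python
import numpy as np

# Hedged sketch of the AFSA-SVM pipeline. A linear SVM is trained by
# sub-gradient descent on the hinge loss; the regularization parameter C is
# tuned by a random search that stands in for the artificial fish swarm
# algorithm. Data, optimizer, and ranges are illustrative.

rng = np.random.default_rng(0)

# Toy two-class "image feature" data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

def train_svm(X, y, C, epochs=300, lr=0.1):
    """Minimize 0.5*||w||^2 + C * mean(hinge loss) by sub-gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1                    # margin violators
        w -= lr * (w - C * (y[mask][:, None] * X[mask]).sum(axis=0) / n)
        b -= lr * (-C * y[mask].sum() / n)
    return w, b

def accuracy(X, y, w, b):
    return np.mean(np.sign(X @ w + b) == y)

# Random search over C stands in for the AFSA swarm optimization step.
best = max((accuracy(X, y, *train_svm(X, y, C)), C)
           for C in 10 ** rng.uniform(-2, 1, size=10))
print(f"best training accuracy {best[0]:.2f} at C={best[1]:.3g}")
```

A real AFSA implementation would replace the random draws with a population of candidate (C, gamma) "fish" that move toward better-scoring neighbors; the surrounding pipeline is unchanged.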


2019 ◽  
Author(s):  
Johannes Burge ◽  
Victor Rodriguez-Lopez ◽  
Carlos Dorronsoro

Monovision corrections are a common treatment for presbyopia. Each eye is fit with a lens that sharply focuses light from a different distance, causing the image in one eye to be blurrier than the other. Millions of people in the United States and Europe have monovision corrections, but little is known about how differential blur affects motion perception. We investigated by measuring the Pulfrich effect, a stereo-motion phenomenon first reported nearly 100 years ago. When a moving target is viewed with unequal retinal illuminance or contrast in the two eyes, the target appears to be closer or further in depth than it actually is, depending on its frontoparallel direction. The effect occurs because the image with lower illuminance or contrast is processed more slowly. The mismatch in processing speed causes a neural disparity, which results in the illusory motion in depth. What happens with differential blur? Remarkably, differential blur causes a reverse Pulfrich effect, an apparent paradox. Blur reduces contrast and should therefore cause processing delays. But the reverse Pulfrich effect implies that the blurry image is processed more quickly. The paradox is resolved by recognizing that: i) blur reduces the contrast of high-frequency image components more than low-frequency image components, and ii) high spatial frequencies are processed more slowly than low spatial frequencies, all else equal. Thus, this new illusion—the reverse Pulfrich effect—can be explained by known properties of the early visual system. A quantitative analysis shows that the associated misperceptions are large enough to impact public safety.
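Point (i) above follows from blur acting as a low-pass filter: for an idealized Gaussian blur kernel, the contrast attenuation at each spatial frequency is given by its modulation transfer function, exp(-2*pi^2*sigma^2*f^2). A minimal sketch with illustrative values (not parameters from the study):

```python
import math

def gaussian_mtf(f_cpd: float, sigma_deg: float) -> float:
    """Contrast attenuation of a Gaussian blur (std sigma, deg) at spatial
    frequency f in cycles per degree."""
    return math.exp(-2 * math.pi**2 * sigma_deg**2 * f_cpd**2)

sigma = 0.05                       # blur spread in degrees (illustrative)
for f in (1.0, 8.0):               # a low and a high spatial frequency (c/deg)
    print(f"{f:.0f} c/deg: contrast scaled by {gaussian_mtf(f, sigma):.3f}")
```

With these numbers the low frequency keeps most of its contrast while the high frequency loses most of it, which, combined with slower processing of high spatial frequencies, yields the net speed-up of the blurred image that the reverse Pulfrich effect implies.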

