A Computational Model for Color Perception

2012 ◽  
Vol 8 (4) ◽  
pp. 387-415
Author(s):  
Marc Ebner

ABSTRACT Color is not a physical quantity of an object. It cannot be measured; we can only measure reflectance, i.e. the amount of light reflected at each wavelength. Nevertheless, we attach colors to the objects around us. A human observer perceives colors as approximately constant irrespective of the illuminant used to illuminate the scene. Colors are a very important cue in everyday life: they can be used to recognize or distinguish different objects. Currently, we do not yet know how the brain arrives at a color constant, or approximately color constant, descriptor, i.e. what computational processing is actually performed by the brain. What we need is a computational description of color perception in particular and of color vision in general. Only if we are able to write down a full computational theory of the visual system will we have understood how it works. With this contribution, a computational model of color perception is presented. This model is much simpler than previous theories, and it is able to compute a color constant descriptor even in the presence of spatially varying illuminants. According to this model, the cones respond approximately logarithmically to the irradiance entering the eye. Cells in V1 perform a change of coordinate system such that colors are represented along a red-green, a blue-yellow, and a black-white axis. Cells in V4 compute local space average color using a resistive grid formed by the cells themselves; the left and right hemispheres are connected via the corpus callosum. A color constant descriptor, presumably used for color-based object recognition, is computed by subtracting local space average color from the cone response within the rotated coordinate system.
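The pipeline described in this abstract (logarithmic cone response, opponent-axis rotation, local space average color, subtraction) can be sketched in a few lines. This is a minimal illustration, not Ebner's implementation: the particular rotation matrix, the log epsilon, and the iterated neighbor averaging standing in for the resistive grid are all assumptions for the sketch.

```python
import numpy as np

def color_constant_descriptor(img, iterations=200, p=0.05):
    """Hypothetical sketch of the model on an RGB image (H, W, 3), values > 0."""
    # 1. Cones: approximately logarithmic response to irradiance.
    log_img = np.log(img + 1e-6)

    # 2. V1: rotate into an opponent coordinate system
    #    (red-green, blue-yellow, black-white axes; assumed orthonormal basis).
    M = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
                  [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                  [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)]])
    opp = log_img @ M.T

    # 3. V4: local space average color via a resistive grid, approximated
    #    here by iterated averaging with the four grid neighbors.
    avg = opp.copy()
    for _ in range(iterations):
        neighbors = (np.roll(avg, 1, 0) + np.roll(avg, -1, 0) +
                     np.roll(avg, 1, 1) + np.roll(avg, -1, 1)) / 4.0
        avg = (1 - p) * neighbors + p * opp

    # 4. Descriptor: subtract local space average color from the
    #    (rotated) cone response.
    return opp - avg
```

The point of the construction is visible in log space: a (locally uniform) illuminant multiplies the light reaching the eye, which becomes an additive offset after the logarithm; subtracting the local space average removes that offset, so the descriptor is approximately invariant to the illuminant.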

2020 ◽  
Author(s):  
Alejandro Lerer ◽  
Hans Supèr ◽  
Matthias S. Keil

Abstract: The visual system is highly sensitive to spatial context when encoding luminance patterns. Context sensitivity has inspired many proposed neural mechanisms for explaining the perception of luminance (brightness). Here we propose a novel computational model for estimating the brightness of many visual illusions. We hypothesize that many aspects of brightness can be explained by a predictive coding mechanism, which reduces redundancy in edge representations on the one hand while enhancing non-redundant activity on the other (response equalization). Response equalization is implemented with a dynamic filtering process that adapts to each input image. Dynamic filtering is applied to the responses of complex cells in order to build a gain control map. The gain control map then acts on simple cell responses before they are used to create a brightness map via activity propagation. Our approach is successful in predicting many challenging visual illusions, including contrast effects, assimilation, and reverse contrast.

Author summary: We hardly notice that what we see is often different from the physical world "outside" of the brain. This means that the visual experience the brain actively constructs may differ from the actual physical properties of objects in the world. In this work, we propose a hypothesis about how the visual system of the brain may construct a representation of achromatic images. Because this process is ambiguous, we sometimes notice "errors" in our perception, which cause visual illusions. The challenge for theorists, therefore, is to propose computational principles that recreate a large number of visual illusions and explain why they occur. Notably, our proposed mechanism explains a broader set of visual illusions than any previously published proposal. We achieved this by trying to suppress predictable information. For example, if an image contains repetitive structures, these structures are predictable and are suppressed; in this way, non-predictable structures stand out. Predictive coding mechanisms act as early as the retina (which enhances luminance changes but suppresses uniform regions of luminance), and our computational model holds that this principle also acts at the next stage of the visual system, where representations of perceived luminance (brightness) are created.
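The idea of response equalization can be loosely illustrated in one dimension. This is a hypothetical toy sketch, not the authors' model (which operates on 2-D images with complex- and simple-cell filter banks): signed edge responses play the role of simple cells, local edge energy plays the role of complex cells, dividing by that energy equalizes edges of different contrast, and a cumulative sum stands in for activity propagation. The window size and `eps` are arbitrary choices.

```python
import numpy as np

def brightness_sketch(luminance, eps=1e-3, window=9):
    """Toy 1-D response equalization followed by edge integration."""
    # "Simple cells": signed luminance differences (edges).
    simple = np.diff(luminance, prepend=luminance[0])
    # "Complex cells": local edge energy (magnitude averaged over a window).
    energy = np.convolve(np.abs(simple), np.ones(window) / window, mode="same")
    # Gain control map: high local energy (predictable, repetitive structure)
    # is divided down, so non-predictable edges are relatively enhanced.
    gain = 1.0 / (energy + eps)
    equalized = gain * simple
    # "Activity propagation": integrate the equalized edges into brightness.
    return np.cumsum(equalized)
```

Because each edge is divided by its own local energy, a low-contrast step and a high-contrast step produce output steps of nearly the same size, which is the equalization effect the abstract describes.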


2020 ◽  
Author(s):  
Samson Chengetanai ◽  
Adhil Bhagwandin ◽  
Mads F. Bertelsen ◽  
Therese Hård ◽  
Patrick R. Hof ◽  
...  

2020 ◽  
pp. 304-312

Background: Insult to the brain, whether from trauma or other etiologies, can have a devastating effect on an individual. Symptoms can be many and varied, depending on the location and extent of the damage. This presentation can be a challenge to the optometrist charged with treating the sequelae of such an event, as multiple functional components of the visual system can be affected. Case Report: This paper describes the diagnosis and subsequent ophthalmic management of an acquired brain injury in a 22-year-old male on active duty in the US Army. After developing acute neurological symptoms, the patient was diagnosed with a pilocytic astrocytoma of the cerebellum. Emergent neurosurgery to treat the neoplasm resulted in iatrogenic cranial nerve palsies and a hemispheric syndrome. Over the next 18 months, he was managed by a series of providers, including a strabismus surgeon, until presenting to our clinic. Lenses, prism, and in-office and out-of-office neuro-optometric rehabilitation therapy were used to improve his functioning and make progress toward his goals. Conclusions: Pilocytic astrocytomas are among the most common primary brain tumors, and the vast majority are benign with an excellent surgical prognosis. Although the most common site is the cerebellum, the visual pathway is also frequently affected. When the eye or visual system is affected, optometrists have the ability to drastically improve quality of life with neuro-optometric rehabilitation.


2009 ◽  
Vol 05 (01) ◽  
pp. 115-121
Author(s):  
ANDREW R. PARKER ◽  
H. JOHN CAULFIELD

"What comes first: the chicken or the egg?" Eyes and vision were a great concern for Darwin. Recently, religious fundamentalists have started to attack evolution on the grounds that this is a chicken and egg problem. How could eyes improve without the brain module to use the new information that eye provides? But how could the brain evolve a neural circuit to process data not available to it until a new eye capability emerges? We argue that neural plasticity in the brain allows it to make use of essentially any useful information the eye can produce. And it does so easily within the animal's lifetime. Richard Gregory suggested something like this 40 years ago. Our work resolves a problem with his otherwise-insightful work.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Arthur-Ervin Avramiea ◽  
Richard Hardstone ◽  
Jan-Matthis Lueckmann ◽  
Jan Bím ◽  
Huibert D Mansvelder ◽  
...  

Understanding why identical stimuli give differing neuronal responses and percepts is a central challenge in research on attention and consciousness. Ongoing oscillations reflect functional states that bias processing of incoming signals through amplitude and phase. It is not known, however, whether the effect of phase or amplitude on stimulus processing depends on the long-term global dynamics of the networks generating the oscillations. Here, we show, using a computational model, that the ability of networks to regulate stimulus response based on pre-stimulus activity requires near-critical dynamics—a dynamical state that emerges from networks with balanced excitation and inhibition, and that is characterized by scale-free fluctuations. We also find that networks exhibiting critical oscillations produce differing responses to the largest range of stimulus intensities. Thus, the brain may bring its dynamics close to the critical state whenever such network versatility is required.
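Near-critical dynamics of this kind are often illustrated with a branching process, a standard simplification rather than the authors' network model: each active unit activates on average σ others in the next time step, and σ ≈ 1 marks the critical point (balanced excitation and inhibition) at which activity avalanches become scale-free. The parameters below (trial count, size cap) are arbitrary choices for the sketch.

```python
import numpy as np

def avalanche_sizes(sigma, n_trials=2000, n_units=500, seed=0):
    """Toy branching-process sketch of (near-)critical network dynamics.

    sigma is the branching ratio: the mean number of units each active
    unit activates in the next time step. sigma < 1 is subcritical,
    sigma ~ 1 is critical. Avalanche sizes are capped at n_units.
    """
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_trials):
        active, size = 1, 1
        while active > 0 and size < n_units:
            # Each active unit triggers a Poisson number of descendants
            # with mean sigma.
            active = rng.poisson(sigma * active)
            size += active
        sizes.append(min(size, n_units))
    return np.array(sizes)
```

At σ = 1, avalanche sizes span the whole range from single spikes up to the system-size cap (heavy-tailed), whereas subcritical runs stay uniformly small; this wide dynamic range is the kind of versatility the abstract attributes to the critical state.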

