Computational models of cortical visual processing.

1996 ◽  
Vol 93 (2) ◽  
pp. 623-627 ◽  
Author(s):  
D. J. Heeger ◽  
E. P. Simoncelli ◽  
J. A. Movshon


2016 ◽  
Vol 28 (1) ◽  
pp. 111-124 ◽  
Author(s):  
Sabrina Walter ◽  
Christian Keitel ◽  
Matthias M. Müller

Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This “different-hemifield advantage” has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield (“within-hemifield” conditions) or separated by the vertical meridian (“across-hemifield” conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during “across-hemifield” relative to “within-hemifield” conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the “across-hemifield” conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between “across-hemifield” and “within-hemifield” conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
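
The SSVEP measure used here rests on frequency tagging: each LED flickers at its own frequency, so the amplitude of the EEG spectrum at that frequency indexes processing of, and attention to, that particular LED. The sketch below illustrates this logic on synthetic data; the sampling rate, epoch length, flicker frequencies, and amplitudes are illustrative assumptions and do not correspond to the parameters of the study.

```python
import numpy as np

# Minimal frequency-tagging sketch on synthetic data (not the study's pipeline).
fs = 500                               # sampling rate in Hz (assumption)
t = np.arange(0, 2.0, 1 / fs)          # one 2-second epoch
tag_freqs = [8.0, 10.0, 12.0, 14.0, 16.0, 18.0]  # hypothetical flicker frequency per LED

# Synthetic EEG: the two "attended" LEDs (indices 1 and 3) drive larger responses.
amps = np.array([0.5, 1.2, 0.5, 1.1, 0.5, 0.5])
rng = np.random.default_rng(0)
eeg = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, tag_freqs))
eeg += rng.normal(scale=0.8, size=t.size)

# SSVEP amplitude = magnitude of the Fourier component at each tagged frequency.
spectrum = 2 * np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in tag_freqs:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f:5.1f} Hz tag: SSVEP amplitude {amp:.2f}")
```

With this setup, the tagged frequencies of the attended LEDs show larger spectral amplitudes than those of the ignored LEDs, which is the sense in which SSVEP amplitude can index the allocation of attention to individual stimuli.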


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares with 0.3 bits when only the 16-msec target stimulus is shown, and with a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
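
The values quoted in bits are estimates of the Shannon mutual information between stimulus identity and the neuronal response. A minimal sketch of that computation is given below, using a hypothetical stimulus-by-response histogram of spike counts; the numbers are invented for illustration, and the plug-in estimator shown omits the bias corrections needed when trial counts are limited.

```python
import numpy as np

def mutual_information(counts):
    """Plug-in estimate of I(S; R) in bits from a joint histogram
    (rows: stimuli, columns: response bins, entries: trial counts)."""
    p = counts / counts.sum()              # joint probability p(s, r)
    ps = p.sum(axis=1, keepdims=True)      # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)      # marginal p(r)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))

# Hypothetical data: 4 face stimuli x 5 spike-count bins, tallied over trials.
# A long SOA (weak masking) leaves stimulus-selective response distributions ...
counts_long_soa = np.array([[ 1,  2,  5, 12, 20],
                            [18, 14,  6,  2,  0],
                            [ 3, 20, 12,  4,  1],
                            [15,  5,  8,  7,  5]])
# ... while a short SOA flattens the distributions across stimuli.
counts_short_soa = np.array([[ 8,  9, 10,  7,  6],
                             [10,  8,  9,  7,  6],
                             [ 9, 10,  8,  6,  7],
                             [ 8,  9,  9,  8,  6]])

print(f"long SOA : {mutual_information(counts_long_soa):.2f} bits")
print(f"short SOA: {mutual_information(counts_short_soa):.2f} bits")
```

The short-SOA histogram yields far less information than the long-SOA one even though overall spike counts are similar, which mirrors the point that masking attenuates the selective, information-carrying part of the response rather than the firing as a whole.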


2007 ◽  
Vol 26 (2) ◽  
pp. 529-536 ◽  
Author(s):  
Michael Siniatchkin ◽  
Friederike Moeller ◽  
Alex Shepherd ◽  
Hartwig Siebner ◽  
Ulrich Stephani

2020 ◽  
Vol 30 (8) ◽  
pp. 4496-4514 ◽  
Author(s):  
Fakhereh Movahedian Attar ◽  
Evgeniya Kirilina ◽  
Daniel Haenelt ◽  
Kerrin J Pine ◽  
Robert Trampel ◽  
...  

Short association fibers (U-fibers) connect proximal cortical areas and constitute the majority of white matter connections in the human brain. U-fibers play an important role in brain development, function, and pathology but are underrepresented in current descriptions of the human brain connectome, primarily due to methodological challenges in diffusion magnetic resonance imaging (dMRI) of these fibers. High spatial resolution and dedicated fiber and tractography models are required to reliably map the U-fibers. Moreover, limited quantitative knowledge of their geometry and distribution makes validation of U-fiber tractography challenging. Submillimeter resolution diffusion MRI, facilitated by a cutting-edge MRI scanner with 300 mT/m maximum gradient amplitude, was used to map U-fiber connectivity between primary and secondary visual cortical areas (V1 and V2, respectively) in vivo. V1 and V2 retinotopic maps were obtained using functional MRI at 7T. The mapped V1–V2 connectivity was retinotopically organized, demonstrating higher connectivity for retinotopically corresponding areas in V1 and V2, as expected. The results were highly reproducible, as demonstrated by repeated measurements in the same participants and by an independent replication group study. This study demonstrates robust U-fiber connectivity mapping in vivo and is an important step toward the construction of a more complete human brain connectome.
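
The retinotopic organization of the mapped connectivity can be summarized as a matrix of connection strengths between retinotopic sectors of V1 and V2, in which retinotopically corresponding sectors should dominate. The sketch below illustrates that summary on a hypothetical connectivity matrix; it is not the authors' tractography pipeline, and all values are invented for illustration.

```python
import numpy as np

# Hypothetical V1-to-V2 connectivity matrix: entry [i, j] is the normalized
# streamline count from V1 sector i to V2 sector j (sectors = retinotopic bins).
rng = np.random.default_rng(0)
n_sectors = 8
conn = rng.random((n_sectors, n_sectors)) * 0.1
conn[np.arange(n_sectors), np.arange(n_sectors)] += 0.6  # matched sectors connect strongly

# Retinotopic specificity: fraction of each V1 sector's connectivity that
# targets the retinotopically corresponding V2 sector.
row_norm = conn / conn.sum(axis=1, keepdims=True)
specificity = np.diag(row_norm).mean()
print(f"mean fraction of connectivity to the matched V2 sector: {specificity:.2f}")
```

A specificity well above 1/n_sectors (the chance level for uniformly spread connectivity) is one simple way to express the diagonal dominance that retinotopic correspondence predicts.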


PLoS ONE ◽  
2011 ◽  
Vol 6 (9) ◽  
pp. e25607 ◽  
Author(s):  
Davide Bottari ◽  
Anne Caclin ◽  
Marie-Hélène Giard ◽  
Francesco Pavani

Perception ◽  
1994 ◽  
Vol 23 (10) ◽  
pp. 1111-1134 ◽  
Author(s):  
Nicholas J Wade

The visual motion aftereffect (MAE) was initially described after observation of movements in the natural environment, like those seen in rivers and waterfalls: stationary objects appeared to move briefly in the opposite direction. In the second half of the nineteenth century the MAE was displaced into the laboratory for experimental enquiry with the aid of Plateau's spiral. Such was the interest in the phenomenon that a major review of empirical and theoretical research was written in 1911. In the latter half of the present century novel stimuli (like drifting gratings, isoluminance patterns, spatial and luminance ramps, random-dot kinematograms, and first-order and second-order motions), introduced to study space and motion perception generally, have been applied to examine MAEs. Developing theories of cortical visual processing have drawn upon MAEs to provide a link between psychophysics and physiology; this has been most pronounced in the context of monocular and binocular channels in the visual system, the combination of colour and contour information, and in the cortical sites most associated with motion processing. The relatively unchanging characteristic of the study of MAEs has been the mode of measurement: duration continues to be used as an index of its strength, although measures of threshold elevation and nulling with computer-generated motions are becoming more prevalent. The MAE is a part of the armoury of motion phenomena employed to uncover the mysteries of vision. Over the last 150 years it has proved itself immensely adaptable to the shifts of fashion in visual science, and it is likely to continue in this vein.


ICANN ’93 ◽  
1993 ◽  
pp. 250-250
Author(s):  
Luigi Raffo ◽  
Silvio P. Sabatini ◽  
Giacomo Indiveri ◽  
Daniele D. Caviglia ◽  
Giacomo M. Bisio
