Cross-dataset reproducibility of human retinotopic maps

NeuroImage ◽  
2021 ◽  
Vol 244 ◽  
pp. 118609 ◽  
Author(s):  
Marc M. Himmelberg ◽  
Jan W. Kurzawski ◽  
Noah C. Benson ◽  
Denis G. Pelli ◽  
Marisa Carrasco ◽  
...  
2006 ◽  
Vol 274 (1611) ◽  
pp. 827-832 ◽  
Author(s):  
Colin R Tosh ◽  
Andrew L Jackson ◽  
Graeme D Ruxton

Individuals of many quite distantly related animal species find each other attractive and stay together for long periods in groups. We present a mechanism for mixed-species grouping in which individuals from different-looking prey species come together because the appearance of the mixed-species group is visually confusing to shared predators. Using an artificial neural network model of retinotopic mapping in predators, we train networks on random projections of single- and mixed-species prey groups and then test the ability of networks to reconstruct individual prey items from mixed-species groups in a retinotopic map. Over the majority of parameter space, cryptic prey items benefit from association with conspicuous prey because this particular visual combination worsens predator targeting of cryptic individuals. However, this benefit is not mutual as conspicuous prey tends to be targeted most poorly when in same-species groups. Many real mixed-species groups show the asymmetry in willingness to initiate and maintain the relationship predicted by our study. The agreement of model predictions with published empirical work, the efficacy of our modelling approach in previous studies, and the taxonomic ubiquity of retinotopic maps indicate that we may have uncovered an important, generic selective agent in the evolution of mixed-species grouping.
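The modelling paradigm described above — train a network on projections of prey groups, then score how well it reconstructs individual prey in a retinotopic map — can be illustrated with a drastically simplified sketch. This is not the authors' actual model: it substitutes a one-layer least-squares decoder for their neural network, uses a made-up 1-D "retina", and invents the contrast values (0.3 for cryptic, 1.0 for conspicuous) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

RETINA = 32   # size of the toy 1-D "retina" (assumption, not from the paper)
N_TRAIN = 2000

def render_scene(positions, contrasts, noise=0.05):
    """Project prey items onto a noisy 1-D retinal image."""
    img = rng.normal(0.0, noise, RETINA)
    for p, c in zip(positions, contrasts):
        img[p] += c
    return img

def make_dataset(n, mixed):
    """Scenes of two prey; mixed groups pair a cryptic and a conspicuous item."""
    X = np.empty((n, RETINA))
    Y = np.empty((n, RETINA))
    for i in range(n):
        pos = rng.choice(RETINA, size=2, replace=False)
        con = [0.3, 1.0] if mixed else [0.3, 0.3]  # illustrative contrasts
        X[i] = render_scene(pos, con)
        target = np.zeros(RETINA)
        target[pos] = 1.0           # ideal retinotopic reconstruction
        Y[i] = target
    return X, Y

def fit_linear_map(X, Y):
    """Least-squares 'network': one linear layer from image to map."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def targeting_error(W, X, Y):
    """Mean squared error of the reconstructed retinotopic map."""
    return float(np.mean((X @ W - Y) ** 2))

Xs, Ys = make_dataset(N_TRAIN, mixed=False)
Xm, Ym = make_dataset(N_TRAIN, mixed=True)
Ws, Wm = fit_linear_map(Xs, Ys), fit_linear_map(Xm, Ym)
err_single = targeting_error(Ws, *make_dataset(500, mixed=False))
err_mixed = targeting_error(Wm, *make_dataset(500, mixed=True))
print(err_single, err_mixed)
```

Comparing reconstruction error between the single- and mixed-species conditions mirrors, in spirit, the paper's test of whether mixed-species groups degrade predator targeting; the paper's actual effect depends on its full network model and parameter space, which this linear sketch does not capture.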


PLoS ONE ◽  
2012 ◽  
Vol 7 (5) ◽  
pp. e36859 ◽  
Author(s):  
Linda Henriksson ◽  
Juha Karvonen ◽  
Niina Salminen-Vaparanta ◽  
Henry Railo ◽  
Simo Vanni

Cell ◽  
2018 ◽  
Vol 173 (2) ◽  
pp. 485-498.e11 ◽  
Author(s):  
Filipe Pinto-Teixeira ◽  
Clara Koo ◽  
Anthony Michael Rossi ◽  
Nathalie Neriec ◽  
Claire Bertet ◽  
...  

2020 ◽  
Vol 117 (25) ◽  
pp. 14453-14463 ◽  
Author(s):  
Kévin Blaize ◽  
Fabrice Arcizet ◽  
Marc Gesnik ◽  
Harry Ahnine ◽  
Ulisse Ferrari ◽  
...  

Deep regions of the brain are not easily accessible to investigation at the mesoscale level in awake animals or humans. We have recently developed a functional ultrasound (fUS) technique that enables imaging hemodynamic responses to visual tasks. Using fUS imaging on two awake nonhuman primates performing a passive fixation task, we constructed retinotopic maps at depth in the visual cortex (V1, V2, and V3) in the calcarine and lunate sulci. The maps could be acquired in a single one-hour session with relatively few presentations of the stimuli. The spatial resolution of the technology is illustrated by mapping patterns similar to ocular dominance (OD) columns within superficial and deep layers of the primary visual cortex. These acquisitions using fUS suggested that OD selectivity is mostly present in layer IV but with extensions into layers II/III and V. This imaging technology provides a new mesoscale approach to the mapping of brain activity at high spatiotemporal resolution in awake subjects within the whole depth of the cortex.


2015 ◽  
Vol 32 ◽  
Author(s):  
Jonathan Winawer ◽  
Nathan Witthoft

The ventral surface of the human occipital lobe contains multiple retinotopic maps. The most posterior of these maps is considered a potential homolog of macaque V4, and referred to as human V4 (“hV4”). The location of the hV4 map, its retinotopic organization, its role in visual encoding, and the cortical areas it borders have been the subject of considerable investigation and debate over the last 25 years. We review the history of this map and adjacent maps in ventral occipital cortex, and consider the different hypotheses for how these ventral occipital maps are organized. Advances in neuroimaging, computational modeling, and characterization of the nearby anatomical landmarks and functional brain areas have improved our understanding of where human V4 is and what kind of visual representations it contains.


Neuroreport ◽  
1999 ◽  
Vol 10 (17) ◽  
pp. 3479-3483 ◽  
Author(s):  
Isabelle Israël ◽  
Jocelyne Ventre-Dominey ◽  
Pierre Denise

2011 ◽  
Vol 366 (1564) ◽  
pp. 516-527 ◽  
Author(s):  
Sebastiaan Mathôt ◽  
Jan Theeuwes

In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.


2003 ◽  
Vol 54 (1) ◽  
pp. 51-65 ◽  
Author(s):  
C. Ernesto Restrepo ◽  
Paul R. Manger ◽  
Christian Spenger ◽  
Giorgio M. Innocenti

2015 ◽  
Vol 35 (27) ◽  
pp. 9836-9847 ◽  
Author(s):  
K. DeSimone ◽  
J. D. Viviano ◽  
K. A. Schneider
