Serial reproduction reveals the geometry of visuospatial representations

2021
Vol 118 (13)
pp. e2012938118
Author(s):  
Thomas A. Langlois ◽  
Nori Jacoby ◽  
Jordan W. Suchow ◽  
Thomas L. Griffiths

An essential function of the human visual system is to locate objects in space and navigate the environment. Due to limited resources, the visual system achieves this by combining imperfect sensory information with a belief state about locations in a scene, resulting in systematic distortions and biases. These biases can be captured by a Bayesian model in which internal beliefs are expressed in a prior probability distribution over locations in a scene. We introduce a paradigm that enables us to measure these priors by iterating a simple memory task where the response of one participant becomes the stimulus for the next. This approach reveals an unprecedented richness and level of detail in these priors, suggesting a different way to think about biases in spatial memory. A prior distribution on locations in a visual scene can reflect the selective allocation of coding resources to different visual regions during encoding (“efficient encoding”). This selective allocation predicts that locations in the scene will be encoded with variable precision, in contrast to previous work that has assumed fixed encoding precision regardless of location. We demonstrate that perceptual biases covary with variations in discrimination accuracy, a finding that is aligned with simulations of our efficient encoding model but not the traditional fixed encoding view. This work demonstrates the promise of using nonparametric data-driven approaches that combine crowdsourcing with the careful curation of information transmission within social networks to reveal the hidden structure of shared visual representations.
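
To make the serial-reproduction paradigm concrete, here is a minimal Python sketch of a transmission chain of Bayesian observers, in which each participant's posterior-mean reproduction becomes the next participant's stimulus. The one-dimensional grid, the two-mode prior, and the noise level are illustrative assumptions, not the paper's stimuli or fitted parameters.

```python
# A toy transmission chain: each Bayesian observer reproduces a location
# from a noisy percept, and the reproduction becomes the next stimulus.
# Grid, prior, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

x_grid = np.linspace(0.0, 1.0, 501)              # locations in [0, 1]
prior = (np.exp(-0.5 * ((x_grid - 0.3) / 0.05) ** 2)
         + np.exp(-0.5 * ((x_grid - 0.7) / 0.05) ** 2))  # two "landmark" modes
prior /= prior.sum()

sigma = 0.08                                     # sensory (encoding) noise

def reproduce(stimulus):
    """One participant: noisy percept -> posterior-mean response."""
    percept = stimulus + rng.normal(0.0, sigma)
    likelihood = np.exp(-0.5 * ((x_grid - percept) / sigma) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return float(np.sum(x_grid * posterior))

x = 0.5                                          # initial stimulus
chain = [x]
for _ in range(20):                              # 20 participants in the chain
    x = reproduce(x)
    chain.append(x)

print(np.round(chain, 3))                        # drifts toward a prior mode
```

Because every step resamples through the same prior, the chain's responses drift toward the prior's modes, which is what lets the iterated task expose the hidden prior.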

1976
Vol 9 (3)
pp. 311-375
Author(s):  
Werner Reichardt ◽  
Tomaso Poggio

An understanding of sensory information processing in the nervous system will probably require investigations with a variety of ‘model’ systems at different levels of complexity. Our choice of a suitable model system was constrained by two conflicting requirements: on one hand, the information processing properties of the system should be rather complex; on the other hand, the system should be amenable to a quantitative analysis. In this sense the fly represents a compromise. In these two papers we explore how optical information is processed by the fly's visual system. Our objective is to unravel the logical organization of the fly's visual system and its underlying functional and computational principles. Our approach is at a highly integrative level. There are different levels of analysing and ‘understanding’ complex systems, like a brain or a sophisticated computer.


Author(s):  
Robert J. Hendley ◽  
Barry Wilkins ◽  
Russell Beale

This article presents a mechanism for generating representations that are not only visually appealing but also effective for document visualisation. The mechanism is based on an organic growth model driven by features of the object to be visualised. In the examples used, the authors focus on the visualisation of text documents, but the methods are readily transferable to other domains. They are also scalable to documents of any size. The objective of this research is to build visual representations that enable the human visual system to efficiently and effectively recognise documents without the need for higher-level cognitive processing. In particular, the authors want the user to be able to recognise similarities within sets of documents and to easily discriminate between dissimilar objects.
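
As a rough illustration of the idea, the following Python sketch derives growth parameters from simple text statistics and grows a recursive branching structure from them; the feature mapping and growth rules are invented for illustration and are not the authors' actual model.

```python
# Feature-driven "organic growth" toy: text statistics steer the parameters
# of a recursive branching structure. Feature mapping and growth rules are
# invented for illustration.
import math

def features(text):
    words = text.split()
    avg_len = sum(map(len, words)) / len(words)      # mean word length
    richness = len(set(words)) / len(words)          # vocabulary richness
    return avg_len, richness

def grow(x, y, angle, length, depth, spread, segments):
    """Recursively grow two child branches per node, collecting segments."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    grow(x2, y2, angle - spread, length * 0.7, depth - 1, spread, segments)
    grow(x2, y2, angle + spread, length * 0.7, depth - 1, spread, segments)

doc = "the quick brown fox jumps over the lazy dog the fox"
avg_len, richness = features(doc)
spread = 0.3 + richness                  # branching angle from vocabulary
depth = min(8, int(avg_len) + 3)         # recursion depth from word length
segments = []
grow(0.0, 0.0, math.pi / 2, 1.0, depth, spread, segments)
print(f"{len(segments)} segments; spread={spread:.2f} rad, depth={depth}")
```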


Author(s):  
Shewkar Ibrahim ◽  
Tarek Sayed

Enforcement agencies generally operate under a strict budget and with limited resources. For this reason, they are continually searching for new approaches to maximize the efficiency and effectiveness of their deployment. The Data-Driven Approaches to Crime and Traffic Safety (DDACTS) model attempts to identify opportunities where increased visibility of traffic enforcement can lead to a reduction in collision frequencies as well as criminal incidents. Previous research developed functions to model collisions and crime separately, despite evidence suggesting that the two events could be correlated. Additionally, little is known about the implications of automated enforcement programs for crime. This study developed a multivariate Poisson-lognormal model for the city of Edmonton to quantify the correlation between collisions and crime and to determine whether automated enforcement programs can also reduce crime within a neighborhood. The study found a high correlation (0.72) between collisions and crime, indicating that collision hotspots were also likely to be crime hotspots. The results also showed that increased enforcement presence reduced not only collisions but also crime. If a single deployment can achieve multiple objectives (e.g., reducing crime and collisions), then optimizing an agency's deployment strategy would decrease the demand on its resources and allow it to achieve more with less.
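
A minimal simulation sketch of the model class used here, a bivariate Poisson-lognormal in which collision and crime counts share correlated latent log-rates; the parameter values below are illustrative assumptions rather than estimates from the Edmonton data.

```python
# Bivariate Poisson-lognormal simulation: collision and crime counts share
# correlated latent log-rates. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 2000                                     # neighborhoods
rho = 0.72                                   # latent correlation (as reported)
mu = np.array([2.0, 1.5])                    # mean log-rates: collisions, crime
sd = np.array([0.6, 0.8])
cov = np.array([[sd[0] ** 2,          rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2         ]])

log_rates = rng.multivariate_normal(mu, cov, size=n)
counts = rng.poisson(np.exp(log_rates))      # observed collision/crime counts

print(f"latent corr ~ {np.corrcoef(log_rates.T)[0, 1]:.2f}, "
      f"count corr ~ {np.corrcoef(counts.T)[0, 1]:.2f}")
```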


1997
Vol 78 (2)
pp. 1062-1081
Author(s):  
Wendy A. Suzuki ◽  
Earl K. Miller ◽  
Robert Desimone

Suzuki, Wendy A., Earl K. Miller, and Robert Desimone. Object and place memory in the macaque entorhinal cortex. J. Neurophysiol. 78: 1062–1081, 1997. Lesions of the entorhinal cortex in humans, monkeys, and rats impair memory for a variety of kinds of information, including memory for objects and places. To begin to understand the contribution of entorhinal cells to different forms of memory, responses of entorhinal cells were recorded as monkeys performed either an object or place memory task. The object memory task was a variation of delayed matching to sample. A sample picture was presented at the start of the trial, followed by a variable sequence of zero to four test pictures, ending with a repetition of the sample (i.e., a match). The place memory task was a variation of delayed matching to place. In this task, a cue stimulus was presented at a variable sequence of one to four “places” on a computer screen, ending with a repetition of one of the previously shown places (i.e., a match). For both tasks, the animals were rewarded for releasing a bar to the match. To solve these tasks, the monkey must 1) discriminate the stimuli, 2) maintain a memory of the appropriate stimuli during the course of the trial, and 3) evaluate whether a test stimulus matches previously presented stimuli. The responses of entorhinal cortex neurons were consistent with a role in all three of these processes in both tasks. We found that 47% and 55% of the visually responsive entorhinal cells responded selectively to the different objects or places presented during the object or place task, respectively. Similar to previous findings in prefrontal but not perirhinal cortex on the object task, some entorhinal cells had sample-specific delay activity that was maintained throughout all of the delay intervals in the sequence. For the place task, some cells had location-specific maintained activity in the delay immediately following a specific cue location. In addition, 59% and 22% of the visually responsive cells recorded during the object and place task, respectively, responded differently to the test stimuli according to whether they were matching or nonmatching to the stimuli held in memory. Responses of some cells were enhanced to matching stimuli, whereas others were suppressed. This suppression or enhancement typically occurred well before the animals' behavioral response, suggesting that this information could be used to perform the task. These results indicate that entorhinal cells receive sensory information about both objects and spatial locations and that their activity carries information about objects and locations held in short-term memory.
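
The two trial structures can be summarized in a short Python sketch; the stimulus pools and the grid of screen positions are illustrative assumptions.

```python
# Toy generators for the two trial structures. Stimulus pools and screen
# grid are illustrative assumptions.
import random

random.seed(0)
OBJECTS = ["A", "B", "C", "D", "E", "F"]
PLACES = [(r, c) for r in range(3) for c in range(3)]

def object_trial():
    """Delayed match-to-sample: sample, 0-4 nonmatching tests, then the match."""
    sample = random.choice(OBJECTS)
    n_tests = random.randint(0, 4)
    tests = random.sample([o for o in OBJECTS if o != sample], k=n_tests)
    return [("sample", sample)] + [("test", t) for t in tests] + [("match", sample)]

def place_trial():
    """Delayed match-to-place: cue at 1-4 places, then repeat one of them."""
    cues = random.sample(PLACES, k=random.randint(1, 4))
    return [("cue", p) for p in cues] + [("match", random.choice(cues))]

print(object_trial())   # bar release is rewarded at the "match" event
print(place_trial())
```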


2009
Vol 10 (1)
pp. 65-81
Author(s):  
Christian Tominski

Visualization has become an increasingly important tool to support the exploration and analysis of the large volumes of data we face today. However, the interests and needs of users are still not considered sufficiently. The goal of this work is to shift the focus to the user. To that end, we apply the concept of event-based visualization, which combines event-based methodology with visualization technology. Previous approaches that make use of events are mostly specific to a particular application case and hence cannot be applied elsewhere. We introduce a novel general model of event-based visualization that comprises three fundamental stages. (1) Users are enabled to specify what their interests are. (2) During visualization, matches of these interests are sought in the data. (3) It is then possible to automatically adjust visual representations according to the detected matches. This way, it is possible to generate visual representations that better reflect what users need for the task at hand. The model's generality allows its application in many visualization contexts. We substantiate the general model with specific data-driven events that focus on the relational data so prevalent in today's visualization scenarios. We show how the developed methods and concepts can be implemented in an interactive event-based visualization framework, which includes event-enhanced visualizations for temporal and spatio-temporal data.
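
The three stages can be illustrated with a small Python sketch; the record schema, the example predicate, and the visual parameters are assumptions made for illustration.

```python
# The three stages of event-based visualization in miniature:
# (1) interests as predicates, (2) match detection, (3) visual adjustment.
# Record schema and visual parameters are illustrative assumptions.

DEFAULT_STYLE = {"color": "gray", "size": 1.0}

data = [{"id": 1, "value": 12}, {"id": 2, "value": 95}, {"id": 3, "value": 47}]

# Stage 1: each interest is (name, predicate, visual adjustment).
events = [
    ("high_value", lambda r: r["value"] > 80, {"color": "red", "size": 3.0}),
]

def visualize(data, events):
    """Stages 2 and 3: seek matches and adjust the representation."""
    marks = []
    for record in data:
        style = dict(DEFAULT_STYLE)
        for name, predicate, adjustment in events:
            if predicate(record):
                style.update(adjustment)     # emphasize matched records
        marks.append((record["id"], style))
    return marks

for mark in visualize(data, events):
    print(mark)
```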


2018 ◽  
Author(s):  
Miaomiao Jin ◽  
Jeffrey M. Beck ◽  
Lindsey L. Glickfeld

Sensory information is encoded by populations of cortical neurons. Yet it is unknown how this information is used for even simple perceptual choices, such as discriminating orientation. To determine the computation underlying this perceptual choice, we took advantage of the robust adaptation in the mouse visual system. We find that adaptation increases animals' thresholds for orientation discrimination. This was unexpected, since optimal computations that take advantage of all available sensory information predict that the shift in tuning and the increase in signal-to-noise ratio in the adapted condition should improve discrimination. Instead, we find that the effects of adaptation on behavior can be explained by the appropriate reliance of the perceptual choice circuits on target-preferring neurons, combined with a failure to discount neurons that prefer the distractor. This suggests that to solve this task the circuit has adopted a suboptimal strategy that discards important task-related information to implement a feed-forward visual computation.
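
A minimal Python sketch of the contrast drawn here: a readout that weights only target-preferring neurons versus one that also discounts distractor-preferring neurons. The tuning curves, noise model, and readout weights are illustrative assumptions.

```python
# Optimal vs. suboptimal population readout for target/distractor
# discrimination. Tuning curves, noise, and weights are illustrative.
import numpy as np

rng = np.random.default_rng(2)
prefs = np.linspace(0, 180, 18, endpoint=False)   # preferred orientations

def circ_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 180 - d)

def responses(stim, n_trials=500):
    """Gaussian tuning (width 20 deg) plus baseline, Poisson noise."""
    rates = 20 * np.exp(-0.5 * (circ_dist(prefs, stim) / 20) ** 2) + 1
    return rng.poisson(rates, size=(n_trials, prefs.size))

target, distractor = 0.0, 90.0
r_t, r_d = responses(target), responses(distractor)

w_target = np.exp(-0.5 * (circ_dist(prefs, target) / 20) ** 2)
w_distr = np.exp(-0.5 * (circ_dist(prefs, distractor) / 20) ** 2)

def dprime(a, b):
    return (a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

# Suboptimal: rely on target-preferring neurons only.
sub = dprime(r_t @ w_target, r_d @ w_target)
# Optimal-style: also discount distractor-preferring neurons.
opt = dprime(r_t @ (w_target - w_distr), r_d @ (w_target - w_distr))
print(f"d' target-only = {sub:.2f}, with distractor discounting = {opt:.2f}")
```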


2018 ◽  
Author(s):  
Sreya Banerjee ◽  
Walter J. Scheirer ◽  
Lei Li

We propose a computational model of vision that describes the integration of cross-modal sensory information between the olfactory and visual systems in zebrafish, based on the principles of statistical extreme value theory. The integration of olfacto-retinal information is mediated by the centrifugal pathway, which originates from the olfactory bulb and terminates in the neural retina. The motivation for using extreme value theory stems from physiological evidence suggesting that the extremes, not the mean, of the cell responses direct cellular activity in the vertebrate brain. We argue that the visual system, as measured by retinal ganglion cell responses in spikes/s, follows an extreme value process for sensory integration, and that the increase in visual sensitivity from the olfactory input can be better modeled using extreme value distributions. As zebrafish maintain high evolutionary proximity to mammals, our model can be extended to other vertebrates as well.
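
As a sketch of the statistical idea, the following Python snippet fits a generalized extreme value (GEV) distribution to per-trial maxima of simulated firing rates; the simulated rates are an illustrative assumption, not recordings.

```python
# Fit a generalized extreme value (GEV) distribution to per-trial maxima of
# simulated retinal ganglion cell firing rates (spikes/s). Simulated rates
# are an illustrative assumption, not recordings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# 200 trials x 50 cells; take the maximum response across cells per trial.
rates = rng.gamma(shape=3.0, scale=5.0, size=(200, 50))
trial_maxima = rates.max(axis=1)

c, loc, scale = stats.genextreme.fit(trial_maxima)
print(f"GEV fit: shape={c:.2f}, loc={loc:.1f}, scale={scale:.1f}")

# Tail probabilities: the GEV typically tracks the extremes better than a
# normal fit to the same maxima.
thr = np.percentile(trial_maxima, 95)
print(f"P(max > {thr:.1f}): GEV={stats.genextreme.sf(thr, c, loc, scale):.3f}, "
      f"normal={stats.norm.sf(thr, trial_maxima.mean(), trial_maxima.std()):.3f}")
```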


2019 ◽  
Author(s):  
Jack Lindsey ◽  
Samuel A. Ocko ◽  
Surya Ganguli ◽  
Stephane Deny

The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.
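
A minimal PyTorch sketch of this model family: a convolutional "retinal" network whose output is squeezed through a very narrow channel bottleneck standing in for the optic nerve, followed by a "cortical" network of variable depth. The layer sizes and bottleneck width are illustrative assumptions, not the paper's exact architecture.

```python
# Retina-cortex toy model with an optic-nerve-like channel bottleneck.
# Layer sizes and bottleneck width are illustrative assumptions.
import torch
import torch.nn as nn

class RetinaCortexNet(nn.Module):
    def __init__(self, bottleneck_channels=1, cortex_layers=2, n_classes=10):
        super().__init__()
        self.retina = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            # Optic-nerve bottleneck: only a few channels leave the retina.
            nn.Conv2d(32, bottleneck_channels, 3, padding=1), nn.ReLU(),
        )
        cortex = []
        in_ch = bottleneck_channels
        for _ in range(cortex_layers):       # vary depth to probe its effect
            cortex += [nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU()]
            in_ch = 32
        self.cortex = nn.Sequential(*cortex)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):
        return self.head(self.cortex(self.retina(x)))

net = RetinaCortexNet(bottleneck_channels=1, cortex_layers=2)
out = net(torch.randn(4, 1, 32, 32))         # e.g. a grayscale image batch
print(out.shape)                              # torch.Size([4, 10])
```

Training such a network on an image-recognition objective and then inspecting the learned first-layer filters is, in outline, how the effect of the bottleneck and of cortical depth on receptive-field structure can be probed.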


2008
Vol 11 (2)
pp. 349-362
Author(s):  
Dmitry V. Evtikhin ◽  
Vladimir B. Polianskii ◽  
Dzekshen E. Alymkulov ◽  
Evgenii N. Sokolov†

The neuronal activity in the rabbit's visual cortex, lateral geniculate nucleus, and superior colliculus was investigated in response to pairwise changes between eight color stimuli. This activity consisted of phasic responses (50-90 and 130-300 ms after the stimulus change) and a tonic response (after 300 ms). The phasic responses served as the basis for an 8 × 8 matrix constructed for each neuron, containing the average firing rate (spikes/s) in response to each stimulus change. All matrices were subjected to factor analysis, and the basic axes of the sensory spaces were revealed. Sensory spaces reconstructed from neuronal spike discharges had either a two-dimensional structure (with brightness and darkness axes) or a four-dimensional structure (with two color and two achromatic axes). This allowed us to split the neurons into groups that measure only brightness differences and groups that measure both color and brightness differences between stimuli. The tonic component of most of the neurons in the lateral geniculate nucleus showed a linear correlation with changes in intensity; therefore, these neurons could be characterized as pre-detectors for cortical selective detectors. The neuronal spaces coincided with spaces revealed by other methods. This fact may reflect the general principle of vector coding (Sokolov, 2000) of sensory information in the visual system.
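
The analysis path, factor-analyzing an 8 × 8 response matrix to recover low-dimensional axes, can be sketched as follows in Python; the synthetic two-dimensional stimulus coordinates are an illustrative assumption, not the recorded data.

```python
# Factor-analyze an 8 x 8 matrix of responses to stimulus changes to recover
# low-dimensional axes. The synthetic 2-D stimulus coordinates are an
# illustrative assumption, not the recorded data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)

# Hypothetical (brightness, color) coordinates for 8 stimuli.
coords = rng.normal(size=(8, 2))

# Response to a change from stimulus j to stimulus i grows with their
# distance in the underlying space, plus measurement noise.
diffs = coords[:, None, :] - coords[None, :, :]
matrix = np.linalg.norm(diffs, axis=-1) + rng.normal(0, 0.05, size=(8, 8))

fa = FactorAnalysis(n_components=2)
scores = fa.fit_transform(matrix)        # stimulus positions on recovered axes
print(np.round(scores, 2))
```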


2019 ◽  
Author(s):  
Samson Chota ◽  
Rufin VanRullen

It has long been debated whether visual processing is, at least partially, a discrete process. Although vision appears to be a continuous stream of sensory information, sophisticated experiments reveal periodic modulations of perception and behavior. Previous work has demonstrated that the phase of endogenous neural oscillations in the 10 Hz range predicts the “lag” of the flash lag effect, a temporal visual illusion in which a static object is perceived to be lagging in time behind a moving object. Consequently, it has been proposed that the flash lag illusion could be a manifestation of a periodic, discrete sampling mechanism in the visual system. In this experiment we set out to causally test this hypothesis by entraining the visual system to a periodic 10 Hz stimulus and probing the flash lag effect (FLE) at different time points during entrainment. We hypothesized that the perceived FLE would be modulated over time, at the same frequency as the entrainer (10 Hz). A frequency analysis of the average FLE time-course indeed reveals a significant peak at 10 Hz as well as a strong phase consistency between subjects (N=26). Our findings provide evidence for a causal relationship between alpha oscillations and fluctuations in temporal perception.
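
A minimal Python sketch of the frequency analysis: Fourier-transform an FLE time course sampled at different delays after entrainment and look for a peak at the entrained 10 Hz; the simulated time course is an illustrative assumption.

```python
# Frequency analysis of a simulated flash-lag effect (FLE) time course:
# look for a spectral peak at the 10 Hz entrainment frequency. The time
# course itself is an illustrative assumption.
import numpy as np

fs = 100.0                                   # probe sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)                # probe delays after entrainment (s)

rng = np.random.default_rng(5)
fle = 40 + 5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 2, t.size)  # FLE (ms)

spectrum = np.abs(np.fft.rfft(fle - fle.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"spectral peak at {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~10 Hz
```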

