Implicit and explicit representations of visual space

1999 ◽  
Vol 22 (5) ◽  
pp. 759-760
Author(s):  
Bruce Bridgeman

The visual system offers a unique contrast between implicit and explicit representation: the same event (the location of a visible object) is coded in both ways in parallel. A method of differentiating the two representations is described, using an illusion that affects only the explicit representation. Consistent with predictions, implicit information is available only from targets that are presently visible, but, surprisingly, a two-alternative decision does not disturb the implicit representation.

2020 ◽  
Vol 34 (07) ◽  
pp. 12112-12119
Author(s):  
Mikko Vihlman ◽  
Arto Visala

Single-target tracking of generic objects is a difficult task, since a trained tracker is given information about the target only in the first frame of a video. In recent years, an increasing number of trackers have been based on deep neural networks that learn generic features relevant for tracking. This paper argues that such deep architectures are often well suited to learning implicit representations of optical flow: flow is intuitively useful for tracking, but most deep trackers receive no flow input and must learn it implicitly. This paper is among the first to study the role of optical flow in deep visual tracking. The architecture of a typical tracker is modified to reveal the presence of implicit representations of optical flow and to assess the effect of using the flow information more explicitly. The results show that the network considered implicitly learns an effective representation of optical flow, and that this implicit representation can be replaced by an explicit flow input without a notable effect on performance. Using the implicit and explicit representations at the same time does not improve tracking accuracy. An explicit flow input could, however, allow lighter networks to be constructed for tracking.
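To make the contrast concrete, here is a minimal sketch (in PyTorch, with a hypothetical stem that is not the authors' architecture) of what an explicit flow input looks like: a precomputed two-channel flow field is concatenated with the RGB frame, so the network no longer has to infer motion implicitly from stacked frames.

```python
# Minimal sketch of an explicit optical-flow input for a tracker stem.
# Hypothetical architecture for illustration; not the network from the paper.
import torch
import torch.nn as nn

class FlowAwareStem(nn.Module):
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        # 3 RGB channels + 2 flow channels (per-pixel dx, dy).
        self.conv = nn.Sequential(
            nn.Conv2d(3 + 2, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # frame: (B, 3, H, W); flow: (B, 2, H, W), e.g. from an off-the-shelf
        # flow estimator. Concatenation makes the motion input explicit.
        return self.conv(torch.cat([frame, flow], dim=1))

stem = FlowAwareStem()
frame = torch.randn(1, 3, 128, 128)
flow = torch.randn(1, 2, 128, 128)   # placeholder for a real flow estimate
features = stem(frame, flow)         # (1, 32, 128, 128)
```

Because the flow arrives precomputed, the stem itself can stay small, which is the sense in which an explicit input could permit lighter tracking networks.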


Geosciences ◽  
2018 ◽  
Vol 9 (1) ◽  
pp. 8 ◽  
Author(s):  
Christophe Baron-Hyppolite ◽  
Christopher Lashley ◽  
Juan Garzon ◽  
Tyler Miesse ◽  
Celso Ferreira ◽  
...  

Assessing the accuracy of nearshore numerical models such as SWAN is important to ensure their effectiveness in representing physical processes and predicting flood hazards. In particular, for application to coastal wetlands, it is important that the model accurately represents wave attenuation by vegetation. In SWAN, vegetation can be represented either implicitly, as enhanced bottom friction, or explicitly, as drag on an immersed body. While previous studies suggest that the implicit representation underestimates dissipation, field data have so far been used to assess only fully submerged vegetation. The present study therefore investigates the performance of both the implicit and the explicit representation of vegetation in SWAN in simulating wave attenuation over a natural emergent marsh. The wave and flow modules within Delft3D are used to create an open-ocean model that simulates offshore wave conditions. The domain is then decomposed to simulate nearshore processes and to provide the boundary conditions needed to run a standalone SWAN model, in which the implicit and explicit representations of vegetation are assessed. Results show that treating vegetation simply as enhanced bottom roughness (implicitly) under-represents the complexity of wave-vegetation interaction and, consequently, underestimates wave energy dissipation (error > 30%). The explicit vegetation representation, by contrast, shows good agreement with the field data (error < 20%).
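For intuition about the explicit, drag-based approach, the sketch below implements one widely used analytical result for wave-height decay through rigid vegetation (Dalrymple et al. 1984, in the form given by Mendez and Losada 2004), H(x) = H0 / (1 + βx). All parameter values are made up for illustration; this is not the SWAN implementation itself.

```python
# Illustrative wave-height decay through emergent vegetation, using the
# analytical solution H(x) = H0 / (1 + beta * x) of Dalrymple et al. (1984)
# as given by Mendez & Losada (2004). Parameter values are hypothetical.
import numpy as np

def decay_coefficient(Cd, bv, N, k, H0, h, alpha):
    """beta [1/m] for monochromatic waves over rigid cylinders.

    Cd: drag coefficient; bv: stem width [m]; N: stem density [1/m^2];
    k: wavenumber [1/m]; H0: incident wave height [m]; h: depth [m];
    alpha: relative vegetation height (1.0 for emergent plants).
    """
    num = np.sinh(k * alpha * h) ** 3 + 3.0 * np.sinh(k * alpha * h)
    den = (np.sinh(2.0 * k * h) + 2.0 * k * h) * np.sinh(k * h)
    return (4.0 / (9.0 * np.pi)) * Cd * bv * N * k * H0 * num / den

beta = decay_coefficient(Cd=1.0, bv=0.008, N=400.0, k=0.8, H0=0.4,
                         h=1.0, alpha=1.0)   # emergent marsh: alpha = 1
x = np.linspace(0.0, 50.0, 6)                # distance into the marsh [m]
H = 0.4 / (1.0 + beta * x)
print(np.round(H, 3))  # monotonic decay of wave height with distance
```

An enhanced-friction (implicit) treatment folds all of this stem-scale physics into a single bed-roughness parameter, which is why it tends to under-represent the dissipation that the drag formulation resolves.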


2018 ◽  
Author(s):  
Naohide Yamamoto ◽  
Dagmara E. Mach ◽  
John W. Philbeck ◽  
Jennifer Van Pelt

Generally, imagining an action and physically executing it are thought to be controlled by common motor representations. However, imagined walking to a previewed target tends to be terminated more quickly than real walking to the same target, raising the question of what representations underlie the two modes of walking. To address this question, the present study put forward the hypothesis that both explicit and implicit representations of gait are involved in imagined walking, and further proposed that the underproduction of imagined walking duration largely stems from the explicit representation, owing to its susceptibility to a general undershooting tendency in time production (i.e., the error of anticipation). Properties of the explicit and implicit representations were examined by manipulating their relative dominance during imagined walking through concurrent bodily motions, and by using non-spatial tasks that extracted the temporal structure of imagined walking. Results showed that the duration of imagined walking subserved by the implicit representation was equal to that of real walking, and that a time production task exhibited an underproduction bias equivalent to that in imagined walking tasks based on the explicit representation. These findings are interpreted as evidence for a dual-representation view of imagined walking.
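A toy numerical reading of the dual-representation account (illustrative numbers only, not data from the study): if explicit time production undershoots by some gain g < 1 while the implicit gait representation preserves real-walking duration, explicit-based imagery is underproduced and implicit-based imagery is not.

```python
# Toy illustration of the dual-representation account of imagined walking.
# All numbers are hypothetical; they are not data from the study.
real_walk_s = 6.0        # duration of real walking to a previewed target
production_gain = 0.85   # assumed generic undershoot in explicit time
                         # production (the "error of anticipation")

explicit_imagined_s = production_gain * real_walk_s  # ~5.1 s: underproduced
implicit_imagined_s = real_walk_s                    # matches real walking

print(f"real: {real_walk_s:.1f} s, "
      f"explicit imagery: {explicit_imagined_s:.1f} s, "
      f"implicit imagery: {implicit_imagined_s:.1f} s")
```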


2020 ◽  
Vol 15 (4) ◽  
pp. 1509-1546
Author(s):  
Simone Cerreia-Vioglio ◽  
David Dillenberger ◽  
Pietro Ortoleva

One of the best-known models of non-expected utility is Gul's (1991) model of disappointment aversion. This model, however, is defined implicitly, as the solution to a functional equation; its explicit utility representation is unknown, which may limit its applicability. We show that an explicit representation can be easily constructed, using solely the components of the implicit representation. We also provide a more general result: an explicit representation for preferences in the betweenness class that also satisfy negative certainty independence (Dillenberger 2010) or its counterpart. We show how our approach gives a simple way to identify the parameters of the representation behaviorally and to study the consequences of disappointment aversion in a variety of applications.
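For orientation, one standard statement of the implicit, fixed-point definition referenced above is sketched below in LaTeX; the notation is assumed here and may differ from the paper's. The value V(p) of a finite lottery p appears on both sides of the equation, which is exactly what makes the representation implicit.

```latex
% One common statement of Gul's (1991) disappointment-averse value of a
% finite lottery p (notation assumed, not taken from the paper): outcomes
% with u(x) < V(p) are "disappointing" and are overweighted by beta.
\[
V(p) \;=\;
\frac{\displaystyle\sum_{x \,:\, u(x) \ge V(p)} p(x)\,u(x)
      \;+\;(1+\beta)\displaystyle\sum_{x \,:\, u(x) < V(p)} p(x)\,u(x)}
     {1 \;+\; \beta \displaystyle\sum_{x \,:\, u(x) < V(p)} p(x)},
\qquad \beta \in (-1,\infty).
\]
```

Solving this fixed point numerically (for instance by bisection on V) is straightforward, which makes the contrast with a closed-form explicit representation concrete.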


1992 ◽  
Vol 99 (2) ◽  
pp. 322-348 ◽  
Author(s):  
Douglas L. Nelson ◽  
Thomas A. Schreiber ◽  
Cathy L. McEvoy

2020 ◽  
Author(s):  
Doris Voina ◽  
Stefano Recanatesi ◽  
Brian Hu ◽  
Eric Shea-Brown ◽  
Stefan Mihalas

As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them; this is a particular case of flexible architecture that permits multiple related computations within a single circuit.

Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatio-temporal surround modulation, achieving superior denoising performance compared to circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.

Author Summary: The brain processes information at all times, and much of that information is context dependent. The visual system presents an important example: processing is ongoing, but the context changes dramatically when an animal is still vs. running. How is context-dependent information processing achieved? We take inspiration from recent neurophysiology studies on the role of distinct cell types in primary visual cortex (V1). We find that relatively few "switching units" (akin to the VIP neuron type in V1 in that they turn on and off in the running vs. still context and have connections to and from the main population) are sufficient to drive context-dependent image processing. We demonstrate this in a model of feature integration and in a test of image denoising. The underlying circuit architecture illustrates a concrete computational role for the multiple cell types under increasing study across the brain, and it may inspire more flexible neurally inspired computing architectures.
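A minimal sketch of the disinhibitory switching motif described above (a hypothetical toy model, not the paper's trained network): a binary VIP signal suppresses SST-like inhibition, which in turn selects which of two context-specific lateral (surround) weight sets modulates the excitatory population.

```python
# Toy disinhibitory switch between two context-specific surround filters.
# Illustrative only; not the trained circuit from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # feedforward drive to 100 excitatory units

W_static = rng.normal(scale=0.1, size=(100, 100))  # lateral weights, context A
W_moving = rng.normal(scale=0.1, size=(100, 100))  # lateral weights, context B

def circuit_response(x, vip_on: bool):
    # VIP active -> SST suppressed (disinhibition) -> "moving" pathway gated in.
    sst = 0.0 if vip_on else 1.0
    g_moving = 1.0 - sst                  # disinhibited when VIP is on
    g_static = sst
    surround = g_static * (W_static @ x) + g_moving * (W_moving @ x)
    return np.maximum(x + surround, 0.0)  # rectified population response

r_still = circuit_response(x, vip_on=False)  # static-context processing
r_run = circuit_response(x, vip_on=True)     # moving-context processing
```

The point of the motif is that a single small population (the gate) reuses all of the circuit's other weights across contexts, rather than duplicating the whole network per context.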


Author(s):  
T. G. Pogibenko

The paper deals with the representation of obligatory participants of a situation described by the verb that do not receive a syntactic role in the structure of a Khmer sentence, i.e. incorporation into the verb's semantic structure, excorporation into a lexical complex, deictic zero, and zero anaphora. Special attention is paid to the role of the lexical complex, a unique resource of the Khmer language, and its use for implicit and explicit representation of the participants of the situation described. An issue of particular interest is participants' representation as a component of a lexical complex, rather than as a component of the sentence's syntactic structure. Language data from Modern Khmer, Middle Khmer, and Old Khmer are used to show that this mode of representation has been in use throughout the entire evolution of Khmer, beginning with the Old Khmer inscriptions. An attempt is made to reveal the functional character of the phenomenon discussed. It is maintained that this strategy is used for semantic derivation, for a more detailed conceptualization of the situation described, and for eliminating word polysemy in the text. Examples are cited in which lexical complexes with incorporated participants make up for the inherent semantic emptiness of predicates of evaluation. In the case of participants incorporated in deictic verbs, the deictic zero in Khmer may refer to participants other than the "observer". Specific features of zero anaphora in Khmer are also discussed.


eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Michael J Arcaro ◽  
Christopher J Honey ◽  
Ryan EB Mruczek ◽  
Sabine Kastner ◽  
Uri Hasson

The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across these areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas in data collected both during rest and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the widespread eccentricity organization of visual cortex: the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.
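As a schematic of the analysis logic (synthetic data and a hypothetical eccentricity grouping, not the study's pipeline), the sketch below correlates BOLD-like time series between two areas and compares iso-eccentricity site pairs with all other pairs.

```python
# Schematic of eccentricity-dependent inter-areal correlation analysis.
# Synthetic data; illustrates the logic, not the study's actual pipeline.
import numpy as np

rng = np.random.default_rng(1)
n_t, n_sites = 300, 20                 # timepoints; sites per visual area
ecc = np.linspace(0.0, 10.0, n_sites)  # eccentricity preference per site [deg]

# Two "areas" whose sites share an eccentricity-specific signal.
shared = rng.normal(size=(n_t, n_sites))  # one latent signal per ecc. band
area1 = shared + rng.normal(scale=1.0, size=(n_t, n_sites))
area2 = shared + rng.normal(scale=1.0, size=(n_t, n_sites))

# Correlate every site in area1 with every site in area2.
z1 = (area1 - area1.mean(0)) / area1.std(0)
z2 = (area2 - area2.mean(0)) / area2.std(0)
corr = (z1.T @ z2) / n_t               # (n_sites, n_sites) correlation matrix

iso = np.abs(ecc[:, None] - ecc[None, :]) < 1.0   # iso-eccentricity pairs
print("iso-ecc mean r:", corr[iso].mean())        # expected: high
print("other mean r:  ", corr[~iso].mean())       # expected: near zero
```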


2019 ◽  
Author(s):  
Florian A. Dehmelt ◽  
Rebecca Meier ◽  
Julian Hinz ◽  
Takeshi Yoshimatsu ◽  
Clara A. Simacek ◽  
...  

Many animals have large visual fields, and sensory circuits may sample those regions of visual space most relevant to behaviours such as gaze stabilisation and hunting. Despite this, relatively small displays are often used in vision neuroscience. To sample stimulus locations across most of the visual field, we built a spherical stimulus arena with 14,848 independently controllable LEDs, measured the optokinetic response gain of immobilised zebrafish larvae, and related behaviour to previously published retinal photoreceptor densities. We measured tuning to steradian stimulus size and spatial frequency, and show it to be independent of visual field position. However, zebrafish react most strongly and consistently to lateral, nearly equatorial stimuli, consistent with previously reported higher spatial densities of red, green and blue photoreceptors in the central retina. Upside-down experiments suggest further extra-retinal processing. Our results demonstrate that motion vision circuits in zebrafish are anisotropic and preferentially monitor areas of putative behavioural relevance.

Author Summary: The visual system of larval zebrafish mirrors many features of the visual systems of other vertebrates, including its ability to mediate optomotor and optokinetic behaviour. Although the presence of such behaviours and some of the underlying neural correlates have been firmly established, previous experiments did not consider the large visual field of zebrafish, which covers more than 160° for each eye. Given that different parts of the visual field likely carry unequal amounts of behaviourally relevant information for the animal, this raises the question of whether optic flow is integrated across the entire visual field or just parts of it, and how this shapes behaviour such as the optokinetic response. We constructed a spherical LED arena to present visual stimuli almost anywhere across the visual field while tracking horizontal eye movements. By displaying moving gratings on this LED arena, we demonstrate that the optokinetic response, one of the most prominent visually induced behaviours of zebrafish, indeed depends strongly on stimulus location and stimulus size, as well as on other parameters such as the spatial and temporal frequency of the gratings. This location dependence is consistent with areas of high retinal photoreceptor densities, though evidence suggests further extraretinal processing.
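Optokinetic response gain is conventionally computed as slow-phase eye velocity divided by stimulus velocity. The sketch below (synthetic eye trace and an assumed saccade-rejection threshold, not the study's data or code) illustrates one simple way to estimate it from an eye-position recording.

```python
# Estimate optokinetic response (OKR) gain from a synthetic eye-position trace.
# Illustrative only; waveform and thresholds are made up, not the study's data.
import numpy as np

fs = 100.0                        # sampling rate [Hz]
t = np.arange(0.0, 10.0, 1.0 / fs)
stim_vel = 10.0                   # grating velocity [deg/s]

# Synthetic eye position: slow phases tracking at gain 0.8, with resetting
# saccades whenever the eye drifts 5 degrees from centre.
eye = np.zeros_like(t)
for i in range(1, t.size):
    eye[i] = eye[i - 1] + 0.8 * stim_vel / fs
    if eye[i] > 5.0:
        eye[i] = -5.0             # fast resetting saccade

vel = np.gradient(eye, 1.0 / fs)          # eye velocity [deg/s]
slow = np.abs(vel) < 3.0 * stim_vel       # exclude saccadic samples
gain = vel[slow].mean() / stim_vel
print(f"estimated OKR gain: {gain:.2f}")  # ~0.8 for this synthetic trace
```

Repeating such a measurement while a grating patch is moved across a spherical arena is what allows gain to be mapped as a function of stimulus position.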

