Visual inputs: Recently Published Documents

Total documents: 285 (five years: 96)
H-index: 35 (five years: 4)

2021 ◽ Vol 15 ◽ Author(s): He Chen, Yuji Naya

Recent work has shown that the medial temporal lobe (MTL), including the hippocampus (HPC) and its surrounding limbic cortices, plays a role in scene perception in addition to episodic memory. The two basic elements of scene perception are the object (“what”) and its location (“where”). In this review, we first briefly summarize the anatomy of visual inputs to the MTL and physiological studies of object-related information processed along the ventral pathway. We then turn to space-related information, whose processing has remained unclear, presumably because of its multiple aspects and, in contrast to object-related information, the lack of an appropriate task paradigm. Based on recent electrophysiological studies in non-human primates and the existing literature, we propose a “reunification theory” that explains the brain mechanisms constructing object-location signals at each gaze. In this theory, the ventral pathway signals a large-scale background image of the retina at each gaze position. This view-centered background signal reflects the first-person perspective and specifies the allocentric location in the environment by similarity matching between images. The spatially invariant object signal and the view-centered background signal, both derived from the same retinal image, are integrated again (i.e., reunified) along the ventral pathway-MTL stream, particularly in the perirhinal cortex. The resulting conjunctive signal, which represents a particular object at a particular location, may serve as a key constituent of an entire scene and thereby support scene perception in the HPC.
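
A rough, hypothetical sketch of the similarity-matching step mentioned above (specifying allocentric location by matching a view-centered background image against stored views) is given below. This is not the authors' model; the function, the dictionary of stored views, and the use of normalized correlation are illustrative assumptions.

```python
import numpy as np

def best_matching_view(current_view, stored_views):
    """Hypothetical similarity matching between the current view-centered
    background image and stored views of the environment.

    current_view : 2-D array, background image at the current gaze.
    stored_views : dict mapping location labels to 2-D arrays of the
        same shape as current_view.

    Returns the label of the stored view most similar to the current one.
    """
    def unit(img):
        # Zero-mean, unit-norm vectorization so the dot product below is a
        # normalized correlation.
        v = img.ravel().astype(float)
        v -= v.mean()
        return v / (np.linalg.norm(v) + 1e-12)

    cur = unit(current_view)
    scores = {label: float(unit(view) @ cur)
              for label, view in stored_views.items()}
    return max(scores, key=scores.get)
```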


2021 ◽ Vol 12 ◽ Author(s): Thomas Romeas, Selma Greffou, Remy Allard, Robert Forget, Michelle McKerral, ...

Motor control deficits outlasting self-reported symptoms are often reported following mild traumatic brain injury (mTBI). The exact duration and nature of these deficits remain unknown. The current study compared postural responses to static or dynamic virtual visual inputs and performance on standard clinical balance tests in 38 children between 9 and 18 years of age, at 2 weeks, 3 months, and 12 months post-concussion. Body sway amplitude (BSA) and postural instability (vRMS) were measured in a 3D virtual reality (VR) tunnel (i.e., optic flow) moving in the antero-posterior direction under different conditions. Measures derived from standard clinical balance evaluations (BOT-2, timed tasks) and post-concussion symptoms (PCSS-R) were also assessed. Results were compared to those of 38 healthy non-injured children who followed a similar testing schedule and were matched for age, gender, and premorbid level of physical activity. Compared with controls, the mTBI group showed greater postural responses on BSA and vRMS measures at 3 months post-mTBI but not at 12 months, whereas post-concussion symptoms did not differ between groups at 3 and 12 months. These deficits were specifically identified with measures of postural response to 3D dynamic visual inputs in the VR paradigm, while BOT-2 items and the 3 timed tasks did not reveal deficits at any test session. PCSS-R scores correlated between sessions and with the most challenging condition of the BOT-2 as well as with the timed tasks, but not with BSA or vRMS. Scores obtained in the most challenging conditions of the clinical balance tests also correlated weakly with BSA and vRMS measures in the dynamic conditions. These preliminary findings suggest that 3D dynamic visual inputs such as optic flow in a controlled VR environment could help detect subtle postural impairments and inspire the development of clinical tools to guide rehabilitation and return-to-play recommendations.
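
The abstract names the two sway measures but not their formulas; the sketch below shows one plausible reading, with BSA taken as peak-to-peak antero-posterior excursion and vRMS as the root-mean-square of sway velocity. The function and its parameters are assumptions, not the study's actual analysis code.

```python
import numpy as np

def sway_metrics(ap_position, fs):
    """Assumed definitions of the two postural measures (may differ from
    the study's).

    ap_position : 1-D antero-posterior sway signal (e.g., in cm) sampled
        at fs Hz during one VR condition.
    fs : sampling rate in Hz.
    """
    # Body sway amplitude (BSA): peak-to-peak excursion of the signal.
    bsa = np.max(ap_position) - np.min(ap_position)

    # Postural instability (vRMS): root-mean-square of sway velocity.
    velocity = np.gradient(ap_position, 1.0 / fs)
    v_rms = np.sqrt(np.mean(velocity ** 2))

    return bsa, v_rms
```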


Author(s): Aner Tal, Yaniv Gvili, Moty Amar

Consumers’ calorie estimates are often biased and inaccurate. Even the presence of relevant nutritional information may not suffice to prevent biases in calorie estimation. Across two studies, the current work demonstrates that visual cues from larger product depictions lead to increased calorie estimates. Further, it demonstrates that these effects occur even when consumers are given, and notice, information about product quantity. The findings thus shed light on a novel biasing effect on consumer calorie evaluation and, more generally, provide evidence for the importance of visual inputs over textual ones in consumers’ nutritional assessment of food products. In this way, the current research offers insights relevant to improving nutritional literacy through awareness of biasing influences on caloric assessment. It also provides insights that may assist regulators in protecting consumers by highlighting factors that bias nutritional assessment.


2021 ◽ Author(s): Andrey Chetverikov, Árni Kristjánsson

Prominent theories of perception suggest that the brain builds probabilistic models of the world, assessing the statistics of the visual input to inform this construction. However, the evidence for this idea is often based on simple, impoverished stimuli, and the results have often been dismissed as merely reflecting simple "summary statistics" of visual inputs. Here we show that the visual system represents probability distributions of complex heterogeneous stimuli. Importantly, we show how these statistical representations are integrated with representations of other features and bound to locations, and can therefore serve as building blocks for object and scene processing. We uncover the organization of these representations at different spatial scales by showing how expectations for incoming features are biased by neighboring locations. We also show that the representations carry not only a bias but also a skew, arguing against accounts positing that probabilistic representations are discarded in favor of simplified summary statistics (e.g., mean and variance). In sum, our results reveal detailed probabilistic encoding of stimulus distributions, with representations that are bound to other features and to particular locations.
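
To make the contrast with summary-statistics accounts concrete, here is a small illustrative sketch (not the authors' analysis): two feature distributions with matched mean and variance but different skew. A representation limited to mean and variance cannot tell them apart, whereas a skew-sensitive one can.

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric and a skewed feature distribution (arbitrary units),
# rescaled so that both have mean 0 and standard deviation 10.
symmetric = rng.normal(loc=0.0, scale=10.0, size=10_000)
skewed = rng.gamma(shape=2.0, scale=5.0, size=10_000)
skewed = (skewed - skewed.mean()) / skewed.std() * 10.0

def moments(x):
    """Mean, standard deviation, and skewness (third standardized moment)."""
    m, s = x.mean(), x.std()
    return m, s, np.mean(((x - m) / s) ** 3)

print(moments(symmetric))  # ~ (0, 10, 0)
print(moments(skewed))     # ~ (0, 10, ~1.4): same summary stats, nonzero skew
```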


2021 ◽ Vol 12 (1) ◽ Author(s): Roy Harpaz, Minh Nguyet Nguyen, Armin Bahl, Florian Engert

Complex schooling behaviors result from local interactions among individuals. Yet, how sensory signals from neighbors are analyzed in the visuomotor stream of animals is poorly understood. Here, we studied aggregation behavior in larval zebrafish and found that, over development, larvae transition from overdispersed groups to tight shoals. Using a virtual reality assay, we characterized the algorithms fish use to transform visual inputs from neighbors into movement decisions. We found that young larvae turn away from virtual neighbors by integrating and averaging retina-wide visual occupancy within each eye, and by using a winner-take-all strategy for binocular integration. As fish mature, their responses expand to include attraction to virtual neighbors, which is based on similar algorithms of visual integration. Using model simulations, we show that the observed algorithms accurately predict group structure over development. These findings allow us to make testable predictions regarding the neuronal circuits underlying collective behavior in zebrafish.
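
The decision rule described in the abstract can be paraphrased in a few lines of toy code; this is a loose sketch of the verbal description, not the authors' fitted model, and all names and the occupancy representation are assumptions.

```python
import numpy as np

def turn_direction(left_occupancy, right_occupancy, attraction=False):
    """Toy paraphrase of the described algorithm.

    left_occupancy, right_occupancy : arrays of visual-occupancy values
        across each retina (hypothetical representation of neighbors).
    attraction : False for young larvae (repulsion), True for older
        larvae whose responses also include attraction.

    Returns "left" or "right", the chosen turn direction.
    """
    # Integrate and average retina-wide visual occupancy within each eye.
    drive_left = float(np.mean(left_occupancy))
    drive_right = float(np.mean(right_occupancy))

    # Winner-take-all binocular integration: only the more stimulated eye
    # drives the response.
    stimulated_side = "left" if drive_left >= drive_right else "right"

    if attraction:
        return stimulated_side                                # turn toward neighbors
    return "right" if stimulated_side == "left" else "left"  # turn away
```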


2021 ◽ Vol 11 (1) ◽ Author(s): Lanyu Shang, Daniel Zhang, Jialie Shen, Eamon Lopez Marmion, Dong Wang

Author(s): Valentina Presta, Costanza Vitale, Luca Ambrosini, Giuliana Gobbi

Visual skills in sport are considered relevant variables of athletic performance. However, data on the specific contribution of stereopsis, the ability to perceive depth, to sport performance are still scarce and scattered in the literature. The aim of this review is therefore to take stock of the effects of stereopsis on athletic performance, also looking at training tools to improve visual abilities and at potential differences in the visuomotor integration processes of professional and non-professional athletes. Dynamic stereopsis is mainly involved in the catching or interceptive actions of ball sports, whereas strategic sports rely on different visual skills (peripheral and spatial vision) because of their sport-specific requirements. As expected, professional athletes show better visual skills than non-professionals. However, both non-professional and professional athletes should train their visual skills using sensory stations and light board systems. Non-professional athletes use visual inputs as the main basis for programming motor gestures. In contrast, professional athletes integrate visual information with sport expertise and thus encode the match (or the athletic performance) through a more complex visuomotor integration system. Although studies on visual skills and stereopsis in sport still appear to be in their early stages, they show large potential for both scientific knowledge and technical development.


2021 ◽ Vol 21 (9) ◽ pp. 1976 ◽ Author(s): Jacopo Turini, Klara Gregorová, Benjamin Gagl, Melissa Le-Hoa Võ

2021 ◽ Vol 17 (9) ◽ pp. e1009434 ◽ Author(s): Yijia Yan, Neil Burgess, Andrej Bicanski

Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than of any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja’s Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, to incorporate newly added stable cues, and to support orientation across many different environments (high memory capacity); it is also consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
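
For readers unfamiliar with Oja's subspace algorithm, the sketch below shows the textbook (unmodified) rule in NumPy; the paper's specific modification and its visual front end are not reproduced here, and the toy demo data are purely illustrative.

```python
import numpy as np

def oja_subspace_update(W, x, lr=0.005):
    """One step of Oja's subspace rule.

    W  : (m, n) weight matrix (m output units, n input features)
    x  : (n,) input vector (e.g., feature-specific visual input)
    lr : learning rate

    The rule drives the rows of W toward an orthonormal basis of the
    principal subspace of the input covariance.
    """
    y = W @ x                                        # output activities
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)  # dW = lr * (y x^T - y y^T W)
    return W

# Toy demo: inputs lying mostly in a 2-D subspace of a 10-D space.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 10))
W = 0.1 * rng.standard_normal((2, 10))
for _ in range(5000):
    x = basis.T @ rng.standard_normal(2) + 0.05 * rng.standard_normal(10)
    W = oja_subspace_update(W, x)
```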


2021 ◽ Author(s): Xiaoqian Yan, Valérie Goffaux, Bruno Rossion

At what level of spatial resolution can the human brain recognize a familiar face in a crowd of strangers? Does it depend on whether one approaches or rather moves back from the crowd? To answer these questions, 16 observers viewed different unsegmented images of unfamiliar faces alternating at 6 Hz, with spatial frequency (SF) content progressively increasing (i.e., coarse-to-fine) or decreasing (fine-to-coarse) across different sequences. Variable natural images of celebrity faces appearing as every sixth stimulus generated an objective neural index of single-glance automatic familiar face recognition (FFR) at 1 Hz in participants’ electroencephalogram (EEG). For blurry images increasing in spatial resolution, the neural FFR response over occipitotemporal regions emerged abruptly once additional cues became available at about 6.3–8.7 cycles/head width, immediately reaching amplitude saturation. When the same images progressively decreased in resolution, the FFR response had already disappeared below 12 cycles/head width, thus providing no support for a predictive coding hypothesis. Overall, these observations indicate that rapid automatic recognition of heterogeneous natural views of familiar faces is achieved from coarser visual inputs than generally thought, and they support coarse-to-fine FFR dynamics in the human brain.
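
The frequency-tagging logic (a familiarity-specific response at 1 Hz embedded in a 6 Hz stimulation stream) can be illustrated with a minimal spectral read-out. This is a generic sketch, not the authors' analysis pipeline; the function name and parameters are assumptions.

```python
import numpy as np

def tagged_amplitude(eeg, fs, freq):
    """Amplitude of the EEG spectrum at a tagged frequency.

    eeg  : 1-D signal from one occipito-temporal channel.
    fs   : sampling rate in Hz.
    freq : frequency of interest (1 Hz for the familiarity response,
           6 Hz for the general visual response in this paradigm).
    """
    n = len(eeg)
    amplitude = np.abs(np.fft.rfft(eeg)) * 2.0 / n   # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amplitude[np.argmin(np.abs(freqs - freq))]

# With faces shown at 6 Hz and familiar faces as every sixth image, a
# familiarity response is expected at 1 Hz and its harmonics (2-5 Hz),
# separate from the 6 Hz base response and its harmonics.
```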

