Visual Motion
Recently Published Documents

Total documents: 1883 (last five years: 297)
H-index: 97 (last five years: 7)

2022 · Vol 191 · pp. 107969
Author(s): Shunya Umemoto, Yutaka Hirata

2022
Author(s): Kenji Ogawa, Yuiko Matsuyama

Visual perspective taking (VPT), particularly level 2 VPT (VPT2), which allows an individual to understand that the same object can be seen differently by others, is related to theory of mind (ToM), because both functions require a representation decoupled from one's own viewpoint. Although previous neuroimaging studies have shown that VPT and ToM activate the temporo-parietal junction (TPJ), it is unclear whether common neural substrates are involved in both. To clarify this point, the present study directly compared the TPJ activation patterns of individual participants performing VPT2 and ToM tasks using functional magnetic resonance imaging and a within-subjects design. VPT2-induced activations were compared with activations observed during a mental rotation control task, whereas ToM-related activity was identified with a standard ToM localizer using false-belief stories. A whole-brain analysis revealed that VPT2 and ToM activated overlapping areas in the posterior part of the TPJ. By comparing the activations induced by VPT2 and ToM in individual participants, we found that the peak voxels induced by ToM were located significantly more anteriorly and dorsally within the bilateral TPJ than those measured during the VPT2 task. We further confirmed that these activation areas were spatially distinct from the nearby extrastriate body area (EBA), visual motion area (MT+), and posterior superior temporal sulcus (pSTS) using independent localizer scans. Our findings reveal that VPT2 and ToM have distinct, albeit partially overlapping, representations, indicating the functional heterogeneity of social cognition within the TPJ.
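
As a toy illustration of the within-subjects comparison described above (peak voxels for ToM lying more anteriorly and dorsally than for VPT2), the sketch below runs paired t-tests on hypothetical per-participant peak coordinates; the numbers and the choice of test are assumptions for illustration, not the authors' data or analysis.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant peak-voxel coordinates (MNI y, z) in the right TPJ
vpt2_peaks = np.array([[-58.0, 22.0], [-60.0, 18.0], [-56.0, 24.0], [-62.0, 20.0]])
tom_peaks = np.array([[-52.0, 30.0], [-54.0, 28.0], [-50.0, 32.0], [-56.0, 26.0]])

# Paired t-tests along the anterior-posterior (y) and dorso-ventral (z) axes
for axis, label in [(0, "y (anterior-posterior)"), (1, "z (dorso-ventral)")]:
    t, p = ttest_rel(tom_peaks[:, axis], vpt2_peaks[:, axis])
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```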


2021
Author(s): Kazunori Shinomiya, Aljoscha Nern, Ian Meinertzhagen, Stephen M Plaza, Michael B Reiser

The detection of visual motion enables sophisticated animal navigation, and studies in flies have provided profound insights into the cellular and circuit basis of this neural computation. The fly's directionally selective T4 and T5 neurons encode ON and OFF motion, respectively. Their axons terminate in one of four retinotopic layers in the lobula plate, where each layer encodes one of the four cardinal directions of motion. While the input circuitry of the directionally selective neurons has been studied in detail, the synaptic connectivity of circuits integrating T4/T5 motion signals is largely unknown. Here we report a 3D electron microscopy reconstruction in which we comprehensively identified the synaptic partners of T4/T5 in the lobula plate, revealing a diverse set of new cell types and attributing new connectivity patterns to known cell types. Our reconstruction explains how the ON and OFF motion pathways converge: T4 and T5 cells that project to the same layer connect to common synaptic partners symmetrically, that is, with similar weights, and together with bilayer interneurons they comprise a core motif, detailing the circuit basis for computing motion opponency. We discovered pathways that likely encode new directions of motion by integrating vertical and horizontal motion signals from upstream T4/T5 neurons. Finally, we identified substantial projections into the lobula, extending the known motion pathways and suggesting that directionally selective signals shape feature detection there. The circuits we describe enrich the anatomical basis for experimental and computational analyses of motion vision and bring us closer to understanding complete sensory-motor pathways.
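
To make the "symmetric connectivity" claim concrete, here is a minimal sketch of how one might tabulate synapse counts from T4 and T5 cells of the same layer onto shared downstream partners and quantify their similarity; the table layout, cell-type labels, and counts are illustrative assumptions, not the reconstructed connectome.

```python
import pandas as pd

# Assumed edge-list layout: one row per connection with a pre-synaptic type,
# post-synaptic type, and synapse count (all numbers hypothetical).
edges = pd.DataFrame({
    "pre":    ["T4a", "T4a", "T4a", "T5a", "T5a", "T5a"],
    "post":   ["LPi3-4", "H2", "VS", "LPi3-4", "H2", "VS"],
    "weight": [120, 80, 40, 115, 85, 38],
})

# Compare how strongly T4 and T5 of the same layer (here T4a/T5a) connect
# to each shared downstream partner.
pivot = edges.pivot_table(index="post", columns="pre", values="weight", fill_value=0)
print(pivot)
print("T4a/T5a weight correlation across shared partners:",
      round(pivot["T4a"].corr(pivot["T5a"]), 2))
```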


PLoS ONE · 2021 · Vol 16 (12) · pp. e0261266
Author(s): Maëlle Tixier, Stéphane Rousset, Pierre-Alain Barraud, Corinne Cian

A large body of research has shown that visually induced self-motion (vection) and cognitive processing may interfere with each other. The aim of this study was to assess the interactive effects of a visual motion inducing vection (uniform motion in roll) versus a visual motion without vection (non-uniform motion) and of long-term memory processing, using the characteristics of standing posture (quiet stance). As the level of interference may be related to the nature of the cognitive tasks used, we examined the effect of visual motion on a memory task that requires a spatial process (episodic recollection) versus a memory task that does not (semantic comparisons). Results confirm previous findings of a compensatory postural response in the same direction as the background motion. Repeatedly watching the uniform visual motion or increasing the cognitive load with a memory task did not decrease postural deviations. Finally, participants controlled their balance differently depending on the memory task, but this difference was significant only in the vection condition and in the plane of the background motion. Increased sway regularity (decreased entropy) combined with decreased postural stability (increased variance) during vection for the episodic task would indicate ineffective postural control. The differing interference of episodic and semantic memory with posture during visual motion is consistent with the involvement of spatial processes during episodic memory recollection. It can be suggested that spatial disorientation due to visual roll motion preferentially interferes with spatial cognitive tasks, as spatial tasks can draw on resources that are also expended to control posture.
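
The regularity measure mentioned above is typically quantified with sample entropy; below is a small sketch of how sway variance and sample entropy might be computed from a centre-of-pressure trace. The signal is simulated and the parameter choices (m = 2, r = 0.2 SD) are conventional defaults, not the study's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal (template length m, tolerance r = r_factor * SD)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def match_count(length):
        # Count template pairs of the given length whose maximum distance is within r
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(dists <= r) - len(templates)) / 2  # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Hypothetical medio-lateral centre-of-pressure trace (cm), 100 Hz for 10 s
rng = np.random.default_rng(0)
cop = np.cumsum(rng.normal(0.0, 0.05, 1000))
print("sway variance (cm^2):", round(float(np.var(cop)), 3))
print("sample entropy:", round(sample_entropy(cop), 3))
```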


2021
Author(s): Ryosuke Tanaka, Damon A. Clark

Visual motion provides rich geometrical cues about the three-dimensional configuration of the world. However, how brains decode the spatial information carried by motion signals remains poorly understood. Here, we study a collision avoidance behavior in Drosophila as a simple model of motion-based spatial vision. With simulations and psychophysics, we demonstrate that walking Drosophila exhibit a pattern of slowing to avoid collisions by exploiting the geometry of positional changes of objects on near-collision courses. This behavior requires the visual neuron LPLC1, whose tuning mirrors the behavior and whose activity drives slowing. LPLC1 pools inputs from object and motion detectors, and spatially biased inhibition tunes it to the geometry of collisions. Connectomic analyses identified circuitry downstream of LPLC1 that faithfully inherits its response properties. Overall, our results reveal how a small neural circuit solves a specific spatial vision task by combining distinct visual features to exploit universal geometrical constraints of the visual world.
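
One well-known geometrical constraint of this kind (not necessarily the exact cue the authors model) is that an object on a direct collision course keeps an approximately constant bearing while it looms, whereas a passing object drifts across the visual field. The toy sketch below illustrates that constraint with hypothetical trajectories.

```python
import numpy as np

def bearing_deg(obj_xy):
    """Angular position of the object relative to straight ahead (0 deg)."""
    x, y = obj_xy
    return np.degrees(np.arctan2(x, y))

t = np.linspace(0.0, 1.5, 4)  # seconds
# Hypothetical 2-D trajectories in the walker's reference frame (metres)
collision = np.stack([1.0 - 0.5 * t, 2.0 - 1.0 * t], axis=1)  # heads straight at the walker
near_miss = np.stack([1.0 - 0.1 * t, 2.0 - 1.0 * t], axis=1)  # passes off to one side

print("collision bearings:", [round(bearing_deg(p), 1) for p in collision])
print("near-miss bearings:", [round(bearing_deg(p), 1) for p in near_miss])
```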


2021
Author(s): Fatemeh Molaei Vaneghi, Natalia Zaretskaya, Tim van Mourik, Jonas Bause, Klaus Scheffler, ...

The neural mechanisms underlying a stable perception of the world during pursuit eye movements are not fully understood. Both perceptual stability and the perception of real (i.e., objective) motion are the product of integrating motion signals on the retina with efference copies of eye movements. Human areas V3A and V6 have previously been shown to have strong objective ('real') motion responses. Here we used high-resolution laminar fMRI at ultra-high magnetic field (9.4 T) in human subjects to examine motion integration across cortical depths in these areas. We found an increased preference for objective motion in areas V3A and V6+ (i.e., V6 and possibly V6A) towards the upper layers. When laminar responses were detrended to remove the upper-layer bias present in all responses, we found a unique, condition-specific laminar profile in V6+, showing reduced mid-layer responses for retinal motion only. The results provide evidence for differential, motion-type dependent laminar processing in area V6+. Mechanistically, the mid-layer dip suggests a special contribution of retinal motion to integration, either in the form of a subtractive (inhibitory) mid-layer input or in the form of feedback into extragranular or infragranular layers. The results show that differential laminar signals can be measured in high-level motion areas of the human occipitoparietal cortex, opening the prospect of new mechanistic insights from non-invasive brain imaging.
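
To illustrate what "detrending to remove the upper-layer bias" can look like in practice, here is a minimal sketch that removes a depth trend shared across conditions before comparing laminar profiles; the profiles and the linear-trend choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

depths = np.linspace(0.0, 1.0, 6)  # 0 = white-matter boundary, 1 = pial surface
# Hypothetical laminar profiles (% signal change), both showing the usual upper-layer bias
objective_motion = np.array([1.0, 1.2, 1.5, 1.9, 2.4, 3.0])
retinal_motion = np.array([1.0, 1.1, 1.0, 1.4, 2.3, 2.9])  # note the mid-depth dip

# Fit and remove the depth trend shared by both conditions
mean_profile = (objective_motion + retinal_motion) / 2
slope, intercept = np.polyfit(depths, mean_profile, 1)
trend = slope * depths + intercept

for name, profile in [("objective", objective_motion), ("retinal", retinal_motion)]:
    print(name, np.round(profile - trend, 2))
```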


2021
Author(s): Hayden Scott, Klaus Wimmer, Tatiana Pasternak, Adam Snyder

Neurons in the primate middle temporal (MT) area signal information about visual motion and work together with the lateral prefrontal cortex (LPFC) to support memory-guided comparisons of visual motion direction. These areas are reciprocally connected, and both contain neurons that signal visual motion direction in the strength of their responses. Previously, LPFC was shown to display marked changes in stimulus coding with altered task demands. Since MT and LPFC work together, we sought to determine whether MT neurons display similar changes with heightened task demands. We hypothesized that heightened working-memory task demands would improve task-relevant information and give rise to memory-related signals in MT. Here we show that engagement in a motion direction comparison task altered non-sensory activity and improved stimulus encoding by MT neurons. We found that this improvement in stimulus information transmission was largely due to a preferential reduction in trial-to-trial variability within a sub-population of highly direction-selective neurons. We also found that a divisive normalization mechanism accounted for seemingly contradictory effects of task demands on a heterogeneous population of neurons.
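
For readers unfamiliar with divisive normalization, the sketch below uses a generic textbook form (not the authors' fitted model) to show how a single task-related gain change can modulate weakly and strongly driven neurons to very different degrees, one way normalization models account for heterogeneous task effects across a population.

```python
import numpy as np

drives = np.array([0.2, 0.5, 1.0, 2.0, 4.0])  # hypothetical stimulus drives

def response(drive, beta, g=10.0, sigma=1.0):
    # Divisive normalization: an input gain beta is divided by a saturating denominator
    return g * beta * drive / (sigma + beta * drive)

passive = response(drives, beta=1.0)
engaged = response(drives, beta=2.0)   # task engagement modeled as an input-gain increase
print("passive :", np.round(passive, 2))
print("engaged :", np.round(engaged, 2))
print("% change:", np.round(100 * (engaged - passive) / passive, 1))
```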


2021 · Vol 12
Author(s): Alessandro Carlini, Emmanuel Bigand

Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how individual stimuli combine to determine the overall percept remains a matter of research. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to the participants, associated with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual target, one of which was biological. Nine different sound profiles were tested, from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition; more complex sounds, instead, guarantee significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to affect performance independently. Biological motion yields the best performance, while motion with a constant-velocity profile yields the worst. Results clearly show that a concurrent sound influences the unified perception of motion; the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is not produced by the simplest stimuli, but rather by more complex stimuli that are richer in information.
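
The abstract does not specify the four laws of motion; purely as an illustration of how such stimulus kinematics can be generated, the sketch below contrasts a constant-velocity profile with a minimum-jerk profile, a common model of biological point-to-point movement. All parameter values are assumptions.

```python
import numpy as np

def constant_velocity(t, duration, extent):
    return extent * t / duration

def minimum_jerk(t, duration, extent):
    # Standard minimum-jerk position profile: x(tau) = D * (10*tau^3 - 15*tau^4 + 6*tau^5)
    tau = t / duration
    return extent * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

duration, extent = 2.0, 20.0  # assumed movement time (s) and visual extent (deg)
t = np.linspace(0.0, duration, 9)
print("constant velocity:", np.round(constant_velocity(t, duration, extent), 1))
print("minimum jerk:     ", np.round(minimum_jerk(t, duration, extent), 1))
```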


2021 · Vol 11 (1)
Author(s): Jennifer Sudkamp, Mateusz Bocian, David Souto

To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians' gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario, and to what extent spontaneous gaze behavior affects those judgements. Participants performed, in succession, a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario: a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speed differences of around 18 km/h and time-to-arrival differences of around 0.7 s. In both tasks, observers placed their gaze close to the center of the vehicle's front plane while pursuing the vehicle. Other areas of the visual scene were sampled infrequently. No differences were found in the average gaze behavior between the two tasks, and a pattern classifier (Support Vector Machine) trained on trial-level gaze patterns failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could, however, predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road crossing.
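
As a rough sketch of the classification analysis mentioned above, the code below trains a Support Vector Machine to predict the task from trial-level gaze features and evaluates it with cross-validation. The features are simulated, so the exact feature set, kernel, and settings are assumptions, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 200
# Hypothetical trial-level gaze features (e.g. mean gaze position, pursuit gain,
# saccade count). Both tasks are drawn from the same distribution here, so the
# classifier should hover around chance, mirroring the null result reported above.
X = rng.normal(size=(n_trials, 4))
y = rng.integers(0, 2, size=n_trials)  # 0 = speed task, 1 = time-to-arrival task

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 2))
```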


Sensors · 2021 · Vol 21 (23) · pp. 7969
Author(s): Lianen Qu, Matthew N. Dailey

Driver situation awareness is critical for safety. In this paper, we propose a fast, accurate method for obtaining real-time situation awareness using a single type of sensor: monocular cameras. The system tracks the host vehicle’s trajectory using sparse optical flow and tracks vehicles in the surrounding environment using convolutional neural networks. Optical flow is used to measure the linear and angular velocity of the host vehicle. The convolutional neural networks are used to measure target vehicles’ positions relative to the host vehicle using image-based detections. Finally, the system fuses host and target vehicle trajectories in the world coordinate system using the velocity of the host vehicle and the target vehicles’ relative positions with the aid of an Extended Kalman Filter (EKF). We implement and test our model quantitatively in simulation and qualitatively on real-world test video. The results show that the algorithm is superior to state-of-the-art sequential state estimation methods such as visual SLAM in performing accurate global localization and trajectory estimation for host and target vehicles.
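
Below is a heavily simplified sketch of the fusion step described above: dead-reckoning the host pose from optical-flow velocities and folding a camera detection of a target vehicle into a world-frame estimate. A plain linear Kalman update stands in for the paper's EKF, and every matrix, noise value, and measurement here is an assumption for illustration.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Dead-reckon the host pose [px, py, heading] from optical-flow velocity v and yaw rate w.
    (Covariance propagation is simplified; a full EKF would use the motion Jacobian.)"""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th), py + v * dt * np.sin(th), th + w * dt])
    return x_pred, P + Q

def update_target(host_pose, mean, cov, rel_xy, R):
    """Fuse a camera detection (target position relative to the host) into the
    world-frame target estimate using a linear Kalman update (measurement matrix = identity)."""
    px, py, th = host_pose
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    z = np.array([px, py]) + rot @ rel_xy        # detection mapped into the world frame
    K = cov @ np.linalg.inv(cov + R)             # Kalman gain
    return mean + K @ (z - mean), (np.eye(2) - K) @ cov

# One predict/update cycle with made-up numbers
host, P = np.array([0.0, 0.0, 0.0]), np.eye(3) * 0.1
host, P = predict(host, P, v=10.0, w=0.05, dt=0.1, Q=np.eye(3) * 0.01)
tgt_mean, tgt_cov = np.array([15.0, 2.0]), np.eye(2)
tgt_mean, tgt_cov = update_target(host, tgt_mean, tgt_cov,
                                  rel_xy=np.array([14.2, 1.8]), R=np.eye(2) * 0.5)
print("host pose:", np.round(host, 2))
print("target estimate:", np.round(tgt_mean, 2))
```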

