Neural circuits mediating visual stabilization during active motion in zebrafish

2019 ◽  
Author(s):  
Sha Sun ◽  
Zhentao Zuo ◽  
Michelle Manxiu Ma ◽  
Chencan Qian ◽  
Lin Chen ◽  
...  

Abstract Visual stabilization is an essential requirement for animals actively interacting with their environment. Visual motion cues from the surroundings, or induced by self-generated behavior, are perceived and then trigger appropriate motor responses mediated by neural representations conceptualized as the internal model: one part predicts the sensory consequences of action as a forward model, while another generates the appropriate motor commands as an inverse model. However, the neural circuits linking the two models remain largely unknown. Here, we demonstrate that an internal component, the efference copy, coordinates the two models in a push-pull manner by generating extra reset saccades during active motion processing in larval zebrafish. Calcium imaging indicated that the saccade preparation circuit is enhanced while the velocity integration circuit is inhibited during the interaction, balancing the internal representations from both directions. This is the first model of the efference copy acting on visual stabilization beyond the sensorimotor stage.
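The forward-model/efference-copy computation described above can be illustrated with a minimal sketch (not the paper's actual circuit model): a forward model predicts the retinal slip caused by a self-generated eye movement, and subtracting that prediction from the measured slip leaves only the externally generated motion. All function names and the sign convention are illustrative assumptions.

```python
def predicted_self_slip(eye_velocity):
    """Forward model: a self-generated eye movement shifts the retinal
    image in the direction opposite to the eye (illustrative sign convention)."""
    return -eye_velocity

def external_motion_estimate(retinal_slip, eye_velocity):
    """Efference-copy cancellation: subtract the predicted self-induced
    slip from the measured slip to recover external world motion."""
    return retinal_slip - predicted_self_slip(eye_velocity)

# Example: the eye drifts rightward at 5 deg/s while the world moves at 2 deg/s.
# The retina sees the difference, -3 deg/s, yet cancellation recovers 2 deg/s.
eye_v = 5.0
world_v = 2.0
slip = world_v - eye_v               # -3.0 deg/s on the retina
print(external_motion_estimate(slip, eye_v))  # → 2.0
```

The same subtraction with a stationary world (slip = -5.0) yields 0.0, i.e. perfect stabilization against purely self-generated motion.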

PLoS ONE ◽  
2019 ◽  
Vol 14 (9) ◽  
pp. e0220878 ◽  
Author(s):  
Sean Dean Lynch ◽  
Anne-Hélène Olivier ◽  
Benoit Bideau ◽  
Richard Kulpa

Neuroforum ◽  
2018 ◽  
Vol 24 (2) ◽  
pp. A61-A72 ◽  
Author(s):  
Giordano Ramos-Traslosheros ◽  
Miriam Henning ◽  
Marion Silies

Abstract Many animals use visual motion cues to inform different behaviors. The basis for motion detection is the comparison of light signals over space and time. How a nervous system performs such spatiotemporal correlations has long been considered a paradigmatic neural computation. Here, we will first describe classical models of motion detection and introduce core motion-detecting circuits in Drosophila. Direct measurements of the response properties of the first direction-selective cells in the Drosophila visual system have revealed new insights about the implementation of motion detection algorithms. Recent data suggest a combination of two mechanisms: a nonlinear enhancement of signals moving in the preferred direction, as well as a suppression of signals moving in the opposite direction. These findings, as well as a functional analysis of the circuit components, have shown that the microcircuits that process elementary motion are more complex than anticipated. Building on this, we have the opportunity to understand detailed properties of elementary, yet intricate, microcircuits.


2013 ◽  
Vol 109 (10) ◽  
pp. 2632-2644 ◽  
Author(s):  
Ian S. Howard ◽  
Daniel M. Wolpert ◽  
David W. Franklin

Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
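The "velocity-dependent curl field" used in this kind of paradigm applies a force perpendicular to the hand's velocity, F = B·[[0, 1], [-1, 0]]·v, with the sign of the rotation setting the field direction. A minimal sketch (the gain value and function name are illustrative assumptions, not the study's parameters):

```python
def curl_field_force(vx, vy, b=13.0, clockwise=True):
    """Force (N) orthogonal to hand velocity (m/s); |b| is the field
    gain in N*s/m. Flipping `clockwise` gives the opposing field that
    subjects must learn alongside the first."""
    s = 1.0 if clockwise else -1.0
    fx = s * b * vy
    fy = -s * b * vx
    return fx, fy

# Moving straight ahead at 0.5 m/s, a clockwise field pushes the hand
# sideways with 6.5 N; the counterclockwise field pushes the other way.
print(curl_field_force(0.0, 0.5))
```

Because the force is always orthogonal to velocity, the field does no work along the movement direction, yet it curves the hand path until the subject learns to anticipate it, which is what makes randomly interleaved opposing fields interfere so strongly.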


2019 ◽  
Author(s):  
Clara H Ferreira ◽  
Marta A Moita

Abstract Living in a group allows individuals to decrease their defenses, enabling other beneficial behaviors such as foraging. The detection of a threat through social cues is widely reported; however, the safety cues that guide animals to break away from a defensive behavior and resume alternative activities remain elusive. Here we show that fruit flies displayed a graded decrease in freezing behavior, triggered by an inescapable threat, with increasing group sizes. Furthermore, flies used the cessation of movement of other flies as a cue of threat and its resumption as a cue of safety. Finally, we found that lobula columnar neurons, LC11, mediate the propensity of freezing flies to resume moving in response to the movement of others. By identifying visual motion cues, and the neurons involved in their processing, as the basis of a social safety cue, this study brings new insights into the neuronal basis of safety in numbers.


2019 ◽  
Vol 32 (1) ◽  
pp. 45-65 ◽  
Author(s):  
G. M. Hanada ◽  
J. Ahveninen ◽  
F. J. Calabro ◽  
A. Yengo-Kahn ◽  
L. M. Vaina

Abstract The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We sought to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.


2008 ◽  
Vol 99 (5) ◽  
pp. 2329-2346 ◽  
Author(s):  
Ryusuke Hayashi ◽  
Kenichiro Miura ◽  
Hiromitsu Tabata ◽  
Kenji Kawano

Brief movements of a large-field visual stimulus elicit short-latency tracking eye movements termed “ocular following responses” (OFRs). To address the question of whether OFRs can be elicited by purely binocular motion signals in the absence of monocular motion cues, we measured OFRs from monkeys using dichoptic motion stimuli, the monocular inputs of which were flickering gratings in spatiotemporal quadrature, and compared them with OFRs to standard motion stimuli including monocular motion cues. Dichoptic motion did elicit OFRs, although with longer latencies and smaller amplitudes. In contrast to these findings, we observed that other types of motion stimuli categorized as non-first-order motion, which is undetectable by detectors for standard luminance-defined (first-order) motion, did not elicit OFRs, although they did evoke the sensation of motion. These results indicate that OFRs can be driven solely by cortical visual motion processing after binocular integration, which is distinct from the process incorporating non-first-order motion for elaborated motion perception. To explore the nature of dichoptic motion processing in terms of interaction with monocular motion processing, we further recorded OFRs from both humans and monkeys using our novel motion stimuli, the monocular and dichoptic motion signals of which move in opposite directions with a variable motion intensity ratio. We found that monocular and dichoptic motion signals are processed in parallel to elicit OFRs, rather than suppressing each other in a winner-take-all fashion, and the results were consistent across the species.
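The dichoptic stimulus described above exploits a trigonometric identity: two counterphase-flickering gratings in spatiotemporal quadrature, sin(kx)cos(wt) and cos(kx)sin(wt), each contain no net monocular motion, yet their binocular sum is a single drifting grating, since sin(kx)cos(wt) + cos(kx)sin(wt) = sin(kx + wt). A numerical sketch of that construction (the spatial and temporal frequencies here are arbitrary illustrative values):

```python
import math

def eye_left(x, t, k=2.0, w=3.0):
    """Standing flicker grating for one eye: sin(kx) * cos(wt)."""
    return math.sin(k * x) * math.cos(w * t)

def eye_right(x, t, k=2.0, w=3.0):
    """Quadrature partner for the other eye: cos(kx) * sin(wt)."""
    return math.cos(k * x) * math.sin(w * t)

def binocular_sum(x, t, k=2.0, w=3.0):
    """What binocular combination yields: a traveling wave sin(kx + wt)."""
    return eye_left(x, t, k, w) + eye_right(x, t, k, w)

# Verify the identity numerically at an arbitrary point:
x, t = 0.7, 0.4
assert abs(binocular_sum(x, t) - math.sin(2.0 * x + 3.0 * t)) < 1e-12
```

Each monocular input merely flickers in place (its spatial profile never shifts), so any tracking response to the summed stimulus must arise after binocular integration in cortex, which is the logic of the experiment.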


Author(s):  
S. Negahdaripour ◽  
M.D. Aykin ◽  
M. Babaee ◽  
S. Sinnarajah ◽  
A. Perez
