The impulse response of optic flow-sensitive descending neurons to roll m-sequences

2021 · Vol 224 (23)
Author(s): Richard Leibbrandt, Sarah Nicholas, Karin Nordström

ABSTRACT
When animals move through the world, their own movements generate widefield optic flow across their eyes. In insects, such widefield motion is encoded by optic lobe neurons. These lobula plate tangential cells (LPTCs) synapse with optic flow-sensitive descending neurons, which in turn project to areas that control neck, wing and leg movements. As the descending neurons play a role in sensorimotor transformation, it is important to understand their spatio-temporal response properties. Recent work shows that a relatively fast and efficient way to quantify such response properties is to use m-sequences or other white noise techniques. Therefore, here we used m-sequences to quantify the impulse responses of optic flow-sensitive descending neurons in male Eristalis tenax hoverflies. We focused on roll impulse responses, as hoverflies perform exquisite head roll stabilizing reflexes and the descending neurons respond particularly well to roll. We found that the roll impulse responses were fast, peaking after 16.5–18.0 ms. This is similar to the time to peak (18.3 ms) of the impulse response to widefield horizontal motion recorded in hoverfly LPTCs. We found that the roll impulse response amplitude scaled with the size of the stimulus impulse, and that its shape could be affected by the addition of constant velocity roll or lift. For example, the roll impulse response became faster and stronger with the addition of excitatory stimuli, and vice versa. We also found that the roll impulse response had a long return to baseline, which was significantly and substantially reduced by the addition of either roll or lift.
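The m-sequence technique this abstract relies on can be illustrated with a short sketch. The Python snippet below (assuming NumPy) generates a maximal-length binary sequence with a linear feedback shift register, drives a toy linear "neuron" with it, and recovers the impulse response by circular cross-correlation. The tap positions, sequence length and toy response kernel are illustrative assumptions, not the authors' experimental protocol.

```python
import numpy as np

def m_sequence(taps=(7, 6), length=2**7 - 1):
    """Generate a +/-1 maximal-length sequence with a Fibonacci linear
    feedback shift register (LFSR). `taps` are 1-indexed register bits."""
    n = max(taps)
    state = np.ones(n, dtype=int)              # any nonzero seed works
    seq = np.empty(length, dtype=int)
    for i in range(length):
        seq[i] = state[-1]                     # output the last register bit
        fb = np.bitwise_xor.reduce(state[[t - 1 for t in taps]])
        state = np.roll(state, 1)              # shift right by one position
        state[0] = fb                          # feed the XOR of the taps back in
    return 2 * seq - 1                         # map {0, 1} -> {-1, +1}

stim = m_sequence()                            # stimulus contrast sequence
t = np.arange(20)
true_h = (t / 3.0) * np.exp(1 - t / 3.0)       # toy impulse response, peaks at t = 3

# Steady-state (circular) response of a linear system to the repeating stimulus.
n = len(stim)
resp = np.convolve(np.tile(stim, 2), true_h)[n:2 * n]

# An m-sequence has a nearly flat circular autocorrelation, so circularly
# cross-correlating stimulus and response recovers the impulse response.
est_h = np.array([np.dot(np.roll(stim, k), resp) for k in range(n)]) / n
print("time to peak (samples):", np.argmax(est_h[:20]))   # -> 3
```

The efficiency the abstract mentions comes from this last step: one pseudorandom stimulus run yields the whole impulse response at once, rather than probing one impulse at a time.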

PLoS ONE · 2015 · Vol 10 (5) · pp. e0126265
Author(s): Yu-Jen Lee, H. Olof Jönsson, Karin Nordström

2015 · Vol 41 (6) · pp. 856-865
Author(s): Takuro Ikeda, Susan E. Boehnke, Robert A. Marino, Brian J. White, Chin-An Wang, ...

2021 · Vol 910
Author(s): Pier Giuseppe Ledda, Gioele Balestra, Gaétan Lerisson, Benoit Scheid, Matthieu Wyart, ...

2021
Author(s): Krešimir Kavčić, Tena Radočaj, Luca Corlatti, Toni Safner, Ana Gračanin, ...

2021 · Vol 118 (38) · pp. e2024966118
Author(s): Sarah Nicholas, Karin Nordström

For the human observer, it can be difficult to follow the motion of small objects, especially when they move against background clutter. In contrast, insects do this efficiently, as evidenced by their ability to capture prey, pursue conspecifics or defend territories, even in highly textured surrounds. Here, we recorded from target selective descending neurons (TSDNs), which likely subserve these impressive behaviors. To simulate the type of optic flow that would be generated by the pursuer’s own movements through the world, we used the motion of a perspective-corrected sparse dot field. We show that hoverfly TSDN responses to target motion are suppressed when such optic flow moves syndirectional to the target. Indeed, neural responses are strongly suppressed when targets move over either translational sideslip or rotational yaw. More strikingly, we show that TSDNs are facilitated by optic flow moving counterdirectional to the target, if the target moves horizontally. Furthermore, we show that a small, frontal spatial window of optic flow is enough to fully facilitate or suppress TSDN responses to target motion. We argue that such TSDN response facilitation could be beneficial in modulating corrective turns during target pursuit.
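As a rough illustration of how the facilitation and suppression described here could be quantified, the sketch below computes a simple contrast-style modulation index from spike counts. The index definition, trial numbers and Poisson firing rates are assumptions for illustration, not the paper's analysis pipeline.

```python
import numpy as np

def modulation_index(target_alone, target_plus_flow):
    """(combined - alone) / (combined + alone): positive means the background
    optic flow facilitates the target response, negative means suppression."""
    a = np.mean(target_alone)
    c = np.mean(target_plus_flow)
    return (c - a) / (c + a)

rng = np.random.default_rng(seed=1)
alone   = rng.poisson(lam=20, size=10)   # spike counts: target over a blank background
counter = rng.poisson(lam=32, size=10)   # target + counterdirectional optic flow
syn     = rng.poisson(lam=8, size=10)    # target + syndirectional optic flow

print(f"counterdirectional: {modulation_index(alone, counter):+.2f}")  # > 0, facilitation
print(f"syndirectional:     {modulation_index(alone, syn):+.2f}")      # < 0, suppression
```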


2021 ◽  
Author(s):  
Sundaram Muthu ◽  
Ruwan Tennakoon ◽  
Reza Hoseinnezhad ◽  
Alireza Bab-Hadiashar

This paper presents a new approach, called TMNet, to the unsupervised video object segmentation (UVOS) problem. UVOS remains challenging: prior methods generalize poorly when segmenting multiple objects in unseen test videos (the category-agnostic setting), rely heavily on inaccurate optic flow, and struggle to capture fine details at object boundaries. These issues make UVOS, particularly in the presence of multiple objects, an ill-defined problem. Our focus is to constrain the problem and improve segmentation by incorporating multiple available cues, such as appearance, motion, image edges, flow edges and tracking information, through neural attention. To solve challenging category-agnostic multiple-object UVOS, our model predicts neighbourhood affinities, the likelihood that neighbouring pixels belong to the same object, and clusters them to obtain an accurate segmentation. To achieve multi-cue neural attention, we designed a Temporal Motion Attention module, as part of our segmentation framework, to learn spatio-temporal features. To refine object boundaries, we incorporate an edge refinement module (using image and optic flow edges) and a geometry-based loss function. The overall framework segments objects and recovers accurate boundaries without any heuristic post-processing, which enables the method to be used on unseen videos. Experimental results on the challenging DAVIS16 and multi-object DAVIS17 datasets show that our proposed TMNet performs favourably compared with state-of-the-art methods without post-processing.
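The "predict neighbourhood affinities, then cluster" step can be made concrete with a minimal sketch. The Python snippet below (assuming NumPy and SciPy) thresholds per-pixel affinities to right and down neighbours into a sparse pixel graph and labels objects as connected components. The array shapes, threshold and clustering choice are illustrative assumptions and do not reproduce TMNet's actual network or grouping.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def affinities_to_labels(right_aff, down_aff, thresh=0.5):
    """right_aff[i, j] / down_aff[i, j]: predicted probability that pixel
    (i, j) and its right / down neighbour belong to the same object.
    Returns an (H, W) integer label map, one label per connected group."""
    H, W = right_aff.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols = [], []
    r = right_aff[:, :-1] > thresh             # horizontal edges kept
    rows.append(idx[:, :-1][r]); cols.append(idx[:, 1:][r])
    d = down_aff[:-1, :] > thresh              # vertical edges kept
    rows.append(idx[:-1, :][d]); cols.append(idx[1:, :][d])
    row, col = np.concatenate(rows), np.concatenate(cols)
    graph = coo_matrix((np.ones(len(row)), (row, col)), shape=(H * W, H * W))
    _, labels = connected_components(graph, directed=False)
    return labels.reshape(H, W)

# Toy usage: two 2x2 blobs in a 4x4 frame come out with two shared labels.
right = np.zeros((4, 4)); down = np.zeros((4, 4))
right[:2, 0] = right[2:, 2] = 1.0              # top-left and bottom-right blobs
down[0, :2] = down[2, 2:] = 1.0
print(affinities_to_labels(right, down))
```

Grouping by pairwise affinity rather than predicting a fixed set of masks is what keeps this formulation category-agnostic: the number of output objects falls out of the clustering instead of being baked into the network head.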

