Synergy of color and motion vision for detecting approaching objects in Drosophila

2021 ◽  
Author(s):  
Kit D Longden ◽  
Edward M Rogers ◽  
Aljoscha Nern ◽  
Heather Dionne ◽  
Michael B Reiser

Color and motion are used by many species to identify salient moving objects. They are processed largely independently, but color contributes to motion processing in humans, for example, enabling moving colored objects to be detected when their luminance matches the background. Here, we demonstrate an unexpected, additional contribution of color to motion vision in Drosophila. We show that behavioral ON-motion responses are more sensitive to UV than OFF-motion responses, and, using neurogenetics and calcium imaging, we identify cellular pathways connecting UV-sensitive R7 photoreceptors to the ON- and OFF-motion-sensitive T4 and T5 cells. Remarkably, this synergy of color and motion vision enhances the detection of approaching UV discs, but not green discs with the same chromatic contrast, and we show how this generalizes to visual systems with ON and OFF pathways. Our results provide a computational and circuit basis for how color enhances motion vision to favor the detection of saliently colored objects.
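
To make the ON/OFF generalization concrete, here is a minimal numeric sketch of how asymmetric spectral weighting of the ON and OFF pathways can favor UV objects. The pathway weights and stimulus intensities below are invented for illustration, loosely following the paper's behavioral finding; they are not measured values.

```python
import numpy as np

# Hypothetical spectral weights for the two motion pathways; the assumption
# is that the ON pathway weights UV more heavily while the OFF pathway
# weights green more heavily (illustrative numbers only).
W_ON = np.array([1.0, 0.4])    # [UV, green] input weights of the ON pathway
W_OFF = np.array([0.4, 1.0])   # [UV, green] input weights of the OFF pathway

def edge_signals(disc, background):
    """ON/OFF drive generated by the edges of an expanding disc.

    disc, background: [UV, green] intensities. A positive per-channel
    difference excites the ON pathway; a negative one excites OFF.
    """
    diff = np.asarray(disc, float) - np.asarray(background, float)
    on = W_ON @ np.clip(diff, 0, None)
    off = W_OFF @ np.clip(-diff, 0, None)
    return on, off

bg = [0.5, 0.5]
uv_disc = [0.8, 0.2]     # UV-bright, green-dark disc
green_disc = [0.2, 0.8]  # green-bright, UV-dark disc (same chromatic contrast)

print(edge_signals(uv_disc, bg))     # (0.3, 0.3): both pathways driven hard
print(edge_signals(green_disc, bg))  # (0.12, 0.12): much weaker total drive
```

With these assumed weights, the UV disc excites each pathway through its preferred spectral channel, so the combined ON+OFF drive is larger than for a green disc of equal chromatic contrast.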

2020 ◽  
Vol 114 (4-5) ◽  
pp. 443-460
Author(s):  
Qinbing Fu ◽  
Shigang Yue

Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, remains a challenging problem. In nature, lightweight, low-powered flying insects use motion vision to detect moving targets in highly variable environments during flight, making them excellent paradigms for learning motion perception strategies. This paper investigates the motion vision pathways of the fruit fly Drosophila and presents computational modelling based on recent physiological research. The proposed visual system model features biologically plausible ON and OFF pathways and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: (1) the proposed model articulates the formation of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, combining motion pre-filtering mechanisms with ensembles of local correlators inside both the ON and OFF pathways, which effectively suppress irrelevant background motion and distractors and improve the dynamic response. Accordingly, the direction of translating objects is decoded from the global responses of the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. Experiments verified the effectiveness of the proposed neural system model and demonstrated its preferential response to faster-moving, higher-contrast, and larger targets embedded in cluttered moving backgrounds.
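
As a rough illustration of the correlator scheme such models build on, the sketch below implements a minimal ON/OFF elementary motion detector in Python. It omits the paper's pre-filtering, spatial pooling, and HS/VS ensembles; the time constant and stimulus are illustrative assumptions.

```python
import numpy as np

def hassenstein_reichardt(left, right, tau=5.0, dt=1.0):
    """Correlation-type elementary motion detector (EMD) on two photoreceptor
    time series; positive output means motion from `left` towards `right`.
    A first-order low-pass filter provides the delayed arm."""
    a = 1.0 - np.exp(-dt / tau)
    lp_l, lp_r = np.zeros_like(left), np.zeros_like(right)
    for t in range(1, len(left)):
        lp_l[t] = lp_l[t - 1] + a * (left[t] - lp_l[t - 1])
        lp_r[t] = lp_r[t - 1] + a * (right[t] - lp_r[t - 1])
    return lp_l * right - left * lp_r

def on_off_emd(left, right, **kw):
    """Split each input into half-wave rectified ON and OFF channels and
    correlate within each channel before summing, as in ON/OFF EMD schemes."""
    on = hassenstein_reichardt(np.clip(left, 0, None), np.clip(right, 0, None), **kw)
    off = hassenstein_reichardt(np.clip(-left, 0, None), np.clip(-right, 0, None), **kw)
    return on + off

t = np.arange(200, dtype=float)
stim = np.sin(2 * np.pi * t / 40)     # contrast signal passing two points
left, right = stim, np.roll(stim, 3)  # the right point sees it 3 samples later
print(on_off_emd(left, right).mean() > 0)  # True: net rightward motion signal
```

Summing many such local outputs of one sign across the visual field is the basic operation behind wide-field HS/VS-style responses, where the sign of the pooled output reports preferred- versus null-direction translation.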


2017 ◽  
Vol 284 (1858) ◽  
pp. 20170673 ◽  
Author(s):  
Irene Senna ◽  
Cesare V. Parise ◽  
Marc O. Ernst

Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. In nature, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to correlate directly with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception that are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where the spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
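
For illustration, a rattling-like stimulus carrying the speed-dependent AM cue can be synthesized in a few lines. The carrier frequency and the speed-to-AM mapping below are arbitrary assumptions, not the study's stimulus parameters.

```python
import numpy as np

def rattling_sound(duration_s, speed, fs=44100, carrier_hz=2000.0,
                   am_hz_per_speed=8.0):
    """Crude rattling-like tone whose amplitude-modulation (AM) frequency
    scales with simulated object speed. Carrier frequency and the
    speed-to-AM mapping are arbitrary, illustrative choices."""
    t = np.arange(int(duration_s * fs)) / fs
    am_hz = am_hz_per_speed * speed                  # the speed cue
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_hz * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

slow = rattling_sound(1.0, speed=1.0)  # low AM rate: tends to sound slower
fast = rattling_sound(1.0, speed=4.0)  # high AM rate: tends to sound faster
```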


2015 ◽  
Vol 11 (10) ◽  
pp. 20150687 ◽  
Author(s):  
Finlay J. Stewart ◽  
Michiyo Kinoshita ◽  
Kentaro Arikawa

Many insects’ motion vision is achromatic and thus dependent on brightness rather than on colour contrast. We investigate whether this is true of the butterfly Papilio xuthus, an animal noted for its complex retinal organization, by measuring head movements of restrained animals in response to moving two-colour patterns. Responses were never eliminated across a range of relative colour intensities, indicating that motion can be detected through chromatic contrast in the absence of luminance contrast. Furthermore, we identify an interaction between colour and contrast polarity in sensitivity to achromatic patterns, suggesting that ON and OFF contrasts are processed by two channels with different spectral sensitivities. We propose a model of the motion detection process in the retina/lamina based on these observations.
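
A toy calculation shows why two channels with different spectral sensitivities leave no intensity ratio at which motion responses vanish, consistent with the observation that responses were never eliminated. The sensitivity values below are assumed for illustration only.

```python
import numpy as np

# Assumed spectral sensitivities of two channels (e.g. ON and OFF) to the
# two pattern colours; the numbers are illustrative, not measured values.
S_ON = np.array([0.9, 0.3])    # sensitivity to [colour A, colour B]
S_OFF = np.array([0.3, 0.9])

# Scan the intensity of colour A relative to colour B and find where each
# channel sees zero contrast between the two sets of stripes.
ratios = np.linspace(0.1, 3.0, 291)
on_contrast = S_ON[0] * ratios - S_ON[1]
off_contrast = S_OFF[0] * ratios - S_OFF[1]

# The channels null at different ratios (~0.33 vs ~3.0), so no single
# intensity ratio silences both: some motion signal always survives.
print(ratios[np.argmin(np.abs(on_contrast))])
print(ratios[np.argmin(np.abs(off_contrast))])
```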


Author(s):  
Bart G. Borghuis ◽  
Duje Tadin ◽  
Martin J.M. Lankheet ◽  
Joseph S. Lappin ◽  
Wim A. van de Grind

Under optimal conditions, just 3–6 ms of visual stimulation suffices for humans to see motion. Motion perception on this time scale implies that the visual system, under these conditions, reliably encodes, transmits, and processes neural signals with near-millisecond precision. Motivated by in vitro evidence for the high temporal precision of motion signals in the primate retina, we investigated how neuronal and perceptual limits of motion encoding relate. Specifically, we examined the correspondence between the time scale at which cat retinal ganglion cells in vivo represent motion information and the temporal thresholds for human motion discrimination. The time scale for motion encoding by ganglion cells ranged from 4.6 to 91 ms and depended nonlinearly on temporal frequency but not on contrast. Human psychophysics revealed that the minimal stimulus durations required for perceiving motion direction were similarly brief, 5.6–65 ms, and likewise depended on temporal frequency but, above ~10% contrast, not on contrast. Notably, physiological and psychophysical measurements corresponded closely throughout (r = 0.99), despite more than 20-fold variation in both human thresholds and optimal time scales for motion encoding in the retina. These results demonstrate that the neural circuits for motion vision in cortex can maintain and make use of the high temporal fidelity of the retinal output signals.
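
As a sketch of how such minimal-duration thresholds are commonly estimated (not necessarily the authors' exact procedure), one can fit a cumulative-Gaussian psychometric function to percent-correct data from a two-alternative direction task. The data points below are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(duration_ms, mu, sigma):
    """Cumulative-Gaussian psychometric function for a two-alternative
    direction task: chance (0.5) at very short durations, near 1.0 at long."""
    return 0.5 + 0.5 * norm.cdf(duration_ms, mu, sigma)

# Hypothetical percent-correct data for one temporal frequency (made up).
durations = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # stimulus duration, ms
p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.99, 1.0])

(mu, sigma), _ = curve_fit(psychometric, durations, p_correct, p0=(10.0, 5.0))
# mu is the 75%-correct point of this function: a duration threshold.
print(f"direction threshold ~= {mu:.1f} ms")
```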


Author(s):  
Tyler S. Manning ◽  
Kenneth H. Britten

The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.


2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than from MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to the processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed.

NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
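
The decoding idea can be caricatured with a toy linear read-out: simulate a population whose rates mix heading and object direction, then fit a separate decoder for each variable. This stand-in is not the marginalization method used in the paper, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population whose firing rates linearly mix heading and object
# direction with random weights plus noise; purely synthetic, no MST data.
n_neurons, n_trials = 60, 500
heading = rng.uniform(-90.0, 90.0, n_trials)   # deg
obj_dir = rng.uniform(-90.0, 90.0, n_trials)   # deg
W = rng.normal(size=(n_neurons, 2))            # mixed selectivity
rates = W @ np.vstack([heading, obj_dir]) + rng.normal(0.0, 20.0, (n_neurons, n_trials))

# Fit a separate linear read-out for each variable from the same rates.
X = rates.T
coef_h, *_ = np.linalg.lstsq(X, heading, rcond=None)
coef_o, *_ = np.linalg.lstsq(X, obj_dir, rcond=None)

# High read-out accuracy for both variables indicates a separable code.
print(np.corrcoef(X @ coef_h, heading)[0, 1])
print(np.corrcoef(X @ coef_o, obj_dir)[0, 1])
```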


Nature ◽  
2010 ◽  
Vol 468 (7321) ◽  
pp. 300-304 ◽  
Author(s):  
Maximilian Joesch ◽  
Bettina Schnell ◽  
Shamprasad Varija Raghu ◽  
Dierk F. Reiff ◽  
Alexander Borst

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254105
Author(s):  
Serena Castellotti ◽  
Carlo Francisci ◽  
Maria Michela Del Viva

The perception of moving objects (real motion) is a critical function for interacting with a dynamic environment. Motion perception can also be induced by particular structural features of static images (illusory motion) or by photographic images of subjects in motion (implied motion, IM). Many cortical areas are involved in motion processing, particularly the middle temporal cortical area (MT), dedicated to the processing of real, illusory, and implied motion. Recently, there has been growing interest in the influence of high-level visual processes on pupillary responses. However, only a few studies have measured the effect of motion processing on the pupil, and not always with consistent results. Here we systematically investigate, for the first time, the effects of real, illusory, and implied motion on pupil diameter, by showing different types of stimuli (movies, illusions, and photos) with the same average luminance to the same observers. We find different pupillary responses depending on the nature of the motion. Real motion elicits larger pupillary dilation than IM, which in turn induces more dilation than control photos representing static subjects (No-IM). The pupil response is sensitive even to the strength of IM, as photos with enhanced IM (blur, motion streaks, speed lines) induce larger dilation than simple frozen IM (subjects captured at the instant they are moving). The subject represented in the stimulus also matters: human figures are interpreted as more dynamic and induce larger dilation than objects or animals. Interestingly, illusory motion induces much less dilation than all the other motion categories, despite being seen as moving. Overall, pupil responses depend on the individual perception of dynamicity, confirming that the pupil is modulated by the subjective interpretation of complex stimuli. We argue that the different pupillary responses to real, illusory, and implied motion reflect top-down modulation by the different cortical areas involved in their processing.
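
A typical pupillometry measure behind comparisons like these is the baseline-corrected dilation per trial, averaged within condition. The sketch below assumes hypothetical trace arrays and sampling parameters; it is not the authors' analysis pipeline.

```python
import numpy as np

def evoked_dilation(trace, fs=60.0, baseline_s=1.0):
    """Baseline-corrected pupil dilation for one trial: the mean diameter
    after stimulus onset minus the mean over the pre-stimulus baseline."""
    n0 = int(baseline_s * fs)
    return trace[n0:].mean() - trace[:n0].mean()

def condition_means(traces):
    """traces: dict mapping condition name -> iterable of 1-D pupil-diameter
    arrays, e.g. {"real": [...], "implied": [...], "illusory": [...]}."""
    return {cond: float(np.mean([evoked_dilation(np.asarray(tr)) for tr in trials]))
            for cond, trials in traces.items()}
```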

