Spectral inputs and ocellar contributions to a pitch-sensitive descending neuron in the honeybee

2013 ◽  
Vol 109 (4) ◽  
pp. 1202-1213 ◽  
Author(s):  
Y.-S. Hung ◽  
J. P. van Kleef ◽  
G. Stange ◽  
M. R. Ibbotson

By measuring insect compensatory optomotor reflexes to visual motion, researchers have examined the computational mechanisms of the motion processing system. However, establishing the spectral sensitivity of the neural pathways that underlie this motion behavior has been difficult, and the contribution of the simple eyes (ocelli) has rarely been examined. In this study we investigate the spectral response properties and ocellar inputs of an anatomically identified descending neuron (DNII2) in the honeybee optomotor pathway. Using a panoramic stimulus, we show that it responds selectively to optic flow associated with pitch rotations. The neuron was also stimulated with a custom-built light-emitting diode array that presented moving bars that were either all green (spectrum 500–600 nm, peak 530 nm) or all short wavelength (spectrum 350–430 nm, peak 380 nm). Although the optomotor response is thought to be dominated by green-sensitive inputs, we show that DNII2 is equally responsive to, and direction selective for, both green and short-wavelength stimuli. The color of the background image also influences the spontaneous spiking behavior of the cell: a green background produces significantly higher spontaneous spiking rates. Stimulating the ocelli produces strong modulatory effects on DNII2, significantly increasing the amplitude of its responses in the preferred motion direction and decreasing the response latency by adding a directional, short-latency response component. Our results suggest that the spectral sensitivity of the optomotor response in honeybees may be more complicated than previously thought and that the ocelli play a significant role in shaping the timing of motion signals.

2020 ◽  
Author(s):  
Nardin Nakhla ◽  
Yavar Korkian ◽  
Matthew R. Krause ◽  
Christopher C. Pack

Abstract The processing of visual motion is carried out by dedicated pathways in the primate brain. These pathways originate with populations of direction-selective neurons in the primary visual cortex, which project to dorsal structures like the middle temporal (MT) and medial superior temporal (MST) areas. Anatomical and imaging studies have suggested that area V3A might also be specialized for motion processing, but there have been very few studies of single-neuron direction selectivity in this area. We have therefore performed electrophysiological recordings from V3A neurons in two macaque monkeys (one male and one female) and measured responses to a large battery of motion stimuli that includes translation motion, as well as more complex optic flow patterns. For comparison, we simultaneously recorded the responses of MT neurons to the same stimuli. Surprisingly, we find that overall levels of direction selectivity are similar in V3A and MT and moreover that the population of V3A neurons exhibits somewhat greater selectivity for optic flow patterns. These results suggest that V3A should be considered as part of the motion processing machinery of the visual cortex, in both human and non-human primates.

Significance statement: Although area V3A is frequently the target of anatomy and imaging studies, little is known about its functional role in processing visual stimuli. Its contribution to motion processing has been particularly unclear, with different studies yielding different conclusions. We report a detailed study of direction selectivity in V3A. Our results show that single V3A neurons are, on average, as capable of representing motion direction as are neurons in well-known structures like MT. Moreover, we identify a possible specialization for V3A neurons in representing complex optic flow, which has previously been thought to emerge in higher-order brain regions. Thus it appears that V3A is well-suited to a functional role in motion processing.
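
The abstract does not specify the metric used, but direction selectivity in recordings like these is conventionally summarized with a direction selectivity index (DSI) computed from each neuron's tuning curve. A minimal sketch of that conventional index, purely illustrative and not the authors' analysis code:

```python
import numpy as np

def direction_selectivity_index(directions_deg, firing_rates):
    """Standard DSI: (R_pref - R_null) / (R_pref + R_null).

    R_pref is the response at the preferred direction and R_null the
    response at the direction 180 degrees opposite. Values near 1 mean
    strong direction selectivity; values near 0 mean none.
    """
    directions = np.asarray(directions_deg) % 360
    rates = np.asarray(firing_rates, dtype=float)

    pref_idx = int(np.argmax(rates))
    null_dir = (directions[pref_idx] + 180) % 360
    # Index of the sampled direction closest to the anti-preferred direction.
    null_idx = int(np.argmin(np.abs((directions - null_dir + 180) % 360 - 180)))

    r_pref, r_null = rates[pref_idx], rates[null_idx]
    return (r_pref - r_null) / (r_pref + r_null)

# Hypothetical tuning curve sampled every 45 degrees (spikes/s).
dirs = np.arange(0, 360, 45)
rates = [12, 30, 55, 28, 10, 6, 5, 8]
print(round(direction_selectivity_index(dirs, rates), 2))  # ~0.83
```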


i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952093732
Author(s):  
Masahiko Terao ◽  
Shin’ya Nishida

Many studies have investigated various effects of smooth pursuit on visual motion processing, especially the effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined by the interelement proximity in retinal coordinates and also by the proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of the two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition; under the pursuit condition, however, they differed by a 180-degree phase shift. The proportions of reported motion direction in the two viewing conditions did not coincide when plotted as a function of either the retinal displacement or the objective displacement; however, they did coincide when plotted as a function of a mixture of the two. The results of Experiment 2 showed that the perceived jump size of the apparent motion also depended on both retinal and objective displacements. Our findings suggest that the detection of apparent motion during smooth pursuit takes into account both retinal proximity and objective proximity. This mechanism may assist with the selection of the motion path that is more likely to occur in the real world and may therefore be useful for ensuring perceptual stability during smooth pursuit.
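
To make the geometry concrete (a schematic sketch, not the authors' analysis): during pursuit, the retinal displacement of a pattern is its objective displacement minus the distance the eye travels between the two frames, and a weighted mixture of the two displacements can then stand in for the combined proximity the study points to. The function names and the weighting parameter below are hypothetical.

```python
def retinal_displacement(objective_disp_deg, eye_velocity_deg_s, frame_interval_s):
    """Retinal displacement = objective displacement minus eye displacement."""
    return objective_disp_deg - eye_velocity_deg_s * frame_interval_s

def mixed_displacement(objective_disp_deg, retinal_disp_deg, w_retinal=0.5):
    """Hypothetical combined displacement used to pick the nearer motion path.

    w_retinal = 1 reproduces a purely retinal account, w_retinal = 0 a purely
    objective (world-coordinate) account; intermediate values correspond to
    the mixture suggested by the study.
    """
    return w_retinal * retinal_disp_deg + (1.0 - w_retinal) * objective_disp_deg

# Example: a 1-degree objective jump while the eye sweeps at 10 deg/s over 200 ms
# appears as a -1 degree jump on the retina (i.e., in the opposite direction).
obj = 1.0
ret = retinal_displacement(obj, eye_velocity_deg_s=10.0, frame_interval_s=0.2)
print(ret)                           # -1.0
print(mixed_displacement(obj, ret))  # 0.0 with equal weighting
```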


2019 ◽  
Vol 30 (4) ◽  
pp. 2659-2673
Author(s):  
Shaun L Cloherty ◽  
Jacob L Yates ◽  
Dina Graf ◽  
Gregory C DeAngelis ◽  
Jude F Mitchell

Abstract Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare marmosets' psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar trade-offs between speed and accuracy: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages of using analog estimation tasks.
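
As a rough illustration of the reverse-correlation idea mentioned above (a simplified sketch; the abstract does not describe the authors' exact procedure), the frame-by-frame fluctuations of the stimulus direction can be correlated with the trial-by-trial report errors to recover a temporal weighting kernel, and the width of that kernel indexes the integration window:

```python
import numpy as np

def temporal_kernel(fluctuations, report_errors):
    """Reverse-correlation estimate of a temporal weighting kernel.

    fluctuations : (n_trials, n_frames) per-frame direction fluctuations
        about each trial's mean motion direction (deg).
    report_errors : (n_trials,) signed errors of the reported direction
        relative to the mean direction (deg).

    Returns, per frame, the correlation between that frame's fluctuation
    and the behavioural report; frames with high correlation are the ones
    the observer integrated over.
    """
    f = np.asarray(fluctuations, dtype=float)
    e = np.asarray(report_errors, dtype=float)
    f = f - f.mean(axis=0)
    e = e - e.mean()
    return (f * e[:, None]).mean(axis=0) / (f.std(axis=0) * e.std() + 1e-12)

# Synthetic observer who weights only the first 5 of 20 stimulus frames.
rng = np.random.default_rng(0)
fluct = rng.normal(0.0, 5.0, size=(2000, 20))
errors = fluct[:, :5].mean(axis=1) + rng.normal(0.0, 1.0, size=2000)
print(np.round(temporal_kernel(fluct, errors), 2))  # high for frames 0-4, ~0 after
```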


2021 ◽  
Author(s):  
Scott T. Steinmetz ◽  
Oliver W. Layton ◽  
Nate V. Powell ◽  
Brett Fajen

This paper introduces a self-tuning mechanism for capturing rapid adaptation by a population of neurons to changing visual stimuli. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that the model with dynamic tuning produced more accurate, shorter-latency heading estimates than the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
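
The abstract does not give the update rule, but one common way to realize efficient sensory encoding of a changing stimulus distribution is to keep a sliding window of recently observed stimulus values and re-place the units' preferred values at evenly spaced quantiles of that window, so that coding resources follow wherever the stimuli currently concentrate. A hedged sketch under that assumption (class and parameter names are illustrative, not the model's actual implementation):

```python
from collections import deque
import numpy as np

class DynamicSpeedPopulation:
    """Speed-tuned units whose preferred speeds track recent stimulus statistics."""

    def __init__(self, n_units=8, window_size=200, sigma=0.15):
        self.window = deque(maxlen=window_size)        # recent log10 speeds
        self.n_units = n_units
        self.sigma = sigma                             # tuning width (log units)
        self.centers = np.linspace(0.0, 2.0, n_units)  # initial log10 preferred speeds

    def observe(self, speed_deg_s):
        """Record a detected optic-flow speed and retune the population."""
        self.window.append(np.log10(max(speed_deg_s, 1e-3)))
        if len(self.window) >= self.n_units:
            # Efficient-coding heuristic: centers at quantiles of recent speeds.
            qs = np.linspace(0.05, 0.95, self.n_units)
            self.centers = np.quantile(np.asarray(self.window), qs)

    def respond(self, speed_deg_s):
        """Gaussian tuning curves evaluated at the given speed."""
        x = np.log10(max(speed_deg_s, 1e-3))
        return np.exp(-0.5 * ((x - self.centers) / self.sigma) ** 2)

# As the distribution of flow speeds shifts, the preferred speeds follow it.
pop = DynamicSpeedPopulation()
for s in np.random.default_rng(1).uniform(1, 10, 300):   # slow-speed epoch
    pop.observe(s)
print(np.round(10 ** pop.centers, 1))                     # centers span roughly 1-10 deg/s
for s in np.random.default_rng(2).uniform(20, 80, 300):   # fast-speed epoch
    pop.observe(s)
print(np.round(10 ** pop.centers, 1))                     # centers shift to roughly 20-80 deg/s
```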


2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B's attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked: if we contaminate people's visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people's judgments about that agent's attention? Participants viewed a display including faces, objects, and a subthreshold motion signal hidden in the background. Participants' judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to that object. Control experiments showed that the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs that mental processes have a physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


Author(s):  
Daniela Perani ◽  
Paola Scifo ◽  
Guido M. Cicchini ◽  
Pasquale Della Rosa ◽  
Chiara Banfi ◽  
...  

Abstract Motion perception deficits in dyslexia show large intersubject variability, partly reflecting genetic factors that influence the development of brain architecture. In previous work, we demonstrated that dyslexic carriers of a mutation of the DCDC2 gene have a very strong impairment in motion perception. In the present study, we investigated structural white matter alterations associated with poor motion perception in a cohort of twenty dyslexics comprising a subgroup carrying the DCDC2 gene deletion (DCDC2d+) and a subgroup without the risk variant (DCDC2d–). We observed significant deficits in motion contrast sensitivity and in motion direction discrimination accuracy at high contrast, stronger in the DCDC2d+ group. Both motion perception impairments correlated significantly with fractional anisotropy (FA) in posterior ventral and dorsal tracts, including early visual pathways both along the optic radiation and in proximity to the occipital cortex, MT, and the VWFA. However, the DCDC2d+ group showed stronger correlations between FA and the motion perception impairments than the DCDC2d– group in early visual white matter bundles, including the optic radiations, and in ventral pathways located in the left inferior temporal cortex. Our results suggest that the DCDC2d+ group is more vulnerable in visual motion processing even at early stages of visual analysis, which might represent a specific feature associated with the genotype and provide further neurobiological support for the visual-motion-deficit account of dyslexia in a specific subpopulation.


2008 ◽  
Vol 276 (1655) ◽  
pp. 263-268 ◽  
Author(s):  
William Curran ◽  
Colin W. G. Clifford ◽ 
Christopher P Benton

It is well known that context influences our perception of visual motion direction. For example, spatial and temporal context manipulations can be used to induce two well-known motion illusions: direction repulsion and the direction after-effect (DAE). Both result in inaccurate perception of direction when a moving pattern is either superimposed on (direction repulsion), or presented following adaptation to (DAE), another pattern moving in a different direction. Remarkable similarities in tuning characteristics suggest that common processes underlie the two illusions. What is not clear, however, is whether the processes driving the two illusions are expressions of the same or different neural substrates. Here we report two experiments demonstrating that direction repulsion and the DAE are, in fact, expressions of different neural substrates. Our strategy was to use each of the illusions to create a distorted perceptual representation upon which the mechanisms generating the other illusion could potentially operate. We found that the processes mediating direction repulsion did indeed access the distorted perceptual representation induced by the DAE. Conversely, the DAE was unaffected by direction repulsion. Thus parallels in perceptual phenomenology do not necessarily imply common neural substrates. Our results also demonstrate that the neural processes driving the DAE occur at an earlier stage of motion processing than those underlying direction repulsion.


2008 ◽  
Vol 25 (1) ◽  
pp. 17-26 ◽  
Author(s):  
A. Antal ◽ 
J. Baudewig ◽ 
W. Paulus ◽ 
P. Dechent

The posterior cingulate cortex (PCC) is involved in higher-order sensory and sensory-motor integration, while the planum temporale/parietal operculum (PT/PO) junction takes part in auditory motion and vestibular processing. Both regions are activated during different types of visual stimulation. Here, we describe the response characteristics of the PCC and PT/PO to basic types of visual motion stimuli of different complexity (complex and simple coherent motion as well as incoherent motion). Functional magnetic resonance imaging (fMRI) was performed in 10 healthy subjects at 3 Tesla, whereby different moving dot stimuli (vertical, horizontal, rotational, radial, and random) were contrasted against a static dot pattern. All motion stimuli activated a distributed cortical network, including previously described motion-sensitive striate and extrastriate visual areas. Coherent motion stimuli evoked bilateral activations in the dorsal region of the PCC (dPCC), irrespective of motion direction (vertical, horizontal, rotational, radial), with activity increasing with the complexity of the stimulus. In contrast, the PT/PO responded equally well to all of the different coherent motion types. Incoherent (random) motion yielded significantly less activation both in the dPCC and in the PT/PO area. These results suggest that the dPCC and the PT/PO take part in the processing of basic types of visual motion. However, in the dPCC, a possible effect of attentional modulation, which could account for the higher activity evoked by the complex stimuli, should also be considered. Further studies are warranted to incorporate these regions into the current model of the cortical motion-processing network.


2021 ◽  
Author(s):  
Ana Gómez-Granados ◽  
Isaac Kurtzer ◽  
Tarkeshwar Singh

Abstract An important window into sensorimotor function is how we catch moving objects. Studies that examined catching of free-falling objects report that the timing of the motor response is independent of the momentum of the projectile, whereas the amplitude of the motor response scales with projectile momentum. However, this pattern may not reflect a general catching strategy, since objects falling under gravity accelerate in a characteristic manner (unlike objects moving in the horizontal plane) and the human visual motion-processing system is not adept at encoding acceleration. Accordingly, we developed a new experimental paradigm using a robotic manipulandum and augmented reality in which participants stabilized against the impact of a virtual object moving at constant velocity in the horizontal plane. Participants needed to apply an impulse that mirrored the object's momentum to bring it to rest, and they received explicit feedback on their performance. In different blocks, object momentum was varied by increasing either its speed or its mass. In contrast to previous reports on free-falling objects, we observed that increasing object speed caused earlier onset of arm muscle activity and limb force relative to the impending time to contact. Arm force also increased as a function of target momentum, whether the momentum was changed through speed or through mass. Our results demonstrate velocity-dependent timing when catching objects and a complex pattern of scaling to momentum.
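
The mechanics invoked here are simple to state: an object of mass m moving at speed v carries momentum p = m·v, and bringing it to rest requires an equal and opposite impulse, i.e. the time integral of the applied force must equal m·v. A small worked example with illustrative values (not the study's actual parameters):

```python
import numpy as np

def required_impulse(mass_kg, speed_m_s):
    """Impulse (N*s) needed to bring an object of given momentum to rest."""
    return mass_kg * speed_m_s

def impulse_delivered(force_trace_n, dt_s):
    """Integrate a sampled force trace (trapezoid rule) to get the delivered impulse."""
    f = np.asarray(force_trace_n, dtype=float)
    return float(np.sum((f[:-1] + f[1:]) * 0.5 * dt_s))

# A 2 kg virtual object moving at 0.5 m/s carries 1.0 N*s of momentum.
target = required_impulse(2.0, 0.5)

# A hypothetical 200 ms triangular force pulse peaking at 10 N delivers
# 0.5 * 0.2 s * 10 N = 1.0 N*s, exactly matching the object's momentum.
t = np.linspace(0.0, 0.2, 201)
force = 10.0 * (1.0 - np.abs(t - 0.1) / 0.1)
print(target, round(impulse_delivered(force, t[1] - t[0]), 3))  # 1.0 1.0
```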


