Visual input into the Drosophila melanogaster mushroom body

Author(s):  
Jinzhi Li ◽  
Brennan Dale Mahoney ◽  
Miles Solomon Jacob ◽  
Sophie Jeanne Cécile Caron

ABSTRACT
The ability to integrate input from different sensory systems is a fundamental property of many brains. Yet, the patterns of neuronal connectivity that underlie such multisensory integration remain poorly characterized. The Drosophila melanogaster mushroom body — an associative center required for the formation of olfactory and visual memories — is an ideal system to investigate how different sensory channels converge in higher-order brain centers. The neurons connecting the mushroom body to the olfactory system have been described in great detail, but input from other sensory systems remains poorly defined. Here, we use a range of anatomical and genetic techniques to identify two novel types of mushroom body input neuron that connect visual processing centers — namely the lobula and the posterior lateral protocerebrum — to the dorsal accessory calyx of the mushroom body. Together with previous work that described a pathway conveying visual information from the medulla to the ventral accessory calyx of the mushroom body (Vogt et al., 2016), our study defines a second, parallel pathway that is anatomically poised to convey information from the visual system to the dorsal accessory calyx. This connectivity pattern — the segregation of the visual information into two separate pathways — could be a fundamental feature of the neuronal architecture underlying multisensory integration in associative brain centers.

2019 ◽  
Author(s):  
Maureen M Sampson ◽  
Katherine M Myers Gschweng ◽  
Ben J Hardcastle ◽  
Shivan L Bonanno ◽  
Tyler R Sizemore ◽  
...  

Abstract
Sensory systems rely on neuromodulators, such as serotonin, to provide flexibility for information processing in the face of a highly variable stimulus space. Serotonergic neurons broadly innervate the optic ganglia of Drosophila melanogaster, a widely used model for studying vision. The role of serotonergic signaling in the Drosophila optic lobe and the mechanisms by which serotonin regulates visual neurons remain unclear. Here we map the expression patterns of serotonin receptors in the visual system, focusing on a subset of cells with processes in the first optic ganglion, the lamina, and show that serotonin can modulate visual responses. Serotonin receptors are expressed in several types of columnar cells in the lamina, including 5-HT2B in lamina monopolar cell L2, required for the initial steps of visual processing, and both 5-HT1A and 5-HT1B in T1 cells, whose function is unknown. Subcellular mapping with GFP-tagged 5-HT2B and 5-HT1A constructs indicates that these receptors localize to layer M2 of the medulla, proximal to serotonergic boutons, suggesting that the medulla is the primary site of serotonergic regulation for these neurons. Serotonin increases intracellular calcium in L2 terminals in layer M2 and alters the kinetics of visually induced calcium transients in L2 neurons following dark flashes. These effects were not observed in flies without a functional 5-HT2B, which displayed severe differences in the amplitude and kinetics of their calcium response to both dark and light flashes. While we did not detect serotonin receptor expression in L1 neurons, they also undergo serotonin-induced calcium changes, presumably via cell non-autonomous signaling pathways.
We provide the first functional data showing a role for serotonergic neuromodulation of neurons required for initiating visual processing in Drosophila and establish a new platform for investigating the serotonergic neuromodulation of sensory networks.

Author Summary
Serotonergic neurons innervate the Drosophila melanogaster eye, but the function of serotonergic signaling is not known. We found that serotonin receptors are expressed in all neuropils of the optic lobe and identify specific neurons involved in visual information processing that express serotonin receptors. We then demonstrate that activation of these receptors can alter how visual information is processed. These are the first data suggesting a functional role for serotonergic signaling in Drosophila vision. This study contributes to the understanding of serotonin biology and modulation of sensory circuits.


2002 ◽  
Vol 17 (4) ◽  
pp. 194-199 ◽  
Author(s):  
I. Viaud-Delmon ◽  
A. Berthoz ◽  
R. Jouvent

Summary
Studies suggest a greater reliance on visual information for maintaining balance in anxious subjects. Nevertheless, links between this supposed preferred visual processing and spatial orientation have not yet been evaluated. Two groups of subjects differing in their level of trait anxiety were formed. Equipped with a head-mounted visual display, they learned a virtual corridor using passive translation but active rotation, both with normal sensory input and with two different conflicting sensory conditions. After two visual navigation trials in the corridor, they were blindfolded and asked to reproduce the same trajectory from memory. In addition, subjects drew a map of the remembered corridor. Anxious subjects were comparable to non-anxious subjects when asked to reproduce the trajectory from memory, but exhibited a deficit when drawing a map of the corridor they were in. The results do not support the hypothesis that anxious subjects preferentially use one type of sensory cue over another for spatial orientation, but instead suggest difficulties in constructing more global representations of space.


2021 ◽  
Author(s):  
Tatsuya Hayashi ◽  
Alexander John MacKenzie ◽  
Ishani Ganguly ◽  
Hayley Smihula ◽  
Miles Solomon Jacob ◽  
...  

Associative brain centers, such as the insect mushroom body, need to represent sensory information in an efficient manner. In Drosophila melanogaster, the Kenyon cells of the mushroom body integrate inputs from a random set of olfactory projection neurons, but some projection neurons, namely those activated by a few ethologically meaningful odors, connect to Kenyon cells more frequently than others. This biased and random connectivity pattern is conceivably advantageous, as it enables the mushroom body to represent a large number of odors as unique activity patterns while prioritizing the representation of a few specific odors. How this connectivity pattern is established remains largely unknown. Here, we test whether the mechanisms patterning the connections between Kenyon cells and projection neurons depend on sensory activity or whether they are hardwired. We mapped a large number of mushroom body input connections in anosmic flies (flies lacking the obligate odorant co-receptor Orco) and in wildtype flies. Statistical analyses of these datasets reveal that the random and biased connectivity pattern observed between Kenyon cells and projection neurons forms normally in the absence of most olfactory sensory activity. This finding supports the idea that even comparatively subtle, population-level patterns of neuronal connectivity can be encoded by fixed genetic programs and are likely to be the result of evolved prioritization of ecologically and ethologically salient stimuli.
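The abstract does not specify the statistical test used to detect biased connectivity. As a rough illustration only, a Monte Carlo goodness-of-fit test against a uniform random-sampling null could be sketched as follows (the function name and all counts are hypothetical, not the study's actual analysis):

```python
import numpy as np

def connection_bias_p_value(observed_counts, n_sims=10000, seed=0):
    """Monte Carlo test of whether projection-neuron (PN) connection counts
    deviate from uniform random sampling by Kenyon cells.
    observed_counts[i] = number of Kenyon-cell inputs traced to PN type i."""
    obs = np.asarray(observed_counts, dtype=float)
    n_connections = obs.sum()
    n_types = len(obs)
    expected = n_connections / n_types
    # Chi-square-style statistic for the observed counts
    stat_obs = ((obs - expected) ** 2 / expected).sum()
    # Simulate the uniform-sampling null model
    rng = np.random.default_rng(seed)
    sims = rng.multinomial(int(n_connections),
                           np.full(n_types, 1.0 / n_types), size=n_sims)
    stat_sim = ((sims - expected) ** 2 / expected).sum(axis=1)
    # Fraction of null simulations at least as extreme as the data
    return (stat_sim >= stat_obs).mean()
```

A perfectly uniform set of counts yields a p-value of 1.0, while strongly skewed counts yield a p-value near 0, indicating biased connectivity under this toy null model.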


1983 ◽  
Vol 27 (5) ◽  
pp. 354-354 ◽
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Since the visual process of conjoining separable features is additive, this fact is reflected in search time as a function of array size, with feature conditions yielding flat curves associated with parallel search (no increase in search time across array sizes) and conjunction conditions yielding linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions.
However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects we have found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
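As a rough illustration of how information in bits can be computed from single-neuron responses, the sketch below applies a simple plug-in mutual-information estimator to binned spike counts. This is not the study's actual estimator (which would include bias corrections for limited trial numbers); the function name, binning scheme, and data are all hypothetical:

```python
import numpy as np

def stimulus_information(counts_by_stimulus, n_bins=4):
    """Plug-in estimate of the mutual information (bits) between stimulus
    identity and a neuron's spike count, after discretizing counts into
    quantile bins shared across stimuli."""
    all_counts = np.concatenate(counts_by_stimulus)
    edges = np.quantile(all_counts, np.linspace(0, 1, n_bins + 1))
    edges[-1] += 1e-9  # make the last bin include the maximum count
    n_stim = len(counts_by_stimulus)
    joint = np.zeros((n_stim, n_bins))
    for s, counts in enumerate(counts_by_stimulus):
        binned = np.clip(np.searchsorted(edges, counts, side="right") - 1,
                         0, n_bins - 1)
        for b in binned:
            joint[s, b] += 1
    joint /= joint.sum()  # joint probability of (stimulus, response bin)
    p_s = joint.sum(axis=1, keepdims=True)
    p_r = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_s * p_r))
    return float(np.nansum(terms))
```

With two stimuli that evoke perfectly separable spike counts, this estimator returns 1 bit, the maximum for a binary stimulus set; identical count distributions return 0 bits, mirroring how masking degrades the selective part of the response.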


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Abstract
Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner, with low trial numbers and signal detection theoretic constraints not allowing conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involving prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing/representation in the absence of perceptual sensitivity by using feedforward convolutional neural networks trained to perform a similar visual task to the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 58 ◽  
Author(s):  
J Daniel McCarthy ◽  
Colin Kupitz ◽  
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information including retinal size, perceived distance and its size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions such as the Delboeuf Illusion, the Ebbinghaus Illusion and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. The illusion is such that the perceived size of a circular array of elements is underestimated when superimposed by a circular contour – a binding ring – and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to the assimilation of an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises due to the fact that the size of an array of elements is not explicitly defined and therefore can be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.


2020 ◽  
Author(s):  
Han Zhang ◽  
Nicola C Anderson ◽  
Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously-inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations. Moreover, this increased repetitiveness contributed to fixations focusing on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations; eye blinking increased numerically during both types of MW, but the difference was only significant for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
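The recurrence rate reported in such RQA analyses can be illustrated with a minimal sketch, assuming fixations are (x, y) pixel coordinates and two fixations "recur" when they fall within a fixed radius of each other (the radius value and function name here are hypothetical, not the study's parameters):

```python
import numpy as np

def recurrence_rate(fixations, radius=64):
    """Percentage of fixation pairs that land within `radius` pixels of
    each other (i.e., refixations), following the recurrence-rate idea
    used in eye-movement RQA."""
    fix = np.asarray(fixations, dtype=float)
    n = len(fix)
    if n < 2:
        return 0.0
    # All pairwise Euclidean distances between fixations
    diff = fix[:, None, :] - fix[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Count only pairs with i < j (upper triangle, excluding the diagonal)
    iu = np.triu_indices(n, k=1)
    recurrent = int((dist[iu] <= radius).sum())
    return 100.0 * 2.0 * recurrent / (n * (n - 1))
```

A scanpath that repeatedly revisits the same regions yields a high recurrence rate, matching the abstract's finding of more repetitive scanpaths during unintentional mind-wandering.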


Author(s):  
Mohammad S.E Sendi ◽  
Godfrey D Pearlson ◽  
Daniel H Mathalon ◽  
Judith M Ford ◽  
Adrian Preda ◽  
...  

Although visual processing impairments have been explored in schizophrenia (SZ), the underlying neurobiology of these impairments has not been widely studied. Also, while some research has hinted at differences in information transfer and flow in SZ, there are few investigations of the dynamics of functional connectivity within visual networks. In this study, we analyzed resting-state fMRI data of the visual sensory network (VSN) in 160 healthy control (HC) subjects and 151 SZ subjects. We estimated 9 independent components within the VSN. Then, we calculated the dynamic functional network connectivity (dFNC) using the Pearson correlation. Next, using k-means clustering, we partitioned the dFNCs into five distinct states and calculated the portion of time each subject spent in each state, which we termed the occupancy rate (OCR). Using OCR, we compared HC with SZ subjects and investigated the link between OCR and visual learning in SZ subjects. In addition, we compared the VSN functional connectivity of SZ and HC subjects in each state. We found that this network is indeed highly dynamic. Each state represents a unique connectivity pattern of fluctuations in VSN FNC, and all states showed significant disruption in SZ. Overall, HC showed stronger connectivity within the VSN across states. SZ subjects spent more time in a state in which the connectivity between the middle temporal gyrus and other regions of the VSN is highly negative. Moreover, OCR in a state with strong positive connectivity between the middle temporal gyrus and other regions correlated significantly with visual learning scores in SZ.
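The OCR computation described above can be sketched minimally: assuming each sliding dFNC window has already been assigned a k-means state label, the occupancy rate is just the fraction of windows spent in each state (function name and data are hypothetical):

```python
import numpy as np

def occupancy_rates(state_sequence, n_states=5):
    """Fraction of time windows a subject spends in each dFNC state,
    given the k-means state label assigned to each window."""
    labels = np.asarray(state_sequence)
    return np.array([(labels == s).mean() for s in range(n_states)])
```

For a subject whose window labels are, say, [0, 0, 1, 2, 2], the occupancy rates over three states would be 0.4, 0.2, and 0.4, and the rates always sum to 1 when every window falls in a valid state.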


Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

ABSTRACT
Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, there is much less known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training, and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack foveate vision that would entail fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.

