Multimodal integration and stimulus categorization in putative mushroom body output neurons of the honeybee

2018, Vol 5 (2), pp. 171785
Author(s): Martin F. Strube-Bloss, Wolfgang Rössler

Flowers attract pollinating insects like honeybees by sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBON). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, as well as olfactory–visual induced activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, representation of the olfactory–visual compound was significantly increased compared with that of single components, suggesting an additive, but nonlinear integration. Population analyses of olfactory–visual MBONs revealed three categories: (i) olfactory, (ii) visual and (iii) olfactory–visual compound stimuli. Interestingly, no significant differentiation was apparent regarding different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.
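
Since no code accompanies the abstract, the following is a minimal Python sketch of the population-level categorization described here: trial-wise MBON response vectors are assigned to olfactory, visual or compound categories by nearest-centroid distance. All dimensions, firing rates and the classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Toy sketch: categorize simulated MBON population responses by stimulus class.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n_neurons, n_trials = 20, 30

def simulate(mean_rates):
    """Simulated trials: population rate vector with Poisson-like noise."""
    return rng.poisson(mean_rates, size=(n_trials, n_neurons))

base = rng.uniform(2, 10, n_neurons)
responses = {
    "olfactory": simulate(base + rng.uniform(0, 5, n_neurons)),
    "visual": simulate(base + rng.uniform(0, 5, n_neurons)),
    "compound": simulate(base + rng.uniform(3, 8, n_neurons)),  # enhanced response
}

# Nearest-centroid categorization (centroids include the held trials here;
# a real analysis would cross-validate)
centroids = {k: v.mean(axis=0) for k, v in responses.items()}
names = np.array(list(centroids))
for label, trials in responses.items():
    d = cdist(trials, np.vstack(list(centroids.values())))
    predicted = names[d.argmin(axis=1)]
    print(label, "->", (predicted == label).mean())
```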

Author(s): Jürgen Rybak, Randolf Menzel

The mushroom body (MB) in the insect brain is composed of a large number of densely packed neurons called Kenyon cells (KCs) (Drosophila, 2200; honeybee, 170,000). In most insect species, the MB consists of two cap-like dorsal structures, the calyces, which contain the dendrites of KCs, and two to four lobes formed by the collaterals of branching KC axons. Although the MB receives input and provides output throughout its whole structure, the neuropil of the calyx receives predominantly multimodal input from sensory projection neurons (PNs) of second or higher order, while the lobes give rise to output neurons projecting to many other parts of the brain, including recurrent neurons that feed back to the MB calyx. Widely branching, presumably modulatory neurons (serotonergic, octopaminergic) innervate the MB at all levels (calyx, peduncle, and lobes); dopaminergic innervation additionally reaches the somata of KCs in the calyx.


2021, pp. 1-21
Author(s): Xinyue Wang, Clemens Wöllner, Zhuanghua Shi

Compared to vision, audition has been considered the dominant sensory modality for temporal processing. Nevertheless, recent research suggests the opposite: the apparent inferiority of visual information in tempo judgements might be due to the lack of ecological validity of experimental stimuli, and reliable visual movements may have the potential to alter the temporal location of perceived auditory inputs. To explore the roles of audition and vision in overall time perception, audiovisual stimuli with various degrees of temporal congruence were developed in the current study. We investigated which sensory modality weighs more in holistic tempo judgements with conflicting audiovisual information, and whether biological motion (point-light displays of dancers) rather than auditory cues (rhythmic beats) dominates judgements of tempo. A bisection experiment found that participants relied more on visual tempo than on auditory tempo in overall tempo judgements. For fast tempi (150 to 180 BPM), participants judged ‘fast’ significantly more often with visual cues regardless of the auditory tempo, whereas for slow tempi (60 to 90 BPM), they did so significantly less often. Our results support the notion that visual stimuli with higher ecological validity have the potential to drive the holistic perception of tempo up or down.
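
As a hedged illustration of how such modality weights can be quantified, here is a short Python sketch: binary ‘fast’ judgements from a simulated observer are regressed on the auditory and visual tempi, and the fitted coefficients serve as relative weights. The observer model and all numbers are assumptions for illustration, not the study's data or analysis.

```python
# Toy sketch: estimate modality weights from bisection-style judgements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
aud = rng.uniform(60, 180, n)   # auditory tempo (BPM)
vis = rng.uniform(60, 180, n)   # visual tempo (BPM)

# Simulated observer that weights vision more heavily (w_v > w_a)
w_a, w_v, mid = 0.02, 0.05, 120.0
p_fast = 1 / (1 + np.exp(-(w_a * (aud - mid) + w_v * (vis - mid))))
judged_fast = rng.random(n) < p_fast

model = LogisticRegression().fit(np.column_stack([aud, vis]), judged_fast)
beta_aud, beta_vis = model.coef_[0]
print(f"relative visual weight: {beta_vis / (beta_aud + beta_vis):.2f}")
```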


Author(s): Jose Adrian Vega Vermehren, Cornelia Buehlmann, Ana Sofia David Fernandes, Paul Graham

Ants are excellent navigators, taking into account multimodal sensory information as they move through the world. To localise the nest accurately at the end of a foraging journey, visual cues, wind direction and olfactory cues all need to be learnt. Learning walks are performed at the start of an ant’s foraging career or when the appearance of the nest surroundings has changed. We investigated whether the structure of such learning walks in the desert ant Cataglyphis fortis takes wind direction into account in conjunction with the learning of new visual information. Ants learnt to travel back and forth between their nest and a feeder, and we then introduced a black cylinder near their nest to induce learning walks in regular foragers. By doing this across days with different prevailing wind directions, we were able to probe how ants balance the influence of different sensory modalities. We found that (i) the ants’ outward headings are influenced by the direction of the wind, with their routes deflected such that they will arrive downwind of their nest when homing, (ii) a novel object along the route induces learning walks in experienced ants, and (iii) the structure of learning walks is shaped by the wind direction rather than the position of the visual cue.
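
A worked sketch of the kind of circular statistic such heading analyses rest on, with fabricated headings and wind direction rather than the study’s dataset: rotate each heading into a wind-centred frame and take the circular mean.

```python
# Toy sketch: mean learning-walk heading relative to the prevailing wind.
import numpy as np

def circular_mean(angles_rad):
    """Mean direction of a set of angles (radians)."""
    return np.arctan2(np.mean(np.sin(angles_rad)), np.mean(np.cos(angles_rad)))

# Hypothetical headings (deg) recorded on a day with a given wind direction
headings = np.radians([100, 115, 95, 130, 110, 105])
wind_dir = np.radians(90)  # prevailing wind on that day

relative = headings - wind_dir  # wind-centred frame
offset = np.degrees(circular_mean(relative))
print(f"mean heading relative to wind: {offset:.1f} deg")
```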


2019, Vol 16 (154), pp. 20180903
Author(s): Edward D. Lee, Edward Esposito, Itai Cohen

Swing in a crew boat, a good jazz riff, a fluid conversation: these tasks require extracting sensory information about how others flow in order to mimic and respond. To determine what factors influence coordination, we build an environment to manipulate incoming sensory information by combining virtual reality and motion capture. We study how people mirror the motion of a human avatar’s arm as we occlude the avatar. We efficiently map the transition from successful mirroring to failure using Gaussian process regression. Then, we determine the change in behaviour when we introduce audio cues with a frequency proportional to the speed of the avatar’s hand or train individuals with a practice session. Remarkably, audio cues extend the range of successful mirroring to regimes where visual information is sparse. Such cues could facilitate joint coordination when navigating visually occluded environments, improve reaction speed in human–computer interfaces or measure altered physiological states and disease.
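
The abstract includes no code, but the mapping step can be sketched as follows, assuming hypothetical occlusion/speed coordinates and a simulated success model; scikit-learn’s GaussianProcessRegressor stands in for whatever implementation the authors used.

```python
# Toy sketch: Gaussian-process map of mirroring success over stimulus parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform([0.0, 0.1], [1.0, 2.0], size=(40, 2))  # (occlusion, speed) samples

# Assumed ground truth: success falls off as occlusion and speed rise
y = 1 / (1 + np.exp(8 * (X[:, 0] + 0.3 * X[:, 1] - 0.9))) + rng.normal(0, 0.05, 40)

gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.5]) + WhiteKernel(0.01))
gp.fit(X, y)

# Posterior mean on a grid locates the success/failure transition
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0.1, 2, 50)), -1)
mean = gp.predict(grid.reshape(-1, 2)).reshape(50, 50)
print("fraction of grid above 50% success:", (mean > 0.5).mean())
```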


2014, Vol 27 (3-4), pp. 247-262
Author(s): Emiliano Ricciardi, Leonardo Tozzi, Andrea Leo, Pietro Pietrini

Cross-modal responses in occipital areas appear to be essential for sensory processing in visually deprived subjects. However, it is still unclear whether this functional recruitment depends on the sensory channel conveying the information. In order to characterize brain areas showing task-independent but sensory-specific cross-modal responses in blind individuals, we pooled distinct brain functional studies into a single meta-analysis, grouping them only by the modality conveying the experimental stimuli (auditory or tactile). Our approach revealed a specific functional cortical segregation according to the sensory modality conveying the non-visual information, irrespective of the cognitive features of the tasks. In particular, dorsal and posterior subregions of the occipital and superior parietal cortex showed higher cross-modal recruitment across tactile tasks in blind as compared to sighted individuals. On the other hand, auditory stimuli activated more medial and ventral clusters within early visual areas, the lingual and inferior temporal cortex. These findings suggest a modality-specific functional modification of cross-modal responses within different portions of the occipital cortex of blind individuals. Cross-modal recruitment can thus be specifically influenced by the intrinsic features of sensory information.
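
To make the pooling logic concrete, here is a toy Python sketch: activation foci are grouped only by stimulus modality, smoothed into density maps, and contrasted at a probe location. The coordinates are fabricated, and real coordinate-based meta-analyses use dedicated tools (e.g. ALE) rather than this toy kernel density estimate.

```python
# Toy sketch: contrast modality-grouped activation-focus densities.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Fabricated MNI-like foci (x, y, z) pooled across studies, by modality
tactile_foci = rng.normal([20, -80, 40], 8, size=(30, 3))   # dorsal/posterior
auditory_foci = rng.normal([10, -75, -5], 8, size=(30, 3))  # ventral/medial

kde_t = gaussian_kde(tactile_foci.T)
kde_a = gaussian_kde(auditory_foci.T)

probe = np.array([[20.0, -80.0, 40.0]]).T  # dorsal occipito-parietal point
print("tactile vs auditory density at dorsal probe:",
      kde_t(probe)[0], kde_a(probe)[0])
```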


2012, Vol 21 (3-4), pp. 295-304
Author(s): Maria Korman, Kinneret Teodorescu, Adi Cohen, Miriam Reiner, Daniel Gopher

The stiffness properties of an environment are perceived during active manual manipulation primarily by processing force cues and position-based tactile, kinesthetic, and visual information. Using a two-alternative forced choice (2AFC) stiffness discrimination task, we tested how the perceiver integrates stiffness-related information based on sensory feedback from one or two modalities, and examined the origins of within-session shifts in stiffness discrimination ability. Two factors were investigated: practice and the amount of available sensory information. Subjects discriminated between the stiffness of two targets that were presented either haptically or visuohaptically in two subsequent blocks. Our results show that prior experience in a unisensory haptic stiffness discrimination block greatly improved performance when visual feedback was subsequently provided along with haptic feedback. This improvement could not be attributed to effects induced by practice or multisensory stimulus presentation. Our findings suggest that optimal-integration theories of multisensory perception need to account for past sensory experience, which may affect current perception of the task even within a single session.
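
For readers unfamiliar with 2AFC analysis, this is a minimal sketch of the standard approach, with simulated trials and an assumed 100 N/m reference: fit a cumulative-Gaussian psychometric function and take its spread parameter as a discrimination threshold (JND).

```python
# Toy sketch: psychometric fit for a 2AFC stiffness discrimination task.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(3)
standard = 100.0                       # reference stiffness (N/m), assumed
comparisons = np.linspace(80, 120, 9)  # comparison stiffnesses
n_trials = 40

# Simulated observer with a true discrimination noise of 8 N/m
true_sigma = 8.0
p_stiffer = norm.cdf((comparisons - standard) / true_sigma)
chose_stiffer = rng.binomial(n_trials, p_stiffer) / n_trials

def psychometric(x, mu, sigma):
    return norm.cdf((x - mu) / sigma)

(mu, sigma), _ = curve_fit(psychometric, comparisons, chose_stiffer, p0=[100, 5])
print(f"JND ~ {sigma:.1f} N/m, Weber fraction ~ {sigma / standard:.2f}")
```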


eLife, 2021, Vol 10
Author(s): Claire Eschbach, Akira Fushiki, Michael Winding, Bruno Afonso, Ingrid V Andrade, ...

Animal behavior is shaped both by evolution and by individual experience. Parallel brain pathways encode innate and learned valences of cues, but the way in which they are integrated during action selection is not well understood. We used electron microscopy to comprehensively map, with synaptic resolution, all neurons downstream of all Mushroom Body output neurons (encoding learned valences) and characterized their patterns of interaction with Lateral Horn neurons (encoding innate valences) in the Drosophila larva. The connectome revealed multiple convergence neuron types that receive input from both the Mushroom Body and the Lateral Horn. A subset of these receives excitatory input from positive-valence MB and LH pathways and inhibitory input from negative-valence MB pathways. We confirmed functional connectivity from LH and MB pathways and behavioral roles for two of these neurons. These neurons encode integrated odor value and bidirectionally regulate turning. Based on this, we speculate that learning could skew the balance of excitation and inhibition onto these neurons and thereby modulate turning. Together, our study provides insights into the circuits that integrate learned and innate valences to modify behavior.
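
The screening step can be illustrated with a small sketch: given a binary connectivity matrix and fabricated neuron labels, flag neurons that receive input from both MBON and LH populations. This is an assumption-laden toy, not the authors’ reconstruction pipeline.

```python
# Toy sketch: find convergence neurons in a fabricated connectivity matrix.
import numpy as np

rng = np.random.default_rng(4)
n = 12
labels = ["MBON+"] * 3 + ["MBON-"] * 3 + ["LHN"] * 3 + ["other"] * 3
W = rng.random((n, n)) < 0.3          # W[i, j]: synapse from neuron i onto j

mbon = np.array([l.startswith("MBON") for l in labels])
lhn = np.array([l == "LHN" for l in labels])

gets_mbon = W[mbon].any(axis=0)       # postsynaptic to any MBON
gets_lhn = W[lhn].any(axis=0)         # postsynaptic to any LH neuron
convergence = np.where(gets_mbon & gets_lhn)[0]
print("convergence neuron indices:", convergence)
```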


2020
Author(s): Nicola Meda, Giulio M. Menti, Aram Megighian, Mauro A. Zordan

Animals rely on multiple sensory information systems to make decisions. The integration of information stemming from these systems is believed to result in a precise behavioural output. To what degree a single sensory system may override the others is unknown, and evidence for a hierarchical use of different systems to guide navigation is lacking. We used Drosophila melanogaster to investigate whether, in order to relieve an unpleasant stimulation, fruit flies employ an idiothetically based local search strategy before making use of visual information, or vice versa. Fruit flies appear to resort initially to idiothetic information and only later, if the first strategy proves unsuccessful in relieving the unpleasant stimulation, make use of other information, such as visual cues. By leveraging this innate preference for a hierarchical use of one strategy over another, we believe that in vivo recordings of brain activity during navigation in fruit flies could provide mechanistic insights into how simultaneous information from multiple sensory modalities is evaluated and integrated and how motor responses are elicited, thus shedding new light on the neural basis of decision-making.
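
A toy agent-based sketch of the proposed hierarchy, in which all parameters and dynamics are invented for illustration: the agent first performs an idiothetic local search around its starting point and falls back on beacon-style visual guidance only if that search fails.

```python
# Toy sketch: idiothetic-first, vision-second navigation hierarchy.
import numpy as np

rng = np.random.default_rng(5)

def relieved_at(pos, safe_spot, tol=0.5):
    return np.linalg.norm(pos - safe_spot) < tol

def navigate(safe_spot, visual_cue, max_local_steps=20):
    pos = np.zeros(2)
    # Stage 1: idiothetic local search (loops that stay near the start)
    for _ in range(max_local_steps):
        pos = pos * 0.5 + rng.normal(0, 1.0, 2)
        if relieved_at(pos, safe_spot):
            return "idiothetic"
    # Stage 2: fall back on the visual cue (beacon approach)
    while not relieved_at(pos, safe_spot):
        pos += 0.2 * (visual_cue - pos)
    return "visual"

# The safe spot is far from the start, so the fallback is usually needed
print(navigate(safe_spot=np.array([4.0, 0.0]), visual_cue=np.array([4.0, 0.0])))
```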


2017
Author(s): Katharina Eichler, Feng Li, Ashok Litwin-Kumar, Youngser Park, Ingrid Andrade, ...

Associating stimuli with positive or negative reinforcement is essential for survival, but a complete wiring diagram of a higher-order circuit supporting associative memory has not been previously available. We reconstructed one such circuit at synaptic resolution, the Drosophila larval mushroom body, and found that most Kenyon cells integrate random combinations of inputs but a subset receives stereotyped inputs from single projection neurons. This organization maximizes performance of a model output neuron on a stimulus discrimination task. We also report a novel canonical circuit in each mushroom body compartment with previously unidentified connections: reciprocal Kenyon cell to modulatory neuron connections, modulatory neuron to output neuron connections, and a surprisingly high number of recurrent connections between Kenyon cells. Stereotyped connections between output neurons could enhance the selection of learned responses. The complete circuit map of the mushroom body should guide future functional studies of this learning and memory center.
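
One intuition behind random Kenyon-cell wiring, pattern separation, can be sketched in a few lines: push two similar projection-neuron patterns through sparse random connectivity and a sparsening threshold, and compare the overlap of the resulting KC codes with the input similarity. Dimensions, sparseness and noise levels are illustrative assumptions, and this is a related demonstration rather than the paper’s discrimination-task model.

```python
# Toy sketch: pattern separation by sparse random PN -> KC expansion.
import numpy as np

rng = np.random.default_rng(6)
n_pn, n_kc = 20, 2000

x1 = rng.random(n_pn)
x2 = x1 + rng.normal(0, 0.1, n_pn)   # a slightly different input pattern

J = (rng.random((n_pn, n_kc)) < 0.15).astype(float)  # random PN->KC wiring

def kc_code(x, sparsity=0.05):
    """Keep only the top 5% most strongly driven KCs (sparsening)."""
    h = x @ J
    return h >= np.quantile(h, 1 - sparsity)

# Jaccard overlap of KC codes vs Pearson correlation of PN patterns
# (different measures, both used here as rough similarity indicators)
overlap = lambda a, b: (a & b).sum() / (a | b).sum()
print("PN correlation:", np.corrcoef(x1, x2)[0, 1])
print("KC code overlap:", overlap(kc_code(x1), kc_code(x2)))
```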


2020
Author(s): Madeline S. Cappelloni, Sabyasachi Shivkumar, Ralf M. Haefner, Ross K. Maddox

The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative and should therefore be given zero weight. Listeners perform an auditory spatial discrimination task in which relative reliabilities are modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even when visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
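
The reliability-weighting hypothesis at the heart of this account has a compact closed form under forced fusion: each cue is weighted by its inverse variance. A minimal sketch, with illustrative numbers rather than the study’s parameters:

```python
# Toy sketch: inverse-variance (reliability-weighted) cue combination.
import numpy as np

def fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted combination of auditory and visual estimates."""
    w_a = sigma_a**-2 / (sigma_a**-2 + sigma_v**-2)
    mu = w_a * mu_a + (1 - w_a) * mu_v
    sigma = (sigma_a**-2 + sigma_v**-2) ** -0.5
    return mu, sigma

# Short stimulus: unreliable audition -> vision pulls the estimate more
print(fuse(mu_a=-5.0, sigma_a=6.0, mu_v=0.0, sigma_v=2.0))
# Long stimulus: reliable audition -> the visual pull shrinks
print(fuse(mu_a=-5.0, sigma_a=1.5, mu_v=0.0, sigma_v=2.0))
```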

