Olfactory and neuromodulatory signals reverse visual object avoidance to approach in Drosophila

2018 ◽  
Author(s):  
Karen Y. Cheng ◽  
Mark A. Frye

Innate behavioral reactions to sensory stimuli may be subject to modulation by contextual conditions, including signals from other modalities. Whereas sensory processing by individual modalities has been well studied, the cell circuit mechanisms by which signals from different sensory systems are integrated to control behavior remain poorly understood. Here, we provide a new behavioral model to study the mechanisms of multisensory integration. This behavior, which we termed odor-induced visual valence reversal, occurs when the innate avoidance response to a small moving object by flying Drosophila melanogaster is reversed by the presence of an appetitive odor. Instead of steering away from the small object representing an approaching threat, flies begin to steer toward the object in the presence of odor. Odor-induced visual valence reversal occurs rapidly, without associative learning, for attractive odors including apple cider vinegar and ethanol, but not for the innately aversive benzaldehyde. Optogenetic activation of octopaminergic neurons robustly induces visual valence reversal in the absence of odor, as does optogenetic activation of directional columnar motion-detecting neurons that express octopamine receptors. Optogenetic activation of octopamine neurons drives calcium responses in the motion detectors. Taken together, our results implicate a multisensory processing cascade in which appetitive odor activates octopaminergic neuromodulation of visual pathways, leading to increased visual saliency and the switch from avoidance to approach toward a small visual object.

2021 ◽  
Vol 17 (3) ◽  
Author(s):  
Karen Y. Cheng ◽  
Mark A. Frye

Multisensory integration is synergistic: input from one sensory modality can modulate the behavioural response to another. Work in flies has shown that a small visual object presented in the periphery elicits innate aversive steering responses in flight, likely because it represents an approaching threat. Object aversion is switched to approach when paired with a plume of food odour. The 'open-loop' design of prior work facilitated the observation of changing valence. How does odour influence visual object responses when an animal has naturally active control over its visual experience? In this study, we use closed-loop feedback conditions, in which a fly's steering effort is coupled to the angular velocity of the visual stimulus, to confirm that flies steer toward or 'fixate' a long vertical stripe on the visual midline. They tend either to steer away from or 'antifixate' a small object, or to disengage active visual control, which manifests as uncontrolled object 'spinning' within this experimental paradigm. Adding a plume of apple cider vinegar decreases the probability of both antifixation and spinning, while increasing the probability of frontal fixation for objects of any size, including a normally aversive small object.
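
To make the paradigm concrete, here is a minimal Python sketch of the closed-loop coupling: the object's angular velocity is set by the fly's steering effort, so steering toward the object stabilizes it on the visual midline. The gain, update rate, and toy controller are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Minimal sketch of closed-loop coupling: the on-screen object's angular
# velocity is driven by the fly's steering effort (e.g., the left-minus-right
# wingbeat amplitude difference). K and the 1 kHz update rate are assumed.
K = -60.0   # deg/s per unit of steering effort; the sign closes the loop
DT = 0.001  # update interval in seconds (1 kHz, assumed)

def update_object_azimuth(theta, steering_effort):
    """Advance the object's azimuth by one step, wrapped to [-180, 180) deg."""
    omega = K * steering_effort  # stimulus angular velocity set by the fly
    return (theta + omega * DT + 180.0) % 360.0 - 180.0

# Toy demonstration: a controller that steers toward the object drags it
# onto the visual midline, i.e., fixation emerges from the feedback loop.
theta = 30.0  # object starts 30 deg to the fly's right
for _ in range(1000):
    effort = np.sign(theta)  # steer toward the object's current side
    theta = update_object_azimuth(theta, effort)
print(round(theta, 1))  # ends near 0 deg (frontal fixation)
```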


2019 ◽  
Vol 31 (8) ◽  
pp. 1155-1172 ◽  
Author(s):  
Jean-Paul Noel ◽  
Andrea Serino ◽  
Mark T. Wallace

The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body, in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit (an approximation of PPS) while recording continuous high-density EEG. We ask whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase in global field power (i.e., the overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary, an increase that is largest under multisensory (i.e., audiovisual) conditions. Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when the neural dynamics are examined in more detail, changes in the reliability of evoked potentials at centroparietal electrodes predict, on a subject-by-subject basis, later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
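
For readers unfamiliar with the measure, global field power has a simple definition: the spatial standard deviation of the average-referenced potential across the electrode montage at each time point (Lehmann and Skrandies' classic formulation). A minimal sketch, using simulated data with assumed array shapes:

```python
import numpy as np

def global_field_power(erp):
    """Global field power: the spatial standard deviation across the
    electrode montage at each time point, computed on average-referenced
    data. erp has shape (n_channels, n_times); returns shape (n_times,)."""
    return erp.std(axis=0)

# Toy comparison in the spirit of the analysis above (simulated data only):
# GFP for stimuli at the PPS boundary vs. far from the body.
rng = np.random.default_rng(0)
n_channels, n_times = 128, 500
erp_far = rng.normal(0.0, 1.0, (n_channels, n_times))
erp_boundary = rng.normal(0.0, 1.3, (n_channels, n_times))  # stronger response
print(global_field_power(erp_far).mean(), global_field_power(erp_boundary).mean())
```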


2017 ◽  
Vol 30 (6) ◽  
pp. 565-578 ◽  
Author(s):  
Julian Keil ◽  
Daniel Senkowski

Ongoing neural oscillations reflect fluctuations of cortical excitability. A growing body of research has underlined the role of neural oscillations in stimulus processing. Neural oscillations in the alpha band have gained special interest in electrophysiological research on perception. Recent studies have proposed that neural oscillations provide temporal windows in which sensory stimuli can be perceptually integrated; this includes multisensory integration. In the current high-density EEG study we examined the relationship between the individual alpha frequency (IAF) and cross-modal audiovisual integration in the sound-induced flash illusion (SIFI). In 26 human volunteers we found a negative correlation between the IAF and the SIFI illusion rate: individuals with a lower IAF showed higher audiovisual illusion rates. Source analysis suggested an involvement of visual cortex, especially the calcarine sulcus, in this relationship. Our findings corroborate the notion that the IAF affects the cross-modal influence of auditory on visual stimuli in the SIFI. We integrate our findings with recent observations on the relationship between audiovisual integration and neural oscillations and suggest a multifaceted influence of neural oscillations on multisensory processing.
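
As a rough illustration of how an IAF is typically extracted, the sketch below estimates the peak alpha frequency from a resting-state power spectrum. The Welch parameters and 8-13 Hz band limits are common conventions assumed here, not the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def individual_alpha_frequency(eeg, fs, band=(8.0, 13.0)):
    """Estimate the IAF as the frequency of peak spectral power within the
    alpha band. `eeg` is a 1-D resting-state signal, `fs` the sampling
    rate in Hz; the 8-13 Hz band limits are an assumed convention."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # ~0.25 Hz bins
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(psd[in_band])]

# Quick check on a synthetic 10 Hz oscillation embedded in noise:
fs = 500.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + rng.normal(0, 1.0, t.size)
print(individual_alpha_frequency(eeg, fs))  # ~10.0
# The reported group result then reduces to a negative Pearson correlation
# between per-subject IAF and per-subject SIFI illusion rate.
```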


2014 ◽  
Vol 26 (2) ◽  
pp. 408-421 ◽  
Author(s):  
Clara A. Scholl ◽  
Xiong Jiang ◽  
Jacob G. Martin ◽  
Maximilian Riesenhuber

A hallmark of human cognition is the ability to rapidly assign meaning to sensory stimuli. It has been suggested that this fast visual object categorization ability is accomplished by a feedforward processing hierarchy consisting of shape-selective neurons in occipito-temporal cortex that feed into task circuits in frontal cortex computing conceptual category membership. We performed an EEG rapid adaptation study to test this hypothesis. Participants were trained to categorize novel stimuli generated with a morphing system that precisely controlled both stimulus shape and category membership. We subsequently performed EEG recordings while participants performed a category matching task on pairs of successively presented stimuli. We used space–time cluster analysis to identify channels and latencies exhibiting selective neural responses. Neural signals before 200 msec on posterior channels demonstrated a release from adaptation for shape changes, irrespective of category membership, compatible with a shape- but not explicitly category-selective neural representation. A subsequent cluster with anterior topography appeared after 200 msec and exhibited release from adaptation consistent with explicit categorization. These signals were subsequently modulated by perceptual uncertainty starting around 300 msec. The degree of category selectivity of the anterior signals was strongly predictive of behavioral performance. We also observed a posterior category-selective signal after 300 msec exhibiting significant functional connectivity with the initial anterior category-selective signal. In summary, our study supports the proposition that perceptual categorization is accomplished by the brain within a quarter second through a largely feedforward process culminating in frontal areas, followed by later category-selective signals in posterior regions.
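
The logic of the rapid adaptation design can be summarized in a few lines: a channel cluster is feature-selective if its response to the second stimulus of a pair recovers when that feature changes across the pair. A toy index, with an assumed analysis window rather than the study's data-driven space-time clusters:

```python
import numpy as np

def release_from_adaptation(erp_repeat, erp_change, window=(150, 200)):
    """Toy release-from-adaptation index for one channel cluster.

    erp_repeat, erp_change: arrays of shape (n_trials, n_times) holding the
    response to the second stimulus of a pair when the probed feature (shape
    or category) repeats vs. changes. A positive index means the response
    recovers when the feature changes, i.e., the cluster carries information
    about that feature. The sample window is illustrative, not a cluster
    boundary from the study.
    """
    sl = slice(*window)
    amp_repeat = np.abs(erp_repeat[:, sl]).mean()
    amp_change = np.abs(erp_change[:, sl]).mean()
    return amp_change - amp_repeat
```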


1990 ◽  
Vol 329 (1254) ◽  
pp. 257-263 ◽  

Self-motion detectors of the dragonfly ventral cord have large fields that are sensitive to whole-field one-way motion, with little habituation to the motion. In contrast, object-motion detectors also have large fields, but their responses to the motion of small objects within the field habituate more easily, and some are non-directional. Both types are large neurons that can be recorded electrophysiologically for long periods, and their anatomy and responses to black-and-white moving patterns have been described previously (Olberg 1986). When tested with a flash of controlled intensity and wavelength, the self-motion neurons have a spectral sensitivity similar to that of photoreceptors, with a peak in the green near 500 nm, but when tested with a moving edge they have a single peak near 560 nm. They behave towards a pattern of two colours as if they are colour blind. Similar results are known for the bee and some butterflies. Forward-looking object-motion neurons have a spectral sensitivity curve that is rather flat from 380 nm to 580 nm, some with a peak in the UV. When they are tested with a moving coloured pattern on a differently coloured background, there is no value of foreground brightness that gives a null response, however the brightness is adjusted. The optimum response to a coloured object appears when the background is green and the object is another colour; blue is usually the most effective. Discrimination of a small object is less effective when the object is green. These results suggest that colour vision is associated with object vision, and that object-motion detectors are not colour blind, but do not necessarily discriminate colours.


Author(s):  
Tim C. Kietzmann ◽  
Patrick McClure ◽  
Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioral responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
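
As a minimal illustration of this framework, the sketch below fits a small feedforward network to predict (simulated) neural responses from stimulus features, with the four manipulable elements marked in comments. PyTorch is one common choice; all shapes and hyperparameters here are assumptions for illustration, not a prescribed recipe.

```python
import torch
import torch.nn as nn

# Sketch of the modeling approach described above: a small feedforward
# network trained to map stimulus features to recorded neural responses.
n_stimuli, n_features, n_neurons = 200, 64, 30
stimuli = torch.randn(n_stimuli, n_features)   # stand-in stimulus features
responses = torch.randn(n_stimuli, n_neurons)  # stand-in neural responses
# (input statistics enter via the training data above)

model = nn.Sequential(                         # network structure
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, n_neurons),
)
loss_fn = nn.MSELoss()                         # functional objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning algorithm

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(stimuli), responses)  # predict responses to stimuli
    loss.backward()
    optimizer.step()
```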


2017 ◽  
Vol 30 (6) ◽  
pp. 509-536 ◽  
Author(s):  
Daniel Poole ◽  
Ellen Poliakoff ◽  
Emma Gowen ◽  
Samuel Couth ◽  
Rebecca A. Champion ◽  
...  

A number of studies have shown that multisensory performance is well predicted by a statistically optimal maximum likelihood estimation (MLE) model. Under this model, unisensory estimates are combined additively and weighted according to their relative reliability. Recent theories have proposed that the atypical sensation and perception commonly reported in autism spectrum condition (ASC) may result from differences in the use of reliability information. Furthermore, experimental studies have indicated that multisensory processing is less effective in those with the condition in comparison to neurotypical (NT) controls. In the present study, adults with ASC () and a matched NT group () completed a visual–haptic size judgement task (cf. Gori et al., 2008) in which participants compared the height of wooden blocks using either vision or haptics, and in a dual-modality condition in which visual–haptic stimuli were presented in size conflict. Participants with ASC tended to produce more reliable estimates than the NT group. However, dual-modality performance was not well predicted by the MLE model for either group. Performance was subsequently compared to alternative models in which the participant either switched between modalities from trial to trial (rather than integrating) or integrated non-optimally. Performance of both groups was statistically comparable to the cue-switching model. These findings suggest that adults with ASC adopted a similar strategy to NTs when processing conflicting visual–haptic information. Findings are discussed in relation to multisensory perception in ASC and methodological considerations associated with multisensory conflict paradigms.
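
For reference, the MLE prediction tested here can be written in a few lines: each unisensory estimate is weighted by its reliability (inverse variance), and the predicted dual-modality variance falls below that of the better single cue. The numbers in the example are illustrative, not data from the study.

```python
import numpy as np

def mle_combination(est_v, sigma_v, est_h, sigma_h):
    """Statistically optimal (MLE) visual-haptic combination: unisensory
    estimates are averaged with weights proportional to their reliability
    (inverse variance). Returns the combined estimate and its predicted
    standard deviation, which is never worse than the better single cue."""
    r_v, r_h = 1.0 / sigma_v**2, 1.0 / sigma_h**2  # reliabilities
    w_v = r_v / (r_v + r_h)
    combined = w_v * est_v + (1.0 - w_v) * est_h
    sigma_combined = np.sqrt(1.0 / (r_v + r_h))
    return combined, sigma_combined

# Illustrative numbers (not from the study): a 55 mm visual estimate with
# sigma 4 mm and a 50 mm haptic estimate with sigma 8 mm.
print(mle_combination(55.0, 4.0, 50.0, 8.0))  # -> (54.0, ~3.58)
```

The cue-switching alternative that matched both groups instead predicts dual-modality variability that mixes the two unisensory variances across trials, rather than dropping below the better cue.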


2021 ◽  
Author(s):  
Tiziana Vercillo ◽  
Edward G. Freedman ◽  
Joshua B. Ewen ◽  
Sophie Molholm ◽  
John J. Foxe

Multisensory objects that are frequently encountered in the natural environment lead to strong associations across a distributed sensory cortical network, with the end result being the experience of a unitary percept. Remarkably little is known, however, about the cortical processes subserving multisensory object formation and recognition. To advance our understanding in this important domain, the present study investigated the brain processes involved in the learning and identification of novel visual-auditory objects. Specifically, we introduce and test a rudimentary three-stage model of multisensory object formation and processing. Thirty adults were remotely trained for a week to recognize a novel class of multisensory objects (3D shapes paired with complex sounds), and high-density event-related potentials (ERPs) were recorded to the corresponding unisensory (shapes or sounds only) and multisensory (shapes and sounds) stimuli, before and after intensive training. We identified three major stages of multisensory processing: 1) an early, automatic, multisensory effect (<100 ms) in occipital areas, related to the detection of simultaneous audiovisual signals and not related to multisensory learning; 2) an intermediate object-processing stage (100-200 ms) in occipital and parietal areas, sensitive to the learned multisensory associations; and 3) a late multisensory processing stage (>250 ms) that appears to be involved in both object recognition and, possibly, memory consolidation. Results from this study provide support for multiple stages of multisensory object learning and recognition that are subserved by an extended network of cortical areas.


Author(s):  
Till R. Schneider ◽  
Andreas K. Engel ◽  
Stefan Debener

Abstract. The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N = 56 participants (Study 1). This multimodal stimulus set was then employed in two crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent than for incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result indicates that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.
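
The congruency effect reported here reduces to a paired contrast of reaction times. A minimal sketch with simulated values; the participant count and effect size are assumptions for illustration, not figures from the studies.

```python
import numpy as np
from scipy.stats import ttest_rel

# Sketch of the congruency contrast: per-participant mean reaction times to
# the S2 target for semantically congruent vs. incongruent S1-S2 pairs,
# compared with a paired t-test. All numbers below are simulated.
rng = np.random.default_rng(1)
n = 30  # assumed participant count
rt_congruent = rng.normal(520.0, 40.0, n)                  # ms
rt_incongruent = rt_congruent + rng.normal(25.0, 15.0, n)  # assumed slowing
t_stat, p_value = ttest_rel(rt_congruent, rt_incongruent)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```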

