Olfactory Influences on Visual Categorization: Behavioral and ERP Evidence

2020
Vol 30 (7)
pp. 4220-4237
Author(s):  
Thomas Hörberg ◽  
Maria Larsson ◽  
Ingrid Ekström ◽  
Camilla Sandöy ◽  
Peter Lundén ◽  
...  

Abstract. Visual stimuli often dominate nonvisual stimuli during multisensory perception. Evidence suggests that higher cognitive processes prioritize visual over nonvisual stimuli during divided attention. Visual stimuli should thus be disproportionately distracting when processing incongruent cross-sensory stimulus pairs. We tested this assumption by comparing visual processing with olfaction, a “primitive” sensory channel that detects potentially hazardous chemicals and alerts attention. Behavioral responses and event-related brain potentials (ERPs) were assessed in a bimodal object categorization task with congruent or incongruent odor–picture pairings and a delayed auditory target that indicated whether olfactory or visual cues should be categorized. For congruent pairings, accuracy was higher for visual than for olfactory decisions. However, for incongruent pairings, reaction times (RTs) were faster for olfactory decisions. The behavioral results suggested that incongruent odors interfered more with visual decisions, thereby providing evidence for an “olfactory dominance” effect. Categorization of incongruent pairings engendered a late “slow wave” ERP effect. Importantly, this effect had a later amplitude peak and longer latency during visual decisions, likely reflecting the additional categorization effort required for visual stimuli in the presence of incongruent odors. In sum, contrary to what might be inferred from theories of “visual dominance,” incongruent odors may in fact uniquely attract mental processing resources during perceptual incongruence.
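
To make the behavioral logic concrete, the sketch below (hypothetical per-subject RTs, not the authors' data or analysis code) computes the congruency cost in each decision modality and tests the interaction that defines the reported olfactory dominance effect:

```python
# Minimal sketch: testing an "olfactory dominance" interaction on
# hypothetical per-subject mean RTs (ms); not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # hypothetical number of subjects

# Hypothetical mean RTs for the 2 x 2 design:
# decision modality (visual/olfactory) x congruency (congruent/incongruent).
rt_vis_con = rng.normal(650, 40, n)
rt_vis_inc = rng.normal(720, 40, n)   # larger congruency cost for vision
rt_olf_con = rng.normal(700, 40, n)
rt_olf_inc = rng.normal(730, 40, n)

# Congruency cost per modality; the interaction contrast asks whether
# incongruent odors slow visual decisions more than incongruent
# pictures slow olfactory decisions.
cost_vis = rt_vis_inc - rt_vis_con
cost_olf = rt_olf_inc - rt_olf_con
t, p = stats.ttest_rel(cost_vis, cost_olf)
print(f"visual cost {cost_vis.mean():.0f} ms, olfactory cost "
      f"{cost_olf.mean():.0f} ms, interaction t={t:.2f}, p={p:.3f}")
```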


2020
Author(s):  
Kyra Swanson ◽  
Samantha R. White ◽  
Michael W. Preston ◽  
Joshua Wilson ◽  
Meagan Mitchell ◽  
...  

Abstract. Operant behavior procedures often rely on visual stimuli to cue the initiation or cessation of a response and to provide a means of discriminating between two or more simultaneously available responses. While primate and human studies typically use LCD or OLED monitors and touch screens, rodent studies use a variety of methods to present visual cues, ranging from traditional incandescent light bulbs and single LEDs to, more recently, touch-screen monitors. Commercially available systems for visual stimulus presentation are costly, challenging to customize, and typically closed source. We developed an open-source, highly modifiable visual stimulus presentation platform that can be combined with a 3D-printed operant response device. The device uses an eight-by-eight matrix of LEDs and can be expanded to control much larger LED matrices. Implementing the platform is low cost (<$70 USD per device in the year 2020). Using the platform, we trained rats to make nosepoke responses and to discriminate between two distinct visual cues in a location-independent manner. This visual stimulus presentation platform is a cost-effective way to implement complex visually guided operant behavior, including the use of moving or dynamically changing visual stimuli.

Significance Statement: The design of an open-source, low-cost device for presenting visual stimuli is described. It is capable of presenting complex visual patterns and dynamically changing stimuli. A practical demonstration of the device is also reported, from an experiment in which rats performed a luminance-based visual discrimination. The device has utility for studying visual processing, psychophysics, and decision making in a variety of species.
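
As a hedged illustration of how such a platform might be driven from a host computer, the following sketch packs an 8 × 8 binary frame into row bytes and streams it over a serial link. The one-byte-per-row protocol, port name, and baud rate are assumptions for illustration, not the published firmware interface:

```python
# Minimal sketch (assumptions, not the published firmware protocol):
# composing an 8 x 8 stimulus frame and streaming it to the device.
import serial  # pyserial

def frame_to_bytes(frame):
    """Pack an 8x8 list of 0/1 pixel values into 8 row bytes."""
    return bytes(
        sum(bit << (7 - col) for col, bit in enumerate(row))
        for row in frame
    )

# A high-luminance square in the matrix center: a stand-in for the
# luminance-based discriminanda described in the abstract.
frame = [[1 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)]
         for r in range(8)]

# Hypothetical port and baud rate; one byte per LED row.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    port.write(frame_to_bytes(frame))
```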


2002
Vol 88 (1)
pp. 438-454
Author(s):  
B. D. Corneil ◽  
M. Van Wanrooij ◽  
D. P. Munoz ◽  
A. J. Van Opstal

This study addresses the integration of auditory and visual stimuli subserving the generation of saccades in a complex scene. Previous studies have shown that saccadic reaction times (SRTs) to combined auditory-visual stimuli are reduced when compared with SRTs to either stimulus alone. However, these results have been typically obtained with high-intensity stimuli distributed over a limited number of positions in the horizontal plane. It is less clear how auditory-visual interactions influence saccades under more complex but arguably more natural conditions, when low-intensity stimuli are embedded in complex backgrounds and distributed throughout two-dimensional (2-D) space. To study this problem, human subjects made saccades to visual-only (V-saccades), auditory-only (A-saccades), or spatially coincident auditory-visual (AV-saccades) targets. In each trial, the low-intensity target was embedded within a complex auditory-visual background, and subjects were allowed over 3 s to search for and foveate the target at 1 of 24 possible locations within the 2-D oculomotor range. We systematically varied the onset times of the targets and the intensity of the auditory target relative to background [i.e., the signal-to-noise (S/N) ratio] to examine their effects on both SRT and saccadic accuracy. Subjects were often able to localize the target within one or two saccades, but in about 15% of the trials they generated scanning patterns that consisted of many saccades. The present study reports only the SRT and accuracy of the first saccade in each trial. In all subjects, A-saccades had shorter SRTs than V-saccades, but were less accurate than V-saccades when generated to auditory targets presented at low S/N ratios. AV-saccades were at least as accurate as V-saccades but were generated at SRTs typical of A-saccades. The properties of AV-saccades depended systematically on both stimulus timing and the S/N ratio of the auditory target. Compared with unimodal A- and V-saccades, the improvements in SRT and accuracy of AV-saccades were greatest when the visual target was synchronous with or leading the auditory target, and when the S/N ratio of the auditory target was lowest. Further, the improvements in saccade accuracy were greater in elevation than in azimuth. A control experiment demonstrated that a portion of the improvements in SRT could be attributable to a warning-cue mechanism, but that the improvements in saccade accuracy depended on the spatial register of the stimuli. These results agree well with earlier electrophysiological results obtained from the midbrain superior colliculus (SC) of anesthetized preparations, and we argue that they demonstrate multisensory integration of auditory and visual signals in a complex, quasi-natural environment. A conceptual model incorporating the SC is presented to explain the observed data.
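
The core unimodal-versus-bimodal comparison can be illustrated with a toy computation (hypothetical SRT samples, not the study's data): the bimodal benefit is the gap between the mean AV SRT and the faster of the two unimodal means:

```python
# Minimal sketch with hypothetical SRT samples (ms); not the authors' code.
import numpy as np

rng = np.random.default_rng(2)
srt_v = rng.normal(220, 30, 200)    # visual-only saccades
srt_a = rng.normal(180, 30, 200)    # auditory-only saccades (faster, per the text)
srt_av = rng.normal(175, 25, 200)   # bimodal saccades

# Bimodal benefit relative to the faster unimodal condition.
benefit = min(srt_a.mean(), srt_v.mean()) - srt_av.mean()
print(f"AV benefit over best unimodal: {benefit:.1f} ms")
```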


2019
Vol 33 (2)
pp. 109-118
Author(s):  
Andrés Antonio González-Garrido ◽  
Jacobo José Brofman-Epelbaum ◽  
Fabiola Reveca Gómez-Velázquez ◽  
Sebastián Agustín Balart-Sánchez ◽  
Julieta Ramos-Loyo

Abstract. It has been generally accepted that skipping breakfast adversely affects cognition, mainly by disturbing attentional processes. However, the effects of short-term fasting on brain functioning are still unclear. We aimed to evaluate the effect of skipping breakfast on cognitive processing by studying the electrical brain activity of young healthy individuals while they performed several working memory tasks. Accordingly, the behavioral results and event-related brain potentials (ERPs) of 20 healthy university students (10 males) were obtained and compared through analyses of variance (ANOVAs) during the performance of three n-back working memory (WM) tasks in two morning sessions, in both normal (after breakfast) and 12-hour fasting conditions. Significantly fewer correct responses were achieved during fasting, mainly affecting the higher WM-load task. In addition, reaction times were prolonged with increased task difficulty, regardless of breakfast intake. ERPs showed a significant voltage decrement for N200 and P300 during fasting, while the amplitude of P200 notably increased. The results suggest that skipping breakfast disturbs early cognitive processing steps, particularly attention allocation, early decoding in working memory, and stimulus evaluation, and that this effect increases with task difficulty.
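
To make the task structure concrete, here is a minimal sketch (not the study's stimulus software; sequence length, target rate, and letter set are illustrative assumptions) that generates and scores an n-back sequence:

```python
# Minimal sketch of n-back sequence generation and scoring; parameters
# are illustrative assumptions, not the study's settings.
import random

def make_nback_sequence(n_back, length=30, target_rate=0.3,
                        letters="BCDFGHKLMNPRST"):
    seq = [random.choice(letters) for _ in range(n_back)]
    for _ in range(length - n_back):
        if random.random() < target_rate:
            seq.append(seq[-n_back])          # target: repeats item n back
        else:
            seq.append(random.choice([c for c in letters
                                      if c != seq[-n_back]]))
    return seq

def hit_rate(seq, responses, n_back):
    """responses[i] is True if the participant pressed 'match' on item i."""
    hits = sum(r and seq[i] == seq[i - n_back]
               for i, r in enumerate(responses) if i >= n_back)
    targets = sum(seq[i] == seq[i - n_back] for i in range(n_back, len(seq)))
    return hits / targets if targets else float("nan")
```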


2002
Vol 16 (3)
pp. 129-149
Author(s):  
Boris Kotchoubey

Abstract. Most cognitive psychophysiological studies assume (1) that there is a chain of (partially overlapping) cognitive processes (processing stages, mechanisms, operators) leading from stimulus to response, and (2) that components of event-related brain potentials (ERPs) may be regarded as manifestations of these processing stages. What is usually discussed is which particular processing mechanisms are related to a particular component, not whether such a relationship exists at all. Alternatively, from the point of view of noncognitive (e.g., “naturalistic”) theories of perception, ERP components might be conceived of as correlates of the extraction of information from the experimental environment. In a series of experiments, the author attempted to separate these two accounts, i.e., internal variables such as mental operations or cognitive parameters versus external variables such as the information content of stimulation. Whenever this separation could be performed, the latter factor proved to significantly affect ERP amplitudes, whereas the former did not. These data indicate that ERPs cannot be unequivocally linked to the processing mechanisms postulated by cognitive models of perception. Therefore, they cannot be regarded as support for these models.


2016
Vol 30 (3)
pp. 102-113
Author(s):  
Chun-Hao Wang ◽  
Chun-Ming Shih ◽  
Chia-Liang Tsai

Abstract. This study aimed to assess whether brain potentials significantly influence the relationship between aerobic fitness and cognition. Behavioral and electroencephalographic (EEG) data were collected from 48 young adults performing a Posner task. Higher aerobic fitness was related to faster reaction times (RTs), along with greater P3 amplitude and shorter P3 latency in the valid trials, after controlling for age and body mass index. Moreover, RTs were selectively related to P3 amplitude rather than P3 latency. Specifically, a bootstrap-based mediation model indicated that P3 amplitude mediates the relationship between fitness level and attention performance. Possible explanations regarding the relationships among aerobic fitness, cognitive performance, and brain potentials are discussed.
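
The mediation logic can be sketched as follows (hypothetical data, not the authors' code): the indirect effect is the product of the fitness-to-P3 path and the P3-to-RT path controlling for fitness, with a percentile-bootstrap confidence interval:

```python
# Minimal percentile-bootstrap test of the indirect effect
# fitness -> P3 amplitude -> RT. All data here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 48
fitness = rng.normal(40, 8, n)                  # e.g., a fitness estimate
p3_amp = 0.3 * fitness + rng.normal(0, 3, n)    # mediator (microvolts)
rt = -4.0 * p3_amp + rng.normal(500, 30, n)     # outcome (ms)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                  # x -> m path
    # m -> y path controlling for x (partial regression slope)
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]
    return a * b

boot = np.array([
    indirect_effect(fitness[idx], p3_amp[idx], rt[idx])
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.2f}, {hi:.2f}]")  # excludes 0 -> mediation
```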


Author(s):  
Bruno and ...

Synaesthesia is a curious anomaly of multisensory perception. When presented with stimulation in one sensory channel, in addition to the percept usually associated with that channel (the inducer), a true synaesthete experiences a second percept in another perceptual modality (the concurrent). Although synaesthesia is not pathological, true synaesthetes are relatively rare and their synaesthetic associations tend to be quite idiosyncratic. For this reason, studying synaesthesia is difficult, but exciting new experimental results are beginning to clarify what makes the brains of synaesthetes special and the mechanisms that may produce the condition. Even more importantly, the related phenomenon of “natural” crossmodal associations is experienced by everyone, providing another useful domain for studying multisensory interactions, with important implications for understanding our preferences for products in terms of spontaneously evoked associations, as well as for choosing appropriate names, labels, and packaging in marketing applications.


2000
Vol 84 (6)
pp. 2984-2997
Author(s):  
Per Jenmalm ◽  
Seth Dahlstedt ◽  
Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects, during a prototypical manipulatory task, use visual and tactile sensory information to adapt fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for the adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature, obtained after contact with the object, for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.
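
As a simplified sketch of the load described above (not the authors' formal model), the tangential torque and the rotational-slip constraint can be written as:

```latex
% Simplified sketch, not the authors' formal model.
% Torque from a center of mass a horizontal distance d from the grip axis:
\[
  T = m\,g\,d
\]
% Rotational slip is avoided while T stays below the rotational friction
% limit of the grasp, which increases with grip force G and, per the
% findings above, depends on surface curvature c:
\[
  T \;\le\; \tau_{\max}(G, c), \qquad
  \frac{\partial \tau_{\max}}{\partial G} > 0
\]
% Hence more strongly curved surfaces require larger grip forces to
% tolerate the same torque load.
```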


2020
Vol 45 (9)
pp. 845-854
Author(s):  
Nicholas Fallon ◽  
Timo Giesbrecht ◽  
Anna Thomas ◽  
Andrej Stancak

Abstract. Congruent visual cues augment sensitivity to brief olfactory presentations, and habituation of odor perception is modulated by central cognitive processing, including context. However, it is not known whether habituation to odors interacts with cross-modal congruent stimuli. The present research investigated the effect of visual congruence on odor detection sensitivity during continuous odor exposures. We utilized a multimethod approach, including subjective behavioral responses and reaction times (RTs; study 1) and electroencephalography (EEG; study 2). Study 1: 25 participants received 2-min presentations of a moderate-intensity floral odor delivered via olfactometer, paired with congruent (flower) and incongruent (object) image presentations. Participants indicated odor perception after each image. Detection sensitivity and RTs were analyzed in epochs covering the period of habituation. Study 2: 25 new participants underwent EEG recordings during 145-s blocks of odor presentations with congruent or incongruent images. Participants passively observed the images and intermittently rated the perceived intensity of the odor. Event-related potential analysis was utilized to evaluate brain processing related to odor–visual pairs across the period of habituation. Odor detection sensitivity and RTs were improved by congruent visual cues. The results highlighted a diminishing influence of visual congruence on odor detection sensitivity as habituation occurred. Event-related potential analysis revealed an effect of congruency on electrophysiological processing in the N400 component. This was only evident in early periods of odor exposure, when perception was strong. For the first time, this demonstrates the modulation of central processing of odor–visual pairs by habituation. Frontal negativity (N400) responses encode aspects of cross-modal congruence in odor–vision cross-modal tasks.
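
A minimal sketch of the epoch-averaging logic behind such an ERP congruency analysis follows; the array shapes, sampling rate, and N400 window are illustrative assumptions, not the study's pipeline:

```python
# Minimal sketch of ERP averaging and an N400-window congruency contrast.
# Stand-in random data; epochs assumed to start at stimulus onset.
import numpy as np

fs = 250                                   # Hz, assumed sampling rate
epochs = np.random.randn(200, 64, 250)     # trials x channels x samples
congruent = np.zeros(200, dtype=bool)
congruent[:100] = True                     # first half congruent (stand-in labels)

erp_con = epochs[congruent].mean(axis=0)   # average over congruent trials
erp_inc = epochs[~congruent].mean(axis=0)
diff_wave = erp_inc - erp_con              # congruency difference wave

# Mean amplitude in a 350-500 ms post-onset window, a common way to
# quantify N400-like effects.
win = slice(int(0.350 * fs), int(0.500 * fs))
n400_effect = diff_wave[:, win].mean(axis=1)   # one value per channel
print(n400_effect.shape)
```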


2021
Vol 11 (1)
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

Abstract. The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses during paradigms that require associating arbitrary visual cues with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to, and differentiate between, stimuli belonging to different categories, even in the absence of a specific requirement to actively categorize these stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons in a large sector of VLPF to a wide set of visual stimuli that monkeys simply observe. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when those objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that (1) some visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of “passive” categorical coding; and (2) VLPF neuronal visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response occurring when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli that can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.
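
One simple way to quantify the task-condition modulation described in result (2) is a contrast-style index on per-trial firing rates; the sketch below uses hypothetical numbers and an illustrative index, not the authors' analysis:

```python
# Minimal sketch: condition modulation of a neuron's object response,
# e.g. passive observation vs. the object being the grasp target.
# Firing rates and the index are illustrative assumptions.
import numpy as np

rate_observe = np.array([12.0, 9.5, 14.2, 11.1])       # spikes/s per trial
rate_grasp_target = np.array([21.3, 18.7, 24.0, 19.9])  # spikes/s per trial

def modulation_index(a, b):
    """Contrast-style index in [-1, 1]; 0 means no condition difference."""
    ma, mb = a.mean(), b.mean()
    return (mb - ma) / (mb + ma)

print(f"task modulation: {modulation_index(rate_observe, rate_grasp_target):+.2f}")
```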

