Listen to your heart: Examining modality dominance using cross-modal oddball tasks

2020
Author(s): Christopher W. Robinson

The current study used cross-modal oddball tasks to examine cardiac and behavioral responses to changing auditory and visual information. When participants were instructed to press the same button for auditory and visual oddballs, auditory dominance was found, with cross-modal presentation slowing visual response times more than auditory response times (Experiment 1). When they were instructed to make separate responses to auditory and visual oddballs, visual dominance was found, with cross-modal presentation decreasing auditory discrimination; participants also made more visual-based than auditory-based errors on cross-modal trials (Experiment 2). Experiment 3 increased task demands while still requiring a single button press and found evidence of auditory dominance, making it unlikely that increased task demands can account for the reversal in Experiment 2. Examination of cardiac responses time-locked to stimulus onset showed cross-modal facilitation effects, with auditory and visual discrimination occurring earlier in the course of processing in the cross-modal condition than in the unimodal conditions. The findings that response demand manipulations reversed modality dominance and that time-locked cardiac responses showed cross-modal facilitation, not interference, suggest that auditory and visual dominance effects may both arise later in the course of processing rather than from disrupted encoding.
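
A minimal sketch of the slowdown-score comparison this kind of result rests on, with all response times invented for illustration (Python, assuming NumPy):

```python
import numpy as np

# Hypothetical response times (ms); real values would come from the oddball task.
rt_visual_unimodal     = np.array([412, 398, 431, 405, 420])
rt_visual_crossmodal   = np.array([455, 470, 449, 462, 458])
rt_auditory_unimodal   = np.array([390, 402, 385, 398, 401])
rt_auditory_crossmodal = np.array([401, 410, 395, 404, 408])

# Slowdown = cost of adding the other modality, relative to the unimodal baseline.
visual_slowdown = rt_visual_crossmodal.mean() - rt_visual_unimodal.mean()
auditory_slowdown = rt_auditory_crossmodal.mean() - rt_auditory_unimodal.mean()

# Auditory dominance: the auditory stream slows visual responses more than vice versa.
if visual_slowdown > auditory_slowdown:
    print(f"auditory dominance (visual cost {visual_slowdown:.0f} ms "
          f"> auditory cost {auditory_slowdown:.0f} ms)")
else:
    print(f"visual dominance (auditory cost {auditory_slowdown:.0f} ms "
          f">= visual cost {visual_slowdown:.0f} ms)")
```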

Vision, 2020, Vol. 4(1), p. 14
Author(s): Margeaux Ciraolo, Samantha O’Hanlon, Christopher Robinson, Scott Sinnett

Investigations of multisensory integration have demonstrated that, under certain conditions, one modality is more likely to dominate the other. While the direction of this relationship typically favors the visual modality, the effect can be reversed to show auditory dominance under some conditions. The experiments presented here use an oddball detection paradigm with variable stimulus timings to test the hypothesis that a stimulus that is presented earlier will be processed first and will therefore contribute to sensory dominance. Additionally, we compared two measures of sensory dominance (slowdown scores and error rates) to determine whether the type of measure used can affect which modality appears to dominate. When stimuli were presented asynchronously, analyses of slowdown scores and error rates yielded the same result: for both the 1- and 3-button versions of the task, participants were more likely to show auditory dominance when the auditory stimulus preceded the visual stimulus, whereas evidence for visual dominance was observed when the auditory stimulus was delayed. In contrast, for the simultaneous condition, slowdown scores indicated auditory dominance, whereas error rates indicated visual dominance. Overall, these results provide empirical support for the hypothesis that the modality that engages processing first is more likely to dominate, and they suggest that more explicit measures of sensory dominance may favor the visual modality.
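
The dissociation between the two measures in the simultaneous condition reduces to two simple comparisons; the sketch below uses invented summary values purely to illustrate the decision rules:

```python
# Hypothetical summaries for simultaneous presentation (all values invented).
slowdown = {"visual": 46.0, "auditory": 12.0}            # cross-modal RT cost (ms)
errors = {"visual_based": 0.14, "auditory_based": 0.09}  # cross-modal error rates

# Slowdown scores: the dominant modality imposes the larger cost on the other.
rt_winner = "auditory" if slowdown["visual"] > slowdown["auditory"] else "visual"
# Error rates: more errors driven by one modality implies that modality dominates.
err_winner = ("visual" if errors["visual_based"] > errors["auditory_based"]
              else "auditory")

print(f"slowdown scores -> {rt_winner} dominance; "
      f"error rates -> {err_winner} dominance")
```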


2021
Author(s): Rajwant Sandhu

Multimodal integration often results in one modality dominating sensory perception. Such dominance is influenced by task demands, processing efficiency, and training. I assessed modality dominance between auditory and visual processing in a paradigm that controlled for the first two factors while manipulating the third. In a unimodal task, auditory and visual processing were first equated for each individual participant. Before and after training, participants completed a bimodal selective attention task in which the relationship between relevant and irrelevant information, and the task-relevant modality, changed across trials. Training in one modality was provided between the pre- and post-training tasks and resulted in nonspecific speeding at post-test. Before training, visual information impacted auditory responding more than vice versa; this pattern reversed following training, implying visual dominance before training and auditory dominance after it. The results suggest that modality dominance is flexible and influenced by experimental design and participant abilities. Research should continue to uncover the factors that lead to sensory dominance by one modality.
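
As a rough sketch of the pre/post comparison, the interference from the irrelevant modality can be computed as the incongruent-minus-congruent cost in each attended modality; all mean RTs below are invented for illustration:

```python
def interference(congruent_rt, incongruent_rt):
    """Cost (ms) imposed by the irrelevant modality."""
    return incongruent_rt - congruent_rt

# Invented mean RTs (ms); keys name the attended (task-relevant) modality.
pre  = {"attend_auditory": interference(520, 575),  # cost from visual distractors
        "attend_visual":   interference(505, 520)}  # cost from auditory distractors
post = {"attend_auditory": interference(470, 480),
        "attend_visual":   interference(455, 500)}

for phase, costs in (("pre-training", pre), ("post-training", post)):
    dominant = ("visual" if costs["attend_auditory"] > costs["attend_visual"]
                else "auditory")
    print(f"{phase}: {dominant} dominance, costs = {costs}")
```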


2020
Author(s): Christopher W. Robinson

The current study examined how simple tones affect speeded visual responses in a visual-spatial sequence learning task. Across the three reported experiments, participants were presented with a visual target that appeared in different locations on a touchscreen monitor and were instructed to touch the visual targets as quickly as possible. Response times typically sped up across training, and participants were slower to respond to the visual stimuli when the sequences were paired with tones. Moreover, these interference effects were more pronounced early in training, and explicit instructions directing attention to the visual modality did little to eliminate auditory interference, suggesting that these interference effects stem from bottom-up factors and do not appear to be under attentional control. These findings have implications for tasks that require processing simultaneously presented auditory and visual information and provide support for a proposed mechanism underlying auditory dominance on a task that is typically better suited to the visual modality.
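
A small sketch of the "interference shrinks with training" pattern, using invented per-block means:

```python
import numpy as np

# Invented mean RTs (ms) per training block, with and without tones.
blocks = np.arange(1, 6)
rt_silent = np.array([620, 560, 520, 495, 480])
rt_tones  = np.array([700, 615, 550, 510, 488])

interference = rt_tones - rt_silent   # auditory interference per block
for b, cost in zip(blocks, interference):
    print(f"block {b}: interference = {cost} ms")
# Costs fall from 80 ms to 8 ms, i.e., interference is largest early in training.
```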


2001, Vol. 15(4), pp. 256-274
Author(s): Caterina Pesce, Rainer Bösel

In the present study we explored the focusing of visuospatial attention in subjects who did or did not practice activities with high attentional demands. Similar to the studies of Castiello and Umiltà (e.g., 1990), our experimental procedure was a variation of Posner's (1980) basic paradigm for exploring covert orienting of visuospatial attention. In a simple RT task, a peripheral cue of varying size was presented unilaterally or bilaterally relative to a central fixation point and was followed by a target at different stimulus onset asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside the cue, with varying spatial relation to its boundary. Event-related brain potentials (ERPs) and reaction times (RTs) were recorded to target stimuli under the different task conditions. RT and ERP findings showed converging aspects as well as dissociations. Electrophysiological results revealed an amplitude modulation of the ERPs in the early and late Nd time intervals at both anterior and posterior scalp sites, which appears to be related to the effects of peripheral informative cues as well as to attentional expertise. The results were as follows: (1) Shorter-latency effects confirm the positive-going amplitude enhancement elicited by unilateral peripheral cues and strengthen the criticism of the neutrality, often presumed in behavioral studies, of spatially nonpredictive peripheral cueing of all possible target locations. (2) Longer-latency effects show that subjects with attentional expertise distribute attentional resources in visual space differently than nonexperienced subjects. Skilled practice may minimize attentional costs by automatizing a span of attention adapted to the most frequent task demands and by endogenously increasing the allocation of resources to cope with less usual attending conditions.
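
The core behavioral measure in this paradigm, the cue validity effect at each SOA, reduces to a simple subtraction; the sketch below uses invented mean RTs:

```python
# Invented mean RTs (ms) by cue validity and stimulus onset asynchrony (SOA).
rt_valid   = {100: 310, 300: 298, 500: 305}
rt_invalid = {100: 345, 300: 330, 500: 322}

for soa in sorted(rt_valid):
    effect = rt_invalid[soa] - rt_valid[soa]   # attentional benefit of a valid cue
    print(f"SOA {soa} ms: validity effect = {effect} ms")
```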


Author(s): Peter Khooshabeh, Mary Hegarty, Thomas F. Shipley

Two experiments tested the hypothesis that imagery ability and figural complexity interact to affect the choice of mental rotation strategies. Participants performed the Shepard and Metzler (1971) mental rotation task. On half of the trials, the 3-D figures were manipulated to create “fragmented” figures, with some cubes missing. Good imagers were less accurate and had longer response times on fragmented figures than on complete figures. Poor imagers performed similarly on fragmented and complete figures. These results suggest that good imagers use holistic mental rotation strategies by default, but switch to alternative strategies depending on task demands, whereas poor imagers are less flexible and use piecemeal strategies regardless of the task demands.
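
The claimed interaction can be expressed as a difference of differences; the accuracies below are invented solely to illustrate the contrast:

```python
# Invented accuracies by imagery ability and figure type.
acc = {("good", "complete"): 0.90, ("good", "fragmented"): 0.74,
       ("poor", "complete"): 0.71, ("poor", "fragmented"): 0.70}

# Fragmentation cost per group, then the interaction contrast.
cost = {g: acc[(g, "complete")] - acc[(g, "fragmented")] for g in ("good", "poor")}
interaction = cost["good"] - cost["poor"]
print(f"costs = {cost}, interaction = {interaction:.2f}")
# A large positive contrast mirrors the reported pattern: only good imagers
# pay a cost on fragmented figures.
```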


1983, Vol. 27(5), p. 354
Author(s): Bruce W. Hamill, Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among the stimuli in the searched array and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this distinction is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two stimulus sets we studied, one configuration stood apart from the others in its set: it yielded significantly faster response times, and conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
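
The parallel-versus-serial diagnostic described here is the slope of RT against array size; a sketch with invented means (the 10 ms/item cutoff is a conventional rule of thumb, not a value from this study):

```python
import numpy as np

# Invented mean RTs (ms) by array size for two search conditions.
set_sizes = np.array([4, 8, 16, 32])
rt_feature     = np.array([520, 524, 527, 531])   # near-flat curve
rt_conjunction = np.array([540, 610, 755, 1040])  # linearly increasing curve

for name, rts in (("feature", rt_feature), ("conjunction", rt_conjunction)):
    slope, _ = np.polyfit(set_sizes, rts, 1)      # ms per item
    mode = "parallel" if slope < 10 else "serial"
    print(f"{name}: {slope:.1f} ms/item -> {mode} search")
```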


1999, Vol. 11(3), pp. 300-311
Author(s): Edmund T. Rolls, Martin J. Tovée, Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been used extensively in psychophysics, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask: under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the available information decreases greatly as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate, both because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that restrict the neurons' main response to 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
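
The bit values quoted are mutual information between stimulus and response. A minimal plug-in estimator is sketched below; note that this estimator is biased upward for small samples, and the joint count table is invented:

```python
import numpy as np

def mutual_information(counts):
    """Plug-in estimate of I(S;R) in bits from a joint count table
    (rows: stimuli, columns: binned response values)."""
    p = counts / counts.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal, shape (S, 1)
    pr = p.sum(axis=0, keepdims=True)   # response marginal, shape (1, R)
    nz = p > 0                          # skip empty cells (0 * log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Invented joint table: four face stimuli x three spike-count bins.
counts = np.array([[30.0,  8.0,  2.0],
                   [10.0, 20.0, 10.0],
                   [ 5.0, 15.0, 20.0],
                   [ 2.0,  8.0, 30.0]])
print(f"estimated information: {mutual_information(counts):.2f} bits")
```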


2003, Vol. 90(2), pp. 1279-1294
Author(s): Ralph M. Siegel, Milena Raffi, Raymond E. Phinney, Jessica A. Turner, Gábor Jandó

In the behaving monkey, inferior parietal lobe cortical neurons combine visual information with eye position signals, but an organized topographic map of these neurons' properties had never been demonstrated. Intrinsic optical imaging revealed a functional architecture for the effect of eye position on the visual response to radial optic flow. The map was distributed across two subdivisions of the inferior parietal lobule: area 7a and the dorsal prelunate area (DP). Area 7a contains a representation of the lower eye position gain fields, while area DP represents the upper eye position gain fields. Horizontal eye position is represented orthogonally to vertical eye position across the mediolateral extents of these cortices. Similar topographies were found in three hemispheres of two monkeys; the horizontal and vertical gain field representations were not isotropic, with greater modulation found for the vertical. Monte Carlo methods demonstrated the significance of the maps, and the maps were verified in part using multiunit recordings. The novel topographic organization of this association cortex provides a substrate for constructing representations of surrounding space for perception and the guidance of motor behaviors.
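
A minimal sketch of the standard planar gain-field model that results like these are usually described with; the parameters are invented, and the larger vertical gain mirrors the reported anisotropy:

```python
def gain_field_response(flow_drive, eye_x, eye_y, gx=0.4, gy=0.8, baseline=1.0):
    """Visual drive scaled, not retuned, by eye position (planar gain field).
    gy > gx reflects stronger modulation by vertical eye position."""
    return flow_drive * (baseline + gx * eye_x + gy * eye_y)

# Same optic-flow stimulus, three gaze positions (normalized coordinates).
for eye in [(-0.5, -0.5), (0.0, 0.0), (0.5, 0.5)]:
    print(eye, f"{gain_field_response(10.0, *eye):.1f} spikes/s")
```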


1995, Vol. 74(1), pp. 162-178
Author(s): K. Nakamura, K. Kubota

1. We examined single-neuron activity in the temporal pole of monkeys, including the anterior ventromedial temporal (VMT) cortex (the temporopolar cortex, area 36, area 35, and the entorhinal cortex) and the anterior inferotemporal (IT) cortex, during a visual recognition memory task. In the task, a trial began when the monkey pressed a lever. After a waiting period, a visual sample stimulus (S) was presented one to four times on a monitor with an interstimulus delay. Thereafter, a new stimulus (R) was presented. The monkeys were trained to remember S during the delay period and to release the lever in response to R. Colored photographs of natural objects were used as visual stimuli. 2. About 70% of the recorded neurons (225 of 311) responded to at least one of the Ss tested. Thirty percent of these neurons (68 of 225) continued to fire during the subsequent delay periods. In 75% of these neurons (51 of 68), firing during the delay period strongly correlated with the response to S. 3. The discharge rate during the delay period did not correlate with the monkey's eye movements, pressing or releasing of the lever, or the reaction time. 4. If the monkey erroneously released the lever in response to S or during the delay period, the firing disappeared after the erroneous lever release. If the monkey failed to release the lever in response to R, the firing persisted even after R was withdrawn. The discharge rate in incorrect trials was comparable to that in correct trials. The neurons thus appeared to fire for as long as the memory of S was necessary. 5. Firing persisted even when an achromatic version of S, or only half (or even a smaller portion) of S, was presented, indicating that the color, a particular portion, or the entire shape of S was not always necessary to elicit firing. 6. Any S that elicited firing during the delay period invariably elicited a visual response. Neurons that fired during the delay period showed higher stimulus selectivity than other visually responsive neurons in the anterior VMT cortex. Thus, neurons that fire during the delay period represent a subgroup of visually responsive neurons that are selectively tuned to certain stimuli. 7. More neurons fired during the delay period in the anterior VMT cortex than in the anterior IT cortex. 8. We conclude that firing during the delay period by neurons in the temporal pole reflects the short-term storage of visual information about a particular S.
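
The sample/delay relationship reported for 51 of 68 neurons reduces to a per-neuron correlation across stimuli; a sketch with invented firing rates:

```python
import numpy as np

# Invented mean firing rates (spikes/s) of one neuron across five sample stimuli.
sample_response = np.array([25.0, 4.0, 12.0, 30.0, 6.0])  # response to each S
delay_rate      = np.array([18.0, 3.0,  9.0, 22.0, 5.0])  # subsequent delay firing

r = np.corrcoef(sample_response, delay_rate)[0, 1]
print(f"sample/delay correlation r = {r:.2f}")  # high r -> delay firing tracks S
```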

