Auditory Stimulus Timing Influences Perceived Duration of Co-Occurring Visual Stimuli

2011 ◽  
Vol 2 ◽  
Author(s):  
Vincenzo Romei ◽  
Benjamin De Haas ◽  
Robert M. Mok ◽  
Jon Driver
2012 ◽  
Vol 25 (0) ◽  
pp. 24
Author(s):  
Roberto Cecere ◽  
Benjamin De Haas ◽  
Harriett Cullen ◽  
Jon Driver ◽  
Vincenzo Romei

There is converging evidence that the duration of an auditory event can affect the perceived duration of a co-occurring visual event. When a brief visual stimulus is accompanied by a longer auditory stimulus, the perceived visual duration stretches. If this reflects a genuine prolongation of visual stimulus perception, it should result in enhanced perception of non-temporal visual stimulus qualities. To test this hypothesis, in a temporal two-alternative forced choice task, 28 participants were asked to indicate whether a short (∼24 ms), peri-threshold, visual stimulus was presented in the first or in the second of two consecutive displays. Each display was accompanied by a sound of equal or longer duration (36, 48, 60, 72, 84, 96, 190 ms) than the visual stimulus. As a control condition, visual stimuli of different durations (matching the auditory stimulus durations) were presented alone. We predicted that visual detection would improve as a function of sound duration. Moreover, if the expected cross-modal effect reflects sustained visual perception, it should positively correlate with the improvement observed for genuinely longer visual stimuli. Results showed that detection sensitivity (d′) for the 24 ms visual stimulus was significantly enhanced when paired with longer auditory stimuli ranging from 60 to 96 ms in duration. Visual detection performance dropped to baseline levels with 190 ms sounds. Crucially, the enhancement for auditory durations of 60–96 ms significantly correlated with the d′ enhancement for visual stimuli lasting 60–96 ms in the control condition. We conclude that the duration of co-occurring auditory stimuli not only influences the perceived duration of visual stimuli but also reflects a genuine prolongation of visual perception.
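The detection-sensitivity measure d′ referenced above is conventionally computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch of that standard formula follows; the rates are made-up illustrative values, not the study's data, and the authors' actual 2AFC analysis may use a variant (e.g. scaling by √2 for forced-choice designs):

```python
# Sketch of the standard signal-detection d' formula:
# d' = z(hit rate) - z(false-alarm rate),
# where z is the inverse of the standard normal CDF.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Detection sensitivity from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative values only: 75% hits, 25% false alarms
print(round(d_prime(0.75, 0.25), 3))  # → 1.349
```

Higher d′ indicates better discrimination of signal from noise; d′ = 0 corresponds to chance performance, which is why equal hit and false-alarm rates cancel out.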


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper compares reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoked faster reactions than auditory stimuli.


2016 ◽  
Vol 29 (4-5) ◽  
pp. 319-335 ◽  
Author(s):  
Riku Asaoka ◽  
Jiro Gyoba

Previous studies have shown that the perceived duration of visual stimuli can be strongly distorted by auditory stimuli presented simultaneously. In this study, we examine whether sounds presented separately from target visual stimuli alter the perceived duration of the target’s presentation. The participants’ task was to classify the duration of the target visual stimuli as perceived by them into four categories. Our results demonstrate that a sound presented before and after a visual target increases or decreases the perceived visual duration depending on the inter-stimulus interval between the sounds and the visual stimulus. In addition, three tones presented before and after a visual target did not increase or decrease the perceived visual duration. This indicates that auditory perceptual grouping prevents intermodal perceptual grouping, and eliminates crossmodal effects. These findings suggest that the auditory–visual integration, rather than a high arousal state caused by the presentation of the preceding sound, can induce distortions of perceived visual duration, and that inter- and intramodal perceptual grouping plays an important role in crossmodal time perception. These findings are discussed with reference to the Scalar Expectancy Theory.


Perception ◽  
10.1068/p5035 ◽  
2003 ◽  
Vol 32 (11) ◽  
pp. 1393-1402 ◽  
Author(s):  
Robert P Carlyon ◽  
Christopher J Plack ◽  
Deborah A Fantini ◽  
Rhodri Cusack

Carlyon et al (2001, Journal of Experimental Psychology: Human Perception and Performance, 27, 115–127) have reported that the buildup of auditory streaming is reduced when attention is diverted to a competing auditory stimulus. Here, we demonstrate that a reduction in streaming can also be obtained by attention to a visual task or by the requirement to count backwards in threes. In all conditions participants heard a 13 s sequence of tones, and, during the first 10 s, saw a sequence of visual stimuli containing three, four, or five targets. The tone sequence consisted of twenty repeating triplets in an ABA–ABA … order, where A and B represent tones of two different frequencies. In each sequence, three, four, or five tones were amplitude modulated. During the first 10 s of the sequence, participants either counted the number of visual targets, counted the number of (modulated) auditory targets, or counted backwards in threes from a specified number. They then made an auditory-streaming judgment about the last 3 s of the tone sequence: whether one or two streams were heard. The results showed more streaming when participants counted the auditory targets (and hence were attending to the tones throughout) than in either the ‘visual’ or ‘counting-backwards’ conditions.


1988 ◽  
Vol 63 (1) ◽  
pp. 311-318 ◽  
Author(s):  
Richard O. Shellenberger ◽  
Paul Lewis

In previous signal-control experiments, several types of stimuli elicited pecking when paired with peck-contingent grain. Here, we compared the effectiveness of an auditory stimulus and five visual stimuli. For 12 pigeons, the first keypeck to follow the offset of a 4-sec. signal was reinforced with grain. We examined the following signals: a tone, a white keylight, a dark keylight, a keylight that changed from white to red, houselight onset, and houselight offset. All signals acquired strong control over responding. According to one measure, percent of signals with a peck, houselight offset showed less control than the others; according to another measure, pecking rate, the white keylight showed greater control than the others. In this experiment, we found that a wide variety of stimuli can elicit strong pecking in the signal-control procedure. The present findings increase the chances that in past conditioning experiments, some keypecks thought to be due to contingencies of reinforcement were in fact elicited.


2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.


2020 ◽  
Vol 20 (11) ◽  
pp. 627
Author(s):  
Sofia Lavrenteva ◽  
Ikuya Murakami
