Cross-Modal and Non-Sensory Influences on Auditory Streaming

Perception ◽  
10.1068/p5035 ◽  
2003 ◽  
Vol 32 (11) ◽  
pp. 1393-1402 ◽  
Author(s):  
Robert P Carlyon ◽  
Christopher J Plack ◽  
Deborah A Fantini ◽  
Rhodri Cusack

Carlyon et al (2001 Journal of Experimental Psychology: Human Perception and Performance 27 115–127) have reported that the buildup of auditory streaming is reduced when attention is diverted to a competing auditory stimulus. Here, we demonstrate that a reduction in streaming can also be obtained by attention to a visual task or by the requirement to count backwards in threes. In all conditions participants heard a 13 s sequence of tones, and, during the first 10 s, saw a sequence of visual stimuli containing three, four, or five targets. The tone sequence consisted of twenty repeating triplets in an ABA–ABA … order, where A and B represent tones of two different frequencies. In each sequence, three, four, or five tones were amplitude modulated. During the first 10 s of the sequence, participants either counted the number of visual targets, counted the number of (modulated) auditory targets, or counted backwards in threes from a specified number. They then made an auditory-streaming judgment about the last 3 s of the tone sequence: whether one or two streams were heard. The results showed more streaming when participants counted the auditory targets (and hence were attending to the tones throughout) than in either the ‘visual’ or ‘counting-backwards’ conditions.

Perception ◽  
1989 ◽  
Vol 18 (1) ◽  
pp. 69-82 ◽  
Author(s):  
Dale M Stack ◽  
Darwin W Muir ◽  
Frances Sherriff ◽  
Jeanne Roman

Two studies were conducted to investigate the existence of an unusual U-shaped developmental function described by Wishart et al (1978) for human infants reaching towards invisible sounds. In study 1, 2–7 month olds were presented with four conditions: (i) an invisible auditory stimulus alone, (ii) a glowing visual stimulus alone, (iii) auditory and visual stimuli on the same side (ie combined), and (iv) auditory and visual stimuli on opposite sides (ie in conflict). Study 2 was designed to examine the effects of practice and possible associations made when using the ‘combined conflict’ paradigm. Infants of 5 and 7 months of age were given five trials with the auditory stimulus, with or without prior visual experience, and five trials with the visual stimulus, with the position of the stimulus varied on each trial. Stimuli were presented individually at the midline, and ±30° and ±60° from the midline. In both studies testing was conducted in complete darkness. Results indicated that the auditory-alone condition was slower to elicit a reach from the infants, relative to the visual-alone one, and reaches were least frequent to the auditory target. No U-shaped function was obtained, and reaching for auditory targets occurred later in age than for visual targets, but even at 7 months of age did not occur as often and was achieved by fewer infants. In both studies the quality of the reach was significantly poorer to auditory than to visual targets, but there were some accurate reaches. This research adds to our understanding of the development of auditory-manual coordination in sighted infants and is relevant to theories of auditory localization, visually guided reaching, and programming for the blind.


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 98-98
Author(s):  
U Leonards ◽  
W Singer

Segregation of textures on the basis of orientation differences between texture elements is achieved even when these texture elements differ from their surround only by colour (McIlhagga et al, 1990 Vision Research 30 489–495). This finding seems to contradict the assumption that colour and orientation are extracted in separate feature maps (eg Treisman and Sato, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 459–478). To examine whether colour information is evaluated in parallel in different processing streams for the assessment of hue and form, we tested whether texture elements can be segregated if they differ only by specific conjunctions of colour and orientation; texture elements consisted of crosses with their two crossing lines differing in colour. Texture elements defining figure and background had the same colour composition, but the conjunction of colour with the two crossing lines was reversed. Different colour combinations were tested under various luminance-contrast conditions. Irrespective of the colour combination, segmentation was achieved as long as the two crossing lines of the texture elements differed in luminance. If, however, the different colours of the two crossing lines were approximately equiluminant, segmentation was reduced or impossible. Thus, subjects were able to use conjunctions between luminance and orientation, but not between colour and orientation, for texture segregation. Our results suggest that colour cannot be associated selectively with differently oriented components of the same texture element. This supports the hypothesis that colour contrast is used in parallel by different processing streams to assess the orientation and hue of contours, and reveals limitations in the selectivity with which features are subsequently bound together.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 26-26
Author(s):  
F K Chua ◽  
J Goh ◽  
G Kek

Recent experiments (eg M M Chun and M C Potter, 1995 Journal of Experimental Psychology: Human Perception and Performance 21 109–127; J E Raymond, K L Shapiro, and K M Arnell, 1992 Journal of Experimental Psychology: Human Perception and Performance 18 849–860) with RSVP (rapid serial visual presentation) suggest that the attentional blink is caused by local interference. We present data from three RSVP experiments that provide further clues regarding the attentional blink. In experiment 1, subjects detected an ‘X’ and then identified a red letter; in experiment 2, subjects had to say whether the first red target was an ‘X’ and then identify a red letter. In experiment 3, subjects identified two red letters. We systematically varied the lag between the first and second targets. On half the trials, we also primed the second target by placing an identical letter in the lag-one position (the position after the first target). In experiment 3, we also examined whether the priming effect was semantic by using a lower-case letter as the prime. The first two experiments suggest that the priming effect is very short-lived and mainly sensory in nature. The priming effect disappears altogether if the first target is not present. More interestingly, we found that when subjects failed to detect the ‘X’, priming could still happen. The third experiment replicates and extends the results of the first two experiments. We also show that priming, albeit in a weak form, may still happen during the time when the attentional blink is supposed to occur. These results suggest that it is not an inhibition that causes the attentional blink and that sensory processing continues during the blink.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 151-151
Author(s):  
A E Stoper ◽  
J Randle ◽  
M M Cohen

Visually perceived eye level (VPEL) has been shown to be strongly affected by the pitch of the visible environment (Stoper and Cohen, 1989 Perception & Psychophysics 46 469–475), even if this environment consists of only two luminous lines pitched from the vertical (Matin and Li, 1992 Journal of Experimental Psychology: Human Perception and Performance 18 257–289). Here, two luminous vertical lines or 32 randomly distributed luminous dots were mounted on a plane that was viewed monocularly and was pitched (slanted in the pitch dimension) 30° forward or backward from the vertical. In addition to measuring the VPEL, we measured the perceived optic slant (rather than the perceived geographic slant) of this plane by requiring each of our ten subjects to set a target to the visually perceived near point (VPNP) of the plane. We found that, for the lines, VPNP shifted 50% and VPEL shifted 26% of the physical pitch of the plane. For the dots, VPNP shifted 28% but VPEL shifted only 8%. The effect of the dots on VPEL was weaker than might have been predicted by their effect on VPNP, which was used to indicate perceived optic slant. The weakness of the effect of the dots on VPEL implies that changes in VPEL result from a direct effect of the stimuli on VPEL, rather than one mediated by the perceived optic slant of the plane. The non-zero effect of the dots shows that line segments pitched from the vertical are not necessary to shift VPEL.


1959 ◽  
Vol 63 (588) ◽  
pp. 690-695
Author(s):  
E. S. Calvert

The paper I have presented to you here is a brief account of the work we have been doing in the past six years at Farnborough and B.L.E.U. In studying visual judgments during the past few years, we have been driven to one conclusion which is pretty well the same as that which Capt. Prowse put before you, namely, that we are reaching the limit of what the human being can do. Every visual task has a certain failure rate, and I think this rate depends on the value of V/ak, where V is the approach speed, a is the acceleration which the pilot is able and willing to apply during a corrective manoeuvre, and k is an index representing the goodness of the visual stimuli. The tendency is for V/a to increase as aircraft get larger and heavier, but we have hitherto managed to counteract this by improving the visual aids, i.e. by increasing k.
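The behaviour of Calvert's index can be sketched numerically. The functional form V/(ak) is read from the passage above; the numeric values below are purely illustrative and not taken from the paper.

```python
def visual_failure_index(V: float, a: float, k: float) -> float:
    """Calvert's proposed index: the visual-task failure rate rises with V/(a*k).

    V -- approach speed
    a -- acceleration the pilot is able and willing to apply in a correction
    k -- an index representing the goodness of the visual stimuli
    """
    return V / (a * k)

# Hypothetical values: doubling the approach speed doubles the index,
# while doubling the goodness of the visual aids (k) restores it --
# the compensation Calvert describes.
base = visual_failure_index(V=120.0, a=2.0, k=1.0)
faster = visual_failure_index(V=240.0, a=2.0, k=1.0)
better_aids = visual_failure_index(V=240.0, a=2.0, k=2.0)
```

This mirrors the closing observation: as V/a grows with larger, heavier aircraft, increasing k through better visual aids holds the index constant.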


2011 ◽  
Vol 2 ◽  
Author(s):  
Vincenzo Romei ◽  
Benjamin De Haas ◽  
Robert M. Mok ◽  
Jon Driver

2009 ◽  
Vol 21 (12) ◽  
pp. 2384-2397 ◽  
Author(s):  
Valerio Santangelo ◽  
Marta Olivetti Belardinelli ◽  
Charles Spence ◽  
Emiliano Macaluso

In everyday life, the allocation of spatial attention typically entails the interplay between voluntary (endogenous) and stimulus-driven (exogenous) attention. Furthermore, stimuli in different sensory modalities can jointly influence the direction of spatial attention, due to the existence of cross-sensory links in attentional control. Using fMRI, we examined the physiological basis of these interactions. We induced exogenous shifts of auditory spatial attention while participants engaged in an endogenous visuospatial cueing task. Participants discriminated visual targets in the left or right hemifield. A central visual cue preceded the visual targets, predicting the target location on 75% of the trials (endogenous visual attention). In the interval between the endogenous cue and the visual target, task-irrelevant nonpredictive auditory stimuli were briefly presented either in the left or right hemifield (exogenous auditory attention). Consistent with previous unisensory visual studies, activation of the ventral fronto-parietal attentional network was observed when the visual targets were presented at the uncued side (endogenous invalid trials, requiring visuospatial reorienting), as compared with validly cued targets. Critically, we found that the side of the task-irrelevant auditory stimulus modulated these activations, reducing spatial reorienting effects when the auditory stimulus was presented on the same side as the upcoming (invalid) visual target. These results demonstrate that multisensory mechanisms of attentional control can integrate endogenous and exogenous spatial information, jointly determining attentional orienting toward the most relevant spatial location.


2015 ◽  
Vol 115 ◽  
pp. 48-57 ◽  
Author(s):  
Gary P. Misson ◽  
Brenda H. Timmerman ◽  
Peter J. Bryanston-Cross
