Saccadic Performance as a Function of the Presence and Disappearance of Auditory and Visual Fixation Stimuli

1999 ◽  
Vol 11 (2) ◽  
pp. 206-213 ◽  
Author(s):  
Tracy L. Taylor ◽  
Raymond M. Klein ◽  
Douglas P. Munoz

Relative to when a fixated stimulus remains visible, saccadic latencies are facilitated when a fixated stimulus is extinguished simultaneously with or prior to the appearance of an eccentric auditory, visual, or combined visual-auditory target. In a study of nine human subjects, we determined whether such facilitation (the “gap effect”) occurs equivalently for the disappearance of fixated auditory stimuli and fixated visual stimuli. In the present study, a fixated auditory (noise) stimulus remained present (overlap) or else was extinguished simultaneously with (step) or 200 msec prior to (gap) the appearance of a visual, auditory (tone), or combined visual-auditory target 10° to the left or right of fixation. The results demonstrated equivalent facilitatory effects due to the disappearance of fixated auditory and visual stimuli and are consistent with the presumed role of the superior colliculus in the gap effect.
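The gap, step, and overlap conditions described above differ only in when the fixated stimulus is extinguished relative to target onset. A minimal sketch of that trial timing, using the 200 ms gap from the abstract (the function and variable names are illustrative, not from the original study):

```python
# Hedged sketch of gap/step/overlap trial timing. The 200 ms gap comes
# from the abstract; everything else is an illustrative assumption.

def fixation_offset_time(condition, target_onset_ms):
    """Return the time (ms) at which the fixated stimulus is extinguished,
    or None if it stays visible throughout (overlap condition)."""
    if condition == "gap":       # fixation off 200 ms before the target
        return target_onset_ms - 200
    if condition == "step":      # fixation off simultaneously with the target
        return target_onset_ms
    if condition == "overlap":   # fixation remains visible
        return None
    raise ValueError(f"unknown condition: {condition}")

# Example: a target appearing at t = 1000 ms into the trial
for cond in ("gap", "step", "overlap"):
    print(cond, fixation_offset_time(cond, 1000))
```

The gap effect is then the reduction in saccadic latency observed in the gap (and, to a lesser degree, step) condition relative to overlap.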

2000 ◽  
Vol 40 (20) ◽  
pp. 2763-2777 ◽  
Author(s):  
David Sparks ◽  
W.H. Rohrer ◽  
Yihong Zhang

1999 ◽  
Vol 22 (4) ◽  
pp. 681-682 ◽  
Author(s):  
Michael C. Dorris ◽  
Douglas P. Munoz

The Findlay & Walker target article emphasizes the role of the target-nonspecific “fixate” system while downplaying the role of the target-specific “move” system in determining saccade latency. We agree that disengagement of the fixate system is responsible for the target-nonspecific latency reduction associated with the gap effect. However, high target predictability and extensive training at a target location can also result in latency reductions, the culmination of this being express saccades. The target-specificity associated with the latter forms of latency reduction implicate a mechanism involving the move system. Recently discovered neurophysiological correlates underlying these behavioural phenomena reside in the superior colliculus.


Perception ◽  
10.1068/p5849 ◽  
2007 ◽  
Vol 36 (10) ◽  
pp. 1507-1512 ◽  
Author(s):  
Kerstin Königs ◽  
Jonas Knöll ◽  
Frank Bremmer

Previous studies have shown that the perceived location of visual stimuli briefly flashed during smooth pursuit, saccades, or optokinetic nystagmus (OKN) is not veridical. We investigated whether these mislocalisations can also be observed for brief auditory stimuli presented during OKN. Experiments were carried out in a lightproof sound-attenuated chamber. Participants performed eye movements elicited by visual stimuli. An auditory target (white noise) was presented for 5 ms. Our data clearly indicate that auditory targets are mislocalised during reflexive eye movements. OKN induces a shift of perceived location in the direction of the slow eye movement and is modulated in the temporal vicinity of the fast phase. The mislocalisation is stronger for look- as compared to stare-nystagmus. The size and temporal pattern of the observed mislocalisation are different from that found for visual targets. This suggests that different neural mechanisms are at play to integrate oculomotor signals and information on the spatial location of visual as well as auditory stimuli.


1977 ◽  
Vol 40 (1) ◽  
pp. 74-94 ◽  
Author(s):  
C. W. Mohler ◽  
R. H. Wurtz

1. We studied the effect of lesions placed in striate cortex or superior colliculus on the detection of visual stimuli and the accuracy of saccadic eye movements. The monkeys (Macaca mulatta) first learned to respond to a 0.25 degrees spot of light flashed for 150-200 ms in one part of the visual field while they were fixating, in order to determine whether they could detect the light. In a different task, the monkeys also learned to make a saccade to the spot of light when the fixation point went out, and the accuracy of the saccades was measured. 2. Following a unilateral partial ablation of the striate cortex, two monkeys could not detect the spot of light in the resulting scotoma or make a saccade to it. The deficit was only relative; when we increased the brightness of the stimulus from the usual 11 cd/m2 to 1,700 cd/m2 against a background of 1 cd/m2, the monkeys were able to detect and to make a saccade to the spot of light. 3. Following about 1 mo of practice on the detection and saccade tasks, the monkeys recovered the ability to detect the spots of light and to make saccades to them without gross errors (saccades made beyond an area of +/-3 average standard deviations). Lowering the stimulus intensity reinstated both the detection and saccadic errors...


2003 ◽  
Vol 89 (2) ◽  
pp. 1078-1093 ◽  
Author(s):  
Gregg H. Recanzone

Visual stimuli are known to influence the perception of auditory stimuli in spatial tasks, giving rise to the ventriloquism effect. These influences can persist in the absence of visual input following a period of exposure to spatially disparate auditory and visual stimuli, a phenomenon termed the ventriloquism aftereffect. It has been speculated that the visual dominance over audition in spatial tasks is due to the superior spatial acuity of vision compared with audition. If that is the case, then the auditory system should dominate visual perception in a manner analogous to the ventriloquism effect and aftereffect if one uses a task in which the auditory system has superior acuity. To test this prediction, the interactions of visual and auditory stimuli were measured in a temporally based task in normal human subjects. The results show that the auditory system has a pronounced influence on visual temporal rate perception. This influence was independent of the spatial location, spectral bandwidth, and intensity of the auditory stimulus. The influence was, however, strongly dependent on the disparity in temporal rate between the two stimulus modalities. Further, aftereffects were observed following approximately 20 min of exposure to temporally disparate auditory and visual stimuli. These results show that the auditory system can strongly influence visual perception and are consistent with the idea that bimodal sensory conflicts are dominated by the sensory system with the greater acuity for the stimulus parameter being discriminated.
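The acuity-based account above is often formalized as reliability-weighted (inverse-variance) cue integration: the fused estimate is pulled toward whichever modality has the smaller variance for the judged parameter. A minimal sketch of that standard formula (the example variances are illustrative assumptions, not values from this study):

```python
# Hedged sketch of maximum-likelihood cue fusion: the estimate with the
# smaller variance (higher acuity) dominates the combined percept.

def fuse(x_v, var_v, x_a, var_a):
    """Inverse-variance weighted combination of a visual estimate
    (x_v, var_v) and an auditory estimate (x_a, var_a)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    return w_v * x_v + (1.0 - w_v) * x_a

# Spatial task: vision is more reliable, so the fused estimate sits near
# the visual cue (ventriloquism effect).
print(fuse(0.0, 1.0, 10.0, 25.0))   # ~0.38, near the visual cue at 0

# Temporal-rate task: audition is more reliable, so audition dominates.
print(fuse(0.0, 25.0, 10.0, 1.0))   # ~9.6, near the auditory cue at 10
```

On this view the "dominant" modality is not fixed; it simply reflects which cue is less noisy for the parameter being discriminated, consistent with the temporal-rate results reported above.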


2019 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain’s integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented in the center of the screen, or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli—even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.
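The Bayesian causal-inference ideal observer referenced above weighs two hypotheses: the auditory and visual measurements arose from one common source, or from two independent sources. A minimal sketch of the standard common-cause posterior (zero-mean Gaussian prior over source location; the parameter values are illustrative assumptions, not the model fit from this paper):

```python
# Hedged sketch of the common-cause posterior in the standard Bayesian
# causal-inference model of multisensory perception.
import math

def posterior_common_cause(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                           sigma_p=10.0, p_common=0.5):
    """P(common cause | auditory measurement x_a, visual measurement x_v),
    comparing the marginal likelihood of one shared source against two
    independent sources, each with a N(0, sigma_p^2) location prior."""
    def gauss(d, var):
        return math.exp(-d * d / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    # C = 1: both measurements generated by a single source s, integrated out.
    var1 = (sigma_a**2 * sigma_v**2
            + (sigma_a**2 + sigma_v**2) * sigma_p**2)
    like1 = math.exp(-((x_a - x_v)**2 * sigma_p**2
                       + x_a**2 * sigma_v**2
                       + x_v**2 * sigma_a**2) / (2.0 * var1)) \
            / (2.0 * math.pi * math.sqrt(var1))

    # C = 2: two independent sources, one per modality.
    like2 = gauss(x_a, sigma_a**2 + sigma_p**2) * \
            gauss(x_v, sigma_v**2 + sigma_p**2)

    num = like1 * p_common
    return num / (num + like2 * (1.0 - p_common))

# Nearby cues support a common cause more than widely separated ones.
print(posterior_common_cause(0.0, 0.0), posterior_common_cause(0.0, 20.0))
```

An observer of this kind integrates the cues only to the extent that a common cause is probable; the finding above is that human listeners benefited from aligned but task-uninformative shapes in a way this model does not predict.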


2002 ◽  
Vol 88 (1) ◽  
pp. 438-454 ◽  
Author(s):  
B. D. Corneil ◽  
M. Van Wanrooij ◽  
D. P. Munoz ◽  
A. J. Van Opstal

This study addresses the integration of auditory and visual stimuli subserving the generation of saccades in a complex scene. Previous studies have shown that saccadic reaction times (SRTs) to combined auditory-visual stimuli are reduced when compared with SRTs to either stimulus alone. However, these results have been typically obtained with high-intensity stimuli distributed over a limited number of positions in the horizontal plane. It is less clear how auditory-visual interactions influence saccades under more complex but arguably more natural conditions, when low-intensity stimuli are embedded in complex backgrounds and distributed throughout two-dimensional (2-D) space. To study this problem, human subjects made saccades to visual-only (V-saccades), auditory-only (A-saccades), or spatially coincident auditory-visual (AV-saccades) targets. In each trial, the low-intensity target was embedded within a complex auditory-visual background, and subjects were allowed over 3 s to search for and foveate the target at 1 of 24 possible locations within the 2-D oculomotor range. We varied systematically the onset times of the targets and the intensity of the auditory target relative to background [i.e., the signal-to-noise (S/N) ratio] to examine their effects on both SRT and saccadic accuracy. Subjects were often able to localize the target within one or two saccades, but in about 15% of the trials they generated scanning patterns that consisted of many saccades. The present study reports only the SRT and accuracy of the first saccade in each trial. In all subjects, A-saccades had shorter SRTs than V-saccades, but were more inaccurate than V-saccades when generated to auditory targets presented at low S/N ratios. AV-saccades were at least as accurate as V-saccades but were generated at SRTs typical of A-saccades. The properties of AV-saccades depended systematically on both stimulus timing and S/N ratio of the auditory target. 
Compared with unimodal A- and V-saccades, the improvements in SRT and accuracy of AV-saccades were greatest when the visual target was synchronous with or leading the auditory target, and when the S/N ratio of the auditory target was lowest. Further, the improvements in saccade accuracy were greater in elevation than in azimuth. A control experiment demonstrated that a portion of the improvements in SRT could be attributable to a warning-cue mechanism, but that the improvements in saccade accuracy depended on the spatial register of the stimuli. These results agree well with earlier electrophysiological results obtained from the midbrain superior colliculus (SC) of anesthetized preparations, and we argue that they demonstrate multisensory integration of auditory and visual signals in a complex, quasi-natural environment. A conceptual model incorporating the SC is presented to explain the observed data.


2013 ◽  
Vol 280 (1763) ◽  
pp. 20130991 ◽  
Author(s):  
Takahiro Kawabe ◽  
Warrick Roseboom ◽  
Shin'ya Nishida

Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect for the pairing of an observer's key press (an action) and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action–effect intervals (intentional binding) or subjective causality ratings, is impaired when both the participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.


2018 ◽  
Vol 69 (10) ◽  
pp. 2826-2832
Author(s):  
Ioan Gabriel Sandu ◽  
Viorica Vasilache ◽  
Andrei Victor Sandu ◽  
Marin Chirazi ◽  
Cezar Honceriu ◽  
...  

The saline aerosols generated in gaseous media, as nanodispersions, behave, with respect to concentration levels and lifespan, as trimodal distributions (three domains with Gaussian distributions: fine or Aitken under 50 µm, medium between 50 and 500 µm and, respectively, coarse or large between 500 and 1000 µm). The generation in the latent state depends on the active surface of the source (the number of generator centres; the size and position of the efflorescences; the porosity, size and shape of the source; etc.), on the climatic parameters, and also on a series of other characteristics of the gaseous medium. Our team has demonstrated experimentally that saline aerosols of the NaCl type, besides their ability to prevent and treat broncho-respiratory and cardiac conditions, have, at certain concentration levels and with the co-assistance of saline aerosols of cations other than sodium and of the iodine anion, beneficial effects on the immune, bone and muscular systems. A positive influence on the development of children has likewise been demonstrated, as well as a determinant role in increasing the performance of athletes and of other human subjects engaged in intense activities.

