Auditory Psychomotor Coordination

1988 ◽  
Vol 32 (2) ◽  
pp. 81-85 ◽  
Author(s):  
David R. Perrott

A series of choice-reaction time experiments is described in which subjects were required to locate and identify the information contained on a small visual target. Across trials, the lateral position of the target was randomly varied across a 240° region (±120° relative to the subject's initial line of gaze). The vertical position of the target was either fixed at 0° elevation or varied by ±46°. Whether the target was in the forward or lateral field, a significant reduction in the visual search period was evident when an acoustic signal indicated the location of the visual target. Auditory spatial information was particularly effective in improving performance when the position of the target was varied in elevation or the target was located in the rear field. The current results support the notion that the auditory system can be used to direct eye-head movements toward a remote visual target.

Author(s):  
David R. Perrott ◽  
John Cisneros ◽  
Richard L. Mckinley ◽  
William R. D'Angelo

We examined the minimum latency required to locate and identify a visual target (visual search) in a two-alternative forced-choice paradigm in which the visual target could appear from any azimuth (0° to 360°) and from a broad range of elevations (from 90° above to 70° below the horizon) relative to a person's initial line of gaze. Seven people were tested in six conditions: unaided search, three aurally aided search conditions, and two visually aided search conditions. Aurally aided search with both actual and virtual sound localization cues proved to be superior to unaided and visually guided search. Applications of synthesized three-dimensional and two-dimensional sound cues in workstations are discussed.


Author(s):  
Douglas S. Brungart ◽  
Sarah E. Kruger ◽  
Tricia Kwiatkowski ◽  
Thomas Heil ◽  
Julie Cohen

Objective: The present study was designed to examine the impact that walking has on performance in auditory localization, visual discrimination, and aurally aided visual search tasks. Background: Auditory localization and visual search are critical skills that are frequently conducted by moving observers, but most laboratory studies of these tasks have been conducted on stationary listeners who were either seated or standing during stimulus presentation. Method: Thirty participants completed three different tasks while either standing still or walking at a comfortable, self-selected pace on a treadmill: (1) an auditory localization task, where they identified the perceived location of a target sound; (2) a visual discrimination task, where they identified a visual target presented at a known location directly in front of the listener; and (3) an aurally aided visual search task, where they identified a visual target that was presented in the presence of multiple visual distracters either in isolation or in conjunction with a spatially colocated auditory cue. Results: Participants who were walking performed auditory localization and aurally aided visual search tasks significantly faster than those who were standing, with no loss in accuracy. Conclusion: The improved aurally aided visual search performance found in this experiment may be related to enhanced overall activation caused by walking. It is also possible that the slight head movements required may have provided auditory cues that enhanced localization accuracy. Application: The results have potential applications in virtual and augmented reality displays where audio cues might be presented to listeners while walking.


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. The participants were to carry out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target's spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) The visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the rapidity of response execution. These results are discussed in the general framework of structural coupling between perception and motor planning.


1973 ◽  
Vol 25 (4) ◽  
pp. 476-491 ◽  
Author(s):  
Nigel Harvey

In a same-different judgement task with successively presented signals, subjects matched dots in different vertical positions and tones of different frequencies intramodally and intermodally. The first and second stimuli of trials in each of the four modality conditions were drawn from a set consisting of two, three or five alternatives. In all intermodal set size conditions, the dimensions of pitch and vertical position were related by the same equivalence rule. While intramodal performance improvement depended only on the total number of practice trials at matching on the relevant dimensions, intermodal performance improvement appeared to be related to the number of practice trials with each heteromodal stimulus pairing in a particular set. After performance had approached an asymptotic level, neither intramodal nor intermodal matching reaction time depended on set size. Mean "same" reaction time was less than mean "different" reaction time, and this difference was greater for intermodal matching than for intramodal matching. The results indicated that intermodal equivalence exists between discrete stimulus values on heteromodal dimensions rather than between the dimensions themselves.


1978 ◽  
Vol 22 (1) ◽  
pp. 287-291 ◽  
Author(s):  
Christine L. Nelson ◽  
Robert M. London ◽  
Gordon H. Robinson

This experiment measured eye reaction time as a function of presence or absence of a central control task, type of command, and knowledge of target direction prior to command. It was found that eye reaction time was greater when a subject was involved in a central tracking task than when he was not; it was greater when the command was symbolic than when it was spatial; and it was greater when the target direction was unknown prior to command. These variables also interacted, so that the effect of unknown target direction was greater with a symbolic command. Results of this experiment also showed that subjects sometimes used an initial compensatory pattern of eye-head movements. There were large inter-subject differences, but use of compensation generally increased with complexity of centrally located information which required processing. It thus appears that reaction time of the eye responds to information processing variables in a manner similar to other motor response systems.


1986 ◽  
Vol 55 (4) ◽  
pp. 696-714 ◽  
Author(s):  
J. van der Steen ◽  
I. S. Russell ◽  
G. O. James

We studied the effects of unilateral frontal eye-field (FEF) lesions on eye-head coordination in monkeys that were trained to perform a visual search task. Eye and head movements were recorded with the scleral search coil technique using phase angle detection in a homogeneous electromagnetic field. In the visual search task all three animals showed a neglect for stimuli presented in the field contralateral to the lesion. In two animals the neglect disappeared within 2-3 wk. One animal had a lasting deficit. We found that FEF lesions that are restricted to area 8 cause only temporary deficits in eye and head movements. Up to a week after the lesion the animals had a strong preference to direct gaze and head to the side ipsilateral to the lesion. Animals tracked objects in contralateral space with combined eye and head movements, but failed to do this with the eyes alone. It was found that within a few days after the lesion, eye and head movements in the direction of the target were initiated, but they were inadequate and had long latencies. Within 1 wk latencies had regained preoperative values. Parallel with the recovery on the behavioral task, head movements became more prominent than before the lesion. Four weeks after the lesion, peak velocity of the head movement had increased by a factor of two, whereas the duration showed a twofold decrease compared with head movements before the lesion. No effects were seen on the duration and peak velocity of gaze. After the recovery on the behavioral task had stabilized, a relative neglect in the hemifield contralateral to the lesion could still be demonstrated by simultaneously presenting two stimuli in the left and right visual hemifields. The neglect is not due to a sensory deficit, but to a disorder of programming. The recovery from unilateral neglect after an FEF lesion is the result of a different orienting behavior, in which head movements become more important. It is concluded that the FEF plays an important role in the organization and coordination of eye and head movements and that lesions of this area result in subtle but permanent changes in eye-head coordination.


Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can cause visual system overload and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked condition yielding slower search times than the non-masked condition. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.
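Several of the studies collected here place a cue or target at a given azimuth and elevation relative to the observer's initial line of gaze. As a minimal sketch of that shared geometry (the axis convention and the function name `cue_direction` are our assumptions, not taken from any of the papers), the angles can be converted to a unit direction vector of the kind a spatial audio renderer would consume:

```python
import math

def cue_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing toward a spatialized cue.

    Assumed convention (not from the papers): azimuth 0 degrees is
    straight ahead (+y), positive azimuth is rightward (+x);
    elevation 0 degrees is the horizon, positive is upward (+z).
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.sin(az)  # rightward component
    y = math.cos(el) * math.cos(az)  # forward component
    z = math.sin(el)                 # upward component
    return (x, y, z)

# Example: a target 120 degrees to the left at the horizon,
# the edge of the lateral region used by Perrott (1988).
left_rear = cue_direction(-120, 0)
```

Under this convention, `cue_direction(0, 0)` is straight ahead and `cue_direction(0, 90)` is directly overhead; the vector is always unit length, so it can be passed directly to a renderer that expects a normalized source direction.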


2020 ◽  
Vol 14 ◽  
Author(s):  
Shimpei Yamagishi ◽  
Shigeto Furukawa

It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivities of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, would also affect the saccadic reaction time (SRT). However, it was not clear how these factors interact and to what extent visual- and auditory-targeting saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and for auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size. Pupil size is considered to reflect arousal states coupled with locus coeruleus (LC) activity during a cognitive task. The main findings were that (1) the pattern of the examined effects on the SRT varied between the visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition but not for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual-targeting saccade vs. auditory-targeting saccade), and (4) there was a significant correlation between pre-saccade absolute pupil size and the SRT for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities in the effect of pupil velocity, and between absolute pupil size and pupil velocity in their correlation with the SRT, may imply that the pupil effect for the visual-target condition was caused by a modality-specific link between pupil size modulation and the superior colliculus (SC) rather than by the LC-NE (locus coeruleus-norepinephrine) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets.

