The Unimportance of Explicit Spatial Information in Serial Recall of Visually Presented Lists

1975 ◽  
Vol 27 (2) ◽  
pp. 161-164 ◽  
Author(s):  
Graham Hitch ◽  
John Morton

The superiority of auditory over visual presentation in short-term serial recall may be due to the fact that typically only temporal cues to order have been provided in the two modalities. Auditory information is usually ordered along a temporal continuum, whereas visual information is ordered spatially, as well. It is therefore possible that recall following visual presentation may benefit from spatial cues to order. Subjects were tested for serial recall of letter-sequences presented visually either with or without explicit spatial cues to order. No effect of any kind was found, a result which suggests (a) that spatial information is not utilized when it is redundant with temporal information and (b) that the auditory-visual difference would not be modified by the presence of explicit spatial cues to order.

1980 ◽  
Vol 32 (1) ◽  
pp. 85-99 ◽  
Author(s):  
Ruth Campbell ◽  
Barbara Dodd

Recent work on the integration of auditory and visual information during speech perception has indicated that adults are surprisingly good at, and rely extensively on, lip reading. The conceptual status of lip-read information is of interest: such information is at the same time both visual and phonological. Three experiments investigated the nature of short-term coding of lip-read information in hearing subjects. The first experiment used asynchronous visual and auditory information and showed that a subject's ability to repeat words, when heard speech lagged lip movements, was unaffected by the lag duration, both quantitatively and qualitatively. This suggests that lip-read information is immediately recoded into a durable code. An experiment on serial recall of lip-read items showed a serial position curve containing a recency effect (characteristic of auditory but not visual input). It was then shown that an auditory suffix diminishes the recency effect obtained with lip-read stimuli. These results are consistent with the hypothesis that seen speech that is not heard is encoded into a durable code which shares some properties with heard speech. The results of the serial recall experiments are inconsistent with interpretations of the recency and suffix effects in terms of precategorical acoustic storage, for they demonstrate that recency and suffix effects can be supra-modal.


1988 ◽  
Vol 32 (2) ◽  
pp. 75-75 ◽  
Author(s):  
Thomas Z. Strybel

Development of head-coupled control/display systems has focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations”, by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360-degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information”, by Elizabeth M. Wenzel, Frederic L. Wightman and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited, to some extent, by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality”, by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination”, will examine the interplay between the auditory, visual and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to provide efficient application of the visual channel.


2017 ◽  
Vol 26 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Stephen Darling ◽  
Richard J. Allen ◽  
Jelena Havelka

Visuospatial bootstrapping is the name given to a phenomenon whereby performance on visually presented verbal serial-recall tasks is better when stimuli are presented in a spatial array rather than at a single location. However, the display used has to be a familiar one. This phenomenon implies communication between cognitive systems involved in storing short-term memory for verbal and visual information, alongside connections to and from knowledge held in long-term memory. Bootstrapping is a robust, replicable phenomenon that should be incorporated in theories of working memory and its interaction with long-term memory. This article provides an overview of bootstrapping, contextualizes it within research on links between long-term knowledge and short-term memory, and addresses how it can help inform current working memory theory.


2018 ◽  
Vol 72 (5) ◽  
pp. 1141-1154 ◽  
Author(s):  
Daniele Nardi ◽  
Brian J Anzures ◽  
Josie M Clark ◽  
Brittany V Griffith

Among the environmental stimuli that can guide navigation in space, most attention has been dedicated to visual information. The process of determining where you are and which direction you are facing (called reorientation) has been extensively examined by providing the navigator with two sources of information—typically the shape of the environment and its features—with an interest in the extent to which they are used. Similar questions have rarely been asked of non-visual cues. Here, blindfolded sighted participants had to learn the location of a target in a real-world, circular search space. In Experiment 1, two ecologically relevant non-visual cues were provided: the slope of the floor and an array of two identical auditory landmarks. Slope successfully guided behaviour, suggesting that proprioceptive/kinesthetic access is sufficient to navigate in a slanted environment. However, although participants could localise the auditory sources, this information was not encoded. In Experiment 2, the auditory cue was made more useful for the task because it had greater predictive value and there were no competing spatial cues. Nonetheless, again, the auditory landmark was not encoded. Finally, in Experiment 3, after being prompted, participants were able to reorient by using the auditory landmark. Overall, participants failed to spontaneously rely on the auditory cue, regardless of how informative it was.


1983 ◽  
Vol 56 (1) ◽  
pp. 139-146 ◽  
Author(s):  
Michel Guay ◽  
R. B. Wilberg

The main purpose was to determine the short-term retention characteristics of temporal information when subjects experienced time under a conscious cognitive strategy for time estimation, i.e., subjects were instructed to refrain from employing time-aiding techniques. Visual durations of 1 and 4 sec. were estimated by 12 subjects under the method of reproduction. Six retention intervals were used: immediate reproduction; self-paced reproduction (subjects were allowed to recall whenever they wished); 15 and 30 sec. of rest; and 15 and 30 sec. of interpolated activity (counting backwards by threes). Variable error was used to evaluate the effects of forgetting. When subjects held a duration of 4 sec. in memory for 15 or 30 sec. of rest, they became more variable than when they recalled the item immediately or at their own pace, or when the duration to be remembered was only 1 sec. long. When an interpolated task was required during the retention interval, variability was similar to that obtained under an unfilled retention interval for both durations. The interaction between duration and retention interval in the variable error was explained in terms of memory.


2016 ◽  
Vol 39 ◽  
Author(s):  
Mary C. Potter

Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.


2020 ◽  
Author(s):  
Holly Lockhart ◽  
Blaire Dube ◽  
Kevin John MacDonald ◽  
Naseem Al-Aidroos ◽  
Stephen Emrich

Although recent evidence suggests that visual short-term memory (VSTM) is a continuous resource, little is known about how flexibly this resource can be allocated. Previous studies using probabilistic cues to indicate two different levels of probe probability have found that response precision can be predicted according to a continuous allocation of resources that depends on attentional priority. The current study used a continuous report procedure and attentional prioritization via simultaneous probabilistic spatial cues to address whether participants can use up to three levels of attentional priority to allocate VSTM resources. Three experiments were performed that varied the priority levels, the cues, and the cue presentation time. Although group-level analysis demonstrated flexible allocation, there was limited evidence that participants were using three priority levels. An individual differences approach revealed that a minority of participants were using three levels of attentional priority, demonstrating that, while possible, this is not the predominant pattern of behavior.

