Effects of Verbal Labeling on Visual Recall by Adult Aphasic Patients

1974 ◽  
Vol 38 (1) ◽  
pp. 255-262 ◽  
Author(s):  
Cynthia M. Shewan ◽  
Clinton W. Bennett

The performance of 30 adult aphasics and 30 normal Ss was compared on a task of visual recall for ambiguous figures. In some conditions a verbal label was presented simultaneously with the figure. Aphasics recalled the visual stimuli significantly less accurately than did the normals, but both groups demonstrated the same pattern of errors. When verbal labels accompanied the visual stimuli, aphasics and normals more frequently selected responses which corresponded most closely with the verbal name they heard during original exposure to the ambiguous stimuli. The data suggested that both groups used verbal strategies to perform the task but that the aphasics' strategies were perhaps less complex than those of their normal counterparts.

1981 ◽  
Vol 52 (1) ◽  
pp. 183-193 ◽  
Author(s):  
Henry L. Dee ◽  
H. Julia Hannay

It has previously been demonstrated that certain stimulus characteristics, specifically visual complexity and verbal association value, as well as mnemonic factors, are important in producing the usually obtained asymmetry in human perceptual performance and, presumably, the hemispheric asymmetry of function. The present research demonstrated that the usual left-visual-field superiority for high-complexity, low-association-value visual forms can be reversed by the acquisition and use of verbal labels for such stimuli, but is only attenuated when the labels are not used to respond to the stimuli. Simple familiarity with the visual stimuli attenuated the difference between the fields but produced no reversal. Implications of these results for hemispheric processing are discussed.


1967 ◽  
Vol 24 (1) ◽  
pp. 287-292 ◽  
Author(s):  
Ronald L. Cohen

Using a circle with a 90° gap as the stimulus figure, and the verbal labels "a clock set at 5 min. to 7" and "a clock set at 10 min. to 8," several experimental conditions were run to test the interaction between figure and label in recall, after the stimulus had been labelled just prior to its presentation. The results supported the conclusion that the main interaction occurs during the recall phase, with two separate memory traces, one visual and one verbal, existing up to the time of recall.


1997 ◽  
Vol 85 (1) ◽  
pp. 275-285 ◽  
Author(s):  
Saho Ayabe-Kanamura ◽  
Tadashi Kikuchi ◽  
Sachiko Saito

The experiment investigated the effect of verbal cues on recognition memory for unfamiliar odors. 58 participants learned 20 odors of chemical substances. The control group learned the odors without accompanying verbal labels, whereas two other groups learned the odors with accompanying verbal labels; the labels referred to relatively pleasant or unpleasant odor sources. On a memory test, administered 15 min. and also 1 wk. after the learning phase, participants were asked to distinguish the 10 learned odors from 10 unlearned odors and to evaluate each odor's pleasantness. Analysis showed (a) the verbal labels did not facilitate recognition of the unfamiliar odors, (b) recognition performance was lower after 1 wk. than after 15 min., and (c) rated pleasantness tended to be affected by the verbal label assigned to the odor in the learning phase.


2020 ◽  
Vol 10 (2) ◽  
pp. 64 ◽  
Author(s):  
Rajesh Amerineni ◽  
Resh S. Gupta ◽  
Lalit Gupta

The brain uses contextual information to uniquely resolve the interpretation of ambiguous stimuli. This paper introduces a deep learning classification model that emulates this ability by integrating weighted bidirectional context into the classification process. The model, referred to as the CINET, is implemented using a convolutional neural network (CNN), which is shown to be well suited to combining target and context stimuli and to extracting coupled target-context features. The CINET parameters can be manipulated to simulate congruent and incongruent context environments and to vary target-context stimulus relationships. The formulation of the CINET is quite general; consequently, it is restricted neither to stimuli in any particular sensory modality nor to the dimensionality of the stimuli. A broad range of experiments demonstrates the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The fact that performance improves through the inclusion of context can be exploited to design robust brain-inspired machine learning algorithms. Notably, the CINET is inspired both by the brain's ability to integrate contextual information and by the CNN, which is itself inspired by the hierarchical processing of information in the visual cortex.
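The core idea of the abstract, that an ambiguous target becomes classifiable once weighted flanking context is folded into the feature representation, can be illustrated without the full CNN. Below is a minimal, hypothetical sketch (not the authors' CINET implementation): a nearest-prototype classifier stands in for the network, the context weight `w_ctx` stands in for the learned context weighting, and the stimuli are toy feature vectors invented for the example.

```python
import math

def classify(target, left_ctx, right_ctx, prototypes, w_ctx=0.5):
    """Score each class on the target plus weighted bidirectional context.

    The target and its two flanking context vectors are concatenated into
    one combined feature vector, with the context bands scaled by w_ctx.
    The class whose prototype is nearest in this combined space wins.
    """
    combined = ([w_ctx * x for x in left_ctx]
                + list(target)
                + [w_ctx * x for x in right_ctx])
    dists = [math.dist(combined, p) for p in prototypes]
    return dists.index(min(dists))

# Two classes whose targets are identical (fully ambiguous), but whose
# typical contexts differ; prototypes include the weighted context bands.
ambiguous = [1.0, 1.0]
ctx_a, ctx_b = [1.0, 0.0], [0.0, 1.0]
w = 0.5
prototypes = [
    [w * x for x in ctx_a] + ambiguous + [w * x for x in ctx_a],  # class 0
    [w * x for x in ctx_b] + ambiguous + [w * x for x in ctx_b],  # class 1
]

# Congruent context disambiguates an otherwise identical target.
print(classify(ambiguous, ctx_a, ctx_a, prototypes))  # → 0
print(classify(ambiguous, ctx_b, ctx_b, prototypes))  # → 1
```

With no context (w_ctx = 0) the two classes are indistinguishable for this target; raising the context weight separates them, which mirrors the congruent-context benefit the paper reports.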


2021 ◽  
Author(s):  
Matthew Canham ◽  
Stefan Sütterlin ◽  
Torvald F. Ask ◽  
Benjamin J. Knox ◽  
Lauren Glenister ◽  
...  

Humans quickly and effortlessly impose narrative context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. We suggest that this feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a reduced-risk, low-investment tactic on the part of the adversary with a potentially significant reward, making them a likely tactic of choice for information operators within the context of gray-zone conflicts.


1967 ◽  
Vol 24 (2) ◽  
pp. 363-366 ◽  
Author(s):  
Suzanne D. Hill ◽  
Alva H. McCullum ◽  
Anthony G. Sceau

The effect of a systematic program of exercises on the development of retarded children's awareness of right-left directionality was studied. The children were oriented toward observing the use of specific body parts and directed to deliberately select a specified body part when making a response. Half of the children were also required to use a directional verbal label for the body part used. Ss who did not use directional verbal labels showed as much improvement as those who did. These findings suggest that the lag in development of a concept of right-left awareness found with these retarded children was not due to a deficit in verbalization per se.


1995 ◽  
Vol 1 (3) ◽  
pp. 271-280 ◽  
Author(s):  
Arne L. Ostergaard ◽  
William C. Heindel ◽  
Jane S. Paulsen

This experiment investigated the effects of verbal labels on recognition memory for ambiguous visual figures in patients with Alzheimer's disease (AD), patients with Huntington's disease (HD), and matched normal control subjects. The study employed ambiguous figures that could be interpreted in two different ways. During the study phase each figure was presented together with a verbal label that corresponded to one interpretation of the figure. After a 30-min retention interval a recognition memory test was given during which the study figures and distractor figures were presented one at a time without verbal labels. For each study figure two distractor figures were employed, each corresponding to a different interpretation of the study figure. The patients' overall recognition memory performance was severely impaired compared to control subjects. However, all subject groups tended to produce responses and response latencies to distractor items that were consistent with the verbal labels presented during the study phase. This bias effect occurred in the AD patients despite the fact that their recognition memory performance was at chance level. Indeed, there was no significant difference in the bias evidenced by the AD and HD patients and their respective matched control subjects. The bias effects were obtained in an explicit memory task, and the findings are discussed in terms of unconscious influences on explicit memory processes. (JINS, 1995, 1, 271–280.)

