Effects of Punishment on Learning by Aphasic Subjects

1973 ◽  
Vol 36 (1) ◽  
pp. 283-289
Author(s):  
Heraldean Kushner ◽  
Dee Jay Hubbard ◽  
A. W. Knox

Effects of three types of punishment on learning a paired-associate visual-matching task by aphasic Ss were investigated. Ss matched response buttons with stimulus patterns in three punishment conditions: time-out, in which E inactivated the pushbuttons and refrained from presenting a stimulus card for 15 sec.; response-cost, in which E took a penny from S for every incorrect response; and presentation of an aversive stimulus, in which 95 dB SPL of noise was delivered for 0.75 sec. contingent upon an incorrect response. Each punishment condition lasted either until criterion (10 correct responses in 10 consecutive trials) was reached or until 10 min. had elapsed. All aphasic Ss learned the task under at least one type of punishment condition; the types of punishment had differential effects for individual Ss, and Ss learned more rapidly when positive reinforcement and punishment were combined.
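The criterion-or-timeout stopping rule described above is concrete enough to sketch in code. Below is a minimal Python illustration, assuming a hypothetical present_trial callback that runs one matching trial and reports whether the response was correct; the function and constant names are invented for illustration, not taken from the study.

```python
# Illustrative sketch only: run trials under one punishment condition until
# the last 10 responses are all correct, or until 10 minutes have elapsed.
import time
from collections import deque

CRITERION_WINDOW = 10        # 10 correct responses in 10 consecutive trials
SESSION_LIMIT_S = 10 * 60    # 10-minute cap per punishment condition

def run_condition(present_trial):
    """present_trial() is a hypothetical callback: it runs one matching
    trial and returns True if the response was correct."""
    recent = deque(maxlen=CRITERION_WINDOW)
    start = time.monotonic()
    n_trials = 0
    while time.monotonic() - start < SESSION_LIMIT_S:
        recent.append(present_trial())
        n_trials += 1
        if len(recent) == CRITERION_WINDOW and all(recent):
            return n_trials, True    # criterion reached
    return n_trials, False           # 10-min limit reached first
```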

1971 ◽  
Vol 29 (3) ◽  
pp. 791-796
Author(s):  
R. H. Willoughby

Seventy-two college Ss were given 50 training and 20 test trials on a conditional matching task in which the color of the comparison stimuli served as the conditional cue. Comparison stimuli were presented in runs of 1, in runs of 5, or at random. All Ss received continuous reinforcement for correct responses, and half received a 30-sec. TO after every incorrect response. Performance early in training was significantly higher under stimulus-compounding, but random presentation produced significantly better performance on test trials. Although the difference was not statistically significant, punishment (TO) facilitated performance more under random presentation than under stimulus-compounding.
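The three presentation schedules lend themselves to a short sketch. The Python illustration below assumes a two-color conditional cue; the colors and function names are illustrative, not from the paper.

```python
# Illustrative sketch: generate a comparison-stimulus color sequence in
# runs of 1 (strict alternation), runs of 5, or fully at random.
import random

def make_schedule(n_trials, run_length=None, colors=("red", "green")):
    """run_length=1 or 5 gives fixed-length runs; None gives random order."""
    if run_length is None:
        return [random.choice(colors) for _ in range(n_trials)]
    seq = []
    run = 0
    while len(seq) < n_trials:
        seq.extend([colors[run % len(colors)]] * run_length)
        run += 1
    return seq[:n_trials]

training_trials = make_schedule(50, run_length=5)  # e.g., the runs-of-5 group
```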


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bo Dong ◽  
Airui Chen ◽  
Yuting Zhang ◽  
Yangyang Zhang ◽  
Ming Zhang ◽  
...  

Abstract: Inaccurate egocentric distance and speed perception are the two main explanations for the high accident rate associated with driving in foggy weather. The effect of fog on perceived speed has been well studied; its effect on egocentric distance perception is poorly understood. Previous studies measured perceived egocentric distance with verbal estimation rather than a nonverbal paradigm. In the current research, a nonverbal paradigm, the visual matching task, was used. Our results from the nonverbal task revealed a robust fog effect on egocentric distance: observers overestimated egocentric distance in foggy weather compared with clear weather, and the higher the concentration of fog, the greater the overestimation. This effect was not limited to a certain distance range but held in both action space and vista space. Our findings confirm the fog effect with a nonverbal paradigm and suggest that, when measured nonverbally, people perceive egocentric distance in foggy weather more "accurately" than verbal estimation tasks have indicated.
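One way to quantify the overestimation the abstract describes is a matched-to-physical distance ratio per fog level. The sketch below uses made-up placeholder numbers purely to show the computation; it is not the authors' analysis or data.

```python
# Placeholder illustration: overestimation index = matched / physical distance,
# averaged within each fog condition (ratio > 1 means overestimation).
from statistics import mean

# (fog_condition, physical_distance_m, matched_distance_m) -- invented values
trials = [
    ("clear",     10.0, 10.1),
    ("light_fog", 10.0, 11.2),
    ("heavy_fog", 10.0, 12.6),
]

ratios = {}
for fog, physical, matched in trials:
    ratios.setdefault(fog, []).append(matched / physical)

for fog, rs in ratios.items():
    print(f"{fog}: mean overestimation ratio = {mean(rs):.2f}")
```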


Author(s):  
Daniel Campbell ◽  
Corey Ray-Subramanian ◽  
Winifred Schultz-Krohn ◽  
Kristen M. Powers ◽  
Renee Watling ◽  
...  

2019 ◽  
Author(s):  
Tianhe Wang ◽  
Ziyan Zhu ◽  
Inoue Kana ◽  
Yuanzheng Yu ◽  
Hao He ◽  
...  

Abstract: Accumulating evidence indicates that the human proprioceptive map is subject-specific. However, whether this idiosyncratic pattern persists across time with good within-subject consistency has not been quantitatively examined. Here we measured proprioception with a hand visual-matching task in multiple sessions over two days. We found that people improved their proprioception when tested repeatedly without performance feedback. Importantly, despite the reduction in average error, the spatial pattern of proprioceptive errors remained idiosyncratic. Based on individuals' proprioceptive performance, a standard convolutional neural network classifier could identify people with good accuracy. We also found that subjects' baseline proprioceptive performance could not predict their motor performance in a visual trajectory-matching task, even though both tasks require accurate mapping of hand position to visual targets in the same workspace. In a separate experiment, we not only replicated these findings but also ruled out the possibility that performance feedback during a few familiarization trials caused the observed improvement in proprioception. We conclude that the conventional proprioception test itself, even without feedback, can improve proprioception while leaving the idiosyncrasy of proprioception unchanged.
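The classifier described above can be sketched in outline. The PyTorch snippet below is an assumption-laden illustration, not the authors' architecture: the grid size, channel counts, and subject count are invented. It shows the general idea of a small convolutional network mapping a 2-D grid of proprioceptive errors to a subject-identity prediction.

```python
# Minimal sketch (assumed details): a small CNN that classifies a 2-D map
# of proprioceptive errors by subject identity.
import torch
import torch.nn as nn

class SubjectCNN(nn.Module):
    def __init__(self, n_subjects, grid=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # two 2x pools shrink the grid by 4 in each dimension
        self.classifier = nn.Linear(16 * (grid // 4) ** 2, n_subjects)

    def forward(self, x):            # x: (batch, 1, grid, grid) error map
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SubjectCNN(n_subjects=20)
logits = model(torch.randn(4, 1, 16, 16))   # 4 hypothetical error maps
```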


2019 ◽  
Vol 11 (4) ◽  
pp. 474-482
Author(s):  
Kristina Howansky ◽  
Analia Albuja ◽  
Shana Cole

In four studies, we explored perceptual representations of the gender-typicality of transgender individuals. In Studies 1a and 1b, participants (N = 237) created an avatar based on an image of an individual who either disclosed being transgender or did not. Avatars generated in the transgender condition were less gender-typical (that is, transmen were less masculine and transwomen were less feminine) than those created in the control condition. In Study 2 (N = 368), using a unique visual matching task, participants represented a target labeled transgender as less gender-typical than the same target labeled cisgender. In Study 3 (N = 228), perceptual representations of transwomen as less gender-typical led to lower acceptability of feminine behavior and less endorsement that the target should be categorized as female. We discuss how biased perceptual representations may contribute to the stigmatization and marginalization of transgender individuals.


1981 ◽  
Vol 4 (1) ◽  
pp. 38-43 ◽  
Author(s):  
Jed P. Luchow ◽  
Margaret Jo Shepherd

The purpose of this study was to examine the effect of multisensory input on the performance of learning disabled boys on a visual matching task. A thirty-item multiple-choice visual dot-pattern matching task was given to 160 boys, ages 6 years through 8 years, 11 months, who were enrolled in special classes for children with learning problems. Across the four treatment groups (visual input only, visual plus tactile, visual plus auditory, visual plus auditory plus tactile), the differences between the mean of the visual-only group and the means of the visual-auditory and visual-auditory-tactile groups were significant at p < .05. The results suggest that, on a perceptual task not related to reading or mathematics, the addition of input from the tactile and auditory sensory modalities does not improve learning performance and, in certain combinations, actually interferes with it.
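As a rough illustration of the kind of four-group comparison reported above, the snippet below runs a one-way ANOVA over the treatment conditions with SciPy; the scores are invented placeholders, not the study's data, and the original analysis may have used different tests.

```python
# Placeholder illustration: one-way ANOVA across the four input conditions.
from scipy.stats import f_oneway

visual_only        = [22, 24, 21, 25, 23]   # invented matching scores
visual_tactile     = [21, 23, 22, 24, 22]
visual_auditory    = [18, 19, 17, 20, 18]
visual_aud_tactile = [17, 18, 16, 19, 17]

F, p = f_oneway(visual_only, visual_tactile, visual_auditory, visual_aud_tactile)
print(f"F = {F:.2f}, p = {p:.4f}")   # p < .05 would indicate a group difference
```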

