Discrimination and Generalization of Complex Visual Shape Variations in Pigeons

1972 · Vol 35 (3) · pp. 915-927
Author(s): Douglas P. Ferraro, Michael G. Grisham

Three experiments investigated stimulus control of key pecking in pigeons by varying the distance of vertex movement in a six-point complex visual shape. Ease of discrimination learning was monotonically related to the distance of vertex movement when the directions of movement were held constant. As predicted by selective attention theory, steep generalization gradients were obtained following intradimensional differential training, but not following nondifferential training or interdimensional differential training. These results indicate that, unlike the dimension of angular orientation or tilt, distance of vertex movement provides a consistent functional representation of complex shape similarity.

1970 · Vol 21 (5) · pp. 298-300
Author(s): Daniel F. Johnson, William H. Anderson

2005 · Vol 16 (Supplement 1) · pp. S64
Author(s): L. Hogarth, A. Dickinson, S.B. Hutton, H. Bamborough, T. Duka

1969 · Vol 25 (1) · pp. 139-148
Author(s): William M. Wiest

Conditions necessary for the development of social interaction were examined with 7 Observer pigeons, each working beside a Model pigeon. Each Observer was conditioned to attend to the Model's behavior. The Model's key-pecking rate on a multiple fixed-ratio/extinction schedule was controlled by stimuli projected on its key (not visible to the Observer), while the Observer, whose key always remained the same color, had no discriminative stimuli other than those provided by the Model's behavior. More precise control of the Observer's behavior occurred when the Model could be both seen and heard than when it could only be heard.


2021 · Vol 17 (6) · pp. e1008981
Author(s): Yaniv Morgenstern, Frieder Hartmann, Filipp Schmidt, Henning Tiedemann, Eugen Prokott, ...

Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into computations underlying human shape perception. To address this need, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
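The feature-based approach described above can be illustrated with a toy sketch. The snippet below is a hypothetical, minimal illustration only (it is not the authors' ShapeComp code): it computes two of the feature families the abstract names, area and compactness (plus perimeter), for closed polygonal silhouettes, and uses Euclidean distance between feature vectors as a crude stand-in for a learned similarity metric.

```python
import math

def shape_features(points):
    """Toy feature vector (area, perimeter, compactness) for a closed polygon.

    Compactness = 4*pi*area / perimeter**2, which equals 1.0 for a circle
    and decreases as the outline becomes less circular.
    """
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1              # shoelace formula term
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    compactness = 4.0 * math.pi * area / perimeter ** 2
    return (area, perimeter, compactness)

def feature_distance(shape_a, shape_b):
    """Euclidean distance between feature vectors: a crude similarity proxy."""
    return math.dist(shape_features(shape_a), shape_features(shape_b))

# Example silhouettes: a unit square and a 64-gon approximating a unit circle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
circle = [(math.cos(2 * math.pi * t / 64), math.sin(2 * math.pi * t / 64))
          for t in range(64)]
```

A real model along the lines the abstract sketches would use many more descriptors (e.g., Fourier descriptors of the boundary) and calibrate the distance metric against the variance of a large silhouette database rather than raw Euclidean distance.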


2019 · Vol 19 (10) · pp. 37c
Author(s): Yaniv Morgenstern, Filipp Schmidt, Frieder Hartmann, Henning Tiedemann, Eugen Prokott, ...
