An image-computable model of visual shape similarity

2019 · Vol 19 (10) · pp. 37c
Author(s):
Yaniv Morgenstern
Filipp Schmidt
Frieder Hartmann
Henning Tiedemann
Eugen Prokott
...
2021 · Vol 17 (6) · pp. e1008981
Author(s):
Yaniv Morgenstern
Frieder Hartmann
Filipp Schmidt
Henning Tiedemann
Eugen Prokott
...

Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into computations underlying human shape perception. To address this need, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
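The abstract describes ShapeComp as a distance in a space of 100+ shape features (area, compactness, Fourier descriptors, and others). A minimal sketch of that feature-vector idea, using only three illustrative features rather than the paper's full set (the function names and normalization choices here are hypothetical, not the authors' implementation):

```python
import numpy as np

def shape_features(contour, n_fourier=8):
    """Small feature vector for a closed 2-D contour (rows are (x, y) points).
    Stand-ins for a few of ShapeComp's 100+ features."""
    x, y = contour[:, 0], contour[:, 1]
    # Area via the shoelace formula.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: sum of edge lengths around the closed contour.
    edges = np.roll(contour, -1, axis=0) - contour
    perimeter = np.sum(np.linalg.norm(edges, axis=1))
    # Compactness: 1 for a circle, smaller for elongated or irregular shapes.
    compactness = 4.0 * np.pi * area / perimeter**2
    # Fourier descriptor magnitudes of the complex boundary; dropping the DC
    # term gives translation invariance, normalizing by the first magnitude
    # gives scale invariance.
    z = x + 1j * y
    mags = np.abs(np.fft.fft(z))[1:n_fourier + 1]
    mags = mags / (mags[0] + 1e-12)
    return np.concatenate([[area, compactness], mags])

def shape_distance(contour_a, contour_b):
    """Euclidean distance in feature space as a proxy for perceived dissimilarity."""
    return np.linalg.norm(shape_features(contour_a) - shape_features(contour_b))
```

In this toy version, two samplings of the same circle sit at distance zero while a circle and a 2:1 ellipse are pushed apart by their differing area and compactness; the real model's contribution is choosing and weighting many such features so that the resulting distances track human similarity judgments.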


Author(s):
Yaniv Morgenstern
Frieder Hartmann
Filipp Schmidt
Henning Tiedemann
Eugen Prokott
...

Shape is a defining feature of objects. Yet, no image-computable model accurately predicts how similar or different shapes appear to human observers. To address this, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp predicts human shape similarity judgments almost perfectly (r² > 0.99) without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that human shape perception is inherently multidimensional and optimized for comparing natural shapes. ShapeComp outperforms conventional metrics, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.


1972 · Vol 35 (3) · pp. 915-927
Author(s):
Douglas P. Ferraro
Michael G. Grisham

Three experiments investigated stimulus control of key pecking in pigeons by varying the distance of vertex movement in a six-point complex visual shape. Ease of discrimination learning was monotonically related to the distance of vertex movement when the directions of vertex movement were held constant. As suggested by selective attention theory, steep generalization gradients were obtained following intradimensional differential training, but not following nondifferential training or interdimensional differential training. These results indicate that, unlike the dimension of angular orientation or tilt, distance of vertex movement provides a consistent functional representation of complex shape similarity.


1999 · Vol 24 (4) · pp. 377-383
Author(s):
G. A. van Zanten
A. van de Sande
M. P. Brocaar

2011 · Vol 18 (4) · pp. 1597-1610
Author(s):
Chaoqian Cai
Jiayu Gong
Xiaofeng Liu
Hualiang Jiang
Daqi Gao
...

2018 · Vol 38 (31) · pp. 6888-6899
Author(s):
Peter Kok
Nicholas B. Turk-Browne