Strategies of shape representation in macaque visual area V2

2003 ◽  
Vol 20 (3) ◽  
pp. 313-328 ◽  
Author(s):  
JAY HEGDÉ ◽  
DAVID C. VAN ESSEN

Contours and surface textures provide powerful cues used in image segmentation and the analysis of object shape. To learn more about how the visual system extracts and represents these visual cues, we studied the responses of V2 neurons in awake, fixating monkeys to complex contour stimuli (angles, intersections, arcs, and circles) and texture patterns such as non-Cartesian gratings, along with conventional bars and sinusoidal gratings. Substantial proportions of V2 cells conveyed information about many contour and texture characteristics associated with our stimuli, including shape, size, orientation, and spatial frequency. However, the cells differed considerably in terms of their degree of selectivity for the various stimulus characteristics. On average, V2 cells responded better to grating stimuli but were more selective for contour stimuli. Metric multidimensional scaling and principal components analysis showed that, as a population, V2 cells show strong correlations in how they respond to different stimulus types. The first two and five principal components accounted for 69% and 85% of the overall response variation, respectively, suggesting that the response correlations simplified the population representation of shape information with relatively little loss of information. Moreover, smaller random subsets of the population carried response correlation patterns very similar to the population as a whole, indicating that the response correlations were a widespread property of V2 cells. Thus, V2 cells extract information about a number of higher order shape cues related to contours and surface textures and about similarities among many of these shape cues. This may reflect an efficient strategy of representing cues for image segmentation and object shape using finite neuronal resources.
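The population analysis described above (principal components of a cells × stimuli response matrix, with cumulative variance reported for the first two and five components) can be sketched with numpy. The population sizes, latent structure, and noise level below are invented for illustration, not taken from the study:

```python
import numpy as np

def explained_variance(responses, n_components):
    """Cumulative fraction of variance captured by the top principal
    components of a (cells x stimuli) response matrix."""
    X = responses - responses.mean(axis=0, keepdims=True)  # center each stimulus dimension
    s = np.linalg.svd(X, compute_uv=False)  # singular values of the centered matrix
    var = s ** 2                            # variance along each principal axis
    return var[:n_components].sum() / var.sum()

# Hypothetical population: responses dominated by a few shared modes,
# mimicking the strong response correlations reported for V2 cells
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 5))      # 120 cells, 5 latent factors
loadings = rng.normal(size=(5, 48))     # 48 stimulus conditions
responses = latent @ loadings + 0.3 * rng.normal(size=(120, 48))

print(round(explained_variance(responses, 2), 2))
print(round(explained_variance(responses, 5), 2))
```

Because the simulated responses share only five latent factors, the first few components capture most of the variance, which is the signature the abstract interprets as an efficient, low-dimensional population code.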

2017 ◽  
Author(s):  
Le Chang ◽  
Pinglei Bao ◽  
Doris Y. Tsao

Abstract: An important question about color vision is: how does the brain represent the color of an object? The recent discovery of “color patches” in macaque inferotemporal (IT) cortex, the part of the brain responsible for object recognition, makes this problem experimentally tractable. Here we record neurons in three color patches, the middle color patch CLC (central lateral color patch) and two anterior color patches, ALC (anterior lateral color patch) and AMC (anterior medial color patch), while presenting images of objects systematically varied in hue. We found that all three patches contain high concentrations of hue-selective cells, and that the three patches use distinct computational strategies to represent colored objects: while all three patches multiplex hue and shape information, shape-invariant hue information is much stronger in the anterior color patches ALC/AMC than in CLC; furthermore, hue and object shape specifically for primate faces/bodies are over-represented in AMC but not in the other two patches.
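One way to quantify the "shape-invariant hue information" contrasted across patches is to measure how consistent a cell's hue tuning curve is across different object shapes. The sketch below is a hypothetical illustration of that idea, not the authors' analysis; the two simulated cells and their tuning are invented:

```python
import numpy as np

def hue_tuning_consistency(responses):
    """Mean pairwise correlation between hue tuning curves measured
    with different object shapes (responses: shapes x hues)."""
    r = np.corrcoef(responses)            # shape-by-shape correlation matrix
    n = r.shape[0]
    return r[~np.eye(n, dtype=bool)].mean()

rng = np.random.default_rng(1)
hues = np.linspace(0, 2 * np.pi, 12, endpoint=False)
tuning = np.exp(np.cos(hues - 1.0))       # a shared hue preference

# Hypothetical shape-invariant cell: same hue tuning for every shape,
# differing only in overall gain
invariant = tuning[None, :] * rng.uniform(0.5, 1.5, size=(6, 1))
# Hypothetical non-invariant cell: hue tuning reshuffled for each shape
variable = np.stack([rng.permutation(tuning) for _ in range(6)])

print(round(hue_tuning_consistency(invariant), 2))
print(round(hue_tuning_consistency(variable), 2))
```

A consistency near 1 marks hue tuning that survives shape changes (as reported for ALC/AMC cells), while values near 0 indicate hue and shape are entangled.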


2016 ◽  
Vol 10 (4) ◽  
pp. 327-338 ◽  
Author(s):  
Anwesha Khasnobish ◽  
Monalisa Pal ◽  
Dwaipayan Sardar ◽  
D. N. Tibarewala ◽  
Amit Konar

2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽
Author(s):  
Zheng Wang ◽  
Qingbiao Wu

Shape completion is an important task in the field of image processing. One approach is to capture the shape information and perform the completion with a generative model such as a Deep Boltzmann Machine (DBM). With its powerful ability to model the distribution of shapes, it is straightforward to obtain a result by sampling from the model. In this paper, we make use of the hidden activations of the DBM and incorporate them with convolutional shape features to fit a regression model. We compare the output of the regression model with the incomplete shape feature in order to set a proper, compact mask for sampling from the DBM. Experiments show that our method obtains realistic results without any prior information about the incomplete object shape.
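The core completion step the abstract relies on, sampling the missing region from a Boltzmann-family model while clamping the observed pixels, can be sketched with a toy restricted Boltzmann machine. The weights below are random placeholders (they would normally be learned from shape data), and the mask is assumed rather than derived from the paper's regression model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy RBM over 16 binary pixels with 8 hidden units
n_vis, n_hid = 16, 8
W = rng.normal(scale=0.5, size=(n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def complete(v, mask, steps=50):
    """Fill in masked visible units by Gibbs sampling while clamping
    the observed ones (mask == True marks the missing pixels)."""
    v = v.copy()
    for _ in range(steps):
        h = (rng.random(n_hid) < sigmoid(v @ W + b_hid)).astype(float)
        v_new = (rng.random(n_vis) < sigmoid(W @ h + b_vis)).astype(float)
        v[mask] = v_new[mask]             # resample only the missing pixels
    return v

observed = rng.integers(0, 2, n_vis).astype(float)
mask = np.zeros(n_vis, dtype=bool)
mask[4:8] = True                          # pretend these pixels are missing
filled = complete(observed, mask)
print(filled[~mask].tolist() == observed[~mask].tolist())  # clamped pixels unchanged
```

The point of the paper's compact mask is to make this clamped region as large as possible, so the sampler only has to invent the genuinely missing part of the shape.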


2019 ◽  
Vol 16 (2(SI)) ◽  
pp. 0504 ◽  
Author(s):  
Abu Bakar et al.

Zernike moments have been widely used in shape-based image retrieval studies because of their powerful shape representation. However, their strengths and weaknesses have not been clearly highlighted in previous studies, so this representational power could not be fully exploited. In this paper, a method to fully capture the shape representation properties of Zernike moments is implemented and tested on single objects in binary and grey-level images. The proposed method works by determining the boundary of the shape object and then resizing the object to the boundary of the image. Three case studies were conducted. In Case 1, Zernike moments are computed on the original shape object image. In Case 2, the centroid of the shape object from Case 1 is relocated to the center of the image. In Case 3, the proposed method first detects the outer boundary of the shape object and then resizes the object to the boundary of the image. Experimental investigations using two benchmark shape image datasets showed that the proposed method in Case 3 provides the most superior image retrieval performance compared with both Case 1 and Case 2. In conclusion, to fully capture the powerful shape representation properties of Zernike moments, a shape object should be resized to the boundary of the image.
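The Case 3 preprocessing, finding the object's outer boundary and scaling the object to fill the image frame before computing Zernike moments, can be sketched in numpy. This is a minimal nearest-neighbour version under the assumption of a binary single-object image; the paper's exact boundary detection and resampling may differ:

```python
import numpy as np

def fit_to_frame(img, size):
    """Crop a binary shape to its bounding box, then scale it to fill
    a size x size frame (nearest-neighbour resampling)."""
    ys, xs = np.nonzero(img)                       # pixels belonging to the object
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    rows = np.arange(size) * h // size             # source row for each output row
    cols = np.arange(size) * w // size             # source column for each output column
    return crop[np.ix_(rows, cols)]

# Hypothetical 8x8 image with a small 2x2 square near one corner
img = np.zeros((8, 8), dtype=np.uint8)
img[1:3, 1:3] = 1
resized = fit_to_frame(img, 8)
print(resized.shape, int(resized.sum()))           # the square now fills the frame
```

After this normalization, the Zernike moments (computed, for example, with `mahotas.features.zernike_moments`) describe the shape itself rather than its position or scale inside the frame, which is the property Case 3 exploits.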

