Object shape representation via skeletal models (s-reps) and statistical analysis

Author(s):  
Stephen M. Pizer ◽  
Junpyo Hong ◽  
Jared Vicory ◽  
Zhiyuan Liu ◽  
J.S. Marron ◽  
...  
2019 ◽  
Vol 16 (2(SI)) ◽  
pp. 0504 ◽  
Author(s):  
Abu Bakar et al.

Zernike Moments have been widely used in shape-based image retrieval studies because of their powerful shape representation. However, their strengths and weaknesses have not been clearly highlighted in previous studies, so this representational power could not be fully exploited. In this paper, a method to fully capture the shape representation properties of Zernike Moments is implemented and tested on a single object in binary and grey-level images. The proposed method determines the boundary of the shape object and then resizes the object to the boundary of the image. Three case studies were made. In Case 1, Zernike Moments are computed on the original shape object image. In Case 2, the centroid of the shape object from Case 1 is relocated to the center of the image. In Case 3, the proposed method first detects the outer boundary of the shape object and then resizes the object to the boundary of the image. Experimental investigations using two benchmark shape image datasets showed that the proposed method in Case 3 provided superior image retrieval performance compared to both Case 1 and Case 2. In conclusion, to fully capture the powerful shape representation properties of Zernike Moments, a shape object should be resized to the boundary of the image.
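The Case 3 preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it crops a single object to its bounding box and rescales it with nearest-neighbour indexing so that the object's outer boundary touches the image border; the function name and thresholding convention are assumptions. Zernike moments could then be computed on the result with a library of the reader's choice.

```python
import numpy as np

def fit_object_to_image(img, threshold=0):
    """Crop the object to its bounding box, then rescale the crop back
    to the full image size so the object fills the image (Case 3 idea).

    `img` is a 2-D array; pixels above `threshold` belong to the object.
    Nearest-neighbour resampling keeps the sketch dependency-free.
    """
    ys, xs = np.nonzero(img > threshold)
    if ys.size == 0:          # no object found; return the image unchanged
        return img.copy()
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = img.shape
    # Map each output pixel to its nearest source pixel in the crop.
    row_idx = (np.arange(h) * crop.shape[0] / h).astype(int)
    col_idx = (np.arange(w) * crop.shape[1] / w).astype(int)
    return crop[np.ix_(row_idx, col_idx)]

# Example: a small solid square in one corner of an 8x8 binary image
# is stretched to cover the whole image.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 3:5] = 1
fitted = fit_object_to_image(img)
```

Relocating the centroid (Case 2) changes only translation, while this resizing also normalizes scale, which is consistent with the abstract's finding that Case 3 retrieves best.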


Author(s):  
Yasser Ebrahim ◽  
Maher Ahmed ◽  
Siu-Cheung Chau ◽  
Wegdan Abdelsalam

2001 ◽  
Vol 13 (2) ◽  
pp. 80-94 ◽  
Author(s):  
Yannis Avrithis ◽  
Yiannis Xirouhakis ◽  
Stefanos Kollias

2003 ◽  
Vol 20 (3) ◽  
pp. 313-328 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

Contours and surface textures provide powerful cues used in image segmentation and the analysis of object shape. To learn more about how the visual system extracts and represents these visual cues, we studied the responses of V2 neurons in awake, fixating monkeys to complex contour stimuli (angles, intersections, arcs, and circles) and texture patterns such as non-Cartesian gratings, along with conventional bars and sinusoidal gratings. Substantial proportions of V2 cells conveyed information about many contour and texture characteristics associated with our stimuli, including shape, size, orientation, and spatial frequency. However, the cells differed considerably in terms of their degree of selectivity for the various stimulus characteristics. On average, V2 cells responded better to grating stimuli but were more selective for contour stimuli. Metric multidimensional scaling and principal components analysis showed that, as a population, V2 cells show strong correlations in how they respond to different stimulus types. The first two and five principal components accounted for 69% and 85% of the overall response variation, respectively, suggesting that the response correlations simplified the population representation of shape information with relatively little loss of information. Moreover, smaller random subsets of the population carried response correlation patterns very similar to the population as a whole, indicating that the response correlations were a widespread property of V2 cells. Thus, V2 cells extract information about a number of higher order shape cues related to contours and surface textures and about similarities among many of these shape cues. This may reflect an efficient strategy of representing cues for image segmentation and object shape using finite neuronal resources.
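The variance accounting reported above (69% for two principal components, 85% for five) can be illustrated with a minimal PCA-by-SVD sketch. The response matrix here is synthetic, purely an assumption for demonstration, and the random data will not reproduce the study's percentages; it only shows how such fractions are computed.

```python
import numpy as np

# Hypothetical response matrix: rows = neurons, columns = stimulus types.
# This is simulated data, not the recordings from the study.
rng = np.random.default_rng(0)
responses = rng.normal(size=(40, 12))

# Center each stimulus dimension, then take singular values via SVD;
# squared singular values are proportional to variance per component.
centered = responses - responses.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Fraction of total response variance captured by the first k components,
# analogous to the k=2 and k=5 figures reported in the abstract.
frac2 = explained[:2].sum()
frac5 = explained[:5].sum()
```

A large `frac2` relative to the number of dimensions is what licenses the abstract's claim that response correlations compress the population code with little loss of information.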

