A Comparative Analysis of the Zernike Moments for Single Object Retrieval

2019, Vol. 16 (2(SI)), pp. 0504
Author(s):  
Abu Bakar et al.

Zernike Moments have been widely used in shape-based image retrieval studies owing to their powerful shape representation. However, their strengths and weaknesses have not been clearly highlighted in previous studies, so this representational power could not be fully exploited. In this paper, a method that fully captures the shape representation properties of Zernike Moments is implemented and tested on single-object binary and grey-level images. The proposed method works by determining the boundary of the shape object and then resizing the object to the boundary of the image. Three case studies were conducted. Case 1 applies Zernike Moments to the original shape object image. In Case 2, the centroid of the shape object in Case 1 is relocated to the centre of the image. In Case 3, the proposed method first detects the outer boundary of the shape object and then resizes the object to the boundary of the image. Experiments on two benchmark shape image datasets showed that the proposed method in Case 3 delivers superior image retrieval performance compared with both Case 1 and Case 2. In conclusion, to fully capture the powerful shape representation properties of Zernike moments, a shape object should be resized to the boundary of the image.
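A minimal sketch of the Case 3 preprocessing described above might look as follows: crop the object to its outer bounding box, resize it so the shape fills the image frame, and only then compute Zernike moments. The use of OpenCV and mahotas, and the parameter choices, are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of "resize the object to the image boundary" before computing Zernike moments.
import cv2
import numpy as np
import mahotas

def zernike_after_boundary_resize(img, degree=8, out_size=128):
    """Crop the single object to its outer bounding box, resize it to fill the
    frame, then compute Zernike moments of the resized shape (binary or grey-level)."""
    # Binarise (a no-op for already-binary images) to locate the object's bounding box.
    _, mask = cv2.threshold(img.astype(np.uint8), 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    # Resize the cropped object so its boundary touches the image boundary.
    resized = cv2.resize(img[y:y + h, x:x + w], (out_size, out_size),
                         interpolation=cv2.INTER_NEAREST)
    # Zernike moments over a disc covering the whole, now object-filling, frame.
    return mahotas.features.zernike_moments(resized, radius=out_size // 2,
                                            degree=degree)
```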

Author(s):  
Wei-Bang Chen ◽  
Chengcui Zhang

Inaccurate image segmentation often has a negative impact on object-based image retrieval. Researchers have attempted to alleviate this problem by using hierarchical image representation. However, these attempts suffer from the inefficiency in building the hierarchical image representation and the high computational complexity in matching two hierarchically represented images. This paper presents an innovative multiple-object retrieval framework named Multiple-Object Image Retrieval (MOIR) on the basis of hierarchical image representation. This framework concurrently performs image segmentation and hierarchical tree construction, producing a hierarchical region tree to represent the image. In addition, an efficient hierarchical region tree matching algorithm is designed for multiple-object retrieval with a reasonably low time complexity. The experimental results demonstrate the efficacy and efficiency of the proposed approach.
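As a rough illustration of the kind of structure involved, the sketch below defines a hierarchical region tree whose nodes hold a region descriptor and its finer sub-regions, together with a naive top-down matching score. The node layout and the greedy matching are assumptions for illustration, not the MOIR algorithm itself.

```python
# Illustrative hierarchical region tree and a naive top-down matching score.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class RegionNode:
    feature: np.ndarray                                   # region descriptor (colour/texture/shape)
    children: List["RegionNode"] = field(default_factory=list)  # finer sub-regions

def match_trees(a: RegionNode, b: RegionNode) -> float:
    """Score two trees by region similarity plus the best greedy matches among
    their children (a simplification, not the paper's matching algorithm)."""
    score = -float(np.linalg.norm(a.feature - b.feature))
    for child_a in a.children:
        if b.children:
            score += max(match_trees(child_a, child_b) for child_b in b.children)
    return score
```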


Author(s):  
G. Sucharitha ◽  
Ranjan K. Senapati

Shape is one of the most significant features in Content-Based Image Retrieval (CBIR). This paper proposes a strong and effective shape feature based on a set of orthogonal complex moments of images known as Zernike moments. Zernike moments (ZM) are a dominant solution for shape classification. The radial polynomial of the Zernike moment produces a number of concentric circles determined by the order; as the order increases, the number of circles increases and the local information of an image is ignored. In this paper, we introduce a novel radial polynomial in which the local information of an image is given importance. We succeed in extracting the local features and shape features at a very low polynomial order compared with traditional ZM. The proposed method offers the advantages of a lower order, lower complexity, and a lower-dimensional feature vector. For highly similar images, the simple Euclidean distance between feature vectors is approximately zero. The proposed method was tested on the MPEG-7 CE-1 shape database and the Coil-100 database. Experiments demonstrated that it outperforms in identifying the shape of an object in an image and reduces retrieval time and computational complexity.
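Since retrieval in this setting reduces to comparing low-order moment feature vectors with the Euclidean distance, a minimal ranking routine might look like the following; the feature extractor itself (the modified radial polynomial ZM) is assumed to be supplied elsewhere.

```python
# Rank database images by Euclidean distance between moment feature vectors.
import numpy as np

def rank_by_euclidean(query_vec, database_vecs):
    """Return database indices sorted by ascending Euclidean distance to the query;
    near-duplicate shapes should yield distances close to zero."""
    dists = np.linalg.norm(np.asarray(database_vecs, dtype=float)
                           - np.asarray(query_vec, dtype=float), axis=1)
    order = np.argsort(dists)
    return order, dists[order]
```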


Author(s):  
Sultan Ullah ◽  
Hamna Ikram ◽  
Qurat ul Ain ◽  
Habib Akbar ◽  
Mudasser A. Khan ◽  
...  

Author(s):  
Chengcui Zhang ◽  
Liping Zhou ◽  
Wen Wan ◽  
Jeffrey Birch ◽  
Wei-Bang Chen

Most existing object-based image retrieval systems are based on single object matching, with its main limitation being that one individual image region (object) can hardly represent the user’s retrieval target, especially when more than one object of interest is involved in the retrieval. Integrated Region Matching (IRM) has been used to improve the retrieval accuracy by evaluating the overall similarity between images and incorporating the properties of all the regions in the images. However, IRM does not take the user’s preferred regions into account and has undesirable time complexity. In this article, we present a Feedback-based Image Clustering and Retrieval Framework (FIRM) using a novel image clustering algorithm and integrating it with Integrated Region Matching (IRM) and Relevance Feedback (RF). The performance of the system is evaluated on a large image database, demonstrating the effectiveness of our framework in catching users’ retrieval interests in object-based image retrieval.
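The core of IRM is a soft, many-to-many matching between the region sets of two images, weighted by region significance. The sketch below illustrates that idea under the common choice of area fractions as significance weights and a greedy most-similar-first assignment; it is an illustration of the scheme, not the FIRM system's implementation.

```python
# Simplified IRM-style overall image distance from per-region features and weights.
import numpy as np

def irm_distance(regions_a, weights_a, regions_b, weights_b):
    """regions_*: (n, d) arrays of region features; weights_*: area fractions summing to 1.
    Returns an overall image distance; smaller means more similar."""
    regions_a = np.asarray(regions_a, dtype=float)
    regions_b = np.asarray(regions_b, dtype=float)
    d = np.linalg.norm(regions_a[:, None, :] - regions_b[None, :, :], axis=2)
    wa = np.array(weights_a, dtype=float)
    wb = np.array(weights_b, dtype=float)
    dist = 0.0
    # Greedily spend significance credit on the closest remaining region pairs.
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        s = min(wa[i], wb[j])
        if s > 0:
            dist += s * d[i, j]
            wa[i] -= s
            wb[j] -= s
    return dist
```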


Author(s):  
Wing-Yin Chau ◽  
Chia-Hung Wei ◽  
Yue Li

With the rapid increase in the number of registered trademarks around the world, trademark image retrieval (TIR) has been developed to deal with the vast number of trademark images in a trademark registration system. Many different approaches have been developed over the years in an attempt to build an effective TIR system. Some conventional approaches used in content-based image retrieval, such as moment invariants, Zernike moments, Fourier descriptors and curvature scale space descriptors, have also been widely used in TIR. These approaches, however, contain some major deficiencies when addressing the TIR problem. Therefore, this chapter proposes a novel approach to overcome the major deficiencies of the conventional approaches. The proposed approach combines Zernike moments descriptors with the centroid distance representation and the curvature representation. The experimental results show that the proposed approach outperforms the conventional approaches in several circumstances. Details regarding the proposed approach as well as the conventional approaches are presented in this chapter.
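For concreteness, the sketch below computes the centroid distance representation mentioned above for a binary trademark silhouette: distances from the shape centroid to evenly sampled boundary points, normalised for scale. The use of OpenCV for contour extraction and the sampling density are illustrative assumptions, not the chapter's exact formulation.

```python
# Centroid distance shape signature for a binary silhouette.
import cv2
import numpy as np

def centroid_distance_signature(binary_img, n_samples=128):
    """Sample the outer boundary and return scale-normalised distances to the centroid."""
    contours, _ = cv2.findContours(binary_img.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).squeeze(1).astype(float)
    # Evenly sample boundary points and measure their distance to the centroid.
    idx = np.linspace(0, len(boundary) - 1, n_samples).astype(int)
    centroid = boundary.mean(axis=0)
    dists = np.linalg.norm(boundary[idx] - centroid, axis=1)
    return dists / (dists.max() + 1e-12)   # normalise for scale invariance
```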

