Maximum Entropy Threshold Segmentation for Target Matching Using Speeded-Up Robust Features

2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Mu Zhou ◽  
Xia Hong ◽  
Zengshan Tian ◽  
Huining Dong ◽  
Mingchun Wang ◽  
et al.

This paper proposes a 2-dimensional (2D) maximum entropy threshold segmentation (2DMETS) based speeded-up robust features (SURF) approach for image target matching. First, based on the gray level of each pixel and the average gray level of its neighboring pixels, we construct a 2D gray histogram. Second, after segmenting the target from the background, we localize the feature points at the interest points with local extrema of the box filter responses. Third, from the 2D Haar wavelet responses, we generate the 64-dimensional (64D) feature point descriptor vectors. Finally, we perform target matching by comparing the 64D feature point descriptor vectors. Experimental results show that the proposed approach effectively enhances target matching performance while preserving real-time capability.
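The 2D maximum-entropy criterion above (threshold the 2D gray/neighborhood-mean histogram so that the summed entropies of the background and object blocks are maximized) can be sketched as follows. This is a minimal illustration under common 2DMETS conventions, not the authors' implementation; the prefix-sum layout and function names are my own, and the off-diagonal blocks of the histogram are ignored as is usual for this criterion.

```python
import math

def max_entropy_2d(hist2d):
    """2D maximum-entropy threshold selection (sketch).

    hist2d[g][m] counts pixels with gray level g and neighborhood mean m.
    Returns the (s, t) pair maximizing the summed entropies of the
    background block [0..s) x [0..t) and the object block [s..) x [t..);
    the two off-diagonal blocks are ignored, as is conventional.
    """
    L = len(hist2d)
    total = sum(sum(row) for row in hist2d)
    # 2D prefix sums of the probability p and of -p*log(p).
    P = [[0.0] * (L + 1) for _ in range(L + 1)]
    H = [[0.0] * (L + 1) for _ in range(L + 1)]
    for g in range(L):
        for m in range(L):
            p = hist2d[g][m] / total
            h = -p * math.log(p) if p > 0 else 0.0
            P[g + 1][m + 1] = p + P[g][m + 1] + P[g + 1][m] - P[g][m]
            H[g + 1][m + 1] = h + H[g][m + 1] + H[g + 1][m] - H[g][m]

    def block(prefix, g0, m0, g1, m1):
        # Sum over g in [g0, g1), m in [m0, m1) via inclusion-exclusion.
        return (prefix[g1][m1] - prefix[g0][m1]
                - prefix[g1][m0] + prefix[g0][m0])

    best, best_st = float("-inf"), (1, 1)
    for s in range(1, L):
        for t in range(1, L):
            pb = block(P, 0, 0, s, t)   # background probability mass
            po = block(P, s, t, L, L)   # object probability mass
            if pb <= 0 or po <= 0:
                continue
            # Entropy of a normalized block: log(P) + (1/P) * sum(-p log p).
            hb = math.log(pb) + block(H, 0, 0, s, t) / pb
            ho = math.log(po) + block(H, s, t, L, L) / po
            if hb + ho > best:
                best, best_st = hb + ho, (s, t)
    return best_st
```

With prefix sums, each candidate (s, t) is scored in constant time, so the search is O(L²) rather than O(L⁴) for an L-level histogram.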

Author(s):  
Wei Liu ◽  
Shuai Yang ◽  
Zhiwei Ye ◽  
Qian Huang ◽  
Yongkun Huang

Threshold segmentation has been widely used in recent years due to its simplicity and efficiency. Segmenting images by the two-dimensional maximum entropy is one useful threshold segmentation technique. However, its efficiency and stability are still not ideal, and traditional search algorithms cannot meet the needs of engineering problems. To mitigate this, swarm intelligence optimization algorithms have been employed to search for the optimal threshold vector. This paper offers an effective lightning attachment procedure optimization (LAPO) algorithm based on a two-dimensional maximum entropy criterion; in addition, a chaotic strategy is embedded into LAPO, yielding a new algorithm named CLAPO. To confirm the benefits of the proposed method, it is compared against seven competitive algorithms, such as the Ant Lion Optimizer (ALO) and the Grasshopper Optimization Algorithm (GOA). Experiments are conducted on four different kinds of images, and the simulation results are reported for several indexes (computational time, maximum fitness, average fitness, and variance of fitness, among others) at different threshold levels for each test image. The experimental results demonstrate the superiority of the introduced method, which can meet the needs of image segmentation very well.
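The abstract does not spell out how the chaotic strategy is embedded; one common form is to seed the search population from a chaotic sequence (e.g., the logistic map) instead of a uniform random generator. The sketch below illustrates that idea only; the function names and the use of the logistic map for initialization are assumptions, not the CLAPO algorithm itself.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map, a common chaotic sequence generator;
    for r = 4 it maps (0, 1) onto (0, 1]."""
    return r * x * (1.0 - x)

def chaotic_population(pop_size, dim, lo, hi, seed=0.7):
    """Initialize candidate threshold vectors from a chaotic sequence
    rather than uniform random numbers (one common 'chaotic strategy';
    illustrative, not the paper's exact scheme)."""
    x, pop = seed, []
    for _ in range(pop_size):
        vec = []
        for _ in range(dim):
            x = logistic_map(x)
            # Scale the chaotic value into the gray-level range [lo, hi].
            vec.append(int(lo + x * (hi - lo)))
        pop.append(vec)
    return pop
```

Chaotic sequences are dense and non-repeating, which is the usual argument for why they cover the threshold search space more evenly than a pseudo-random start.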


2014 ◽  
Vol 644-650 ◽  
pp. 4027-4030
Author(s):  
Hong Li ◽  
Zhong An Lai ◽  
Jun Wei Lei

The traditional Otsu threshold algorithm often performs poorly on real images because of their complex shapes and unbalanced gray-level distributions. To solve this problem, this paper draws on the idea behind Otsu's method and introduces a threshold segmentation algorithm based on histogram statistical properties. In addition, the paper compares the new algorithm with Otsu's algorithm. Experimental results show that the new algorithm achieves a better segmentation effect than Otsu's method when the gray-level distribution of the background is approximately normal and the target region is smaller than the background region.
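For reference, the Otsu baseline that the paper compares against selects the threshold maximizing the between-class variance of a gray-level histogram. A minimal sketch of that classic criterion (not the paper's new histogram-statistics algorithm, which the abstract does not detail):

```python
def otsu_threshold(hist):
    """Classic Otsu threshold from a 256-bin gray-level histogram:
    return the threshold t maximizing the between-class variance
    w_b * w_f * (m_b - m_f)^2."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]              # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b           # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a strongly bimodal histogram the maximizer sits between the two modes, which is exactly the case where Otsu works well; the paper's point is that real images often violate this assumption.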


2013 ◽  
Vol 325-326 ◽  
pp. 1637-1640
Author(s):  
Dong Mei Li ◽  
Jing Lei Zhang

Image matching is the basis of image registration. To cope with the differences between infrared and visible images, an improved SURF (speeded-up robust features) algorithm was proposed for matching them. Firstly, edges were extracted from the images to increase the similarity between the infrared and visible images. Then the SURF algorithm was used to detect interest points, with a 64-dimensional descriptor for each point. Finally, matching points were found by Euclidean distance. Experimental results show that invalid matching points were effectively eliminated.
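The final matching step above (nearest neighbor by Euclidean distance over descriptor vectors) can be sketched as follows. The Lowe-style ratio test used here to reject ambiguous matches is an assumption on my part, since the abstract only says invalid points were eliminated; the 0.8 ratio and function names are likewise illustrative.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match descriptor vectors (e.g., 64-D SURF descriptors) by Euclidean
    distance, keeping a match only when the nearest neighbor is clearly
    closer than the second nearest (ratio test; threshold is assumed)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test discards points whose best and second-best matches are nearly equidistant, which is a standard way to drop the "invalid data points" mentioned in the abstract.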


Author(s):  
Ewa Skubalska-Rafajłowicz

Local Correlation and Entropy Maps as Tools for Detecting Defects in Industrial Images

The aim of this paper is to propose two methods of detecting defects in industrial products by an analysis of gray level images with low contrast between the defects and their background. An additional difficulty is the high nonuniformity of the background in different parts of the same image. The first method is based on correlating subimages with a nondefective reference subimage and searching for pixels with low correlation. To speed up calculations, correlations are replaced by a map of locally computed inner products. The second approach does not require a reference subimage and is based on estimating local entropies and searching for areas with maximum entropy. A nonparametric estimator of local entropy is also proposed, together with its realization as a bank of RBF neural networks. The performance of both methods is illustrated with an industrial image.
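The second method's core quantity, a local entropy map, can be sketched with a simple plug-in estimator: the Shannon entropy of the gray-level distribution in a sliding window. This is only an illustration of the concept; the paper proposes a nonparametric estimator realized as a bank of RBF neural networks, which is not reproduced here.

```python
import math

def local_entropy_map(img, win=3):
    """Shannon entropy of the gray-level distribution in a win x win
    window around each pixel (plain plug-in estimator, clamped at the
    image borders); defect-free uniform areas score near zero."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            counts = {}
            for j in range(max(0, y - r), min(h, y + r + 1)):
                for i in range(max(0, x - r), min(w, x + r + 1)):
                    counts[img[j][i]] = counts.get(img[j][i], 0) + 1
            n = sum(counts.values())
            out[y][x] = -sum((c / n) * math.log(c / n)
                             for c in counts.values())
    return out
```

Searching this map for maxima implements the paper's idea that defects disturb the locally uniform gray-level statistics of the background.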


2014 ◽  
Vol 525 ◽  
pp. 719-722
Author(s):  
Yu Bing Dong ◽  
Ming Jing Li ◽  
Ying Sun

Threshold-based segmentation algorithms are introduced and simulated. In order to evaluate the performance of various threshold segmentation algorithms, quantitative experimental criteria are introduced and applied. Through extensive experiments in MATLAB, the threshold segmentation algorithms are evaluated with the uniformity measure (UM), shape measure (SM), and gray-level contrast (GC). The overall segmentation quality is described by the average value of the three criteria (UM, SM, and GC).


Author(s):  
NIKHIL R. PAL ◽  
SANKAR K. PAL

A theory of ideal image formation is described which shows that the gray levels in an image follow a Poisson distribution. Based on this concept, various algorithms for object/background classification have been developed. The proposed algorithms involve either the maximum entropy principle or the minimum χ² statistic. The appropriateness of the Poisson distribution is further supported by comparing the results with those of similar algorithms that use the conventional normal distribution. A set of images with various types of histograms is considered as the test data.
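The minimum-χ² idea with a Poisson model can be sketched as follows: for each candidate threshold, fit a Poisson component to each side of the histogram (mean = sample mean of that side) and keep the threshold whose two-component model is closest to the observed histogram in the χ² sense. This is a minimal sketch of the modeling idea under those assumptions, not the authors' exact algorithm.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass, computed in log space for stability."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def min_chisq_threshold(hist):
    """Threshold minimizing the chi-square distance between the observed
    histogram and a two-component Poisson model (background below t,
    object at and above t); each component mean is the sample mean of
    its side of the threshold."""
    L = len(hist)
    total = sum(hist)
    best_t, best_chi = 1, float("inf")
    for t in range(1, L - 1):
        nb = sum(hist[:t])
        no = total - nb
        if nb == 0 or no == 0:
            continue
        lam_b = sum(i * hist[i] for i in range(t)) / nb
        lam_o = sum(i * hist[i] for i in range(t, L)) / no
        if lam_b <= 0 or lam_o <= 0:
            continue
        chi = 0.0
        for k in range(L):
            expected = (nb * poisson_pmf(k, lam_b) if k < t
                        else no * poisson_pmf(k, lam_o))
            if expected > 1e-12:
                chi += (hist[k] - expected) ** 2 / expected
        if chi < best_chi:
            best_chi, best_t = chi, t
    return best_t
```

When the histogram really is a mixture of two Poisson components, the χ² distance dips sharply at the threshold that separates them, which is the paper's argument for the Poisson assumption over the conventional normal one.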


Author(s):  
Yohannes Yohannes ◽  
Siska Devella ◽  
Ade Hendri Pandrean

Songket is a historical heritage of the city of Palembang, with many different types and motifs. Besides its historical value, original Palembang Songket is of high quality and its manufacturing process is complex. Since Palembang Songket is most readily recognized by its motifs, this research addresses the classification of Palembang Songket motifs. The method used to extract features is Speeded-Up Robust Features (SURF), while the classification method is Random Forest. The SURF feature extraction is divided into two stages: the first stage, interest point detection, consists of integral images, Hessian matrix based interest points, scale space representation, and interest point localization; the second stage, interest point description, consists of orientation assignment and a descriptor based on sums of Haar wavelet responses. The resulting features are used for the Random Forest classification. This study used 345 images of Palembang Songket motifs, including Bunga Cina, Cantik Manis, and Pulir, with images taken in 5 colors for each motif. The data were split into 300 training images and 45 test images. The tests yielded an overall accuracy of 68.89%, a per-class accuracy of 79.26%, a precision of 69.27%, and a recall of 68.89%.
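The reported numbers (overall accuracy, per-class accuracy, precision, recall) are all derivable from a confusion matrix. A minimal sketch of that computation, using macro averaging over classes; the paper's exact averaging scheme is not stated in the abstract, so that choice is an assumption.

```python
def classification_metrics(conf):
    """Overall accuracy and macro-averaged precision/recall from a
    square confusion matrix conf[true][pred]."""
    k = len(conf)
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(k))
    accuracy = correct / total
    precisions, recalls = [], []
    for c in range(k):
        col = sum(conf[r][c] for r in range(k))  # predicted as class c
        row = sum(conf[c])                       # truly class c
        precisions.append(conf[c][c] / col if col else 0.0)
        recalls.append(conf[c][c] / row if row else 0.0)
    return accuracy, sum(precisions) / k, sum(recalls) / k
```

With 45 test images split over three motif classes, a single misclassified image moves the overall accuracy by about 2.2 percentage points, which explains the granularity of the reported 68.89%.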

