Extraction of Affine Invariant Features Using Fractal

2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jianwei Yang ◽  
Guosheng Cheng ◽  
Ming Li

An approach based on fractal theory is presented for extracting affine invariant features. Central projection transformation is employed to reduce the dimensionality of the original input pattern, and the general contour (GC) of the pattern is derived. Affine invariant features cannot be extracted from the GC directly due to shearing. To address this problem, a group of curves (called shift curves) is constructed from the obtained GC. The fractal dimensions of these curves can readily be computed and constitute a new feature vector for the original pattern. The derived feature vector is then used for pattern recognition. Several experiments have been conducted to evaluate the performance of the proposed method. Experimental results show that the proposed method can be used for object classification.
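The abstract does not spell out how the fractal dimensions of the shift curves are computed; a common estimator is box counting. The sketch below is a minimal, generic box-counting dimension estimator for a binary image (a curve rasterized as foreground pixels) and is an illustration of the idea, not the authors' exact procedure.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image
    by counting occupied boxes at several scales and fitting the slope
    of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = img.shape
        H = -(-h // s) * s          # round dimensions up to a multiple of s
        W = -(-w // s) * s
        padded = np.zeros((H, W), dtype=bool)
        padded[:h, :w] = img
        # A box is "occupied" if it contains any foreground pixel.
        boxes = padded.reshape(H // s, s, W // s, s).any(axis=(1, 3))
        counts.append(int(boxes.sum()))
    # Slope of the log-log fit estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A straight line has dimension near 1; a filled square near 2.
d_line = box_counting_dimension(np.eye(64, dtype=bool))
```

Concatenating such estimates over all shift curves would yield the feature vector described above.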

Author(s):  
YU TAO ◽  
ERNEST C. M. LAM ◽  
YUAN Y. TANG

In this paper, a novel approach to feature extraction based on wavelet and fractal theories is presented as a powerful technique for pattern recognition. The motivation behind using fractal transformation is to develop a high-speed feature extraction technique. A multiresolution family of wavelets is also used to compute information-conserving micro-features. In this study, a new fractal feature is reported. We employed a central projection method to reduce the dimensionality of the original input pattern, and a wavelet transform technique to convert the derived pattern into a set of subpatterns, from which the fractal dimensions can readily be computed. The new feature is a measurement of the fractal dimension, an important characteristic that contains information about the geometrical structure. This scheme uses the central projection transformation to describe the shape, the wavelet transformation to aid boundary identification, and the fractal features to enhance image discrimination. The proposed method reduces the dimensionality of a 2-D pattern by way of a central projection approach, and thereafter performs Daubechies' wavelet transform on the derived 1-D pattern to generate a set of wavelet transform subpatterns, namely, curves that are non-self-intersecting. From these non-self-intersecting curves, the divider dimensions are computed with a modified box-counting approach. These divider dimensions constitute a new feature vector for the original 2-D pattern, defined over the curves' fractal dimensions. We have conducted several experiments in which a set of printed Chinese characters, English letters of varying fonts, and other images were classified. Based on the formulation of our new feature vector, the experiments yield satisfactory results.
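As an illustration of the wavelet step, the sketch below applies a 1-D multiresolution decomposition to the pattern obtained from central projection, collecting the detail subpatterns whose fractal dimensions would then be measured. For brevity it uses the Haar wavelet rather than the Daubechies wavelets of the paper, and the function names are illustrative.

```python
import numpy as np

def haar_step(signal):
    """One level of a (Haar) wavelet transform on a 1-D pattern:
    returns (approximation, detail) subpatterns of half the length."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad odd-length input
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_subpatterns(signal, levels=3):
    """Multiresolution family: repeatedly split the approximation,
    collecting the detail curves (plus the final approximation) whose
    fractal dimensions would form the feature vector."""
    subpatterns = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        subpatterns.append(detail)
    subpatterns.append(approx)
    return subpatterns
```

Because the transform is orthonormal, the decomposition conserves the signal's energy across the subpatterns, matching the "information conserving" property mentioned above.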


2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Jianwei Yang ◽  
Ming Li ◽  
Zirun Chen ◽  
Yunjie Chen

The extraction of affine invariant features plays an important role in many fields of image processing. In this paper, the original image is transformed into new images to extract more affine invariant features. To construct the new images, the original image is cut into two areas by a closed curve, called the general contour (GC). The GC is obtained by performing projections along lines with different polar angles. A new image is then obtained by changing the gray values of the pixels in the inside area. The traditional affine moment invariants (AMIs) method is applied to the new image. Consequently, cutting affine moment invariants (CAMIs) are derived. Several experiments have been conducted to evaluate the proposed method. Experimental results show that CAMIs can be used in object classification tasks.
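For reference, the lowest-order invariant used by the traditional AMIs method is I1 = (mu20*mu02 - mu11^2) / mu00^4, computed from central moments of the gray-level image. The sketch below computes only this first invariant and is not tied to the paper's CAMI construction.

```python
import numpy as np

def first_ami(img):
    """First affine moment invariant I1 = (mu20*mu02 - mu11**2) / m00**4,
    computed from the central moments of a gray-level image."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    cx = (xs * img).sum() / m00          # centroid
    cy = (ys * img).sum() / m00
    mu20 = (((xs - cx) ** 2) * img).sum()
    mu02 = (((ys - cy) ** 2) * img).sum()
    mu11 = ((xs - cx) * (ys - cy) * img).sum()
    return (mu20 * mu02 - mu11 ** 2) / m00 ** 4
```

A quick sanity check of the invariance: transposing the image (an affine map that swaps the axes) leaves I1 unchanged.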


2017 ◽  
Vol 58 (3-4) ◽  
pp. 256-264
Author(s):  
JIANWEI YANG ◽  
LIANG ZHANG ◽  
ZHENGDA LU

The central projection transform can be employed to extract invariant features by combining contour-based and region-based methods. However, the central projection transform only considers the accumulation of the pixels along the radial direction, so information along the radial direction is inevitably lost. In this paper, we propose the Mellin central projection transform to extract affine invariant features. The radial factor introduced by the Mellin transform makes up for the loss of information along the radial direction in the central projection transform. The Mellin central projection transform can convert any object into a closed curve, just as the central projection transform does, so the central projection transform is only a special case of the Mellin central projection transform. We prove that closed curves extracted from the original image and the affine transformed image by the Mellin central projection transform satisfy the same affine transform relationship. A method is provided for the extraction of affine invariants by employing the area of closed curves derived by the Mellin central projection transform. Experiments have been conducted on printed Chinese characters, and the results establish the invariance and robustness of the extracted features.
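A plausible reading of the transform is a central projection in which gray values along each ray are weighted by a radial power factor r**(s-1) before accumulation, so that s = 1 recovers the plain central projection. The sketch below follows that reading; the exact parameterization is an assumption, not the paper's definition.

```python
import numpy as np

def mellin_central_projection(img, s=2.0, n_angles=180, n_steps=100):
    """Central projection with a Mellin-style radial weight r**(s-1):
    for each polar angle, gray values along the ray from the centroid
    are accumulated after multiplication by the radial factor, so
    radial information is no longer averaged out.  With s = 1 the
    weight is constant and the plain central projection is recovered."""
    h, w = img.shape
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()        # ray origin: pattern centroid
    r_max = np.hypot(h, w)
    radii = np.linspace(0, r_max, n_steps)
    weight = radii ** (s - 1.0)          # Mellin radial factor
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    out = np.zeros(n_angles)
    for i, a in enumerate(angles):
        # Sample the image along the ray, clipping to the image border.
        ry = np.clip((cy + radii * np.sin(a)).astype(int), 0, h - 1)
        rx = np.clip((cx + radii * np.cos(a)).astype(int), 0, w - 1)
        out[i] = (img[ry, rx] * weight).sum()
    return out
```

The returned 1-D function of angle traces the closed curve from which the area-based invariants described above would be computed.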


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Jianwei Yang ◽  
Zirun Chen ◽  
Wen-Sheng Chen ◽  
Yunjie Chen

An approach is developed for the extraction of affine invariant descriptors by cutting an object into slices. Gray values associated with every pixel in each slice are summed up to construct affine invariant descriptors. As a result, these descriptors are very robust to additive noise. In order to establish correspondence between the slices of an object and those of its affine transformed version, the general contour (GC) of the object is constructed by performing projection along lines with different polar angles. Consequently, affine invariant division curves are derived. A slice is formed by the points that fall in the region enclosed by two adjacent division curves. To test and evaluate the proposed method, several experiments have been conducted. Experimental results show that the proposed method is very robust to noise.
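As a rough illustration of slice-based descriptors, the sketch below partitions an object into concentric radial slices (a simplified stand-in for the regions between adjacent division curves derived from the GC) and sums the gray values in each slice; the summation over whole slices is what gives robustness to zero-mean additive noise.

```python
import numpy as np

def slice_descriptors(img, n_slices=4):
    """Sum gray values over radial slices between scaled copies of the
    object's outer radius (a simplified stand-in for the affine
    invariant division curves of the method)."""
    h, w = img.shape
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()        # object centroid
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)       # radius of every pixel
    r_max = r[img > 0].max()
    bounds = np.linspace(0, r_max, n_slices + 1)
    feats = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # Last slice is closed on the right so r_max itself is included.
        mask = (r >= lo) & (r < hi) if hi < r_max else (r >= lo) & (r <= hi)
        feats.append(img[mask].sum())
    return np.array(feats)
```

Because the slices partition the object, the descriptor entries sum to the total gray mass of the pattern.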


2013 ◽  
Vol 748 ◽  
pp. 619-623 ◽  
Author(s):  
Yan Liang ◽  
Ye Hua Sheng ◽  
Ka Zhang

The objective of this research is to reconstruct a 3D dense point cloud of a geographical scene. Using computer vision techniques, affine invariant features are first extracted and matched; then the camera parameters and the 3D dense point cloud are recovered and unified under a geographical reference. The experimental results show that this low-cost method achieves centimeter-level precision and can satisfy the requirements of measurement, modeling, and virtual reality.


2013 ◽  
Vol 333-335 ◽  
pp. 1106-1109
Author(s):  
Wei Wu

Palm vein pattern recognition is one of the newest biometric techniques researched today. This paper proposes projecting the palm vein image matrix directly using independent component analysis, then calculating the Euclidean distances of the projection matrices and seeking the nearest distance for classification. The experiments were conducted on a self-built palm vein database. Experimental results show that the independent component analysis algorithm is suitable for palm vein recognition and that the recognition performance is practical.
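The matching step described above — project with an ICA basis, then take the nearest Euclidean distance — can be sketched as follows. The projection matrix `W` is assumed to have been learned beforehand (e.g. with FastICA); here it is only a placeholder.

```python
import numpy as np

def classify_nearest(probe, gallery, labels, W):
    """Project vectorized palm-vein images with an ICA basis W (assumed
    precomputed), then assign the label of the gallery sample with the
    smallest Euclidean distance in the projected space."""
    p = W @ probe                         # project the probe image
    G = (W @ gallery.T).T                 # project each gallery sample
    d = np.linalg.norm(G - p, axis=1)     # Euclidean distances
    return labels[int(np.argmin(d))]
```

This is a plain nearest-neighbor rule; the discriminative power comes entirely from the learned projection.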


2012 ◽  
Vol 542-543 ◽  
pp. 937-940
Author(s):  
Ping Shu Ge ◽  
Guo Kai Xu ◽  
Xiu Chun Zhao ◽  
Peng Song ◽  
Lie Guo

To locate pedestrians faster and more accurately, a pedestrian detection method based on histograms of oriented gradients (HOG) in a region of interest (ROI) is introduced. The features are extracted in the ROI where the pedestrian's legs may exist, which helps to decrease the dimension of the feature vector and simplify the calculation. Then the vertical edge symmetry of the pedestrian's legs is fused to confirm the detection. Experimental results indicate that this method can achieve good accuracy with lower processing time compared to the traditional method.
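A minimal sketch of computing a HOG-style orientation histogram restricted to an ROI (for instance the lower part of a detection window, where the legs are expected). This is a single unsigned-orientation histogram for illustration, not the full cell/block HOG descriptor of the method.

```python
import numpy as np

def hog_roi(img, roi, n_bins=9):
    """Gradient-magnitude-weighted orientation histogram over a region
    of interest given as (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    patch = img[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(patch)                  # per-axis gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Restricting the computation to the ROI is exactly what shrinks the feature vector: only the pixels inside `(y0, y1, x0, x1)` contribute.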


Author(s):  
Qian Liu ◽  
Feng Yang ◽  
XiaoFen Tang

To address the weak neighbourhood relationships among the blocks of HOG, this paper proposes the neighborhood descriptor of oriented gradients (NDOG), an improved feature descriptor based on HOG, for pedestrian detection. To obtain the NDOG feature vector, the algorithm calculates the local weight vector of the HOG feature descriptor while integrating spatial correlation among blocks, concatenates this weight vector to the tail of the HOG feature descriptor, and uses the gradient norm to normalize the new feature vector. With the proposed NDOG feature vector and a linear SVM classifier, this paper develops a complete pedestrian detection approach. Experimental results on the INRIA, Caltech-USA, and ETH pedestrian datasets show that the approach achieves a lower miss rate and a higher average precision than HOG and other advanced methods for pedestrian detection, especially in the case of insufficient training samples.
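The concatenate-and-normalize structure of NDOG can be sketched as follows, with one assumption flagged: the paper's local weight vector is stood in for by each block's share of the total gradient energy, since the abstract does not give its exact form.

```python
import numpy as np

def ndog(hog_blocks):
    """NDOG sketch: compute a per-block weight (here each block's share
    of the total gradient energy, an assumed stand-in for the paper's
    local weight vector), append it to the flattened HOG descriptor,
    and normalize the result by its L2 norm."""
    H = np.asarray(hog_blocks, dtype=float)         # (n_blocks, block_dim)
    energies = np.linalg.norm(H, axis=1)            # per-block magnitude
    weights = energies / (energies.sum() + 1e-12)   # neighbourhood weights
    feat = np.concatenate([H.ravel(), weights])     # tail-concatenation
    return feat / (np.linalg.norm(feat) + 1e-12)
```

The resulting vector has `n_blocks * block_dim + n_blocks` entries and unit norm, ready to feed a linear SVM.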

