Constrained K-Means Classification

2018, Vol 8 (4), pp. 3203-3208
Author(s): P. N. Smyrlis, D. C. Tsouros, M. G. Tsipouras

Classification-via-clustering (CvC) is a widely used approach in which a clustering procedure performs the classification task. In this paper, a novel K-Means-based CvC algorithm is presented, analysed and evaluated. Two additional techniques are employed to mitigate the limitations of K-Means: a hypercube of constraints is defined for each centroid, and weights are acquired for each attribute of each class so that a weighted Euclidean distance can serve as the similarity criterion in the clustering procedure. Experiments are conducted on 42 well-known classification datasets. The experimental results demonstrate that the proposed algorithm outperforms CvC with plain K-Means.
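A minimal sketch of the two devices named above, assuming per-class attribute weights `W` and per-centroid hypercube bounds `lo`/`hi` (all names and data layouts are illustrative, not the authors' implementation):

```python
import numpy as np

def weighted_assign(X, centroids, W):
    """Assign each sample to the nearest centroid under a
    weighted Euclidean distance (one weight vector per class)."""
    d2 = np.stack([np.sum(W[j] * (X - centroids[j]) ** 2, axis=1)
                   for j in range(len(centroids))], axis=1)
    return np.argmin(d2, axis=1)

def update_centroids(X, labels, centroids, lo, hi):
    """Recompute cluster means, then project each centroid back
    into its hypercube of constraints."""
    new = centroids.copy()
    for j in range(len(centroids)):
        members = X[labels == j]
        if len(members):                 # keep old centroid if cluster is empty
            new[j] = members.mean(axis=0)
    return np.clip(new, lo, hi)          # enforce the hypercube constraints
```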

2013, Vol 333-335, pp. 1106-1109
Author(s): Wei Wu

Palm vein pattern recognition is one of the newest biometric techniques under research today. This paper proposes projecting the palm vein image matrix directly with independent component analysis, computing the Euclidean distance of the projection matrix, and classifying by the nearest distance. The experiment was conducted on a self-built palm vein database. Experimental results show that the independent component analysis algorithm is suitable for palm vein recognition and that its recognition performance is practical.
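A hedged sketch of this pipeline with scikit-learn's FastICA; flattening each image into a vector is an assumption made here for simplicity, since the paper projects the image matrix directly:

```python
import numpy as np
from sklearn.decomposition import FastICA

def fit_ica(train_images, n_components=32):
    X = np.stack([im.ravel() for im in train_images])  # one row per image
    ica = FastICA(n_components=n_components, random_state=0)
    Z = ica.fit_transform(X)                           # training projections
    return ica, Z

def classify(ica, Z_train, y_train, test_image):
    z = ica.transform(test_image.ravel()[None, :])
    d = np.linalg.norm(Z_train - z, axis=1)   # Euclidean distance in ICA space
    return y_train[np.argmin(d)]              # nearest-distance classification
```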


Author(s): Yuan Zhang, Regina Barzilay, Tommi Jaakkola

We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to the source and target aspects that indicate sentence relevance. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.
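A much-simplified sketch of the soft sentence selection idea: sentences are scored against an aspect's keyword embedding and the document encoding is their softmax-weighted sum. The embeddings and the dot-product scorer are illustrative stand-ins for the learned components:

```python
import numpy as np

def encode_document(sentence_vecs, keyword_vec, temperature=1.0):
    """sentence_vecs: (s, d) sentence embeddings of one document.
    keyword_vec:   (d,) embedding of the aspect's keywords.
    Returns an aspect-dependent document encoding."""
    scores = sentence_vecs @ keyword_vec / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax relevance weights
    return weights @ sentence_vecs      # soft selection of relevant sentences
```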


2019, Vol 224, pp. 04005
Author(s): Nikolay Gapon, Roman Sizyakin, Marina Zhdanova, Oksana Balabaeva, Yigang Cen

This paper proposes a method for reconstructing a depth map obtained from a stereo image pair. The proposed approach is based on a geometric model for the synthesis of patches. The image is first divided into blocks of different sizes, where large blocks are used to restore homogeneous areas and small blocks are used to restore details of the image structure. Lost pixels are recovered by copying pixel values from the source according to a similarity criterion, and a trained neural network selects the “best like” patch. Experimental results show that the proposed method outperforms other modern methods in both subjective and objective measures of depth map reconstruction.
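A minimal sketch of the patch-copying step. Plain sum of squared differences stands in for the similarity criterion here; the paper ranks candidates with a trained network instead:

```python
import numpy as np

def fill_patch(target, candidates):
    """target:     (p, p) patch with NaNs marking lost pixels.
    candidates: list of (p, p) candidate source patches.
    Copies pixels from the most similar candidate into the holes."""
    known = ~np.isnan(target)
    ssd = [np.sum((c[known] - target[known]) ** 2) for c in candidates]
    best = candidates[int(np.argmin(ssd))]
    return np.where(known, target, best)   # copy lost pixels from the source
```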


2010, Vol 121-122, pp. 596-599
Author(s): Ni An Cai, Wen Zhao Liang, Shao Qiu Xu, Fang Zhen Li

A recognition method for traffic signs based on SIFT features is proposed to solve the problems of distortion and occlusion. SIFT features are first extracted from traffic signs and matched using the Euclidean distance; recognition is then performed based on the similarity. Experimental results show that the proposed method, superior to the traditional method, can reliably recognize traffic signs under scale change, rotation, and distortion, and has good robustness to noise and occlusion.
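A sketch with OpenCV's SIFT and brute-force L2 (Euclidean) matching. The ratio test and the similarity score are common conventions added here, not details from the abstract:

```python
import cv2

def sign_similarity(template, scene, ratio=0.75):
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(template, None)
    _, d2 = sift.detectAndCompute(scene, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)             # Euclidean distance
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]      # Lowe's ratio test
    return len(good) / max(len(d1), 1)  # fraction of template features matched
```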


2014, Vol 543-547, pp. 2670-2673
Author(s): Lei Cao, Di Liao, Bin Dang Xue

To address the high computational cost and long running time of SIFT feature matching, this paper presents an improved SIFT feature matching algorithm based on a reference point. The algorithm starts by selecting a suitable reference point in the feature descriptor space when SIFT features are extracted. In the feature matching stage, the Euclidean distance between the descriptor vectors of the feature point to be matched and the reference point is used for a fast filtration that removes most of the features that could not be matched. For the remaining SIFT features, the best-bin-first (BBF) algorithm is utilized to obtain precise matches. Experimental results demonstrate that the proposed matching algorithm achieves good effectiveness in image matching and takes only about 60 percent of the time of the traditional matching algorithm.
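A sketch of the filtration idea. Since the triangle inequality gives |d(q,r) - d(f,r)| <= d(q,f), any feature whose precomputed distance to the reference point differs from the query's by more than a threshold cannot be a close match; the threshold and data layout are illustrative:

```python
import numpy as np

def filter_by_reference(query, feats, ref, threshold):
    """query: (d,) descriptor; feats: (n, d) descriptors; ref: (d,) reference.
    Returns indices of features that survive the fast filtration."""
    d_qr = np.linalg.norm(query - ref)
    d_fr = np.linalg.norm(feats - ref, axis=1)   # can be precomputed offline
    # |d_qr - d_fr| lower-bounds the true query-to-feature distance.
    return np.nonzero(np.abs(d_qr - d_fr) <= threshold)[0]
```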


2013, Vol 325-326, pp. 1637-1640
Author(s): Dong Mei Li, Jing Lei Zhang

Image matching is the basis of image registration. To cope with the differences between infrared and visible images, an improved SURF (speeded-up robust features) algorithm is proposed for infrared and visible image matching. Firstly, edges are extracted from the images to improve the similarity between the infrared and visible images. Then the SURF algorithm is used to detect interest points, with a 64-dimensional point descriptor. Finally, matching points are found by Euclidean distance. Experimental results show that invalid data points are eliminated.
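A sketch of this pipeline with Canny edges and SURF. Note that SURF ships in opencv-contrib (cv2.xfeatures2d) and may be absent from stock OpenCV builds; the thresholds are illustrative:

```python
import cv2

def edge_surf_match(ir_img, vis_img):
    # Edge maps make the two modalities look more alike.
    e1 = cv2.Canny(ir_img, 50, 150)
    e2 = cv2.Canny(vis_img, 50, 150)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-dim descriptors
    _, d1 = surf.detectAndCompute(e1, None)
    _, d2 = surf.detectAndCompute(e2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(d1, d2)  # Euclidean matching
    return sorted(matches, key=lambda m: m.distance)
```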


2014, Vol 556-562, pp. 3590-3593
Author(s): Hong Bing Huang

Manifold learning has been applied successfully in the fields of dimensionality reduction and pattern recognition. However, when it is used for supervised classification, the results are still unsatisfactory. To address this challenge, a novel supervised approach, macro manifold learning (MML), is proposed. With the proposed approach, the low-dimensional embeddings of the testing samples are more favorable for classification tasks. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.


2020, Vol 34 (01), pp. 51-58
Author(s): Xinyan Dai, Xiao Yan, Kelvin K. W. Ng, Jiu Liu, James Cheng

Vector quantization (VQ) techniques are widely used in similarity search for data compression, computation acceleration, and so on. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. In this paper, we present a new angle for analyzing the quantization error, decomposing it into a norm error and a direction error. We show that quantization errors in norm have a much greater influence on inner products than quantization errors in direction, and that a small quantization error does not necessarily lead to good performance in maximum inner product search (MIPS). Based on this observation, we propose norm-explicit quantization (NEQ), a general paradigm that improves existing VQ techniques for MIPS. NEQ quantizes the norms of the items in a dataset explicitly to reduce errors in norm, which is crucial for MIPS. For the direction vectors, NEQ can simply reuse an existing VQ technique to quantize them without modification. We conducted extensive experiments on a variety of datasets and parameter configurations. The experimental results show that NEQ improves the performance of various VQ techniques for MIPS, including PQ, OPQ, RQ and AQ.
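A minimal sketch of the norm/direction decomposition. A plain k-means codebook stands in for the direction quantizer (the paper reuses PQ, OPQ, RQ or AQ there), and the codebook sizes are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def neq_fit(X, n_norm_codes=256, n_dir_codes=256):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    dirs = X / norms                            # unit direction vectors
    norm_cb = KMeans(n_clusters=n_norm_codes, n_init=1).fit(norms)
    dir_cb = KMeans(n_clusters=n_dir_codes, n_init=1).fit(dirs)
    return norm_cb, dir_cb, norm_cb.predict(norms), dir_cb.predict(dirs)

def approx_inner_products(query, norm_cb, dir_cb, norm_ids, dir_ids):
    # <query, x> ~= quantized_norm(x) * <query, quantized_direction(x)>
    q_dot_dir = dir_cb.cluster_centers_ @ query
    return norm_cb.cluster_centers_[norm_ids, 0] * q_dot_dir[dir_ids]
```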


2018, Vol 7 (S1), pp. 108-111
Author(s): Gurrampally Kumar, S. Mohan, G. Prabakaran

Feature selection has been developed through several mining techniques for classification. Some existing approaches cannot remove data irrelevant to a class from the dataset, so appropriate features that emphasize their role in classification need to be selected. To this end, a statistical method, the correlation coefficient, is considered to identify the features in the feature set whose data are most important for the existing classes. Several methods, such as Gaussian processes, linear regression, and Euclidean distance, are taken into consideration for clarity of classification. The experimental results reveal that the proposed method identifies exactly the relevant features for the several classes.
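A sketch of correlation-coefficient feature ranking: each feature's Pearson correlation with the (numeric) class label is computed and the strongest features are kept. The top-k cut-off is an illustrative choice:

```python
import numpy as np

def select_by_correlation(X, y, k=10):
    """X: (n, d) feature matrix; y: (n,) numeric class labels.
    Returns indices of the k features most correlated with the class."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]   # rank features by |Pearson r|
```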


2016, Vol 367, pp. 25-33
Author(s): Andrzej Golabczak, Andrzej Konstantynowicz, Marcin Golabczak

This paper proposes a new method for determining the uniformity of very fine machining over an elaborated surface; it can be applied to different machined materials and machining procedures. The proposed methodology is relatively simple and is formulated in a few subsequent steps: taking a 3D surface roughness profile according to the proposed scheme; estimating the statistical roughness parameters Rp, Rv, Rt, Ra, Rq, Rskew and Rkurt and, if need be, the surface rugosity Ru; calculating the centroid of the data obtained over the measurement fields; and calculating the barycentre of the data with a weighting variable chosen for an appropriate evaluation of the surface machining uniformity. As the main Cartesian coordinates for the centroid calculation we propose (Rskew, Rkurt), although other data organization schemes are also provided as example solutions. The final evaluation of the surface machining uniformity is based on the Euclidean distance between the centroid and the barycentre of the surface roughness data. The proposed method has been applied to experimental results obtained with the AFM technique on samples of polished AZ31 magnesium alloy. The surface machining procedure comprised four stages performed with different abrasive media, finally leading to the highest grade of surface roughness.
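A sketch of the final uniformity measure over per-field (Rskew, Rkurt) points. The weighting variable for the barycentre is left as an input, since the paper only says it is chosen to suit the evaluation:

```python
import numpy as np

def machining_uniformity(rskew, rkurt, weights):
    """rskew, rkurt: per-field roughness skewness and kurtosis arrays.
    weights: weighting variable for the barycentre (choice is application-specific).
    Smaller return value => more uniform machining over the surface."""
    pts = np.column_stack([rskew, rkurt])
    centroid = pts.mean(axis=0)                    # unweighted centre
    barycentre = (weights[:, None] * pts).sum(axis=0) / weights.sum()
    return np.linalg.norm(centroid - barycentre)   # Euclidean distance
```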

