Semi-supervised boosting aims to improve the performance of a given classifier by exploiting a large amount of unlabeled data. In a semi-supervised boosting strategy, a similarity measure is needed to select unlabeled samples and assign each of them a pseudo label; a good similarity measure helps assign more appropriate pseudo labels to the unlabeled samples. The selected samples, together with their pseudo labels, then serve as labeled samples to train the new component classifier, so the similarity measure is central to semi-supervised boosting. The Gaussian kernel similarity $\exp(-\|x_i - x_j\|^2 / 2\sigma^2)$ is commonly used in semi-supervised boosting, but it has two drawbacks: first, the Euclidean distance $\|x_i - x_j\|$ cannot characterize the complicated relationships between data samples; second, the parameter $\sigma$ needs to be set carefully. This paper therefore proposes a novel adaptive similarity based on sparse representation for semi-supervised boosting. The sparse representation is learned from a "clean" dictionary, which is a low-rank matrix obtained from the sample matrix. We evaluate the proposed method on the COIL20 database. Experimental results show that the semi-supervised boosting algorithm with sparse-representation similarity outperforms the algorithm with Gaussian kernel similarity.
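
The following is a minimal sketch, not the paper's exact formulation, contrasting the two similarities described above. The function names (`gaussian_similarity`, `sparse_similarity`, `low_rank_dictionary`) are illustrative; the low-rank "clean" dictionary is approximated here with a truncated SVD and the sparse codes with a Lasso solver, both standing in for whatever solvers the paper actually uses, and the SSC-style symmetrization of the coefficients into a similarity matrix is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso


def gaussian_similarity(X, sigma=1.0):
    """Pairwise Gaussian kernel similarity exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))


def low_rank_dictionary(X, rank):
    """Low-rank approximation of the sample matrix, used as a 'clean' dictionary."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]


def sparse_similarity(X, rank=10, alpha=0.01):
    """Similarity derived from sparse codes of each sample over the cleaned dictionary."""
    D = low_rank_dictionary(X, rank)            # n_samples x n_features
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # Code sample i over all *other* cleaned samples to avoid the trivial solution.
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D[mask].T, X[i])              # columns of D[mask].T are dictionary atoms
        W[i, mask] = lasso.coef_
    return (np.abs(W) + np.abs(W).T) / 2.0      # symmetrize coefficients into a similarity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 16))               # toy stand-in for COIL20 feature vectors
    S_gauss = gaussian_similarity(X, sigma=1.0)
    S_sparse = sparse_similarity(X, rank=5, alpha=0.05)
    print(S_gauss.shape, S_sparse.shape)        # both (40, 40)
```

In a semi-supervised boosting loop, either similarity matrix would be used to pick high-confidence unlabeled samples and assign pseudo labels before training the next component classifier; the sparse-representation similarity avoids hand-tuning $\sigma$ because the sparsity penalty adapts the neighborhood of each sample.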