A New Strategy for One-Example Person Re-ID: Exploit the Unlabeled Data Gradually Based on Style-Transferred Images
Computer vision, a research field covered by the journal Symmetry, has received increasing attention, and person re-identification (re-ID) has become a research hotspot within it. We focus on one-example person re-ID, where each person has only one labeled image in the dataset and all other images are unlabeled. The task poses two main challenges: insufficient labeled data and the lack of cross-camera labeled images. To address these issues, we propose a new one-example labeling scheme that generates style-transferred images with CycleGAN (Cycle-Consistent Generative Adversarial Networks), ensuring that each person has one labeled image in the style of each camera. A self-learning framework is then adopted, which iteratively trains a CNN (Convolutional Neural Network) model on the labeled images and the labeled style-transferred images, and mines reliable unlabeled images to assign pseudo-labels. The experimental results show that integrating the camera-style-transferred images effectively expands the dataset and mitigates the low recognition rate caused by the lack of labeled pedestrian images across cameras. Notably, the rank-1 accuracy of our method outperforms the state-of-the-art method by 8.7 points on the Market-1501 dataset and by 6.3 points on the DukeMTMC-ReID dataset.
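The self-learning loop described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration, not the paper's implementation: it replaces the CNN with fixed feature vectors and mines, at each iteration, the unlabeled samples closest (in Euclidean distance) to any labeled sample, assigning each the identity of its nearest labeled neighbor. The function names `mine_reliable` and `self_learning` and the `step`/`iters` parameters are assumptions for this sketch.

```python
import numpy as np

def mine_reliable(labeled_feats, labeled_ids, unlabeled_feats, k):
    """Pick the k unlabeled samples closest to any labeled sample and
    assign each the identity of its nearest labeled neighbor."""
    # Pairwise Euclidean distances, shape (num_unlabeled, num_labeled).
    d = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=2)
    nearest = d.argmin(axis=1)      # nearest labeled index per unlabeled sample
    conf = d.min(axis=1)            # smaller distance = more reliable
    order = conf.argsort()[:k]      # indices of the k most reliable samples
    return order, labeled_ids[nearest[order]]

def self_learning(labeled_feats, labeled_ids, unlabeled_feats, step, iters):
    """Iteratively grow the labeled pool with pseudo-labeled samples.
    (In the paper a CNN is retrained every round; here features are fixed.)"""
    lf, li = labeled_feats.copy(), labeled_ids.copy()
    uf = unlabeled_feats.copy()
    for _ in range(iters):
        if len(uf) == 0:
            break
        idx, pseudo = mine_reliable(lf, li, uf, min(step, len(uf)))
        lf = np.vstack([lf, uf[idx]])          # add mined samples to pool
        li = np.concatenate([li, pseudo])      # with their pseudo-labels
        uf = np.delete(uf, idx, axis=0)        # remove them from unlabeled set
    return lf, li
```

Growing the pool by a small `step` each round mirrors the gradual exploitation of unlabeled data: early rounds only trust samples very close to the (style-augmented) seeds, and later rounds build on the enlarged pool.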