LR-SVM+: Learning Using Privileged Information with Noisy Labels

2021 ◽  
pp. 1-1
Author(s):  
Zhengning Wu ◽  
Xiaobo Xia ◽  
Ruxin Wang ◽  
Jiatong Li ◽  
Jun Yu ◽  
...  

Author(s):  
Roman Ilin ◽  
Simon Streltsov ◽  
Rauf Izmailov

This work considers the “Learning Using Privileged Information” (LUPI) paradigm, which improves classification accuracy by incorporating additional information that is available at training time but not during testing. In this contribution, the LUPI paradigm is tested on a Wide Area Motion Imagery (WAMI) dataset and on images from the Caltech 101 dataset. In both cases a consistent improvement in classification accuracy is observed. The results are discussed and directions for future research are outlined.
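For readers unfamiliar with LUPI, the standard SVM+ formulation of Vapnik and Vashist illustrates how privileged features enter training only; the statement below is a generic sketch of that optimization problem, not the specific variant used in any of the papers listed here. Here x_i are the ordinary training inputs, x_i^* the privileged features available only at training time, y_i in {-1, +1} the labels, and the correcting function ⟨w*, x_i^*⟩ + b* plays the role of the slack variables:

\[
\min_{w,\,b,\,w^*,\,b^*} \;\; \frac{1}{2}\bigl(\lVert w \rVert^2 + \gamma \lVert w^* \rVert^2\bigr) + C \sum_{i=1}^{n} \bigl(\langle w^*, x_i^* \rangle + b^*\bigr)
\]
\[
\text{subject to} \quad y_i\bigl(\langle w, x_i \rangle + b\bigr) \ge 1 - \bigl(\langle w^*, x_i^* \rangle + b^*\bigr), \qquad \langle w^*, x_i^* \rangle + b^* \ge 0, \quad i = 1, \dots, n.
\]

At test time only the decision function sign(⟨w, x⟩ + b) is evaluated, so the privileged features x^* are never required.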


2014 ◽  
Vol 53 ◽  
pp. 95-108 ◽  
Author(s):  
Maksim Lapin ◽  
Matthias Hein ◽  
Bernt Schiele

Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 487
Author(s):  
Lingzhi Yang ◽  
Xiaojuan Ban ◽  
Michele Mukeshimana ◽  
Zhe Chen

Multimodal emotion recognition has become one of the new research fields of human-machine interaction. This paper focuses on feature extraction and data fusion in audio-visual emotion recognition, aiming to improve recognition performance while saving storage space. A semi-serial symmetric fusion method is proposed to fuse the audio and visual modalities, and a Symmetric S-ELM-LUPI method (Symmetric Sparse Extreme Learning Machine-Learning Using Privileged Information) is adopted. The method inherits the fast, well-generalizing training of the Extreme Learning Machine, combines it with the faster recognition enabled by Learning Using Privileged Information, and retains the memory savings of the Sparse Extreme Learning Machine. Unlike traditional approaches that learn from examples and targets only, it introduces the role of a teacher who provides additional information to enhance recognition at test time without complicating the learning process. The proposed method is tested on publicly available datasets and yields promising results. The method regards one modality as the standard information source and the other as the privileged information source; each modality can be treated as privileged information for the other. The results show that the method is well suited to multimodal emotion recognition. For hundreds of samples, the execution time is less than one-hundredth of a second, and the sparsity of the proposed method keeps memory usage low. Compared with other machine learning methods, this method is more accurate and more stable.
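As background for the ELM component referenced above, the following is a minimal sketch of a plain Extreme Learning Machine classifier: a single random hidden layer with a sigmoid activation and ridge-regularized least-squares output weights. It is an assumption-laden illustration, not the authors' Symmetric S-ELM-LUPI implementation; the class name ELMClassifier and its parameters are hypothetical.

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) classifier sketch.
# Assumptions (not from the paper): one random sigmoid hidden layer and
# ridge-regularized least-squares output weights. This is background for the
# ELM component only, not the Symmetric S-ELM-LUPI method itself.

class ELMClassifier:
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden = n_hidden      # number of random hidden units
        self.reg = reg                # ridge regularization strength
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Fixed random feature map h(x) = sigmoid(x W + b); W and b are never trained.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights in closed form: beta = (H^T H + reg * I)^(-1) H^T T
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]


# Hypothetical usage on synthetic data (shapes are illustrative only):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    y = rng.integers(0, 3, size=300)
    clf = ELMClassifier(n_hidden=100).fit(X, y)
    print(clf.predict(X[:5]))
```

Because the hidden layer is random and only the output weights are solved for, training reduces to a single linear solve, which is the speed advantage the abstract attributes to the ELM family.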

