Nonlinear Cross-Domain Feature Representation Learning Method Based on Dual Constraints

Author(s):  
Han Ding ◽  
Yuhong Zhang ◽  
Shuai Yang ◽  
Yaojin Lin
Author(s):  
Ward van Breda ◽  
Mark Hoogendoorn ◽  
A.E. Eiben ◽  
Gerhard Andersson ◽  
Heleen Riper ◽  
...  

2015 ◽ Vol 2015 ◽ pp. 1-6
Author(s):  
Zongyong Cui ◽  
Zongjie Cao ◽  
Jianyu Yang ◽  
Hongliang Ren

A hierarchical recognition system (HRS) based on a constrained Deep Belief Network (DBN) is proposed for SAR Automatic Target Recognition (SAR ATR). As a classical Deep Learning method, the DBN has shown great performance on data reconstruction, big data mining, and classification. However, few works have applied Deep Learning methods to small-data problems such as SAR ATR. In HRS, a deep structure and a pattern classifier are combined to solve small-data classification problems. After building the DBN with multiple Restricted Boltzmann Machines (RBMs), hierarchical features can be obtained and fed to the classifier directly. To obtain a more natural sparse feature representation, the Constrained RBM (CRBM) is proposed by solving a generalized optimization problem. Three RBM variants, L1-RBM, L2-RBM, and L1/2-RBM, are presented and introduced into HRS in this paper. Experiments on the MSTAR public dataset show that the proposed HRS with CRBM outperforms current pattern recognition methods in SAR ATR, such as PCA + SVM, LDA + SVM, and NMF + SVM.
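To make the constrained-RBM idea concrete, below is a minimal NumPy sketch of a Bernoulli RBM trained with CD-1 in which an L1, L2, or L1/2 penalty on the hidden activations is folded into the gradient, roughly in the spirit of the variants named above. The class name, learning rate, and the way the penalty is chained through the sigmoid are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ConstrainedRBM:
    """Bernoulli RBM trained with CD-1 plus a norm penalty on hidden activations.

    `penalty` selects the sparsity constraint: 'l1', 'l2', or 'l1/2'
    (illustrative stand-ins for the L1-, L2-, and L1/2-RBM variants).
    """

    def __init__(self, n_visible, n_hidden, penalty="l1", lam=1e-3, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.penalty, self.lam, self.lr = penalty, lam, lr
        self.rng = rng

    def _penalty_grad(self, h):
        # Subgradient of the penalty w.r.t. hidden probabilities h in (0, 1).
        eps = 1e-8
        if self.penalty == "l1":
            return np.ones_like(h)
        if self.penalty == "l2":
            return 2.0 * h
        if self.penalty == "l1/2":
            return 0.5 / np.sqrt(h + eps)
        raise ValueError(self.penalty)

    def cd1_step(self, v0):
        # Positive phase.
        h0_prob = sigmoid(v0 @ self.W + self.b_h)
        h0 = (self.rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step.
        v1_prob = sigmoid(h0 @ self.W.T + self.b_v)
        h1_prob = sigmoid(v1_prob @ self.W + self.b_h)
        n = v0.shape[0]
        # Sparsity penalty chained through the sigmoid of the hidden units.
        pen = self._penalty_grad(h0_prob) * h0_prob * (1.0 - h0_prob)
        dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n - self.lam * (v0.T @ pen) / n
        self.W += self.lr * dW
        self.b_v += self.lr * (v0 - v1_prob).mean(axis=0)
        self.b_h += self.lr * ((h0_prob - h1_prob) - self.lam * pen).mean(axis=0)
        return h0_prob  # hidden activations used as hierarchical features

# Usage sketch with placeholder data standing in for flattened SAR image patches.
rbm = ConstrainedRBM(n_visible=1024, n_hidden=256, penalty="l1/2")
batch = np.random.default_rng(1).random((32, 1024))
features = rbm.cd1_step(batch)
```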


Author(s):  
Guanbin Li ◽  
Xin Zhu ◽  
Yirui Zeng ◽  
Qing Wang ◽  
Liang Lin

Facial action unit (AU) recognition is a crucial task for facial expression analysis and has attracted extensive attention in the fields of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate semantic relationship propagation between AUs into a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of a structured knowledge graph and integrate a Gated Graph Neural Network (GGNN) into a multi-scale CNN framework to propagate node information through the graph and generate enhanced AU representations. As the learned features capture both appearance characteristics and AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on two public benchmarks demonstrate that our method outperforms previous work and achieves state-of-the-art performance.
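As a rough illustration of the relationship-propagation step, the PyTorch sketch below runs a GRU-gated graph propagation over a fixed AU adjacency matrix, with per-AU regional features as node states. The adjacency matrix, feature sizes, and module names are placeholders; the actual SRERL pipeline also involves the multi-scale CNN backbone and per-AU classifiers, which are omitted here.

```python
import torch
import torch.nn as nn

class AUGraphPropagation(nn.Module):
    """GRU-gated graph propagation over an AU relationship graph (illustrative).

    `adj` encodes AU co-occurrence/exclusion relations as a fixed [n_au, n_au]
    matrix; node states are the per-AU regional features from a CNN backbone.
    """

    def __init__(self, dim, n_steps=3):
        super().__init__()
        self.n_steps = n_steps
        self.msg = nn.Linear(dim, dim, bias=False)   # message transform
        self.gru = nn.GRUCell(dim, dim)              # gated node-state update

    def forward(self, node_feats, adj):
        # node_feats: [batch, n_au, dim], adj: [n_au, n_au] (row-normalised)
        b, n, d = node_feats.shape
        h = node_feats
        for _ in range(self.n_steps):
            # Aggregate messages from related AUs, then gate them into the state.
            m = torch.einsum("ij,bjd->bid", adj, self.msg(h))
            h = self.gru(m.reshape(b * n, d), h.reshape(b * n, d)).reshape(b, n, d)
        return h  # enhanced AU representations fed to per-AU classifiers


# Usage sketch with made-up sizes: 12 AUs, 512-d regional features.
if __name__ == "__main__":
    n_au, dim = 12, 512
    adj = torch.softmax(torch.randn(n_au, n_au), dim=-1)   # placeholder relation matrix
    feats = torch.randn(4, n_au, dim)                      # placeholder CNN features
    out = AUGraphPropagation(dim)(feats, adj)
    print(out.shape)  # torch.Size([4, 12, 512])
```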


2021 ◽ Vol 2021 ◽ pp. 1-10
Author(s):  
Jifeng Guo ◽  
Zhiqi Pang ◽  
Wenbo Sun ◽  
Shi Li ◽  
Yu Chen

Active learning aims to select the most valuable unlabelled samples for annotation. In this paper, we propose a redundancy removal adversarial active learning (RRAAL) method based on a norm online uncertainty indicator, which selects samples according to their distribution, uncertainty, and redundancy. RRAAL consists of a representation generator, a state discriminator, and a redundancy removal module (RRM). The representation generator learns the feature representation of a sample, and the state discriminator predicts the state of the concatenated feature vector. We add a sample discriminator to the representation generator to improve its representation learning ability and design a norm online uncertainty indicator (Norm-OUI) to provide a more accurate uncertainty score for the state discriminator. In addition, we design an RRM based on a greedy algorithm to reduce the number of redundant samples in the labelled pool. Experimental results on four datasets show that the state discriminator, Norm-OUI, and RRM each improve the performance of RRAAL, and that RRAAL outperforms previous state-of-the-art active learning methods.
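A greedy redundancy filter of the kind described above can be sketched as follows: at each step, pick the candidate with the highest uncertainty score after penalising similarity to samples already selected. The cosine-similarity penalty and the simple score-minus-similarity utility are illustrative assumptions, not the paper's exact RRM.

```python
import numpy as np

def greedy_redundancy_removal(features, scores, budget):
    """Greedily pick `budget` candidates, trading informativeness against redundancy.

    features : [n, d] candidate feature vectors (e.g. from the representation generator)
    scores   : [n] uncertainty scores (e.g. Norm-OUI-style); higher = more informative
    """
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    selected = []
    max_sim = np.zeros(len(feats))          # cosine similarity to the closest selected sample
    for _ in range(budget):
        # Penalise candidates that are too similar to samples already chosen.
        utility = scores - max_sim
        utility[selected] = -np.inf
        idx = int(np.argmax(utility))
        selected.append(idx)
        max_sim = np.maximum(max_sim, feats @ feats[idx])
    return selected


# Usage sketch with random placeholder data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 128))    # placeholder features
    u = rng.random(1000)                    # placeholder uncertainty scores
    picks = greedy_redundancy_removal(X, u, budget=10)
    print(picks)
```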

