Gender-Invariant Face Representation Learning and Data Augmentation for Kinship Verification

Author(s):  
Yuqing Feng ◽  
Bo Ma
2018 ◽  
pp. 127-152
Author(s):  
Naman Kohli ◽  
Daksha Yadav ◽  
Mayank Vatsa ◽  
Richa Singh ◽  
Afzel Noore

2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Mengyu Xu ◽  
Zhenmin Tang ◽  
Yazhou Yao ◽  
Lingxiang Yao ◽  
Huafeng Liu ◽  
...  

Due to variations in viewpoint, pose, and illumination, a given individual may appear considerably different across camera views. Tracking individuals across camera networks with non-overlapping fields of view remains a challenging problem. Previous works mainly address feature representation and metric learning separately, which tends to yield suboptimal solutions. To address this issue, we propose a novel framework that performs feature representation learning and metric learning jointly. Unlike previous works, we represent a pair of pedestrian images as a single resized input and use a linear Support Vector Machine in place of the softmax activation function for similarity learning. Dropout and data augmentation are also employed to prevent the network from overfitting. Extensive experiments on two publicly available datasets, VIPeR and CUHK01, demonstrate the effectiveness of the proposed approach.
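The two ingredients of this abstract's similarity learning (a joint "pair image" input, and an SVM hinge loss replacing softmax) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names and shapes are assumptions.

```python
import numpy as np

def pair_input(img_a, img_b):
    # Stack the two pedestrian crops side by side into one "pair image",
    # so a single network sees both views jointly.
    return np.concatenate([img_a, img_b], axis=1)

def svm_hinge_loss(score, label, margin=1.0):
    # Linear-SVM hinge loss on the network's similarity score;
    # label is +1 (same identity) or -1 (different identities).
    # Zero loss once the score clears the margin on the correct side.
    return max(0.0, margin - label * score)
```

Training then minimizes the hinge loss over labelled pairs instead of a softmax cross-entropy, which is the substitution the abstract describes.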


2021 ◽  
Vol 7 (2) ◽  
pp. 755-758
Author(s):  
Daniel Wulff ◽  
Mohamad Mehdi ◽  
Floris Ernst ◽  
Jannis Hagenah

Abstract
Data augmentation is a common method for making deep learning accessible on limited data sets. However, classical image augmentation methods produce highly unrealistic images on ultrasound data. An alternative is to use learning-based augmentation methods, e.g. based on variational autoencoders or generative adversarial networks. However, a large amount of data is needed to train these models, which is typically unavailable in exactly the scenarios where augmentation is needed. One solution could be to transfer augmentation models between different medical imaging data sets. In this work, we present a qualitative study of the cross-data-set generalization performance of different learning-based augmentation methods for ultrasound image data. We show that knowledge transfer is possible in ultrasound image augmentation and that the augmentation partially yields semantically meaningful transfers of structures, e.g. vessels, across domains.
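The transfer idea can be sketched with a toy linear autoencoder: augment by perturbing the latent code and decoding, where the encoder/decoder weights were trained on a *different* data set. This is a hedged illustration of the principle only; real VAE/GAN augmenters are nonlinear and the names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_via_latent(x, w_enc, w_dec, noise_scale=0.1):
    # Learning-based augmentation, toy linear version: encode the
    # flattened image, perturb the latent code with Gaussian noise,
    # then decode. In the cross-data-set setting of the abstract,
    # w_enc / w_dec would come from an augmentation model trained on
    # a different ultrasound data set (the knowledge transfer).
    z = w_enc @ x
    z_aug = z + noise_scale * rng.standard_normal(z.shape)
    return w_dec @ z_aug
```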


2021 ◽  
Author(s):  
Noureddine Kermiche

Using data augmentation techniques, unsupervised representation learning methods extract features from data by training artificial neural networks to recognize that different views of an object are just different instances of the same object. We extend current unsupervised representation learning methods to networks that can self-organize data representations into two-dimensional (2D) maps. The proposed method combines ideas from Kohonen’s original self-organizing maps (SOM) and recent developments in unsupervised representation learning. A ResNet backbone with an added 2D Softmax output layer is used to organize the data representations. A new loss function with linear complexity is proposed to enforce the SOM requirements of winner-take-all (WTA) and competition between neurons while explicitly avoiding collapse into trivial solutions. We show that the SOM topological neighborhood requirement can be enforced by a fixed radial convolution at the 2D output layer, without resorting to the actual radial activation functions that prevented the original SOM algorithm from being extended to modern neural network architectures. We demonstrate that, when combined with data augmentation techniques, self-organization is a simple emergent property of the 2D output layer, arising from neighborhood recruitment combined with WTA competition between neurons. The proposed methodology is demonstrated on the SVHN and CIFAR10 data sets. The proposed algorithm is the first end-to-end unsupervised learning method that combines data self-organization and visualization as integral parts of unsupervised representation learning.
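The two mechanisms named above (a 2D Softmax over the output grid, and a fixed radial convolution spreading each winner's probability to its grid neighbours) can be sketched without any deep-learning framework. This is a minimal numpy illustration under assumed shapes, not the paper's loss or architecture.

```python
import numpy as np

def softmax2d(logits):
    # 2D Softmax over an H x W grid of output neurons:
    # one probability distribution over the whole map.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def radial_kernel(size=5, sigma=1.0):
    # Fixed (untrained) radial Gaussian kernel. Convolving the output
    # map with it implements the SOM topological neighborhood without
    # radial activation functions.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def recruit_neighbours(p, kernel):
    # Plain zero-padded 2D convolution of the probability map: the
    # winning neuron recruits its grid neighbours.
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(p, pad)
    out = np.empty_like(p)
    for i in range(p.shape[0]):
        for j in range(p.shape[1]):
            out[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return out
```

In the full method the smoothed map would feed the WTA/competition loss; here it only shows that neighbourhood recruitment is an ordinary fixed convolution.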


Author(s):  
Abdelhakim Chergui ◽  
Salim Ouchtati ◽  
Sebastien Mavromatis ◽  
Salah Eddine Bekhouche ◽  
Jean Sequeira ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rida Assaf ◽  
Fangfang Xia ◽  
Rick Stevens

Abstract
Contiguous genes in prokaryotes are often arranged into operons. Detecting operons plays a critical role in inferring gene functionality and regulatory networks. Human experts annotate operons by visually inspecting gene neighborhoods across pileups of related genomes. These visual representations capture inter-genic distance, strand direction, gene size, functional relatedness, and gene-neighborhood conservation, which are the most prominent operon features mentioned in the literature. By studying these features, an expert can decide whether a genomic region is part of an operon. We propose a deep-learning-based method named Operon Hunter that uses visual representations of genomic fragments to make operon predictions. Transfer learning and data augmentation make it possible to leverage powerful neural networks trained on large image datasets by re-training them on a more limited dataset of extensively validated operons. Our method outperforms previously reported state-of-the-art tools, especially at predicting full operons and their boundaries accurately. Furthermore, our approach makes it possible to visually identify the features influencing the network’s decisions, so that they can be cross-checked by human experts.
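The key step of turning gene-pair features into an image that a pretrained vision network can consume might look like the following. This is a hypothetical encoding for illustration only (the channel layout, widths, and the 500 bp normalisation constant are all assumptions, not Operon Hunter's actual scheme).

```python
import numpy as np

def render_gene_pair(inter_genic_bp, same_strand, len_a, len_b, width=64):
    # Encode one gene pair as a 3-channel image strip:
    #   channel 0: gene extents (gene A from the left, gene B from the right)
    #   channel 1: strand agreement (all ones if both genes share a strand)
    #   channel 2: inter-genic distance, normalised and clipped at 500 bp
    img = np.zeros((3, width))
    img[0, :min(width // 2, len_a)] = 1.0
    img[0, width - min(width // 2, len_b):] = 1.0
    img[1, :] = 1.0 if same_strand else 0.0
    img[2, :] = min(inter_genic_bp / 500.0, 1.0)
    return img
```

A fine-tuned image classifier would then take batches of such strips, which is the sense in which transfer learning from natural-image networks applies.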


2021 ◽  
Vol 16 ◽  
pp. 2461-2476
Author(s):  
Hardik Uppal ◽  
Alireza Sepas-Moghaddam ◽  
Michael Greenspan ◽  
Ali Etemad

Author(s):  
Evgeny Smirnov ◽  
Aleksandr Melnikov ◽  
Sergey Novoselov ◽  
Eugene Luckyanets ◽  
Galina Lavrentyeva
