A Variational Autoencoder with Deep Embedding Model for Generalized Zero-Shot Learning

2020 ◽  
Vol 34 (07) ◽  
pp. 11733-11740
Author(s):  
Peirong Ma ◽  
Xiao Hu

Generalized zero-shot learning (GZSL) is a challenging task that aims to recognize not only unseen classes, which are unavailable during training, but also seen classes used at the training stage. It is achieved by transferring knowledge from seen classes to unseen classes via a shared semantic space (e.g. an attribute space). Most existing GZSL methods learn a cross-modal mapping between the visual feature space and the semantic space. However, a mapping model learned only from the seen classes will produce an inherent bias when applied to the unseen classes. To tackle this problem, this paper integrates a deep embedding network (DE) and a modified variational autoencoder (VAE) into a novel model (DE-VAE) that learns a latent space shared by both image features and class embeddings. Specifically, the proposed model first employs the DE to learn the mapping from the semantic space to the visual feature space, and then utilizes the VAE to transform both the original visual features and the features obtained by the mapping into latent features. Finally, the latent features are used to train a softmax classifier. Extensive experiments on four GZSL benchmark datasets show that the proposed model significantly outperforms state-of-the-art methods.
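To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the two components the abstract describes; layer sizes, attribute dimensionality, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the DE-VAE idea: a deep embedding net maps class
# attributes into visual-feature space, and a VAE encodes both real and
# embedded features into a shared latent space used by a softmax classifier.
import torch
import torch.nn as nn

class DeepEmbedding(nn.Module):
    def __init__(self, attr_dim=85, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(attr_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, feat_dim))
    def forward(self, attrs):               # semantic space -> visual space
        return self.net(attrs)

class VAE(nn.Module):
    def __init__(self, feat_dim=2048, latent_dim=64):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 2 * latent_dim)   # outputs mu, logvar
        self.dec = nn.Linear(latent_dim, feat_dim)
    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

de, vae = DeepEmbedding(), VAE()
feats = torch.randn(8, 2048)                # real visual features (dummy)
attrs = torch.randn(8, 85)                  # class attributes (dummy)
recon, mu, logvar = vae(torch.cat([feats, de(attrs)], dim=0))
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # VAE KL term
```

In the full method, the latent features produced by both branches would then train the softmax classifier.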

Author(s):  
Wei Li ◽  
Haiyu Song ◽  
Hongda Zhang ◽  
Houjie Li ◽  
Pengjie Wang

The ever-increasing volume of images has made automatic image annotation one of the most important tasks in machine learning and computer vision. Despite continuous efforts to invent new annotation algorithms and new models, the results of state-of-the-art image annotation methods are often unsatisfactory. In this paper, to further improve annotation refinement performance, we propose a novel approach based on weighted mutual information to automatically refine the original annotations of images. Unlike traditional refinement models that use only visual features, the proposed model uses semantic embedding to properly map labels and visual features into a meaningful semantic space. To accurately measure the relevance between a particular image and its original annotations, the proposed model utilizes all available information, including image-to-image, label-to-label, and image-to-label relations. Experimental results on three typical datasets show not only the validity of the refinement, but also the superiority of the proposed algorithm over existing ones. The improvement largely benefits from our proposed mutual information method and from utilizing all available information.
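As a rough illustration of relevance scoring in a shared semantic space (a simplified similarity-based stand-in, not the paper's weighted mutual information formulation; every name and weight below is an assumption), one might score each original label by combining image-to-label evidence with label agreement among visually similar neighbor images:

```python
# Toy annotation refinement: rank an image's original labels by combined
# image-to-label and neighbor-to-label similarity in a shared embedding.
import numpy as np

def cosine(a, b):
    return a @ b.T / (np.linalg.norm(a, axis=1, keepdims=True)
                      * np.linalg.norm(b, axis=1, keepdims=True).T)

def refine(img_emb, label_embs, neighbor_embs, orig_labels, keep=3):
    """Score each original label by image-to-label similarity plus label
    evidence from visually similar neighbor images; keep the top ones."""
    img2label = cosine(img_emb[None, :], label_embs)[0]           # image-label
    neigh2label = cosine(neighbor_embs, label_embs).mean(axis=0)  # neighbors
    scores = 0.5 * img2label + 0.5 * neigh2label                  # weighted mix
    ranked = np.argsort(-scores)
    return [orig_labels[i] for i in ranked[:keep]]

# usage: 10-d embeddings, 4 candidate labels, 5 neighbor images
rng = np.random.default_rng(0)
print(refine(rng.normal(size=10), rng.normal(size=(4, 10)),
             rng.normal(size=(5, 10)), ["sky", "sea", "cat", "tree"]))
```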


Author(s):  
Yang Liu ◽  
Deyan Xie ◽  
Quanxue Gao ◽  
Jungong Han ◽  
Shujian Wang ◽  
...  

Zero-shot learning (ZSL) aims to build models that recognize novel visual categories with no associated labelled training samples. The basic framework is to transfer knowledge from seen classes to unseen classes by learning a visual-semantic embedding. However, most approaches do not preserve the underlying sub-manifold of samples in the embedding space. In addition, whether the mapping can precisely reconstruct the original visual features has not been investigated in depth. To solve these problems, we formulate a novel framework named Graph and Autoencoder Based Feature Extraction (GAFE), which seeks a low-rank mapping that preserves the sub-manifold of samples. Following the encoder-decoder paradigm, the encoder part learns a mapping from the visual features to the semantic space, while the decoder part reconstructs the original features with the learned mapping. In addition, a graph is constructed to guarantee that the learned mapping preserves the local intrinsic structure of the data. To this end, an L2,1-norm sparsity constraint is imposed on the mapping to identify features relevant to the target domain. Extensive experiments on five attribute datasets demonstrate the effectiveness of the proposed model.
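A hedged sketch of the three ingredients named above, i.e. encoder-decoder reconstruction, a graph Laplacian term that preserves local structure, and an L2,1-norm penalty on the mapping; the dimensions and trade-off weights are illustrative assumptions, not the paper's settings:

```python
# Illustrative GAFE-style loss: reconstruction + graph regularizer + L2,1.
import torch

n, d, k = 32, 2048, 85                      # samples, visual dim, semantic dim
X = torch.randn(d, n)                       # visual features (columns)
S_gt = torch.randn(k, n)                    # per-sample semantic vectors
W = torch.randn(k, d, requires_grad=True)   # linear map: visual -> semantic

A = torch.rand(n, n); A = (A + A.T) / 2     # toy affinity graph over samples
L = torch.diag(A.sum(1)) - A                # graph Laplacian

S = W @ X                                   # encoder: project into semantics
recon = ((X - W.T @ S) ** 2).sum()          # decoder reconstructs the features
graph = torch.trace(S @ L @ S.T)            # preserve local sample structure
l21 = W.norm(dim=1).sum()                   # L2,1-norm row sparsity on W
loss = recon + 0.1 * ((S - S_gt) ** 2).sum() + 0.01 * graph + 0.001 * l21
loss.backward()                             # optimize W with any SGD variant
```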


Author(s):  
Yang Liu ◽  
Quanxue Gao ◽  
Jin Li ◽  
Jungong Han ◽  
Ling Shao

Zero-shot learning (ZSL) has been widely researched and has achieved considerable success in machine learning. Most existing ZSL methods aim to accurately recognize objects of unseen classes by learning a shared mapping from the feature space to a semantic space. However, such methods do not investigate in depth whether the mapping can precisely reconstruct the original visual features. Motivated by the fact that data often have low intrinsic dimensionality (e.g. they lie in a low-dimensional subspace), we formulate in this paper a novel framework named Low-rank Embedded Semantic AutoEncoder (LESAE), which jointly seeks a low-rank mapping to link visual features with their semantic representations. Following the encoder-decoder paradigm, the encoder part learns a low-rank mapping from the visual features to the semantic space, while the decoder part reconstructs the original data with the learned mapping. In addition, a non-greedy iterative algorithm is adopted to solve our model. Extensive experiments on six benchmark datasets demonstrate its superiority over several state-of-the-art algorithms.
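The paper's low-rank variant requires its non-greedy iterative solver, but the underlying semantic autoencoder objective ||X − WᵀS||² + λ||WX − S||² (without the low-rank constraint) is known to admit a closed-form solution via a Sylvester equation; a minimal sketch under that simplifying assumption, with toy dimensions:

```python
# Closed-form semantic autoencoder baseline (no low-rank term).
import numpy as np
from scipy.linalg import solve_sylvester

d, k, n, lam = 512, 85, 300, 0.2    # visual dim, semantic dim, samples, weight
X = np.random.randn(d, n)           # visual features as columns
S = np.random.randn(k, n)           # per-sample semantic representations

# Setting the gradient of the objective to zero gives the Sylvester equation
#   (S S^T) W + W (lam X X^T) = (1 + lam) S X^T
W = solve_sylvester(S @ S.T, lam * (X @ X.T), (1 + lam) * (S @ X.T))

print(np.linalg.norm(W @ X - S) / np.linalg.norm(S))  # relative encoder fit
```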


Author(s):  
Huimin Lu ◽  
Rui Yang ◽  
Zhenrong Deng ◽  
Yonglin Zhang ◽  
Guangwei Gao ◽  
...  

Chinese image description generation tasks usually face challenges such as single-scale feature extraction, a lack of global information, and a lack of detailed description of the image content. To address these limitations, we propose a fuzzy attention-based DenseNet-BiLSTM Chinese image captioning method in this article. In the proposed method, we first improve the densely connected network to extract features of the image at different scales and to enhance the model's ability to capture weak features. At the same time, a bidirectional LSTM is used as the decoder to enhance the use of context information. The introduction of an improved fuzzy attention mechanism effectively addresses the problem of correspondence between image features and contextual information. We conduct experiments on the AI Challenger dataset to evaluate the performance of the model. The results show that, compared with other models, our proposed model achieves higher scores on objective quantitative evaluation metrics, including BLEU, METEOR, ROUGE-L, and CIDEr. The generated description sentences accurately express the image content.
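For context, a minimal additive-attention module of the kind such decoders use to align CNN features with each decoding step (a plain soft attention stand-in, since the paper's fuzzy attention modification is not reproduced here; all sizes are assumptions):

```python
# Additive attention over regional CNN features, driven by the LSTM state.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, feat_dim=1024, hid_dim=512, att_dim=256):
        super().__init__()
        self.wf = nn.Linear(feat_dim, att_dim)   # scores image regions
        self.wh = nn.Linear(hid_dim, att_dim)    # scores decoder state
        self.v = nn.Linear(att_dim, 1)
    def forward(self, feats, h):
        # feats: (B, R, feat_dim) regional features; h: (B, hid_dim) LSTM state
        e = self.v(torch.tanh(self.wf(feats) + self.wh(h).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)          # attention over R regions
        return (alpha * feats).sum(dim=1)        # context vector (B, feat_dim)

att = AdditiveAttention()
ctx = att(torch.randn(2, 49, 1024), torch.randn(2, 512))  # 7x7 feature grid
print(ctx.shape)  # torch.Size([2, 1024])
```

At each step the context vector is concatenated with the word embedding and fed to the (here, bidirectional) LSTM decoder.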


Robotica ◽  
1991 ◽  
Vol 9 (2) ◽  
pp. 203-212 ◽  
Author(s):  
Won Jang ◽  
Kyungjin Kim ◽  
Myungjin Chung ◽  
Zeungnam Bien

SUMMARY
For efficient visual servoing of an "eye-in-hand" robot, the concepts of Augmented Image Space and Transformed Feature Space are presented in this paper. A formal definition of image features as functionals is given, along with a technique for using the defined image features for visual servoing. Compared with other known methods, the proposed concepts reduce the computational burden of visual feedback and enhance flexibility in describing the vision-based task. Simulations and real experiments demonstrate that the proposed concepts are useful and versatile tools for industrial robot vision tasks, and that the visual servoing problem can thus be dealt with more systematically.
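For background, the control loop that such feature definitions plug into is the standard image-based visual servoing law, which computes a camera velocity from the feature error through the pseudoinverse of the image Jacobian; a generic numpy sketch (the textbook formulation, not the paper's augmented/transformed feature spaces):

```python
# One step of a generic image-based visual servoing (IBVS) controller.
import numpy as np

def ibvs_step(s, s_star, J, gain=0.5):
    """s: current image features, s_star: desired features,
    J: image Jacobian mapping camera velocity to feature velocity."""
    error = s - s_star
    v_cam = -gain * np.linalg.pinv(J) @ error   # commanded camera twist
    return v_cam

s = np.array([120.0, 80.0]); s_star = np.array([100.0, 100.0])
J = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, -0.1]])  # toy 2x3 Jacobian
print(ibvs_step(s, s_star, J))                     # 3-dof velocity command
```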


Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 36
Author(s):  
Xiaoan Yan ◽  
Yadong Xu ◽  
Daoming She ◽  
Wan Zhang

Variational auto-encoders (VAEs) have recently been successfully applied to the intelligent fault diagnosis of rolling bearings due to their self-learning ability and robustness. However, the hyper-parameters of VAEs depend, to a significant extent, on manual settings, which is regarded as a common and key problem in existing deep learning models. Additionally, their anti-noise capability may decline when VAEs are used to analyze bearing vibration data under strong environmental noise. Therefore, in order to improve the anti-noise performance of the VAE model and adaptively select its parameters, this paper proposes an optimized stacked variational denoising autoencoder (OSVDAE) for the reliable fault diagnosis of bearings. Within the proposed method, a robust network named the variational denoising auto-encoder (VDAE) is first designed by integrating a VAE and a denoising auto-encoder (DAE). Subsequently, a stacked variational denoising auto-encoder (SVDAE) architecture is constructed to extract robust and discriminative latent fault features by stacking VDAE networks layer by layer, wherein the important parameters of the SVDAE model are automatically determined by employing a novel meta-heuristic intelligent optimizer known as the seagull optimization algorithm (SOA). Finally, the extracted latent features are fed into a softmax classifier to obtain the fault recognition results for rolling bearings. Experiments are conducted to validate the effectiveness of the proposed method. The results indicate that the proposed method not only achieves high identification accuracy for different bearing health conditions, but also outperforms some representative deep learning methods.
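A minimal sketch of a single VDAE layer as one might assemble it from this description (a VAE encoder/decoder trained on noise-corrupted input to reconstruct the clean signal); the stacking and the SOA-based hyper-parameter search are omitted, and all sizes and the noise level are assumptions:

```python
# One variational denoising auto-encoder layer: corrupt, encode, reconstruct.
import torch
import torch.nn as nn

class VDAELayer(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.enc = nn.Linear(in_dim, 2 * latent_dim)    # outputs mu, logvar
        self.dec = nn.Linear(latent_dim, in_dim)
    def forward(self, x):
        x_noisy = x + self.noise_std * torch.randn_like(x)    # corrupt input
        mu, logvar = self.enc(x_noisy).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(z)                                   # target: clean x
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

layer = VDAELayer()
x = torch.randn(16, 1024)                  # e.g. vibration-signal segments
recon, kl = layer(x)
loss = nn.functional.mse_loss(recon, x) + 0.01 * kl
```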


Author(s):  
М.Ю. Уздяев

The growing user base of socio-cyberphysical systems, smart environments, and IoT (Internet of Things) systems makes the detection of destructive user actions, such as various acts of aggression, an increasingly pressing problem. Destructive user actions can be expressed in different modalities: body movement, the accompanying facial expression, non-verbal speech behavior, and verbal speech behavior. This paper considers a neural network model for multi-modal recognition of human aggression, based on an intermediate feature space that is invariant to the modality being processed. The proposed model ensures accurate aggression recognition even when data for some modalities are scarce or missing. Experimental research showed 81.8% correct recognition on the IEMOCAP dataset. Experimental results are also given for aggression recognition on the IEMOCAP dataset for 15 different combinations of the modalities outlined above.
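An illustrative sketch of a modality-invariant intermediate feature space (dimensions, encoders, and the fusion rule are assumptions, not the paper's architecture): each modality gets its own encoder into a common embedding, and whichever modalities are present are averaged, so any of the 15 non-empty modality subsets can be scored:

```python
# Shared-embedding fusion that tolerates missing modalities.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, dims, emb_dim=128, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, emb_dim), nn.ReLU())
             for m, d in dims.items()})
        self.head = nn.Linear(emb_dim, n_classes)
    def forward(self, inputs):
        # inputs: dict modality -> tensor; absent modalities simply omitted
        embs = [self.encoders[m](x) for m, x in inputs.items()]
        fused = torch.stack(embs).mean(dim=0)   # shared, modality-invariant
        return self.head(fused)

dims = {"body": 256, "face": 512, "audio": 128, "text": 300}
model = MultimodalEncoder(dims)
logits = model({"face": torch.randn(4, 512), "audio": torch.randn(4, 128)})
print(logits.shape)  # torch.Size([4, 2])
```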


Author(s):  
Renjun Xu ◽  
Pelen Liu ◽  
Yin Zhang ◽  
Fang Cai ◽  
Jindong Wang ◽  
...  

Domain adaptation (DA) has achieved resounding success in learning a good classifier by leveraging labeled data from a source domain to adapt to an unlabeled target domain. However, in the more general setting where the target domain contains classes never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of the interference of the extra unknown classes. This is a much more challenging problem, since it can easily result in negative transfer due to the mismatch between the unknown and known classes. Existing approaches are susceptible to misclassification when unknown target-domain samples are distributed near the decision boundary learned from the labeled source domain. To overcome this, we propose Joint Partial Optimal Transport (JPOT), which fully utilizes not only the information of the labeled source domain but also the discriminative representation of the unknown class in the target domain. The proposed joint discriminative prototypical compactness loss not only achieves intra-class compactness and inter-class separability, but also estimates the mean and variance of the unknown class through backpropagation, which remains intractable for previous methods due to their blindness to the structure of the unknown classes. To the best of our knowledge, this is the first optimal transport model for OSDA. Extensive experiments demonstrate that our proposed model significantly boosts the performance of open set domain adaptation on standard DA datasets.
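Optimal transport is the core machinery here; as background, a tiny entropic-regularized Sinkhorn solver (the generic balanced case, not JPOT's joint partial variant) shows how a transport plan between a source and a target feature batch is computed:

```python
# Entropic-regularized optimal transport via Sinkhorn iterations.
import numpy as np

def sinkhorn(cost, reg=0.1, iters=200):
    """cost: (n, m) pairwise cost matrix; returns an (n, m) transport plan
    whose marginals are (approximately) uniform."""
    n, m = cost.shape
    K = np.exp(-cost / reg)                 # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform source/target marginals
    v = np.ones(m)
    for _ in range(iters):                  # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

src = np.random.randn(8, 32)                # source-domain feature batch
tgt = np.random.randn(10, 32)               # target-domain feature batch
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost / cost.max())          # normalize costs for stability
print(plan.sum())                           # ~1.0: a valid coupling
```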


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Tingting Xu ◽  
Ye Zhao ◽  
Xueliang Liu

Zero-shot learning is dedicated to solving the classification problem for unseen categories, while generalized zero-shot learning aims to classify samples drawn from both seen and unseen classes, where "seen" classes are those available during training and "unseen" classes are not. Nowadays, with the advance of deep learning technology, the performance of zero-shot learning has been greatly improved. Generalized zero-shot learning is a challenging topic with promising prospects in many realistic scenarios. Although the zero-shot learning task has made gratifying progress, existing methods still suffer from a strong bias between seen and unseen classes. Recent methods focus on learning a unified semantically aligned visual representation to transfer knowledge between the two domains, while ignoring the intrinsic characteristics of visual features, which are discriminative enough to be classified on their own. To solve these problems, we propose a novel model that uses the discriminative information of visual features to optimize the generative module, where the generative module is a dual generative network framework composed of a conditional VAE and an improved WGAN. Specifically, the model exploits the discriminative information of visual features and, conditioned on the relevant semantic embeddings, synthesizes visual features of unseen categories with the learned generator; the generated visual features are then used to train the final softmax classifier, thus realizing the recognition of unseen categories. In addition, this paper also analyzes the effect of additional classifiers with different structures on the transfer of discriminative information. We conduct extensive experiments on six commonly used benchmark datasets (AWA1, AWA2, APY, FLO, SUN, and CUB). The experimental results show that our model outperforms several state-of-the-art methods for both traditional and generalized zero-shot learning.
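A hedged sketch of the generation step (layer sizes and dimensions are assumptions; the WGAN critic, the VAE branch, and the paper's discriminative losses are omitted): a conditional generator maps class semantics plus noise to synthetic visual features for unseen classes, which can then train an ordinary softmax classifier:

```python
# Conditional feature generator for unseen classes (generation step only).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, attr_dim=85, noise_dim=64, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(attr_dim + noise_dim, 1024),
                                 nn.LeakyReLU(0.2), nn.Linear(1024, feat_dim),
                                 nn.ReLU())  # CNN features are non-negative
    def forward(self, attrs, z):
        return self.net(torch.cat([attrs, z], dim=-1))

gen = ConditionalGenerator()
unseen_attrs = torch.randn(50, 85)          # one semantic vector per class
z = torch.randn(50, 64)
fake_feats = gen(unseen_attrs, z)           # synthetic unseen-class features
# fake_feats plus their class ids can now train a softmax classifier as usual
```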


2020 ◽  
Vol 21 (S6) ◽  
Author(s):  
Jianqiang Li ◽  
Guanghui Fu ◽  
Yueda Chen ◽  
Pengzhi Li ◽  
Bo Liu ◽  
...  

Abstract
Background: Screening of brain computerised tomography (CT) images is a primary method currently used for the initial detection of patients with brain trauma or other conditions. In recent years, deep learning techniques have shown remarkable advantages in clinical practice. Researchers have attempted to use deep learning methods to detect brain diseases from CT images. Commonly used detection methods select images with visible lesions from full-slice brain CT scans, which must be labelled by doctors. This is an inaccurate approach, because doctors detect brain disease from a full sequence of CT slices, and one patient may have multiple concurrent conditions in practice. Such methods also cannot account for the dependencies between slices or the causal relationships among various brain diseases. Moreover, labelling images slice by slice is time-consuming and expensive. Detecting multiple diseases from full-slice brain CT images is, therefore, an important research subject with practical implications.
Results: In this paper, we propose a model called the slice dependencies learning model (SDLM). It learns image features from a series of variable-length brain CT images, together with the slice dependencies within a set of images, to predict abnormalities. The model requires only labels for the diseases reflected in the full-slice brain scan. We use the CQ500 dataset to evaluate our proposed model; it contains 1194 full sets of CT scans from a total of 491 subjects. Each set of data from one subject contains scans with one to eight different slice thicknesses and various diseases, captured in a range of 30 to 396 slices per set. The evaluation results show a precision of 67.57%, a recall of 61.04%, an F1 score of 0.6412, and an area under the receiver operating characteristic curve (AUC) of 0.8934.
Conclusion: The proposed model is a new architecture that uses a full-slice brain CT scan for multi-label classification, unlike traditional methods which only classify brain images at the slice level. It has great potential for application to multi-label detection problems, especially with regard to brain CT images.
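A sketch of the overall shape the abstract describes, i.e. a per-slice CNN encoder, a recurrent pass over the variable-length slice sequence, and a multi-label sigmoid head; all architectural details here are assumptions, not the authors' SDLM:

```python
# Scan-level multi-label prediction from a variable-length slice sequence.
import torch
import torch.nn as nn

class SliceSequenceModel(nn.Module):
    def __init__(self, feat_dim=256, n_labels=14):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny per-slice encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, feat_dim))
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_labels)
    def forward(self, slices):
        # slices: (T, 1, H, W), one variable-length CT series per forward pass
        feats = self.cnn(slices).unsqueeze(0)           # (1, T, feat_dim)
        _, h = self.gru(feats)                          # dependencies across T
        return self.head(h[-1])                         # (1, n_labels) logits

model = SliceSequenceModel()
logits = model(torch.randn(40, 1, 128, 128))            # a 40-slice scan
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 14))  # scan-level labels
```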

