Deep Transfer Learning for Biology Cross-Domain Image Classification

2021, Vol 2021, pp. 1-19
Author(s): Chunfeng Guo, Bin Wei, Kun Yu

Automatic biology image classification is essential for biodiversity conservation and ecological study. Recently, owing to their record-shattering performance, deep convolutional neural networks (DCNNs) have been used more often in biology image classification. However, training DCNNs requires a large amount of labeled data, which may be difficult to collect for some organisms. This study was carried out to exploit cross-domain transfer learning for DCNNs with limited data. According to the literature, previous studies mainly focus on transferring from ImageNet to a specific domain or between two closely related domains, whereas this study explores deep transfer learning between species from different domains and analyzes situations in which the source and target domains differ greatly. Motivated by the analysis of previous studies, the effect of cross-domain transfer learning on biology image classification is investigated. In this work, a multiple transfer learning scheme is designed to exploit deep transfer learning on several biology image datasets from different domains. A large difference between the source domain and the target domain can cause transfer learning to perform poorly. To address this problem, multistage transfer learning is proposed, which introduces an intermediate domain. The experimental results show the effectiveness of cross-domain transfer learning, highlight the importance of the amount of data, and validate the potential of multistage transfer learning.
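A minimal sketch of the multistage idea, assuming PyTorch/torchvision: start from ImageNet weights, fine-tune on an intermediate domain, then fine-tune again on the small target domain. The `toy_loader` placeholders, ResNet-18 backbone, class counts, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def toy_loader(n, num_classes):
    """Hypothetical placeholder loader (random tensors); replace with real domain datasets."""
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, num_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=8)

def finetune(model, loader, num_classes, epochs=1, lr=1e-4, device="cpu"):
    """Replace the classifier head and fine-tune the whole network on one domain."""
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 0: start from ImageNet-pretrained weights (the usual single-stage baseline).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Stage 1: fine-tune on an intermediate domain assumed to sit between ImageNet and
# the target (e.g. a larger, related biology dataset).
model = finetune(model, toy_loader(16, num_classes=10), num_classes=10)

# Stage 2: fine-tune again on the small target-domain dataset.
model = finetune(model, toy_loader(16, num_classes=5), num_classes=5)
```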

2019, Vol 2019, pp. 1-14
Author(s): Ke Wang, Jiayong Liu, Jing-Yan Wang

Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source domain data can help the classification of the target domain. Conventional transfer representation learning imposes the constraint that the distributions of source and target domain representations be similar, which relies heavily on the characterization of the domain distributions and the distribution-matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motivation is to make the learned representations of data points independent of the domains they belong to. In other words, from an optimal cross-domain representation of a data point, it is difficult to tell which domain it comes from. In this way, the learned representations can be generalized to different domains. To measure the dependency between the representations and the domains the data points belong to, we propose to use the mutual information between the representations and the domain-membership indicators. By minimizing this mutual information, we learn representations that are independent of domains. We build a classwise deep convolutional network as the representation model and maximize the margin of each data point of the corresponding class, defined over the intraclass and interclass neighborhoods. To learn the parameters of the model, we construct a unified minimization problem in which the margins are maximized while the representation-domain mutual information is minimized. In this way, we learn representations that are not only discriminative but also independent of domains. An iterative algorithm based on the Adam optimization method is proposed to solve the minimization problem, learning the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments on benchmark datasets show its effectiveness and advantage over existing domain transfer learning methods.
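The unified objective can be illustrated with a simplified training loop. The sketch below (assuming PyTorch) substitutes an adversarial domain-confusion term for the mutual-information estimate and plain cross-entropy for the classwise margin loss, so it conveys the general idea rather than the paper's exact formulation; the toy dimensions and random data are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 32))  # representation model
classifier = nn.Linear(32, 5)                                              # class predictor
domain_head = nn.Linear(32, 2)                                             # source/target indicator

opt_rep = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_dom = torch.optim.Adam(domain_head.parameters(), lr=1e-3)

def training_step(x, y, d):
    """x: input features, y: class labels, d: domain indicators (0 = source, 1 = target)."""
    # 1) Train the domain head to predict the domain from the current representations.
    z = encoder(x).detach()
    dom_loss = F.cross_entropy(domain_head(z), d)
    opt_dom.zero_grad(); dom_loss.backward(); opt_dom.step()

    # 2) Train the encoder/classifier: good class predictions, but domain-uninformative
    #    representations (push the domain head toward a uniform prediction, a common
    #    proxy for minimizing representation-domain dependence).
    z = encoder(x)
    cls_loss = F.cross_entropy(classifier(z), y)
    dom_logp = F.log_softmax(domain_head(z), dim=1)
    confusion = -dom_logp.mean()             # maximized when the domain prediction is uniform
    loss = cls_loss + 0.1 * confusion
    opt_rep.zero_grad(); loss.backward(); opt_rep.step()
    return loss.item()

# Toy usage with random data standing in for mixed source/target batches.
x = torch.randn(16, 100)
y = torch.randint(0, 5, (16,))
d = torch.randint(0, 2, (16,))
print(training_step(x, y, d))
```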


Author(s): Shu Jiang, Zuchao Li, Hai Zhao, Bao-Liang Lu, Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, innumerable real-world scenarios are not covered by such datasets, namely, out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performance in-domain parsers to new domains, cross-domain transfer learning methods are essential to solve the domain problem in parsing. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. The evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
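A generic self-training skeleton illustrates the idea, assuming PyTorch: a model trained on labeled source-domain data pseudo-labels unlabeled target-domain data, and only confident predictions are added to the training set. A toy linear classifier stands in for the BERT-based parser here, and the tri-training variant (multiple models labeling data for each other) is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(model, x, y, epochs=200, lr=1e-2):
    """Supervised training on the current labeled pool."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

def self_train(x_src, y_src, x_tgt, rounds=3, threshold=0.9, num_classes=3):
    """Iteratively add confident target-domain pseudo-labels to the source training set."""
    model = nn.Linear(x_src.size(1), num_classes)
    x_train, y_train = x_src, y_src
    for _ in range(rounds):
        train(model, x_train, y_train)
        with torch.no_grad():
            probs = F.softmax(model(x_tgt), dim=1)
            conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold                       # keep only confident pseudo-labels
        x_train = torch.cat([x_src, x_tgt[keep]])
        y_train = torch.cat([y_src, pseudo[keep]])
    return model

# Toy usage with random features standing in for source treebank / target raw-text inputs.
x_src, y_src = torch.randn(60, 10), torch.randint(0, 3, (60,))
x_tgt = torch.randn(40, 10)
model = self_train(x_src, y_src, x_tgt)
```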


2020, Vol 5 (3), pp. 4148-4155
Author(s): Dandan Zhang, Zicong Wu, Junhong Chen, Anzhu Gao, Xu Chen, ...

Author(s): Ting Lu, Yan Xiang, Junge Liang, Li Zhang, Mingfang Zhang

The grand challenge of cross-domain sentiment analysis is that classifiers trained in a specific domain are very sensitive to the discrepancy between domains. A sentiment classifier trained in the source domain usually performs poorly in the target domain. One of the main strategies for addressing this problem is the pivot-based strategy, which regards the feature representation as an important component. However, previous pivot-based models did not use part-of-speech information to guide the learning of feature representations and feature mappings. Therefore, we present the fused part-of-speech vectors and attention-based model (FAM). In our model, we fuse part-of-speech vectors with feature word embeddings to represent features, giving deeper semantics to the mapped features. We adopt a multi-head attention mechanism to train the cross-domain sentiment classifier and capture the connections between different features. The results of 12 groups of comparative experiments on the Amazon dataset demonstrate that our model outperforms all baseline models considered in this paper.
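The fusion step can be sketched as follows, assuming PyTorch: part-of-speech embeddings are concatenated with word embeddings, and multi-head self-attention is applied over the fused sequence before classification. Vocabulary sizes, embedding dimensions, concatenation as the fusion operation, and mean pooling are illustrative assumptions rather than the FAM model's exact configuration.

```python
import torch
import torch.nn as nn

class POSFusedClassifier(nn.Module):
    def __init__(self, vocab_size=5000, pos_size=20, word_dim=96, pos_dim=32,
                 num_heads=4, num_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        fused_dim = word_dim + pos_dim                       # fuse by concatenation
        self.attn = nn.MultiheadAttention(fused_dim, num_heads, batch_first=True)
        self.out = nn.Linear(fused_dim, num_classes)

    def forward(self, word_ids, pos_ids):
        # word_ids, pos_ids: (batch, seq_len) token and POS-tag indices.
        fused = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        attended, _ = self.attn(fused, fused, fused)         # self-attention over the sentence
        sentence = attended.mean(dim=1)                      # average pooling over positions
        return self.out(sentence)

# Toy usage with random token/POS indices.
model = POSFusedClassifier()
word_ids = torch.randint(0, 5000, (4, 12))
pos_ids = torch.randint(0, 20, (4, 12))
logits = model(word_ids, pos_ids)   # shape: (4, 2)
```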

