Calibrating EEG features in motor imagery classification tasks with a small amount of current data using multisource fusion transfer learning

2020 ◽  
Vol 62 ◽  
pp. 102101
Author(s):  
Yong Liang ◽  
Yu Ma
Author(s):  
Pavel Karpov ◽  
Guillaume Godin ◽  
Igor Tetko

We present SMILES embeddings derived from the internal encoder state of a Transformer model trained to canonicalize SMILES as a Seq2Seq problem. Using a CharNN architecture on top of these embeddings yields higher-quality QSAR/QSPR models on diverse benchmark datasets, including both regression and classification tasks. The proposed Transformer-CNN method uses SMILES augmentation for training and inference, so each prediction is grounded in an internal consensus over the augmented inputs. Both the augmentation and the embedding-based transfer learning allow the method to produce good results on small datasets. We discuss the reasons for this effectiveness and outline future directions for developing the method. The source code and the embeddings are available at https://github.com/bigchem/transformer-cnn, while the OCHEM environment (https://ochem.eu) hosts an online implementation.
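As a sketch of the augmentation-plus-consensus idea, the snippet below enumerates randomized SMILES with RDKit and averages a model's predictions over them; the `model.predict` interface is a hypothetical placeholder rather than the paper's actual API.

```python
# Sketch: consensus prediction over augmented SMILES (RDKit enumeration).
# `model` is a hypothetical object with a .predict(smiles) -> float method.
import numpy as np
from rdkit import Chem

def augmented_smiles(smiles: str, n: int = 10) -> list:
    """Enumerate n randomized (non-canonical) SMILES for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [Chem.MolToSmiles(mol, doRandom=True) for _ in range(n)]

def consensus_predict(model, smiles: str, n: int = 10) -> float:
    # Averaging over augmented inputs is the "internal consensus"
    # the abstract refers to.
    preds = [model.predict(s) for s in augmented_smiles(smiles, n)]
    return float(np.mean(preds))
```

The spread of the per-augmentation predictions can additionally serve as a rough confidence estimate for each molecule.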


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fangzhou Xu ◽  
Yunjing Miao ◽  
Yanan Sun ◽  
Dongju Guo ◽  
Jiali Xu ◽  
...  

Deep learning networks have been successfully applied as transfer functions so that models can be adapted from a source domain to different target domains. This study uses multiple convolutional neural networks to decode the electroencephalogram (EEG) of stroke patients and design an effective motor imagery (MI) brain-computer interface (BCI) system. It introduces fine-tuning to transfer model parameters and reduce training time. The performance of the proposed framework is evaluated on two-class MI recognition. The results show that the best framework is the combination of EEGNet and the fine-tuned transferred model: the average classification accuracy of the proposed model over 11 subjects is 66.36%, and its algorithmic complexity is much lower than that of the other models. This strong performance indicates that the EEGNet model has great potential for BCI-based MI stroke rehabilitation, and it demonstrates the efficiency of transfer learning for improving EEG-based stroke rehabilitation in BCI systems.
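A minimal sketch of the fine-tuning transfer step in PyTorch follows; `TinyEEGNet` is a toy stand-in for the real EEGNet, the checkpoint path is hypothetical, and random tensors stand in for target-subject MI epochs.

```python
# Sketch: freeze source-trained feature layers, fine-tune only the classifier.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Toy stand-in for EEGNet: temporal conv, depthwise spatial conv, head."""
    def __init__(self, n_channels: int = 32, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, (1, 64), padding=(0, 32), bias=False),   # temporal
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, (n_channels, 1), groups=8, bias=False), # spatial
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
        )
        self.classifier = nn.Linear(16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyEEGNet()
# model.load_state_dict(torch.load("eegnet_source.pt"))  # hypothetical source weights

for param in model.features.parameters():    # freeze the feature extractor
    param.requires_grad = False

xs = torch.randn(64, 1, 32, 256)              # stand-in MI epochs (B, 1, C, T)
ys = torch.randint(0, 2, (64,))               # two-class MI labels
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(xs, ys), batch_size=16)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
for x, y in loader:                           # short target-domain fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Updating only the head is what cuts training time; in practice one might also unfreeze the last feature block once the head has converged.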


Author(s):  
V. Akash Kumar ◽  
Vijaya Mishra ◽  
Monika Arora

The growth of cancerous cells inhibits healthy cells and disrupts the body's normal regulatory processes; clusters of such cells develop into tumors. This type of abnormal skin pigmentation is observed using an effective tool called dermoscopy. However, dermatoscopic images pose a great challenge for diagnosis. Given the characteristics of dermatoscopic images, transfer learning is an appropriate approach for automatically classifying the images into their respective categories. Automatic identification of skin cancer not only saves human lives but also helps detect its growth at an earlier stage, saving medical practitioners' effort and time. A new model is proposed for classifying skin cancer as benign or malignant using a deep convolutional neural network (DCNN) with transfer learning and pre-trained models such as VGG 16, VGG 19, ResNet 50, ResNet 101, and Inception V3. The proposed methodology examines the efficiency of pre-trained models and the transfer learning approach for classification tasks, and it opens new dimensions of research in imaging-based medicine that can be implemented in real-time applications.
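As an illustration of this kind of pipeline, the sketch below adapts torchvision's ImageNet-pretrained VGG 16 to the two-class benign/malignant problem; the frozen layers, head, and hyperparameters are assumptions, since the abstract does not specify them.

```python
# Sketch: VGG 16 transfer learning for benign vs. malignant classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the ImageNet-pretrained convolutional features...
for param in model.features.parameters():
    param.requires_grad = False

# ...and swap the final fully connected layer for a 2-way head.
model.classifier[6] = nn.Linear(4096, 2)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Training then proceeds as usual over batches of dermatoscopic images
# resized to 224x224 and normalized with the ImageNet statistics.
```

Swapping `vgg16` for `vgg19`, `resnet50`, `resnet101`, or `inception_v3` reproduces the other variants, with the new head attached to each architecture's own final layer.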


Author(s):  
Yuan Zhang ◽  
Regina Barzilay ◽  
Tommi Jaakkola

We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target document labels, we use a few keywords pertaining to the source and target aspects that indicate sentence relevance. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to the target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.
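The aspect-adversarial objective is typically realized with a gradient-reversal layer; a minimal PyTorch version is sketched below. The names and the usage comment are illustrative, not the authors' released code.

```python
# Sketch: gradient-reversal layer for aspect-adversarial training.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)          # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient so the encoder learns to make the
        # aspect discriminator fail, i.e. aspect-invariant encodings.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Usage (illustrative): the discriminator sees reversed gradients, while the
# shared classifier sees the encodings directly.
#   enc = encoder(document)
#   aspect_loss = ce(discriminator(grad_reverse(enc)), aspect_label)
#   class_loss  = ce(classifier(enc), source_label)
#   (class_loss + aspect_loss).backward()
```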


2020 ◽  
Vol 345 ◽  
pp. 108886 ◽  
Author(s):  
Piyush Kant ◽  
Shahedul Haque Laskar ◽  
Jupitara Hazarika ◽  
Rupesh Mahamune

Algorithms ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 334
Author(s):  
Nicola Landro ◽  
Ignazio Gallo ◽  
Riccardo La Grassa

Nowadays, transfer learning can be successfully applied in deep learning by fine-tuning a CNN pre-trained on a huge dataset such as ImageNet so that it continues learning on a target dataset and achieves better performance. In this paper, we design a transfer learning methodology that combines the learned features of different teacher networks into a student network in an end-to-end model, improving the student network's performance on classification tasks over different datasets. We also address two questions directly related to the transfer learning problem considered here: Is it possible to improve the performance of a small neural network by using the knowledge gained from a more powerful neural network? Can a student deep neural network outperform its teacher through transfer learning? Experimental results suggest that neural networks can transfer their learning to student networks using our proposed architecture, which is designed to bring to light a new and interesting approach to transfer learning techniques. Finally, we provide details of the code and the experimental settings.
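One common way to realize this teacher-to-student transfer is knowledge distillation over averaged soft targets, sketched below; the paper's end-to-end architecture may combine teacher knowledge differently, so treat this as an illustrative baseline.

```python
# Sketch: multi-teacher knowledge distillation loss for a student network.
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list, labels,
                               T: float = 4.0, alpha: float = 0.5):
    # Average the teachers' temperature-softened distributions into one target.
    soft_target = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(dim=0)
    # The KL term matches the student to the teachers' consensus; the T*T
    # factor keeps gradient magnitudes comparable across temperatures.
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       soft_target, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)   # ground-truth term
    return alpha * distill + (1 - alpha) * hard
```

With `alpha` balancing the two terms, a small student can inherit structure from stronger teachers, which is the setting in which the paper asks whether a student can outperform its teacher.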

