Integrated Transfer Learning Method for Image Recognition Based on Neural Network

2021 ◽  
pp. 1-8
Author(s):  
JingYuan He ◽  
BaiLong Yang ◽  
Yang Su
2021 ◽  
Author(s):  
Farrel Athaillah Putra ◽  
Dwi Anggun Cahyati Jamil ◽  
Briliantino Abhista Prabandanu ◽  
Suhaili Faruq ◽  
Firsta Adi Pradana ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4408 ◽  
Author(s):  
Hyun-Myung Cho ◽  
Heesu Park ◽  
Suh-Yeon Dong ◽  
Inchan Youn

The goals of this study are to suggest a better classification method for detecting stressed states from raw electrocardiogram (ECG) data and a method for training a deep neural network (DNN) with a smaller data set. We suggest an end-to-end architecture to detect stress using raw ECGs. The architecture consists of successive stages that contain convolutional layers. In this study, two kinds of data sets are used to train and validate the model: a driving data set and a mental arithmetic data set, which is smaller than the driving data set. We apply a transfer learning method to train a model with a small data set. The proposed model shows better performance, based on receiver operating characteristic curves, than conventional methods. Compared with other DNN methods using raw ECGs, the proposed model improves the accuracy from 87.39% to 90.19%. The transfer learning method improves accuracy by 12.01% and 10.06% when 10 s and 60 s of ECG signals, respectively, are used in the model. In conclusion, our model outperforms previous models using raw ECGs from a small data set, so we believe that our model can significantly contribute to mobile healthcare for stress management in daily life.
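The abstract describes an end-to-end convolutional model on raw ECG plus transfer from the larger driving data set to the smaller mental-arithmetic data set. A minimal sketch of that pattern is shown below; the layer sizes, the checkpoint name, and the freeze-then-retrain choice are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' exact architecture): a 1D CNN over raw ECG,
# pretrained on the large driving data set and fine-tuned on the smaller
# mental-arithmetic data set. Layer sizes and the checkpoint are illustrative.
import torch
import torch.nn as nn

class ECGStressNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Successive convolutional stages operating directly on the raw ECG waveform.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, samples), e.g. 10 s of ECG
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# Transfer learning step: load weights trained on the driving data set,
# freeze the convolutional stages, and retrain only the classifier head
# on the small mental-arithmetic data set.
model = ECGStressNet()
model.load_state_dict(torch.load("ecg_driving_pretrained.pt"))  # hypothetical checkpoint
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```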


2021 ◽  
Vol 290 ◽  
pp. 02020
Author(s):  
Boyu Zhang ◽  
Xiao Wang ◽  
Shudong Li ◽  
Jinghua Yang

Current underwater shipwreck side scan sonar samples are few and difficult to label, and with such small sample sizes, image recognition accuracy with a convolutional neural network model is low. In this study, we propose an image recognition method for shipwreck side scan sonar that combines transfer learning with deep learning. Without transfer learning, the shipwreck sonar sample data were used to train the network directly, and the results were saved as the control group. For transfer learning, the network was first trained on weakly correlated data, the learned parameters were transferred to a new network, and the shipwreck sonar data were then used for fine-tuning; the same steps were repeated with strongly correlated data. Experiments were carried out on the LeNet-5, AlexNet, GoogLeNet, ResNet and VGG networks. Without transfer learning, the highest accuracy was obtained on the ResNet network (86.27%). Using weakly correlated data for transfer training, the highest accuracy was on the VGG network (92.16%). Using strongly correlated data for transfer training, the highest accuracy was also on the VGG network (98.04%). In all network architectures, transfer learning improved the recognition rate of the convolutional neural network models. The experiments show that transfer learning combined with deep learning improves the accuracy and generalization of a convolutional neural network when sample sizes are small.
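The parameter-transfer step described above can be sketched as follows, using VGG-16 (the best-performing backbone in the abstract) as the example. The correlated-data checkpoint, the two-class assumption, and the optimizer settings are placeholders, not the authors' exact setup.

```python
# Sketch of the transfer step: train on correlated (non-sonar) imagery, copy the
# parameters to a new network, then fine-tune on the shipwreck sonar samples.
# Checkpoint name, class count, and optimizer are assumptions.
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 2  # e.g. shipwreck vs. non-shipwreck (assumed)

# 1) Network trained on weakly or strongly correlated (non-sonar) imagery.
source_net = models.vgg16(weights=None)
source_net.load_state_dict(torch.load("vgg16_correlated_data.pt"))  # hypothetical

# 2) Transfer the learned parameters to a new network for sonar images.
target_net = models.vgg16(weights=None)
target_net.load_state_dict(source_net.state_dict())

# 3) Replace the output layer and fine-tune on the shipwreck sonar data.
target_net.classifier[6] = nn.Linear(4096, N_CLASSES)
optimizer = torch.optim.SGD(target_net.parameters(), lr=1e-4, momentum=0.9)
```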


This research aims to achieve high-accuracy face recognition. The Convolutional Neural Network (CNN) is a deep learning approach that has demonstrated excellent performance in many fields, including image recognition on large training data sets such as ImageNet. In practice, however, hardware limitations and insufficient training data make high performance difficult to achieve. Therefore, in this work a deep transfer learning method using the pre-trained AlexNet CNN is proposed to improve the performance of the face recognition system even with a smaller number of images. The transfer learning method fine-tunes the last layer of the AlexNet CNN model for the new classification task. A data augmentation (DA) technique is also proposed to reduce over-fitting during deep transfer learning training and to improve accuracy. The results confirm the reduction in over-fitting and the improvement in performance after applying data augmentation. All experiments were tested on the small UTeMFD, GTFD, and CASIA-Face V5 data sets. As a result, the proposed system achieved accuracies of 100% on UTeMFD, 96.67% on GTFD, and 95.60% on CASIA-Face V5, with a recognition time of less than 0.05 seconds.
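A minimal sketch of this recipe, assuming a PyTorch/torchvision toolchain rather than the authors' original one: a pre-trained AlexNet with only the final layer replaced and fine-tuned, plus simple data augmentation. The dataset path, identity count, and augmentation choices are illustrative.

```python
# Sketch of last-layer fine-tuning of pre-trained AlexNet with data augmentation.
# Paths, class count, and augmentation parameters are assumptions.
import torch.nn as nn
from torchvision import datasets, models, transforms

N_IDENTITIES = 50  # number of subjects in the face data set (assumed)

# Data augmentation to reduce over-fitting on a small face data set.
train_tf = transforms.Compose([
    transforms.Resize((227, 227)),          # AlexNet input size
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("faces/train", transform=train_tf)  # hypothetical path

# Pre-trained AlexNet: freeze the feature extractor, fine-tune only the last layer.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, N_IDENTITIES)  # new, trainable output layer
```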


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Abdul Jalil Rozaqi ◽  
Muhammad Rudyanto Arief ◽  
Andi Sunyoto

The potato is a plant with many benefits for human life, but potato plants are susceptible to leaf diseases, the most common of which are early blight and late blight. Image processing can assist farmers in identifying potato leaf disease from leaf images. Many image processing methods have been developed, one of which uses the Convolutional Neural Network (CNN) algorithm. The CNN is a good image classification algorithm because its layered architecture can extract leaf image features in depth; however, determining a good CNN architecture requires a lot of data. A CNN trained on too little data will overfit: the classification model reaches high accuracy on the training data but performs poorly on test or new data. This research uses the transfer learning method to avoid overfitting when the available data are limited. Transfer learning reuses a CNN architecture that has been trained on other data and applies it to image classification on the new data. The purpose of this research was to use transfer learning on CNN architectures to classify potato leaf images and identify potato leaf disease. This research compares several transfer learning methods to find the best one. The experimental results indicate that transfer learning with VGG-16 gives the best classification performance, producing the highest accuracy of 95%.
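A short sketch of the best-performing variant reported above, VGG-16 transfer learning for leaf classification. The three-class label set, the frozen convolutional base, and the training loop are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of VGG-16 transfer learning for potato leaf disease classification.
# The class list, freezing strategy, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["early_blight", "late_blight", "healthy"]  # assumed label set

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():   # keep the ImageNet convolutional features
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

def train_epoch(loader):
    """One fine-tuning pass over batches of (leaf_image, label) tensors."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```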


Author(s):  
Jonathan Boigne ◽  
Biman Liyanage ◽  
Ted Östrem

We propose a novel transfer learning method for speech emotion recognition that obtains promising results when only little training data is available. With as few as 125 examples per emotion class, we were able to reach a higher accuracy than a strong baseline trained on 8 times more data. Our method leverages knowledge contained in pre-trained speech representations extracted from models trained on a more general self-supervised task which does not require human annotations, such as the wav2vec model. We provide detailed insights on the benefits of our approach by varying the training data size, which can help labeling teams work more efficiently. We compare performance with other popular methods on the IEMOCAP dataset, a well-benchmarked dataset within the Speech Emotion Recognition (SER) research community. Furthermore, we demonstrate that results can be greatly improved by combining acoustic and linguistic knowledge from transfer learning. We align acoustic pre-trained representations with semantic representations from the BERT model through an attention-based recurrent neural network. Performance improves significantly when combining both modalities and scales with the amount of data. When trained on the full IEMOCAP dataset, we reach a new state-of-the-art of 73.9% unweighted accuracy (UA).
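The fusion idea, acoustic wav2vec frame features aligned with a BERT utterance embedding through an attention-based recurrent network, can be sketched as follows. The feature dimensions, the GRU-plus-attention form, and the four-emotion output are assumptions, not the authors' exact model, and the pre-trained features are taken as already-extracted tensors.

```python
# Simplified sketch of acoustic + linguistic fusion: wav2vec frame features are
# summarized by a GRU with text-conditioned attention, concatenated with a BERT
# utterance embedding, and classified. Dimensions and pooling are assumptions.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, wav_dim=512, bert_dim=768, hidden=128, n_emotions=4):
        super().__init__()
        self.rnn = nn.GRU(wav_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden + bert_dim, 1)   # text-conditioned attention
        self.out = nn.Linear(2 * hidden + bert_dim, n_emotions)

    def forward(self, wav_feats, bert_cls):
        # wav_feats: (batch, frames, wav_dim) pre-extracted wav2vec features
        # bert_cls:  (batch, bert_dim) utterance-level BERT embedding
        h, _ = self.rnn(wav_feats)                         # (batch, frames, 2*hidden)
        text = bert_cls.unsqueeze(1).expand(-1, h.size(1), -1)
        scores = self.attn(torch.cat([h, text], dim=-1))   # (batch, frames, 1)
        weights = torch.softmax(scores, dim=1)
        acoustic = (weights * h).sum(dim=1)                # attention-pooled summary
        return self.out(torch.cat([acoustic, bert_cls], dim=-1))

model = FusionSER()
logits = model(torch.randn(2, 300, 512), torch.randn(2, 768))  # dummy inputs
```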

