Improvement in Classification Performance Based on Target Vector Modification for All-Transfer Deep Learning

2019 ◽  
Vol 9 (1) ◽  
pp. 128 ◽  
Author(s):  
Yoshihide Sawada ◽  
Yoshikuni Sato ◽  
Toru Nakada ◽  
Shunta Yamaguchi ◽  
Kei Ujimoto ◽  
...  

This paper proposes a target vector modification method for the all-transfer deep learning (ATDL) method. Deep neural networks (DNNs) have been used widely in many applications; however, DNNs are known to be problematic when large amounts of training data are not available. Transfer learning can provide a solution to this problem. Previous methods regularize all layers, including the output layer, by estimating relation vectors, which are then used instead of the one-hot target vectors of the target domain. These vectors are estimated by averaging the target domain data of each target domain label in the output space. This method improves the classification performance, but it does not consider the relations among the relation vectors. From this point of view, we propose a relation vector modification based on constrained pairwise repulsive forces. High pairwise repulsive forces yield large distances between the relation vectors. In addition, the risk of divergence is mitigated by a constraint based on the distributions of the output vectors of the target domain data. We apply our method to two simulation experiments and a disease classification task using two-dimensional electrophoresis images. The experimental results show that reusing all layers through our estimation method is effective, especially when the number of target domain samples is very small.
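
The core of the method can be sketched as follows: relation vectors are class-wise means of the source network's outputs on target-domain data, and are then pushed apart by pairwise repulsive forces under a distribution-based constraint. The snippet below is a minimal illustration, not the authors' exact formulation; the inverse-square force, the step size eta, and the per-class standard-deviation cap standing in for the distribution constraint are all assumptions.

```python
# Minimal sketch of relation-vector estimation and constrained repulsion
# (illustrative only; not the paper's exact update rule).
import numpy as np

def estimate_relation_vectors(outputs, labels, n_classes):
    """Average the output-space vectors of each target-domain label."""
    return np.stack([outputs[labels == c].mean(axis=0) for c in range(n_classes)])

def repel_relation_vectors(rel, outputs, labels, n_iters=100, eta=0.01):
    """Push relation vectors apart with pairwise repulsive forces.

    Each vector is kept within one per-class standard deviation of its
    initial position (a stand-in for the distribution-based constraint).
    """
    max_shift = np.stack([outputs[labels == c].std(axis=0) for c in range(len(rel))])
    anchor = rel.copy()
    for _ in range(n_iters):
        for i in range(len(rel)):
            diff = rel[i] - np.delete(rel, i, axis=0)
            # Inverse-square repulsion from every other relation vector.
            force = (diff / (np.linalg.norm(diff, axis=1, keepdims=True) ** 3 + 1e-8)).sum(axis=0)
            rel[i] = np.clip(rel[i] + eta * force,
                             anchor[i] - max_shift[i], anchor[i] + max_shift[i])
    return rel

outputs = np.random.rand(100, 8)          # source-network outputs on target data
labels = np.random.randint(0, 3, 100)
rel = estimate_relation_vectors(outputs, labels, n_classes=3)
rel = repel_relation_vectors(rel, outputs, labels)
```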

Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 291 ◽  
Author(s):  
Chunwu Yin ◽  
Zhanbo Chen

Disease classification based on machine learning has become a crucial research topic in the fields of genetics and molecular biology. Generally, disease classification involves a supervised learning style; i.e., it requires a large number of labelled samples to achieve good classification performance. However, in the majority of cases, labelled samples are hard to obtain, so the amount of training data is limited. Meanwhile, many unclassified (unlabelled) sequences have been deposited in public databases, which may help the training procedure. This approach is called semi-supervised learning and is very useful in many applications. Self-training can be implemented by adding pseudo-labelled samples from high to low confidence, which prevents noisy samples from affecting the robustness of semi-supervised learning during training. The deep forest method with the hyperparameter settings used in this paper can achieve excellent performance. Therefore, in this work, we propose a novel approach combining a deep learning model with semi-supervised self-training to improve disease classification performance; it utilizes unlabelled samples through an update mechanism designed to increase the number of high-confidence pseudo-labelled samples. The experimental results show that our proposed model can achieve good performance in disease classification and disease-causing gene identification.
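
A generic self-training loop of the kind described above can be written compactly. This is a sketch under assumptions, not the authors' code: RandomForestClassifier stands in for the deep forest model, and the 0.9 confidence threshold is illustrative.

```python
# Self-training: pseudo-label high-confidence unlabelled samples, add them
# from high to low confidence, and refit (sketch; deep forest replaced by
# a random forest stand-in).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(model, X_lab, y_lab, X_unlab, threshold=0.9, max_rounds=10):
    X_train, y_train, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        model.fit(X_train, y_train)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        order = np.argsort(-conf[keep])               # high- to low-confidence
        X_train = np.vstack([X_train, pool[keep][order]])
        y_train = np.concatenate([y_train,
                                  model.classes_[proba[keep].argmax(axis=1)][order]])
        pool = pool[~keep]
    return model

rng = np.random.default_rng(0)
X_lab, y_lab = rng.random((40, 20)), rng.integers(0, 2, 40)
X_unlab = rng.random((200, 20))
model = self_train(RandomForestClassifier(n_estimators=50), X_lab, y_lab, X_unlab)
```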


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract Bean, botanically called Phaseolus vulgaris L., belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delays in the treatment period, incorrect treatment, and lack of knowledge. Existing deep learning and machine learning techniques suffer from several issues, such as high computational complexity, higher cost associated with the training data, longer execution time, noise, feature dimensionality, lower accuracy, and low speed. To tackle these problems, we have proposed a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is a healthy class, whereas the remaining four classes indicate different diseases, namely Bean halo blight, Pythium diseases, Rhizoctonia root rot, and Anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique is the combination of wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series. For these sub-series, four LSTM networks were developed. During bean disease classification, an Archimedes optimization algorithm (AOA) enhances the classification accuracy of the multiple single LSTM networks. The HDL-AOA model is implemented in MATLAB for bean disease classification. The proposed model accomplishes a lower MAPE than other existing methods. Finally, the proposed HDL-AOA model achieves excellent classification results on different evaluation measures such as accuracy, specificity, sensitivity, precision, recall, and F-score.
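
Although the paper's implementation is in MATLAB, the WPD-plus-LSTM pipeline can be sketched in Python. The details below are assumptions for illustration: a level-1 2-D wavelet packet decomposition (yielding the four sub-bands), one small LSTM per sub-band with image rows treated as time steps, and a softmax merge; the AOA weighting step is omitted.

```python
# WPD front end feeding four parallel LSTM branches (illustrative sketch).
import numpy as np
import pywt
import tensorflow as tf

def wpd_subbands(image):
    """Level-1 2-D wavelet packet decomposition -> four sub-band images."""
    wp = pywt.WaveletPacket2D(data=image, wavelet='db1', maxlevel=1)
    return [wp[key].data for key in ('a', 'h', 'v', 'd')]

def build_model(subband_shape, n_classes=5):
    """Four LSTMs (rows as time steps), merged for 5-class classification."""
    inputs, branches = [], []
    for _ in range(4):
        inp = tf.keras.Input(shape=subband_shape)        # (rows, cols)
        branches.append(tf.keras.layers.LSTM(64)(inp))
        inputs.append(inp)
    merged = tf.keras.layers.concatenate(branches)
    out = tf.keras.layers.Dense(n_classes, activation='softmax')(merged)
    model = tf.keras.Model(inputs, out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    return model

# Example: a 128x128 grayscale leaf image gives four 64x64 sub-bands.
bands = wpd_subbands(np.random.rand(128, 128))
model = build_model(subband_shape=bands[0].shape)
```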


2021 ◽  
Vol 36 (1) ◽  
pp. 443-450
Author(s):  
Mounika Jammula

As of 2020, the total area planted with crops in India exceeded 125.78 million hectares. India is the second biggest organic product maker in the world, so the Indian economy depends greatly on farming products. Nowadays, farmers suffer a drop in production due to numerous diseases and pests. To overcome this problem, this article presents an artificial intelligence based deep learning approach for plant disease classification. Initially, an adaptive mean bilateral filter (AMBF) is applied for noise removal and enhancement operations. Then, the Gaussian kernel fuzzy C-means (GKFCM) approach is used to segment the affected disease regions. The optimal color, texture, and shape features are extracted using the GLCM. Finally, a deep learning convolutional neural network (DLCNN) is used for the classification of five disease classes. The segmentation and classification performance of the proposed method surpasses state-of-the-art approaches.
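
The GLCM feature step is standard enough to sketch. This illustration assumes the diseased region has already been segmented to a grayscale patch; the distances, angles, and the four texture properties chosen are assumptions, and scikit-image is used in place of whatever the authors implemented.

```python
# GLCM texture features from a segmented grayscale patch (sketch).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, correlation, energy, and homogeneity from a gray patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'correlation', 'energy', 'homogeneity')
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
features = glcm_features(patch)   # would feed the downstream DLCNN classifier
```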


Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Abstract Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, the model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography data set. Our experimental results show more than a 50% reduction in the error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
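
The extra-layer adaptation and hierarchical tuning can be illustrated as follows. Everything here is assumed for illustration: DenseNet121 as the backbone, a 1-to-3-channel convolution as the dimensionality-adapting layer, and a two-stage unfreezing schedule; the paper's ensemble and dictionary components are not reproduced.

```python
# One ensemble member: adapt single-channel radiographs to a natural-image
# backbone, then tune hierarchically (sketch).
import tensorflow as tf

def adapted_model(n_classes, input_shape=(224, 224, 1)):
    inp = tf.keras.Input(shape=input_shape)
    # Extra layer accounting for the channel-dimensionality change
    # between the source (RGB) and target (grayscale) domains.
    x = tf.keras.layers.Conv2D(3, 3, padding='same')(inp)
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights='imagenet', pooling='avg')
    x = backbone(x)
    out = tf.keras.layers.Dense(n_classes, activation='sigmoid')(x)
    return tf.keras.Model(inp, out), backbone

model, backbone = adapted_model(n_classes=14)
# Hierarchical tuning: fit the new layers first, then unfreeze the backbone
# at a lower learning rate and fit again on augmented target data.
backbone.trainable = False
model.compile(optimizer='adam', loss='binary_crossentropy')
# ... model.fit(...) on augmented data, then:
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='binary_crossentropy')
```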


Author(s):  
H. Yassine ◽  
K. Tout ◽  
M. Jaber

Abstract. Machine learning (ML) has proven useful for a very large number of applications in several domains, and has seen remarkable growth in remote-sensing image analysis over the past few years. In this work, Deep Learning (DL), a subset of machine learning, was applied to achieve better classification of Land Use Land Cover (LULC) in satellite imagery using Convolutional Neural Networks (CNNs). The EuroSAT benchmark data set, which is built from Sentinel-2 satellite images, is used as the training data set. Sentinel-2 provides images with 13 spectral feature bands, but surprisingly little attention has been paid to these features in deep learning models; the majority of applications have focused only on RGB, due to the high availability of RGB models in computer vision. While RGB gives an accuracy of 96.83% using a CNN, we present two approaches to improve the classification performance on Sentinel-2 images. In the first approach, features are extracted from the 13 spectral feature bands of Sentinel-2 instead of RGB, which leads to an accuracy of 98.78%. In the second approach, features are extracted from the 13 spectral bands together with calculated indices used in LULC, such as the Blue Ratio (BR), the Vegetation Index based on Red Edge (VIRE), and the Normalized Near Infrared (NNIR), which gives a better accuracy of 99.58%.
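
The second approach amounts to stacking computed index channels onto the 13-band input before the CNN. The sketch below shows the pattern with NDVI as a representative index, since the exact BR, VIRE, and NNIR formulas are not reproduced here; the band ordering follows the usual Sentinel-2 B01...B12 convention and is an assumption.

```python
# Append calculated index channels to a 13-band Sentinel-2 stack (sketch).
import numpy as np

def add_indices(bands):
    """bands: (H, W, 13) reflectance stack, assumed ordered B01..B12 (incl. B8A).

    Returns the stack with index channels appended as extra CNN inputs.
    """
    red = bands[..., 3]    # B04 (red)
    nir = bands[..., 7]    # B08 (near infrared)
    ndvi = (nir - red) / (nir + red + 1e-8)
    # Further indices (BR, VIRE, NNIR, ...) would be appended analogously.
    return np.concatenate([bands, ndvi[..., None]], axis=-1)

patch = np.random.rand(64, 64, 13).astype(np.float32)
features = add_indices(patch)     # shape (64, 64, 14), fed to the CNN
```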


2021 ◽  
Vol 27 (1) ◽  
pp. 48-59
Author(s):  
Thanh-Nghia Nguyen ◽  
Thanh-Hai Nguyen

Heart disease classification with high accuracy can support the physician's correct decisions on patients. This paper proposes a kernel size calculation based on the P, Q, R, and S waves of one heartbeat to enhance classification accuracy in a deep learning framework. In addition, the electrocardiogram (ECG) signals were filtered using the wavelet transform with the dmey wavelet, whose shape is close to that of one heartbeat. With this selected dmey, each heartbeat was standardized to 300 samples for the calculation of kernel sizes, so that most of the features in each heartbeat are retained. In this research, with 103,459 heart rhythms from the MIT-BIH Arrhythmia Database, the proposed approach to calculating kernel sizes proves effective with seven convolutional layers and further fully connected layers in a Deep Neural Network (DNN). In particular, for five types of heart disease, the classification accuracy reaches about 99.4%. This means that the proposed kernel size calculation in the convolutional layers can achieve good classification performance, and it may be developed for classifying different types of disease.
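
The overall classifier shape can be sketched as a seven-layer 1-D CNN over 300-sample heartbeats. The kernel sizes below are illustrative stand-ins chosen to span roughly one wave segment each, not the paper's calculated values.

```python
# Seven-convolutional-layer 1-D CNN over standardized heartbeats (sketch).
import tensorflow as tf

def build_dnn(n_classes=5, beat_len=300, kernel_sizes=(27, 19, 11, 7, 5, 3, 3)):
    model = tf.keras.Sequential([tf.keras.Input(shape=(beat_len, 1))])
    # One conv + pool stage per kernel size; each kernel is intended to
    # cover roughly one P/Q/R/S wave segment at this sampling length.
    for k in kernel_sizes:
        model.add(tf.keras.layers.Conv1D(32, k, activation='relu', padding='same'))
        model.add(tf.keras.layers.MaxPooling1D(2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(64, activation='relu'))
    model.add(tf.keras.layers.Dense(n_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_dnn()   # 5 disease classes, 300-sample heartbeats
```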


Information ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 248 ◽  
Author(s):  
Sumam Francis ◽  
Jordy Van Landeghem ◽  
Marie-Francine Moens

Recent deep learning approaches have shown promising results for named entity recognition (NER). A reasonable assumption for training robust deep learning models is that a sufficient amount of high-quality annotated training data is available. However, in many real-world scenarios, labeled training data is scarce. In this paper we consider two use cases: generic entity extraction from financial and from biomedical documents. First, we developed a character-based model for NER in financial documents and a word- and character-based model with attention for NER in biomedical documents. Further, we analyzed how transfer learning addresses the problem of limited training data in a target domain. We demonstrate through experiments that NER models trained on labeled data from a source domain can be used as base models and then fine-tuned with little labeled data for recognition of different named entity classes in a target domain. We also examine the use of language models to improve NER as a way of coping with limited labeled data. The currently most successful language model is BERT. Because of its success in state-of-the-art models, we integrate representations based on BERT into our biomedical NER model along with word and character information. The results are compared with a state-of-the-art model applied to a benchmark biomedical corpus.
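
Integrating BERT representations alongside word and character features typically means extracting one contextual vector per word and concatenating it with the other feature streams. The sketch below shows that extraction step only; 'bert-base-cased' is an illustrative checkpoint, and the authors' attention and tagging layers are not reproduced.

```python
# Per-word BERT features for an NER tagger (sketch).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
bert = AutoModel.from_pretrained('bert-base-cased')

def bert_word_features(words):
    """Contextual embedding for each word (taken from its first sub-token),
    to be concatenated with word- and character-level features downstream."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors='pt')
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]        # (n_subtokens, 768)
    first_subtok = [enc.word_ids().index(i) for i in range(len(words))]
    return hidden[first_subtok]                          # (n_words, 768)

feats = bert_word_features(['Aspirin', 'inhibits', 'COX-1', '.'])
```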


2021 ◽  
Vol 2 (2) ◽  
pp. 1-25
Author(s):  
Stein Kristiansen ◽  
Konstantinos Nikolaidis ◽  
Thomas Plagemann ◽  
Vera Goebel ◽  
Gunn Marit Traaen ◽  
...  

Sleep apnea is a common and strongly under-diagnosed severe sleep-related respiratory disorder with periods of disrupted or reduced breathing during sleep. To diagnose sleep apnea, sleep data are collected with either polysomnography or polygraphy and scored by a sleep expert. In this work, we investigate the use of supervised machine learning to automate the analysis of polygraphy data from the A3 study, containing more than 7,400 hours of sleep monitoring data from 579 patients. We conduct a systematic comparative study of classification performance and resource use with different combinations of 27 classifiers and four sleep signals. The classifiers achieve up to 0.8941 accuracy (kappa: 0.7877) when using all four signal types simultaneously, and up to 0.8543 accuracy (kappa: 0.7080) with only one signal, i.e., oxygen saturation. Methods based on deep learning outperform the other methods by a large margin. All deep learning methods achieve nearly the same maximum classification performance, even when they have very different architectures and sizes. When jointly accounting for classification performance, resource consumption, and the ability to achieve high classification performance with less training data, we find that convolutional neural networks substantially outperform the other classifiers.
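
For concreteness, a convolutional classifier of the kind the study found most favorable, applied to the single-signal (oxygen saturation) setting, might look like the sketch below. The 60-sample epoch length, layer sizes, and binary labels are assumptions; the study's actual architectures differ.

```python
# Small 1-D CNN over oxygen-saturation epochs (illustrative sketch).
import tensorflow as tf

def spo2_cnn(epoch_len=60):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(epoch_len, 1)),      # 1 Hz SpO2 samples
        tf.keras.layers.Conv1D(16, 5, activation='relu'),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 5, activation='relu'),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),   # apnea vs. normal
    ])

model = spo2_cnn()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```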


Author(s):  
G. V. Shilovskii ◽  
V. M. Yulkova

Training deep neural networks with the backpropagation algorithm is considered implausible from a biological point of view. Numerous recent publications offer sophisticated models for biologically plausible variants of deep learning, which typically define success as achieving a test accuracy of around 98% on the MNIST dataset. Here we examine how far we can go in digit classification (MNIST) with biologically plausible rules for learning in a network with one hidden layer and one readout layer. The weights of the hidden layer are either fixed (random or random Gabor filters) or trained by unsupervised methods (principal/independent component analysis or sparse coding), which can be implemented in accordance with local training rules. The paper shows that high dimensionality of the hidden layer is more important for high performance than the global features retrieved by PCA, ICA, or SC. Tests on the CIFAR10 object recognition problem lead to the same conclusion, indicating that this observation is not entirely problem specific. Unlike biologically plausible deep learning algorithms derived from approximations of the backpropagation algorithm, we have focused here on shallow networks with only one hidden layer. Globally applied, randomly initialized filters with fixed weights/Gabor coefficients (RP/RGs) in large hidden layers result in better classification performance than training them with unsupervised methods such as principal/independent component analysis (PCA/ICA) or sparse coding (SC). Therefore, the conclusion is that unsupervised training does not lead to better performance than fixed random projections or Gabor filters for large hidden layers.
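
The fixed-random-projection baseline is easy to reproduce in a few lines: a random hidden layer whose weights never change, with only the readout trained. The ReLU nonlinearity, hidden width, and logistic-regression readout below are assumptions for illustration.

```python
# Fixed random hidden layer + trained readout (sketch of the RP baseline).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def random_projection_features(X, n_hidden=1000):
    """Fixed random hidden layer: h = max(0, X W). W is never trained."""
    W = rng.normal(0.0, 1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], n_hidden))
    return np.maximum(X @ W, 0.0), W

X = rng.random((100, 784))            # stand-in for flattened MNIST digits
y = rng.integers(0, 10, size=100)
H, W = random_projection_features(X)
readout = LogisticRegression(max_iter=200).fit(H, y)  # only the readout is trained
```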

