Deep Transfer Learning in Diagnosing Leukemia in Blood Cells

Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 29 ◽  
Author(s):  
Mohamed Loey ◽  
Mukdad Naman ◽  
Hala Zayed

Leukemia is a fatal disease that threatens the lives of many patients. Early detection can effectively improve its rate of remission. This paper proposes two automated classification models based on blood microscopic images to detect leukemia by employing transfer learning, rather than traditional approaches that have several disadvantages. In the first model, blood microscopic images are pre-processed; then, features are extracted by a pre-trained deep convolutional neural network named AlexNet and classified with several well-known classifiers. In the second model, after pre-processing the images, AlexNet is fine-tuned to perform both feature extraction and classification. Experiments were conducted on a dataset of 2820 images and confirm that the second model performs better than the first, achieving 100% classification accuracy.
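The first model's pipeline (a frozen pre-trained network used only as a feature extractor, with a separate classical classifier trained on top) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random projection stands in for AlexNet's convolutional trunk, and a nearest-centroid model stands in for the "well-known classifiers", so only the shapes of the pipeline are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images):
    """Stand-in for a frozen pre-trained CNN (e.g. AlexNet with its final
    layer removed): maps each flattened image to a fixed-length feature
    vector. A fixed random projection plays that role here."""
    proj = np.random.default_rng(42).standard_normal((images.shape[1], 64))
    return images @ proj

def fit_centroids(feats, labels):
    """Stand-in for a classical classifier (SVM, k-NN, ...): a
    nearest-centroid model trained on the extracted features."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, feats):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(feats - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy "images": two classes with different mean intensity.
X = np.concatenate([rng.normal(0.0, 1, (50, 256)),
                    rng.normal(1.5, 1, (50, 256))])
y = np.array([0] * 50 + [1] * 50)

feats = extract_features(X)       # frozen extractor, no fine-tuning
model = fit_centroids(feats, y)   # only the classifier is trained
acc = (predict(model, feats) == y).mean()
print(acc)
```

The second model differs only in that the extractor itself is fine-tuned end to end, so its weights are updated together with the classification head.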

2019 ◽  
Vol 158 ◽  
pp. 20-29 ◽  
Author(s):  
Aydin Kaya ◽  
Ali Seydi Keceli ◽  
Cagatay Catal ◽  
Hamdi Yalin Yalic ◽  
Huseyin Temucin ◽  
...  

2018 ◽  
Vol 8 (10) ◽  
pp. 1768 ◽  
Author(s):  
Abdelhak Belhi ◽  
Abdelaziz Bouras ◽  
Sebti Foufou

Cultural heritage represents a reliable medium for history and knowledge transfer. Cultural heritage assets are often exhibited in museums and heritage sites all over the world. However, many assets are poorly labeled, which decreases their historical value; if an asset's history is lost, its historical value is lost with it. Classifying and annotating overlooked or incomplete cultural assets increases their historical value and allows the discovery of various types of historical links. In this paper, we tackle the challenge of automatically classifying and annotating cultural heritage assets using their visual features as well as the metadata available at hand. Traditional approaches rely mainly on image data and machine-learning techniques to predict missing labels, yet visual data are often not the only information available. In this paper, we present a novel multimodal classification approach for cultural heritage assets that relies on a multitask neural network in which a convolutional neural network (CNN) is designed for visual feature learning and a regular neural network is used for textual feature learning. These networks are merged and trained using a shared loss. The combined networks rely on both image and textual features to achieve better asset classification. Initial tests on painting assets showed that our approach outperforms traditional CNNs that rely on images alone.
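The merge step of such a multimodal network can be sketched as simple late fusion: embeddings from the two branches are concatenated into one joint representation that a shared head scores. All weights below are random placeholders (in the paper both branches and the head are trained jointly under a single shared loss), so only the tensor shapes are meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two branches described in the abstract: a CNN
# embedding for the image and a regular-network embedding for the text.
img_feat = rng.standard_normal((4, 128))   # batch of 4 visual embeddings
txt_feat = rng.standard_normal((4, 32))    # matching textual embeddings

# Merge: concatenate both modalities into one joint representation,
# then score it with a shared classification head.
joint = np.concatenate([img_feat, txt_feat], axis=1)   # shape (4, 160)
W = rng.standard_normal((160, 10))                     # e.g. 10 asset classes
logits = joint @ W
pred = logits.argmax(axis=1)
print(joint.shape, pred.shape)
```

Because the loss is computed after the merge, gradients flow back into both branches, which is what lets the textual branch compensate when the visual features are ambiguous.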


2019 ◽  
Vol 9 (16) ◽  
pp. 3362 ◽  
Author(s):  
Shang Shang ◽  
Ling Long ◽  
Sijie Lin ◽  
Fengyu Cong

Zebrafish eggs are widely used in biological experiments to study the environmental and genetic influence on embryo development. Due to the high throughput of microscopic imaging, automated analysis of zebrafish egg microscopic images is in high demand. However, machine learning algorithms for zebrafish egg image analysis suffer from small, imbalanced training datasets and subtle inter-class differences. In this study, we developed an automated zebrafish egg microscopic image analysis algorithm based on a deep convolutional neural network (CNN). To tackle the problem of insufficient training data, the strategies of transfer learning and data augmentation were used. We also adopted the global average pooling technique to overcome the subtle phenotype differences between fertilized and unfertilized eggs. Experimental results of a five-fold cross-validation test showed that the proposed method yielded a mean classification accuracy of 95.0% and a maximum accuracy of 98.8%. The network also demonstrated higher classification accuracy and better convergence than conventional CNN methods. This study extends the deep learning technique to zebrafish egg phenotype classification and paves the way for automatic bright-field microscopic image analysis.
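Global average pooling itself is a one-line operation: each feature map is collapsed to its spatial mean, so the classifier sees one value per channel rather than a full grid of activations. A minimal numpy sketch (the `(N, C, H, W)` layout is an assumption, matching common deep learning conventions):

```python
import numpy as np

def global_average_pool(fmap):
    """Global average pooling: reduce each (H, W) feature map to its
    spatial mean, yielding one value per channel per sample."""
    return fmap.mean(axis=(2, 3))

# A batch of 2 samples, 3 channels, 4x4 spatial maps.
x = np.arange(2 * 3 * 4 * 4, dtype=float).reshape(2, 3, 4, 4)
pooled = global_average_pool(x)
print(pooled.shape)   # (2, 3)
```

Averaging over the whole map discards fine spatial detail, which is precisely why it helps here: the subtle fertilized/unfertilized differences are encoded channel-wise rather than at specific pixel positions, and the pooled representation has far fewer parameters to overfit.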


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Qian Zhang ◽  
Haigang Li ◽  
Yong Zhang ◽  
Ming Li

Since transfer learning can exploit knowledge from related domains to support the learning task in the current target domain, it reduces learning cost and improves learning efficiency compared with traditional learning. Focusing on the situation where sample data from the source and target domains have similar distributions, this paper proposes an instance transfer learning method based on multisource dynamic TrAdaBoost. The method makes full use of knowledge from multiple source domains to avoid negative transfer, and information conducive to the target learning task is extracted to train candidate classifiers. Theoretical analysis suggests that adding the dynamic factor improves the way weight entropy drifts from source to target instances, and that classification is more effective than with single-source transfer. Finally, experimental results show that the proposed algorithm achieves higher classification accuracy.
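The instance-reweighting idea underlying TrAdaBoost can be sketched in a few lines. This is a simplified single-source round (the paper's multisource dynamic variant adds a dynamic factor and draws from several source domains): misclassified source instances are down-weighted (they look unlike the target task), while misclassified target instances are up-weighted, boosting-style.

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, t, N):
    """One round of TrAdaBoost-style reweighting (simplified sketch).
    err_* are 0/1 arrays: 1 where the candidate classifier erred.
    t is the total number of boosting rounds, N the source sample count."""
    eps = np.average(err_tgt, weights=w_tgt / w_tgt.sum())
    eps = min(max(eps, 1e-10), 0.499)            # keep beta_t well defined
    beta = 1.0 / (1.0 + np.sqrt(2 * np.log(N) / t))   # source decay rate
    beta_t = eps / (1.0 - eps)                        # target growth rate
    w_src = w_src * beta ** err_src     # misclassified source: weight shrinks
    w_tgt = w_tgt * beta_t ** (-err_tgt)  # misclassified target: weight grows
    return w_src, w_tgt

w_src = np.ones(5)
w_tgt = np.ones(3)
err_src = np.array([1, 0, 1, 0, 0])
err_tgt = np.array([0, 1, 0])
w_src2, w_tgt2 = tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, t=10, N=5)
print(w_src2, w_tgt2)
```

Over rounds, weight mass therefore drifts from unhelpful source instances toward hard target instances; the paper's dynamic factor is designed to control exactly this drift when several source domains compete.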


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Mohammad Manthouri ◽  
Zhila Aghajari ◽  
Sheida Safary

Infectious diseases are among the top global issues, with negative impacts on health, the economy, and society as a whole. One of the most effective ways to detect these diseases is to analyse microscopic images of blood cells. Artificial intelligence (AI) techniques are now widely used to detect these blood cells and explore their structures. In recent years, deep learning architectures have been utilized as powerful tools for big data analysis. In this work, we present a deep neural network for processing microscopic images of blood cells. Processing these images is particularly important, as white blood cells and their structures are used to diagnose different diseases. In this research, we design and implement a reliable processing system for blood samples and classify five different types of white blood cells in microscopic images. We use the Gram-Schmidt algorithm for segmentation. For the classification of the different types of white blood cells, we combine the Scale-Invariant Feature Transform (SIFT) feature detection technique with a deep convolutional neural network. To evaluate our work, we tested our method on the LISC and WBCis databases, achieving segmentation accuracies of 95.84% and 97.33%, respectively. Our work illustrates that deep learning models can be promising for designing and developing reliable systems for microscopic image processing.
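The Gram-Schmidt process at the heart of the segmentation step is classical orthogonalization. The sketch below shows only that underlying step (how the paper applies it to color channels for segmentation is not reproduced here): each vector has its components along the previously accepted basis vectors removed, then is normalized.

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt orthonormalization of the rows of V.
    Near-linearly-dependent rows are skipped."""
    basis = []
    for v in V.astype(float):
        for b in basis:
            v = v - np.dot(v, b) * b      # remove the component along b
        n = np.linalg.norm(v)
        if n > 1e-12:                     # keep only independent directions
            basis.append(v / n)
    return np.array(basis)

Q = gram_schmidt(np.array([[3.0, 1.0], [2.0, 2.0]]))
print(np.round(Q @ Q.T, 6))   # identity: rows are orthonormal
```

In segmentation settings, projecting pixel vectors onto such an orthonormal basis suppresses the components shared with background colors, which makes the target cell class stand out.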


2021 ◽  
Vol 8 (3) ◽  
pp. 601
Author(s):  
Eko Prasetyo ◽  
Rani Purbaningtyas ◽  
Raden Dimas Adityo ◽  
Enrico Tegar Prabowo ◽  
Achmad Irfan Ferdiansyah

<p class="Abstrak">Fish is a source of animal protein in high demand among Indonesians; in a survey of preferred foodstuffs, milkfish ranked fourth among food ingredients. Milkfish in particular is one of the six fish most widely consumed, alongside tuna, mackerel, anchovy, tilapia, and catfish, so care when buying milkfish is a serious concern in selecting fresh fish. Detecting freshness by touching the fish's body can cause accidental damage, so freshness detection should be performed without touching the fish, using images of the condition of the eyes. In this research, we experimented with classifying milkfish freshness (very fresh versus not fresh) from the eyes using transfer learning with four CNNs: Xception, MobileNet V1, Resnet50, and VGG16. The results of two-class milkfish freshness classification on 154 images show that VGG16 achieves the best performance among the architectures, with a classification accuracy of 0.97. With higher accuracy than the other architectures, VGG16 is relatively more suitable for two-class milkfish freshness classification.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Fish, one source of animal protein, is a food in high demand among Indonesia's people. In a survey of food ingredients in demand, milkfish ranked fourth compared with other food ingredients. Milkfish in particular is one of the six fish most consumed by Indonesia's people, besides tuna, mackerel, anchovy, tilapia, and catfish, so care when buying is a serious concern in choosing fresh milkfish. Detection of freshness by touching the fish's body may cause unintended damage, so the fish's freshness should be detected without touching it, using an image of the eye. 
In this research, we conducted an experimental implementation of milkfish freshness classification (very fresh and not fresh) based on the eyes, using transfer learning from several CNNs: Xception, MobileNet V1, Resnet50, and VGG16. The experimental results of the classification of two milkfish freshness classes using 154 images show that VGG16 achieves the best performance compared with the other architectures, with a classification accuracy of 0.97. With higher accuracy than the other architectures, VGG16 is relatively more appropriate for classifying two classes of milkfish freshness.</em></p>


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1714
Author(s):  
JiWoong Park ◽  
SungChan Nam ◽  
HongBeom Choi ◽  
YoungEun Ko ◽  
Young-Bae Ko

This paper presents an improved ultra-wideband (UWB) line-of-sight (LOS)/non-line-of-sight (NLOS) identification scheme based on a hybrid of deep learning and transfer learning. Previous studies are limited in that their classification accuracy decreases significantly in unknown environments. To solve this problem, we propose a transfer-learning-based NLOS identification method for classifying the NLOS conditions of the UWB signal in an unmeasured environment. Both a multilayer perceptron and a convolutional neural network (CNN) are introduced as classifiers for NLOS conditions. We evaluate the proposed scheme in experiments in both measured and unmeasured environments, with channel data collected using a Decawave EVK1000 in two similar indoor office environments. In the unmeasured environment, the existing CNN method showed an accuracy of approximately 44%, but when the proposed scheme was applied to the CNN, accuracy rose to as much as 98%. Training with the proposed scheme was approximately 48 times faster than training the existing CNN. Compared with learning a new CNN from scratch in the unmeasured environment, the proposed scheme achieved approximately 10% higher accuracy and approximately five times faster training.
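The speedup comes from the standard fine-tuning pattern: the layers trained in the measured environment are frozen, and only the output layer is retrained on the unmeasured environment. The toy below illustrates that pattern with a two-layer numpy network (data, sizes, and the gradient-descent setup are illustrative, not the paper's model): the frozen layer's weights are provably untouched while the head's loss drops.

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.standard_normal((8, 16))          # "pretrained" layer, frozen
W2 = rng.standard_normal((16, 1)) * 0.1    # head, retrained on new data

X = rng.standard_normal((64, 8))                 # toy channel features
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # toy LOS/NLOS labels

def mse(W2):
    return float(((np.tanh(X @ W1) @ W2 - y) ** 2).mean())

W1_before, loss_before = W1.copy(), mse(W2)
H = np.tanh(X @ W1)           # frozen features: computed once, never updated
for _ in range(500):          # gradient descent on the head only
    grad = H.T @ (H @ W2 - y) / len(X)
    W2 -= 0.05 * grad
loss_after = mse(W2)
print(loss_after < loss_before, np.array_equal(W1, W1_before))
```

Because only the small head receives gradients (and the frozen features can be computed once), each training step is far cheaper than end-to-end training, which is the source of the 48x speedup the paper reports for its setting.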


2019 ◽  
Vol 10 (1) ◽  
pp. 42 ◽  
Author(s):  
Alexandros Arjmand ◽  
Constantinos T. Angelis ◽  
Vasileios Christou ◽  
Alexandros T. Tzallas ◽  
Markos G. Tsipouras ◽  
...  

Nonalcoholic fatty liver disease (NAFLD) is responsible for a wide range of pathological disorders. It is characterized by the prevalence of steatosis, which results in excessive accumulation of triglycerides in the liver tissue; at high rates, this can lead to partial or total occlusion of the organ. Nonalcoholic steatohepatitis (NASH), in contrast, is a progressive form of NAFLD that additionally involves the histological features of hepatocellular injury and inflammation. Since there is no approved pharmacotherapeutic solution for either condition, physicians and engineers are constantly in search of fast and accurate diagnostic methods. The proposed work introduces a fully automated classification approach that takes into consideration the high discrimination capability of four histological tissue alterations. It employs a deep supervised learning method, with a convolutional neural network (CNN) architecture achieving a classification accuracy of 95%. The classification capability of the new CNN model is compared with a pre-trained AlexNet model, a visual geometry group (VGG)-16 deep architecture, and a conventional multilayer perceptron (MLP) artificial neural network. The results show that the constructed model achieves better classification accuracy than VGG-16 (94%) and the MLP (90.3%), while AlexNet emerges as the most accurate classifier (97%).


2020 ◽  
Vol 12 (11) ◽  
pp. 1780 ◽  
Author(s):  
Yao Liu ◽  
Lianru Gao ◽  
Chenchao Xiao ◽  
Ying Qu ◽  
Ke Zheng ◽  
...  

Convolutional neural networks (CNNs) have been widely applied to hyperspectral imagery (HSI) classification. However, their classification performance can be limited by the scarcity of labeled data for training and validation. In this paper, we propose a novel lightweight shuffled group convolutional neural network (SG-CNN) to achieve efficient training with a limited training dataset in HSI classification. SG-CNN consists of SG conv units that apply conventional and atrous convolution in different groups, followed by a channel shuffle operation and a shortcut connection. In this way, SG-CNNs have fewer trainable parameters, yet can still be trained accurately and efficiently with fewer labeled samples. Transfer learning between different HSI datasets is also applied to the SG-CNN to further improve classification accuracy. To evaluate the effectiveness of SG-CNNs for HSI classification, experiments were conducted on three public HSI datasets, with pretraining on HSIs from different sensors. SG-CNNs of different levels of complexity were tested, and their classification results were compared with fine-tuned ShuffleNet2, ResNeXt, and their original counterparts. The experimental results demonstrate that SG-CNNs achieve competitive classification performance when labeled training data are scarce, while remaining efficient to train.
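The channel shuffle operation that follows each group convolution is a cheap reshape-transpose-reshape on the channel axis, as popularized by ShuffleNet. A minimal numpy sketch (using the `(N, C, H, W)` layout as an assumption):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle: split the channel axis into `groups`, transpose,
    and flatten back, so the next group convolution sees channels drawn
    from every group instead of only its own."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(6).reshape(1, 6, 1, 1)      # channels labeled 0..5
print(channel_shuffle(x, 2).ravel())      # interleaved channel order
```

Without this interleaving, stacked group convolutions would keep each group's information isolated; the shuffle restores cross-group information flow at essentially zero parameter cost, which is what lets SG conv units stay lightweight.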


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1079
Author(s):  
Abhishek Varshney ◽  
Samit Kumar Ghosh ◽  
Sibasankar Padhy ◽  
Rajesh Kumar Tripathy ◽  
U. Rajendra Acharya

The automated classification of cognitive workload tasks based on the analysis of multi-channel EEG signals is vital for human–computer interface (HCI) applications. In this paper, we propose a computerized approach for categorizing mental-arithmetic-based cognitive workload tasks using multi-channel electroencephalogram (EEG) signals. The approach evaluates various entropy features, namely approximate entropy, sample entropy, permutation entropy, dispersion entropy, and slope entropy, from each channel of the EEG signal. These features were fed to various recurrent neural network (RNN) models, such as long short-term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU) networks, for the automated classification of mental-arithmetic-based cognitive workload tasks. Two cognitive workload classification strategies are considered in this work: bad mental arithmetic calculation (BMAC) vs. good mental arithmetic calculation (GMAC), and before mental arithmetic calculation (BFMAC) vs. during mental arithmetic calculation (DMAC). The approach was evaluated using the publicly available mental arithmetic task-based EEG database. The results reveal that the proposed approach obtained classification accuracy values of 99.81%, 99.43%, and 99.81% using the LSTM, BLSTM, and GRU-based RNN classifiers, respectively, for the BMAC vs. GMAC strategy with all entropy features and 10-fold cross-validation (CV). The slope entropy features combined with each RNN-based model obtained higher classification accuracy than the other entropy features for the BMAC vs. GMAC task. We obtained average classification accuracy values of 99.39%, 99.44%, and 99.63% for the BFMAC vs. DMAC task using the LSTM, BLSTM, and GRU classifiers with all entropy features and a hold-out CV scheme. Our automated mental arithmetic task system is ready to be tested with more databases for real-world applications.
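One of the listed features, permutation entropy, is straightforward to compute: it is the Shannon entropy of the distribution of ordinal patterns (rank orderings) in sliding windows of the signal. A minimal sketch, normalized to [0, 1] (the `order`/`delay` defaults are illustrative, not those used in the paper):

```python
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: Shannon entropy
    of the ordinal-pattern distribution, divided by log2(order!)."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1   # rank pattern of window
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

ramp = np.arange(100.0)                                   # fully predictable
noise = np.random.default_rng(0).standard_normal(1000)    # unpredictable
print(permutation_entropy(ramp), permutation_entropy(noise))
```

A monotone ramp yields entropy 0 (one pattern dominates) while white noise approaches 1, which is why such entropy features discriminate well between more and less regular EEG dynamics.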

