Classification of Red Blood Cells in Sickle Cell Anemia Using Deep Convolutional Neural Network

Author(s):  
Laith Alzubaidi ◽  
Omran Al-Shamma ◽  
Mohammed A. Fadhel ◽  
Laith Farhan ◽  
Jinglan Zhang
2017 ◽  
Vol 13 (10) ◽  
pp. e1005746 ◽  
Author(s):  
Mengjia Xu ◽  
Dimitrios P. Papageorgiou ◽  
Sabia Z. Abidi ◽  
Ming Dao ◽  
Hong Zhao ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 427 ◽  
Author(s):  
Laith Alzubaidi ◽  
Mohammed A. Fadhel ◽  
Omran Al-Shamma ◽  
Jinglan Zhang ◽  
Ye Duan

Sickle cell anemia, also called sickle cell disease (SCD), is a hematological disorder that causes occlusion in blood vessels, leading to painful episodes and even death. The key function of red blood cells (erythrocytes) is to supply all parts of the human body with oxygen. When sickle cell anemia affects red blood cells (RBCs), they take on a crescent or sickle shape. This abnormal shape makes it difficult for sickle cells to move through the bloodstream, decreasing oxygen flow. Precise classification of RBCs is the first step toward accurate diagnosis, which aids in evaluating the severity of sickle cell anemia. Manual classification of erythrocytes requires immense time, and errors may be made throughout the classification stage. Traditional computer-aided techniques for erythrocyte classification are based on handcrafted features, so their performance relies on the selected features, and they are very sensitive to variations in size, color, and shape. Microscopy images of erythrocytes, however, contain complex shapes of different sizes. To this end, this research proposes lightweight deep learning models that classify erythrocytes into three classes: circular (normal), elongated (sickle cells), and other blood content. These models differ in the number of layers and learnable filters. The available datasets of red blood cells with sickle cell disease are very small for training deep learning models; therefore, addressing the lack of training data is the main aim of this paper. To tackle this issue and optimize performance, the transfer learning technique is utilized. Transfer learning does not significantly improve performance on medical imaging tasks when the source domain is completely different from the target domain, and in some cases it can even degrade performance. Hence, we applied same-domain transfer learning, unlike other methods that used the ImageNet dataset for transfer learning. To minimize overfitting, we utilized several data augmentation techniques. Our model obtained state-of-the-art performance and outperformed the latest methods, achieving an accuracy of 99.54% on its own and 99.98% when combined with a multiclass SVM classifier on the erythrocytesIDB dataset, and 98.87% on the collected dataset.
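The augmentation step mentioned in the abstract can be illustrated with a minimal sketch; the specific transforms below (flips and 90-degree rotations via NumPy) are common choices and are assumptions, not the exact pipeline used in the paper:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate simple augmented variants of a single-cell image (H, W).

    A minimal sketch of common augmentations: horizontal/vertical flips
    and 90-degree rotations. Real pipelines typically add shifts, zooms,
    and brightness changes as well.
    """
    variants = [image]
    variants.append(np.fliplr(image))   # horizontal flip
    variants.append(np.flipud(image))   # vertical flip
    for k in (1, 2, 3):                 # 90/180/270-degree rotations
        variants.append(np.rot90(image, k))
    return variants

cell = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
augmented = augment(cell)
print(len(augmented))  # 6 variants from one input image
```

Each training image yields several geometric variants, which is one way a small dataset such as erythrocytesIDB can be stretched to reduce overfitting.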


SinkrOn ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 199-207
Author(s):  
Mawaddah Harahap ◽  
Jefferson Jefferson ◽  
Surya Barti ◽  
Suprianto Samosir ◽  
Christi Andika Turnip

Malaria is a disease caused by Plasmodium parasites, which attack red blood cells. Malaria can be diagnosed by examining the patient's red blood cells under a microscope. The Convolutional Neural Network (CNN) is a rapidly developing deep learning method that is often used in image classification. The CNN training process usually requires considerable resources, which is one of its weaknesses. In this study, the CNN architectures used for the classification of red blood cell images are LeNet-5 and DRNet. The data used are segmented images of red blood cells and are secondary data. Before training, data pre-processing and data augmentation were carried out on the dataset. The LeNet-5 and DRNet models have 4 and 7 layers, respectively, and their test accuracies were 95% and 97.3%. From the test results, the LeNet-5 model was found to be more suitable for red blood cell classification: with only 4 layers, it reduces the resources needed to perform classification compared to previous studies while achieving the same accuracy.
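The resource argument above rests on layer count, and the spatial dimensions flowing through such a network follow the standard convolution output-size formula; a short sketch checks it for the classic LeNet-5 layout (the layer sizes here follow the original LeNet-5, not necessarily the exact configuration in this study):

```python
def conv_out(n: int, k: int, stride: int = 1, pad: int = 0) -> int:
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

# Classic LeNet-5 on a 32x32 input: conv5 -> pool2 -> conv5 -> pool2
n = 32
n = conv_out(n, 5)            # C1 convolution: 28x28
n = conv_out(n, 2, stride=2)  # S2 pooling:     14x14
n = conv_out(n, 5)            # C3 convolution: 10x10
n = conv_out(n, 2, stride=2)  # S4 pooling:      5x5
print(n)  # 5
```

Tracking these shapes is how one verifies that a shallow 4-layer stack still reduces a cell image to a small feature map before the final classifier.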


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. A new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was trained on this collection via transfer learning. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.
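Grad-CAM, used in the analysis above, weights each convolutional feature map by a per-channel importance score and keeps only the positive evidence; a minimal NumPy sketch of that computation (the arrays here are toy stand-ins, and in the real method the weights come from global-average-pooled class gradients):

```python
import numpy as np

def grad_cam(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Minimal Grad-CAM heatmap.

    activations: feature maps of shape (C, H, W) from the last conv layer.
    weights:     per-channel importance of shape (C,).
    """
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))   # toy activations: 8 channels, 7x7
w = rng.random(8)              # toy channel weights
heatmap = grad_cam(acts, w)
print(heatmap.shape)  # (7, 7)
```

Upsampled over the input image, such a heatmap shows whether the network attends to the beam pattern itself, which is what the Grad-CAM check in the article confirms.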


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2648
Author(s):  
Muhammad Aamir ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
Muhammad Zeeshan Azam ◽  
...  

Natural disasters not only disturb the human ecological system but also destroy property and critical infrastructure, and can even lead to permanent changes in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied by researchers to detect and classify natural disasters and thereby reduce ecosystem losses, but detection still faces issues due to the complex and imbalanced structure of the images. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: the Block-I convolutional neural network (B-I CNN) detects the occurrence of a disaster, and the Block-II convolutional neural network (B-II CNN) classifies the natural disaster's intensity type, using different filters and parameters. The model is tested on 4428 natural images, and performance is reported as the following statistics: sensitivity (SE), 97.54%; specificity (SP), 98.22%; accuracy rate (AR), 99.92%; precision (PRE), 97.79%; and F1-score (F1), 97.97%. The overall accuracy of the whole model is 99.92%, which is competitive with state-of-the-art algorithms.
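The two-block design above amounts to a cascade: Block I decides whether a disaster is present, and only then does Block II assign an intensity class. A structural sketch with stand-in classifiers (both functions below are hypothetical placeholders for the trained CNNs, not the paper's models):

```python
from typing import Callable

def cascade(image,
            detect: Callable,     # Block-I stand-in: True if a disaster is present
            classify: Callable):  # Block-II stand-in: returns an intensity label
    """Two-stage pipeline sketch: detection first, intensity classification second."""
    if not detect(image):
        return "no_disaster"
    return classify(image)

# Toy stand-ins that treat an "image" as a single severity value.
detect = lambda img: img > 0.5
classify = lambda img: "severe" if img > 0.8 else "moderate"

print(cascade(0.3, detect, classify))  # no_disaster
print(cascade(0.9, detect, classify))  # severe
```

Cascading keeps the (cheaper) detector in front, so the intensity classifier only runs on images that actually contain a disaster.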

