Benchmarking of Shallow Learning and Deep Learning Techniques with Transfer Learning for Neurodegenerative Disease Assessment Through Handwriting

2021, pp. 7-20
Author(s): Vincenzo Dentamaro, Paolo Giglio, Donato Impedovo, Giuseppe Pirlo
2022, pp. 1-12
Author(s): Amin Ul Haq, Jian Ping Li, Samad Wali, Sultan Ahmad, Zafar Ali, et al.

Artificial intelligence (AI)-based computer-aided diagnostic (CAD) systems can effectively diagnose critical diseases. AI-based detection of breast cancer (BC) from image data is more efficient and accurate than diagnosis by professional radiologists alone. However, existing AI-based BC diagnosis methods suffer from low prediction accuracy and high computation time, and for these reasons medical professionals have not adopted them in E-Healthcare for effective BC diagnosis. Effective breast cancer diagnosis requires incorporating advanced AI techniques into the diagnostic process. In this work, we propose a deep learning-based diagnosis method (StackBC) to detect breast cancer at an early stage for effective treatment and recovery. In particular, we incorporate deep learning models, including a convolutional neural network (CNN), long short-term memory (LSTM), and a gated recurrent unit (GRU), for the classification of invasive ductal carcinoma (IDC). Additionally, data augmentation and transfer learning techniques are used for data set balancing and effective model training. To further improve predictive performance, we apply a stacking technique. Among the three base classifiers (CNN, LSTM, GRU), the GRU performed best, so it was selected as the meta-classifier to distinguish between non-IDC and IDC breast images. A hold-out method was used, splitting the data set into 90% for training and 10% for testing. Standard model evaluation metrics were computed to assess performance, and a breast histology image data set was used to analyze the efficacy of the model.
Our experimental results demonstrate that the proposed StackBC method achieved 99.02% accuracy and a 100% area under the receiver operating characteristic curve (AUC-ROC), improving on state-of-the-art methods. Given its high performance, we recommend the proposed method for early recognition of breast cancer in E-Healthcare.
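The stacking idea described above (base classifiers whose predictions feed a meta-classifier) can be sketched with scikit-learn; the paper's CNN/LSTM/GRU base learners and GRU meta-classifier are replaced here by simple stand-in estimators on synthetic data, so this only illustrates the ensemble structure, not the authors' models.

```python
# Sketch of stacked generalization with scikit-learn stand-ins for the
# paper's CNN/LSTM/GRU base learners and GRU meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
# 90%/10% hold-out split, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

base_learners = [
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
]
# The meta-classifier is trained on the base learners' out-of-fold predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 2))
```

The key design point is that the meta-classifier never sees raw inputs, only the base learners' predictions, which lets it learn how to weight and combine them.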


2020, Vol 12 (10), pp. 1581
Author(s): Daniel Perez, Kazi Islam, Victoria Hill, Richard Zimmerman, Blake Schaeffer, et al.

Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification from 8-band satellite imagery. Specifically, we implement a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification; if it is, the model then quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach that transfers knowledge from deep models trained at one location to perform seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images of coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning for seagrass quantification significantly improved results compared to directly applying the deep models to new locations.
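The cross-location transfer idea (reuse a model trained at one site as the starting point for fine-tuning at another) can be sketched with scikit-learn's `warm_start` mechanism; the MLP regressor here is only a stand-in for the paper's deep DCN/CNN models, and the "site" data is invented.

```python
# Sketch of cross-location transfer, assuming an MLP regressor as a
# stand-in for the paper's deep models; synthetic per-site data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_site(n, shift):
    # Toy stand-in for 8-band patch features and seagrass density labels.
    X = rng.normal(shift, 1.0, size=(n, 8))
    y = X.sum(axis=1) * 0.1
    return X, y

X_a, y_a = make_site(400, 0.0)   # source location: plenty of data
X_b, y_b = make_site(50, 0.5)    # target location: little data

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300,
                     random_state=0, warm_start=True)  # keep weights between fits
model.fit(X_a, y_a)              # pre-train at the source location
model.max_iter = 50
model.fit(X_b, y_b)              # fine-tune at the target location
print(model.predict(X_b[:3]).shape)
```

With `warm_start=True`, the second `fit` call continues from the source-location weights rather than re-initializing, which is the essence of the transfer strategy.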


Author(s): Arshia Rehman, Saeeda Naz, Ahmed Khan, Ahmad Zaib, Imran Razzak

Abstract
Background: Coronavirus disease (COVID-19) is an infectious disease caused by a new virus. Its exponential growth is not only threatening lives but also impacting businesses and disrupting travel around the world.
Aim: The aim of this work is to develop an efficient diagnosis of COVID-19 by differentiating it from viral pneumonia, bacterial pneumonia, and healthy cases using deep learning techniques.
Method: We used pre-trained knowledge to improve diagnostic performance through transfer learning and compared the performance of different CNN architectures.
Results: Evaluation with 10-fold cross-validation showed that we achieved state-of-the-art performance, with an overall accuracy of 98.75% across CT and X-ray cases combined.
Conclusion: Quantitative evaluation showed high accuracy for the automatic diagnosis of COVID-19. The pre-trained deep learning models developed in this study could be used for early screening of coronavirus; however, larger CT and X-ray datasets are needed to develop a reliable application.
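The 10-fold evaluation protocol mentioned above can be sketched with scikit-learn; the classifier and data here are simple stand-ins for the paper's CNN architectures and medical images.

```python
# Sketch of 10-fold cross-validated evaluation, assuming a logistic
# regression stand-in for the paper's CNN classifiers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(len(scores), round(scores.mean(), 2))  # one accuracy score per fold
```

Each sample is used for testing exactly once, so the mean of the per-fold scores is a less optimistic estimate of generalization than a single train/test split.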


2022, Vol 30 (1), pp. 641-654
Author(s): Ali Abd Almisreb, Nooritawati Md Tahir, Sherzod Turaev, Mohammed A. Saleh, Syed Abdul Mutalib Al Junid

Arabic handwriting differs slightly from the handwriting of other languages; hence it is possible to distinguish handwriting written by native writers from that of non-native writers. However, classifying Arabic handwriting is challenging for traditional text recognition algorithms. Thus, this study evaluated and validated the use of deep transfer learning models to overcome such issues. Seven deep transfer learning models, namely AlexNet, GoogleNet, ResNet18, ResNet50, ResNet101, VGG16, and VGG19, were used to determine the most suitable model for classifying handwritten images as written by native or non-native writers. Two datasets comprising Arabic handwriting images were used to evaluate and validate the developed models, with each model's output classified as either native or foreign (non-native) writing. Training and validation were conducted on both the original and augmented datasets. Results showed that the GoogleNet model achieved the highest accuracy on both datasets: 93.2% on the original data and 95.5% on the augmented data in classifying native handwriting.
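Dataset augmentation of the kind used above can be sketched with simple NumPy transforms; real pipelines would typically use a library such as torchvision or Keras preprocessing, and the image batch here is synthetic.

```python
# Sketch of image data augmentation via flips and 90-degree rotations,
# assuming images as NumPy arrays; the input batch here is synthetic.
import numpy as np

def augment(batch):
    """Return the batch plus horizontally flipped and rotated copies."""
    flipped = batch[:, :, ::-1]                  # horizontal flip
    rotated = np.rot90(batch, k=1, axes=(1, 2))  # 90-degree rotation
    return np.concatenate([batch, flipped, rotated], axis=0)

images = np.random.default_rng(0).random((10, 32, 32))  # 10 grayscale images
augmented = augment(images)
print(augmented.shape)  # three times as many training samples
```

Note that for handwriting, transforms must preserve class identity; a horizontal flip may be inappropriate for script images, so real augmentation choices are task-specific.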


2022, Vol 12 (2), pp. 622
Author(s): Saadman Sakib, Kaushik Deb, Pranab Kumar Dhar, Oh-Jin Kwon

The pedestrian attribute recognition task is becoming more popular daily because of its significant role in surveillance scenarios. With recent technological advances, deep learning has come to the forefront of computer vision, and previous works have applied it in different ways to recognize pedestrian attributes. The results are satisfactory, but there is still scope for improvement. The transfer learning technique is becoming more popular because it reduces computation cost and mitigates data scarcity. This paper proposes a framework for recognizing pedestrian attributes in surveillance scenarios. A Mask R-CNN object detector extracts the pedestrians. Additionally, we applied transfer learning to different CNN architectures, i.e., Inception ResNet v2, Xception, ResNet 101 v2, and ResNet 152 v2. The main contribution of this paper is fine-tuning the ResNet 152 v2 architecture by freezing different sets of layers: the last 4, 8, 12, 14, or 20 layers, none, or all of them. Moreover, a data balancing technique, oversampling, is applied to resolve the class imbalance problem of the dataset, and its usefulness is analyzed in this paper. Our proposed framework outperforms state-of-the-art methods, providing 93.41% mA and 89.24% mA on the RAP v2 and PARSE100K datasets, respectively.
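The oversampling step mentioned above can be sketched by resampling minority-class examples with replacement; the class counts and features here are invented for illustration.

```python
# Sketch of random oversampling to balance classes, assuming simple
# NumPy arrays; the imbalanced toy dataset here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = np.array([0] * 90 + [1] * 10)  # 90/10 class imbalance

minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=80, replace=True)  # duplicate minority rows
X_bal = np.concatenate([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
print(np.bincount(y_bal))  # both classes now have 90 samples
```

Oversampling should be applied only to the training split; duplicating samples before the train/test split leaks copies of test examples into training.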


2020, Vol 3 (2), pp. 20
Author(s): Aliyu Abubakar, Mohammed Ajuji, Ibrahim Usman Yahya

While visual assessment is the standard technique for burn evaluation, computer-aided diagnosis is increasingly sought due to the high number of incidences globally. Patients face challenges including, but not limited to, a shortage of experienced clinicians, limited access to healthcare facilities, and high diagnostic costs. A number of studies have used machine learning to discriminate burnt from healthy skin, leaving an important gap unaddressed: whether burns and related skin injuries can be effectively discriminated using machine learning techniques. Therefore, in this paper we use transfer learning, leveraging pre-trained deep learning models to compensate for a deficient dataset, to discriminate two classes of skin injuries: burnt skin and injured skin. Experiments were extensively conducted using three state-of-the-art pre-trained deep learning models, ResNet50, ResNet101, and ResNet152, for image pattern extraction via two transfer learning strategies: a fine-tuning approach, in which the dense and classification layers were modified and trained on features extracted by the base layers; and a second approach, in which a support vector machine (SVM) replaced the top layers of the pre-trained models and was trained on off-the-shelf features from the base layers. Our proposed approach records near-perfect classification accuracy of approximately 99.9% in categorizing burnt and injured skin.
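The second strategy above (an SVM trained on features from a frozen base network) can be sketched with scikit-learn; a fixed random projection stands in for the frozen ResNet base layers, and the data and labels are synthetic.

```python
# Sketch of "off-the-shelf features + SVM": a frozen feature extractor
# (a fixed random projection standing in for ResNet base layers) feeds
# an SVM classifier; the image data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_raw = rng.random((200, 64))               # stand-in for flattened images
y = (X_raw.mean(axis=1) > 0.5).astype(int)  # toy "burnt vs injured" labels

W = rng.normal(size=(64, 16))               # frozen extractor: never trained
features = np.tanh(X_raw @ W)               # "off-the-shelf" features

X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)     # only the SVM head is trained
print(round(clf.score(X_te, y_te), 2))
```

The design point is that the extractor's weights stay fixed, so only the lightweight SVM needs training, which suits small datasets.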


Author(s): Aliyu Abubakar, Mohammed Ajuji, Ibrahim Usman Yahya

While visual assessment is the standard technique for burn evaluation, computer-aided diagnosis is increasingly sought due to the high number of incidences globally. Patients face challenges including, but not limited to, a shortage of experienced clinicians, limited access to healthcare facilities, and high diagnostic costs. A number of studies have used machine learning to discriminate burnt from healthy skin, leaving an important gap unaddressed: whether burns and related skin injuries can be effectively discriminated using machine learning techniques. Therefore, we use pre-trained deep learning models, because the dataset is insufficient to train a new model from scratch. Experiments were extensively conducted using three state-of-the-art pre-trained deep learning models, ResNet50, ResNet101, and ResNet152, for image pattern extraction via two transfer learning strategies: a fine-tuning approach, in which the dense and classification layers were modified and trained on features extracted by the base layers; and a second approach, in which a support vector machine (SVM) replaced the top layers of the pre-trained models and was trained on off-the-shelf features from the base layers. Our proposed approach records near-perfect classification accuracy of approximately 99.9%.


2021, Vol 9 (1), pp. 115
Author(s): Faisal Dharma Adhinata, Diovianto Putra Rakhmadani, Merlinda Wibowo, Akhmad Jayadi

Wearing a face mask in public places is an obligation for everyone because of the COVID-19 pandemic, which continues to claim victims. Indonesia introduced the 3M policy, one element of which is wearing masks to prevent coronavirus transmission. Several researchers have developed masked/non-masked face detection systems, some using deep learning techniques to classify a face as masked or non-masked. Previous research used the MobileNetV2 transfer learning model, which yielded an F-measure below 0.9; this made the detection system insufficiently accurate. In this research, we propose a model with more parameters, DenseNet201, which has five times as many parameters as MobileNetV2. Results over up to 30 training epochs show that the DenseNet201 model achieves 99% accuracy on the training data. When tested on video data, the DenseNet201 model produced an F-measure of 0.98, while the MobileNetV2 model produced only 0.67. These results show that the masked/non-masked face detection system is more accurate with the DenseNet201 model.
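The F-measure reported above is the harmonic mean of precision and recall; a minimal computation, with invented prediction vectors, looks like this:

```python
# Sketch of computing the F-measure (F1) for a masked/non-masked
# classifier; the label vectors here are invented for illustration.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = masked, 0 = non-masked
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]  # one missed mask, one false alarm

p = precision_score(y_true, y_pred)  # 3 true positives / 4 predicted positives
r = recall_score(y_true, y_pred)     # 3 true positives / 4 actual positives
f1 = f1_score(y_true, y_pred)        # harmonic mean of precision and recall
print(p, r, round(f1, 2))            # -> 0.75 0.75 0.75
```

Because the harmonic mean punishes imbalance, the F-measure stays low unless precision and recall are both high, which is why it is a stricter summary than accuracy for detection systems.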


2021, Vol 11 (1)
Author(s): Dandi Yang, Cristhian Martinez, Lara Visuña, Hardev Khandhar, Chintan Bhatt, et al.

Abstract
The main purpose of this work is to investigate and compare several deep learning-enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. We used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to find the best architecture, pre-processing, and training parameters for the models largely automatically. Accuracy and F1-score were both above 96% in the diagnosis of COVID-19 from CT-scan images. In addition, we applied transfer learning techniques to overcome the insufficient data and improve the training time. Binary and multi-class classification of X-ray images was performed using an enhanced VGG16 deep transfer learning architecture, which achieved a high accuracy of 99% in detecting COVID-19 and pneumonia from X-ray images. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods achieve better results for COVID-19 diagnosis than related methods in the literature. In our opinion, our work can help virologists and radiologists make better and faster diagnoses in the struggle against the COVID-19 outbreak.

