Chapter 11 Applications of conventional machine learning and deep learning for automation of diagnosis: case study

Author(s):  
Roopa B. Hegde ◽  
Vidya Kudva ◽  
Keerthana Prasad ◽  
Brij Mohan Singh ◽  
Shyamala Guruvare
2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate emotions in oral communication has played an important role in emotion-based studies, and various algorithms have been proposed to classify emotion types. However, there is no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than recorded in real-world scenarios. The predicted emotion therefore lacks an important attribute, authenticity: whether the emotion is actual or simulated. In this research work, we have developed a hybrid convolutional neural network algorithm based on transfer learning and style transfer that classifies both the emotion and its fidelity. The model is trained on features extracted from a dataset that contains simulated as well as actual utterances. We have compared the developed algorithm with conventional machine learning and deep learning techniques on metrics including accuracy, precision, recall, and F1 score, and it performs markedly better than both. The research aims to dive deeper into human emotion and build a model that understands it as humans do, achieving precision, recall, and F1 scores of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively.
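The abstract above reports per-task precision, recall, and F1 scores. For reference, these metrics follow directly from true-positive, false-positive, and false-negative counts; the sketch below computes them one-vs-rest for a single class. The "actual"/"acted" labels and the toy data are illustrative assumptions, not the paper's dataset:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Compute precision, recall and F1 for one class (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy authenticity labels: was the utterance genuine ("actual") or staged ("acted")?
y_true = ["actual", "actual", "acted", "acted", "actual"]
y_pred = ["actual", "acted", "acted", "acted", "actual"]
p, r, f = precision_recall_f1(y_true, y_pred, positive="actual")
# p = 1.0, r ≈ 0.667, f = 0.8
```

Averaging these per-class values (macro or weighted) yields the multi-class figures reported in papers such as this one.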


Author(s):  
Nourhan Mohamed Zayed ◽  
Heba A. Elnemr

Deep learning (DL) is a special type of machine learning that attains great power and flexibility by learning to represent raw input data as a nested hierarchy of concepts and representations. DL models consist of more layers than conventional machine learning models, permitting higher levels of abstraction and improved prediction from data: more abstract representations are computed in terms of less abstract ones. The goal of this chapter is to present an intensive survey of the existing literature on DL techniques of recent years, especially in the field of medical image analysis. All of these techniques and algorithms have their strengths and limitations. Thus, the chapter analyzes the various techniques and transformations previously presented in the literature for the design and application of DL methods from a medical image analysis perspective. The authors provide future research directions in the DL area, set out trends, and identify challenges in the medical imaging field. Furthermore, as the number of medical applications demanding such methods increases, extended study and investigation of DL becomes increasingly important.


2020 ◽  
Vol 12 (12) ◽  
pp. 5074
Author(s):  
Jiyoung Woo ◽  
Jaeseok Yun

Spam posts in web forum discussions cause user inconvenience and lower the value of the web forum as an open source of user opinion. Since the importance of a web post is evaluated in terms of the number of involved authors, such noise distorts opinion analysis by adding unnecessary data. In this work, an automatic detection model for spam posts in web forums using both conventional machine learning and deep learning is proposed. To obtain labels for automatically differentiating normal posts from spam, evaluators were first asked to identify spam posts. To construct the machine learning-based model, text features were extracted from post content using linguistically informed text mining techniques, and supervised learning was performed to distinguish content noise from normal posts. For the deep learning model, raw text both including and excluding special characters was utilized. A comparative analysis of deep neural networks using two recurrent neural network (RNN) models, the simple RNN and the long short-term memory (LSTM) network, was also performed. Furthermore, the proposed model was applied to two web forums. The experimental results indicate that the deep learning model affords significant improvements in accuracy over conventional machine learning based on text features. The accuracy of the proposed model using LSTM reaches 98.56%, and the precision and recall of the noise class reach 99% and 99.53%, respectively.
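The machine-learning branch of such a pipeline starts from hand-crafted lexical features of each post. The sketch below extracts a few plausible ones (post length, special-character ratio, uppercase ratio, URL count); this particular feature set is an illustrative assumption, not the study's actual feature list:

```python
import re

def lexical_features(post: str) -> dict:
    """Extract simple lexical features from a forum post.

    The features here (token count, special-character ratio, uppercase
    ratio, URL count) are illustrative; a real spam detector would use
    a richer, validated feature set.
    """
    tokens = post.split()
    n_chars = max(len(post), 1)
    specials = sum(1 for c in post if not c.isalnum() and not c.isspace())
    uppers = sum(1 for c in post if c.isupper())
    urls = len(re.findall(r"https?://\S+", post))
    return {
        "n_tokens": len(tokens),
        "special_ratio": specials / n_chars,
        "upper_ratio": uppers / n_chars,
        "n_urls": urls,
    }

feats = lexical_features("BUY NOW!!! http://spam.example cheap pills!!!")
```

Vectors like these would then be fed to a supervised classifier, while the deep-learning branch consumes the raw character or token sequence directly.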


2019 ◽  
Vol 9 (21) ◽  
pp. 4604 ◽  
Author(s):  
Larabi-Marie-Sainte ◽  
Aburahmah ◽  
Almohaini ◽  
Saba

Diabetes is one of the most common diseases worldwide. Many Machine Learning (ML) techniques have been utilized to predict diabetes over the last couple of years, and the increasing complexity of the problem has inspired researchers to explore the robust set of Deep Learning (DL) algorithms. The highest accuracy achieved so far was 95.1%, by a combined CNN-LSTM model. Even though numerous ML algorithms have been applied to this problem, a set of classifiers has rarely or never been used, so it is of interest to determine their performance in predicting diabetes. Moreover, no recent survey has reviewed and compared the performance of all the proposed ML and DL techniques, in addition to combined models. This article surveys all ML- and DL-based diabetes prediction studies published in the last six years. In addition, a study was conducted to apply the rarely used and unused ML classifiers to the Pima Indian Dataset and analyze their performance. The classifiers obtained accuracies of 68%–74%. The recommendation is to use these classifiers in diabetes prediction and to enhance them by developing combined models.
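As an illustration of evaluating a simple classifier on tabular diabetes-style data, the sketch below implements k-nearest-neighbours from scratch. Both the choice of k-NN and the toy "glucose, BMI"-style records are assumptions for demonstration; they are not necessarily among the paper's classifiers or its data:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy (glucose, BMI)-style vectors; labels: diabetic (1) / non-diabetic (0)
train_X = [(85, 26.6), (183, 23.3), (89, 28.1), (137, 43.1), (116, 25.6)]
train_y = [0, 1, 0, 1, 0]
pred = knn_predict(train_X, train_y, (150, 35.0), k=3)
# pred = 1
```

In practice one would standardize the features and cross-validate k; the accuracy range reported in the abstract comes from exactly this kind of held-out evaluation.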


2021 ◽  
Vol 7 (4) ◽  
pp. 65
Author(s):  
Daniel Silva ◽  
Armando Sousa ◽  
Valter Costa

Object recognition represents the ability of a system to identify objects, humans or animals in images. Within this domain, this work presents a comparative analysis among different classification methods aiming at Tactode tile recognition. The covered methods include: (i) machine learning with HOG and SVM; (ii) deep learning with CNNs such as VGG16, VGG19, ResNet152, MobileNetV2, SSD and YOLOv4; (iii) matching of handcrafted features with SIFT, SURF, BRISK and ORB; and (iv) template matching. A dataset was created to train learning-based methods (i and ii), and with respect to the other methods (iii and iv), a template dataset was used. To evaluate the performance of the recognition methods, two test datasets were built: tactode_small and tactode_big, which consisted of 288 and 12,000 images, holding 2784 and 96,000 regions of interest for classification, respectively. SSD and YOLOv4 were the worst methods for their domain, whereas ResNet152 and MobileNetV2 showed that they were strong recognition methods. SURF, ORB and BRISK demonstrated great recognition performance, while SIFT was the worst of this type of method. The methods based on template matching attained reasonable recognition results, falling behind most other methods. The top three methods of this study were: VGG16 with an accuracy of 99.96% and 99.95% for tactode_small and tactode_big, respectively; VGG19 with an accuracy of 99.96% and 99.68% for the same datasets; and HOG and SVM, which reached an accuracy of 99.93% for tactode_small and 99.86% for tactode_big, while at the same time presenting average execution times of 0.323 s and 0.232 s on the respective datasets, being the fastest method overall. This work demonstrated that VGG16 was the best choice for this case study, since it minimised the misclassifications for both test datasets.
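Of the four method families compared above, template matching (iv) is the simplest to state: slide the template over the image and keep the position with the best similarity score. The sketch below uses a sum-of-squared-differences score on tiny grayscale arrays; it is a minimal illustration, not the paper's implementation (real systems would use a library routine such as OpenCV's matchTemplate):

```python
def match_template(image, template):
    """Slide template over image; return the top-left position minimizing
    the sum of squared differences (SSD), and that SSD score."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos, best

# Toy 4x4 grayscale image containing the 2x2 template at position (1, 1)
image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
pos, score = match_template(image, template)
# pos = (1, 1), score = 0
```

SSD is brittle under lighting and scale changes, which helps explain why template matching fell behind the learned and feature-based methods in this comparison.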


First Monday ◽  
2019 ◽  
Author(s):  
Niel Chah

Interest in deep learning, machine learning, and artificial intelligence from industry and the general public has reached a fever pitch recently. However, these terms are frequently misused, confused, and conflated. This paper serves as a non-technical guide for those interested in a high-level understanding of these increasingly influential notions by exploring briefly the historical context of deep learning, its public presence, and growing concerns over the limitations of these techniques. As a first step, artificial intelligence and machine learning are defined. Next, an overview of the historical background of deep learning reveals its wide scope and deep roots. A case study of a major deep learning implementation is presented in order to analyze public perceptions shaped by companies focused on technology. Finally, a review of deep learning limitations illustrates systemic vulnerabilities and a growing sense of concern over these systems.


AI Magazine ◽  
2022 ◽  
Vol 42 (3) ◽  
pp. 7-18
Author(s):  
Harald Steck ◽  
Linas Baltrunas ◽  
Ehtsham Elahi ◽  
Dawen Liang ◽  
Yves Raimond ◽  
...  

Deep learning has profoundly impacted many areas of machine learning. However, it took a while for its impact to be felt in the field of recommender systems. In this article, we outline some of the challenges encountered and lessons learned in using deep learning for recommender systems at Netflix. We first provide an overview of the various recommendation tasks on the Netflix service. We found that different model architectures excel at different tasks. Even though many deep-learning models can be understood as extensions of existing (simple) recommendation algorithms, we initially did not observe significant improvements in performance over well-tuned non-deep-learning approaches. Only when we added numerous features of heterogeneous types to the input data did deep-learning models start to shine in our setting. We also observed that deep-learning methods can exacerbate the problem of offline–online metric (mis-)alignment. After addressing these challenges, deep learning has ultimately resulted in large improvements to our recommendations as measured by both offline and online metrics. On the practical side, integrating deep-learning toolboxes into our system has made it faster and easier to implement and experiment with both deep-learning and non-deep-learning approaches for various recommendation tasks. We conclude this article by summarizing our take-aways, which may generalize to other applications beyond Netflix.


2021 ◽  
Vol 5 (1) ◽  
pp. 34-42
Author(s):  
Refika Sultan Doğan ◽  
Bülent Yılmaz

Determination of polyp type requires tissue biopsy during colonoscopy followed by histopathological examination of the microscopic images, which is tremendously time-consuming and costly. The first aim of this study was to design a computer-aided diagnosis system that classifies polyp types from colonoscopy images (optical biopsy) without the need for tissue biopsy. For this purpose, two approaches were designed, based on conventional machine learning (ML) and on deep learning. First, classification was performed with a random forest using features obtained from the histogram of oriented gradients descriptor. Second, a simple convolutional neural network (CNN) architecture was built and trained on colonoscopy images containing colon polyps. The performance of these approaches on two-category (adenoma & serrated vs. hyperplastic) and three-category (adenoma vs. hyperplastic vs. serrated) classification was investigated. Furthermore, the effect of imaging modality on classification was examined using white-light and narrow-band imaging systems. The performance of these approaches was compared with the results obtained by 3 novice and 4 expert doctors. The two-category results showed that the conventional ML approach performed significantly better than the simple CNN-based approach in both narrow-band and white-light imaging modalities, with accuracy reaching almost 95% for white-light imaging; this performance surpassed the correct classification rate of all 7 doctors. Additionally, the three-category results indicated that the simple CNN architecture outperformed both the conventional ML approach and the doctors. This study shows the feasibility of using conventional machine learning or deep learning based approaches for automatic classification of colon polyp types in colonoscopy images.


2021 ◽  
Vol 26 (1) ◽  
pp. 47-57
Author(s):  
Paul Menounga Mbilong ◽  
Asmae Berhich ◽  
Imane Jebli ◽  
Asmae El Kassiri ◽  
Fatima-Zahra Belouadha

Coronavirus disease 2019 (COVID-19) has reached the stage of an international epidemic with a major negative socioeconomic impact. Considering the weakness of health infrastructure and the limited availability of test kits, particularly in emerging countries, predicting the spread of COVID-19 is expected to help decision-makers improve health management and contribute to alleviating the related risks. In this article, we studied the effectiveness of machine learning techniques using Morocco as a case study. We examined the performance of six multi-step models, derived from both Machine Learning and Deep Learning, across multiple scenarios combining different time lags and three COVID-19 datasets (periods): confinement, deconfinement, and hybrid. The results prove the efficiency of Deep Learning models and identify the best combinations of models and time lags for good predictions of new cases. The results also show that predicting the spread of COVID-19 is a context-sensitive problem.
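Multi-step forecasting with time lags, as described above, reduces to reshaping the case series into supervised pairs: the previous n lagged values as input, the next several values as the multi-step target. A minimal sketch, with lag count, horizon, and the toy case series all chosen for illustration rather than taken from the study:

```python
def make_lagged_pairs(series, n_lags=3, horizon=2):
    """Build (input-lags, multi-step-target) pairs from a time series.

    Each sample uses the previous n_lags values to predict the next
    `horizon` values, as in multi-step forecasting.
    """
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t:t + horizon])
    return X, y

daily_cases = [10, 12, 15, 20, 26, 33, 41, 50]
X, y = make_lagged_pairs(daily_cases, n_lags=3, horizon=2)
# X[0] = [10, 12, 15], y[0] = [20, 26]
```

Sweeping n_lags over several values and retraining on each of the confinement, deconfinement, and hybrid periods is what produces the scenario grid the study evaluates.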

