Prediction of Parkinson’s disease and severity of the disease using Machine Learning and Deep Learning algorithm

Author(s):  
Pooja Raundale ◽  
Chetan Thosar ◽  
Shardul Rane
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 147635-147646 ◽  
Author(s):  
Wu Wang ◽  
Junho Lee ◽  
Fouzi Harrou ◽  
Ying Sun

2018 ◽  
Vol 7 (2.25) ◽  
pp. 37
Author(s):  
K S. Harish Kumar ◽  
Dijo Micheal Jerald ◽  
A Emmanuel

Good treatment depends on the accuracy of the diagnosis; the cure for a disease starts with the diagnostic process. All these years, the grade and standard of the medical field have been rising exponentially, yet there has been no significant fall in the rate of unintentional medical errors. These errors can be reduced by using a deep learning algorithm to predict the disease. The deep learning algorithm scans, analyses, and compares the patient's report against its dataset and predicts the nature and severity of the disease. The test results are extracted from the patient's report using PDF processing. The more medical reports the algorithm analyses, the more intelligence it gains. This will be of great assistance to doctors, as they can check their own diagnosis against the results predicted by the algorithm.
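As a rough illustration of the report-extraction step, the sketch below pulls numeric test results out of a report PDF. The abstract names neither a PDF library nor the report layout, so pdfplumber and the "name: value" line format here are assumptions, not the authors' implementation.

```python
# Minimal sketch of the PDF-processing step; pdfplumber and the
# "name: value" report layout are assumptions.
import re
import pdfplumber

def extract_test_results(pdf_path):
    """Pull numeric test results out of a patient report PDF."""
    text = ""
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text += page.extract_text() or ""
    # Hypothetical pattern: lines such as "Jitter(%): 0.00662"
    pattern = re.compile(r"([A-Za-z_()% ]+?)\s*[:=]\s*([-+]?\d+\.?\d*)")
    return {name.strip(): float(value) for name, value in pattern.findall(text)}

# results = extract_test_results("patient_report.pdf")
# feature_vector = [results[k] for k in FEATURE_ORDER]  # fed to the trained model
```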


Author(s):  
Fawziya M. Rammo ◽  
Mohammed N. Al-Hamdani

Many language identification (LID) systems rely on language models that use machine learning (ML) approaches, and such systems typically require rather long recording periods to achieve satisfactory accuracy. This study aims to extract enough information from short recording intervals to successfully classify the spoken languages under test. The classification process is based on frames of 2-18 seconds, whereas most previous LID systems were based on much longer time frames (from 3 seconds to 2 minutes). This research defined and implemented many low-level features using MFCC (Mel-frequency cepstral coefficients). The data used in this paper come from voxforge.org, an open-source corpus of user-submitted audio clips in various languages, from which speech files in five languages (English, French, German, Italian, Spanish) were drawn. A CNN (Convolutional Neural Network) algorithm was applied for classification, and the results were excellent: binary language classification achieved an accuracy of 100%, and five-language classification achieved an accuracy of 99.8%.
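A minimal sketch of the pipeline this abstract describes, assuming librosa for MFCC extraction and tf.keras for the CNN; the frame length, sample rate, and layer sizes below are placeholders, not the paper's actual configuration:

```python
# MFCC features from a fixed-length clip, then a small CNN over the
# resulting coefficient matrix; all hyperparameters are assumptions.
import librosa
import numpy as np
import tensorflow as tf

def mfcc_features(path, sr=16000, duration=5.0, n_mfcc=13):
    """Load a clip of fixed duration and return its MFCC matrix."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = np.pad(y, (0, max(0, int(sr * duration) - len(y))))  # pad short clips
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)

def build_cnn(input_shape, n_languages=5):
    """Small CNN over the MFCC "image", one softmax unit per language."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*input_shape, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_languages, activation="softmax"),
    ])

# model = build_cnn(mfcc_features("clip.wav").shape)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```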


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 279 ◽  
Author(s):  
Bambang Susilo ◽  
Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to it is increasing sharply. In particular, Internet of Things (IoT) devices have become part of everyday human life. However, the challenges are multiplying, and their solutions are not well defined. More and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm, implemented in the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model can increase accuracy, making the mitigation of attacks on an IoT network as effective as possible.
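The abstract names the stack (Python, scikit-learn, TensorFlow) but not the model details, so the following is only a hedged sketch of a deep-learning DoS detector in that stack; the CSV file, its column names, and the layer sizes are assumptions:

```python
# Binary DoS-vs-benign classifier over network-flow features;
# "iot_flows.csv" and its columns are hypothetical placeholders.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("iot_flows.csv")              # hypothetical flow records
X = df.drop(columns=["label"]).values          # numeric flow features
y = (df["label"] == "dos").astype(int).values  # 1 = DoS, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y)
scaler = StandardScaler().fit(X_train)         # scale features to zero mean, unit variance

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(attack)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(scaler.transform(X_train), y_train, epochs=10, validation_split=0.1)
print(model.evaluate(scaler.transform(X_test), y_test))
```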


2021 ◽  
Author(s):  
Sidhant Idgunji ◽  
Madison Ho ◽  
Jonathan L. Payne ◽  
Daniel Lehrmann ◽  
Michele Morsilli ◽  
...  

The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate this potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.
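The following is a minimal sketch of the multi-output idea behind a B-CNN: a shared convolutional trunk, a coarse branch for the 6 phyla, and a deeper fine branch for the 15 classes. The image size, layer sizes, and loss weights are assumptions, not the authors' architecture:

```python
# Two-output "branch" CNN: phylum predicted from early features,
# class predicted from deeper features; all sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(128, 128, 3))   # skeletal-grain image
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)

# Coarse branch: phylum prediction from early features (6 phyla).
b1 = layers.Flatten()(x)
phylum_out = layers.Dense(6, activation="softmax", name="phylum")(b1)

# Fine branch: class prediction from deeper features (15 classes).
y = layers.Conv2D(64, 3, activation="relu")(x)
y = layers.MaxPooling2D()(y)
b2 = layers.Flatten()(y)
class_out = layers.Dense(15, activation="softmax", name="class")(b2)

model = tf.keras.Model(inputs, [phylum_out, class_out])
model.compile(optimizer="adam",
              loss={"phylum": "sparse_categorical_crossentropy",
                    "class": "sparse_categorical_crossentropy"},
              loss_weights={"phylum": 0.3, "class": 0.7})  # weight fine level higher
```

Weighting the fine-level loss more heavily is one common B-CNN training choice; the abstract does not specify the weighting schedule used.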


2021 ◽  
Author(s):  
Donghwan Yun ◽  
Semin Cho ◽  
Yong Chul Kim ◽  
Dong Ki Kim ◽  
Kook-Hwan Oh ◽  
...  

BACKGROUND Precise prediction of contrast media-induced acute kidney injury (CIAKI) is an important issue because of its relationship with worse outcomes. OBJECTIVE Herein, we examined whether a deep learning algorithm could predict the risk of intravenous CIAKI better than other machine learning and logistic regression models in patients undergoing computed tomography. METHODS A total of 14,185 cases that received intravenous contrast media for computed tomography under the preventive and monitoring facility at Seoul National University Hospital were reviewed. CIAKI was defined as an increase in serum creatinine ≥0.3 mg/dl within 2 days and/or ≥50% within 7 days. Using both time-varying and time-invariant features, machine learning models, such as the recurrent neural network (RNN), light gradient boosting machine, extreme gradient boosting, random forest, decision tree, support vector machine, k-nearest neighbors, and logistic regression, were developed on a training set, and their performance was compared using the area under the receiver operating characteristic curve (AUROC) on a test set. RESULTS CIAKI developed in 261 cases (1.8%). The RNN model had the highest AUROC value of 0.755 (0.708–0.802) for predicting CIAKI, superior to those obtained from the other machine learning models. When CIAKI was alternatively defined as an increase in serum creatinine ≥0.5 mg/dl and/or ≥25% within 3 days, the highest performance was again achieved by the RNN model, with an AUROC of 0.716 (0.664–0.768). In the feature ranking analysis, albumin level was the factor contributing most to RNN performance, followed by time-varying kidney function. CONCLUSIONS Application of a deep learning algorithm improves the predictability of intravenous CIAKI after computed tomography, providing a basis for future clinical alarming and preventive systems.
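As a hedged sketch of the kind of model the abstract describes, the code below runs an RNN over time-varying labs and concatenates the result with time-invariant features to output a CIAKI risk; the feature counts, sequence length, and layer sizes are hypothetical:

```python
# RNN over lab trajectories joined with static patient features;
# all dimensions below are placeholders, not the study's values.
import tensorflow as tf
from tensorflow.keras import layers

T, N_SEQ, N_STATIC = 10, 5, 20  # hypothetical: 10 time steps, 5 labs, 20 static features

seq_in = tf.keras.Input(shape=(T, N_SEQ))      # e.g., serum creatinine over time
static_in = tf.keras.Input(shape=(N_STATIC,))  # e.g., age, albumin, comorbidities

h = layers.LSTM(32)(seq_in)                      # summarize the time-varying labs
z = layers.concatenate([h, static_in])           # join with time-invariant features
z = layers.Dense(32, activation="relu")(z)
risk = layers.Dense(1, activation="sigmoid")(z)  # predicted probability of CIAKI

model = tf.keras.Model([seq_in, static_in], risk)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])  # models compared by AUROC
```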

