Detection of Cardiac Arrhythmia using Machine Learning Algorithms

2019 ◽  
Vol 8 (4) ◽  
pp. 11704-11707

Cardiac arrhythmia is a condition in which a person suffers from an abnormal heart rhythm, caused by malfunctioning of the electrical impulses that coordinate the heartbeat. When this happens, the heart beats too slowly, too quickly, or irregularly. The rhythm of the heart is controlled by the sinus node, located at the top of the heart, which triggers the electrical pulses that make the heart beat and pump blood to the body. Symptoms of cardiac arrhythmia include fainting, unconsciousness, shortness of breath, and irregular functioning of the heart; it can lead to death within minutes if medical attention is not provided. To diagnose the condition, doctors must study heart recordings and evaluate heartbeats from different parts of the body accurately, which takes considerable time. Based on the research work contributed in this field, we propose a different approach. In this paper, we compare the machine learning techniques and algorithms proposed by different authors, all of which use ECG recordings from the same MIT-BIH database, examine their advantages and disadvantages, and propose a new system in place of the existing ones. Our initial research found that Phonocardiogram (PCG) recordings provide higher fidelity and accuracy compared to ECG recordings. As an initial stage of work, we take a PCG recordings dataset, convert each recording to a spectrogram image, and apply a convolutional neural network to predict whether the heartbeat is normal or abnormal.
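The PCG-to-spectrogram-to-CNN pipeline described above can be sketched as follows. This is a minimal illustration only, assuming the PCG recordings are available as WAV files and using librosa and Keras; the file paths, spectrogram parameters, and network layout are assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the PCG -> spectrogram -> CNN pipeline (illustrative only).
# Assumes PCG recordings are available as WAV files; paths and labels are hypothetical.
import numpy as np
import librosa
from tensorflow import keras
from tensorflow.keras import layers

def pcg_to_spectrogram(wav_path, sr=2000, n_mels=64, frames=128):
    """Convert a PCG recording into a fixed-size log-mel spectrogram 'image'."""
    signal, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Pad or truncate along the time axis so every example has the same shape.
    log_mel = librosa.util.fix_length(log_mel, size=frames, axis=1)
    return log_mel[..., np.newaxis]  # shape: (n_mels, frames, 1)

# Small CNN that predicts normal (0) vs. abnormal (1) heartbeat.
model = keras.Sequential([
    layers.Input(shape=(64, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=20)  # X_train: stacked spectrograms
```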

2020 ◽  
Vol 8 (5) ◽  
pp. 4624-4627

In recent years, a lot of data has been generated about students, which can be utilized to decide a student's career path. This paper discusses some of the machine learning techniques that can be used to predict the performance of a student and help decide his/her career path. Some of the key Machine Learning (ML) algorithms applied in our research work are Linear Regression, Logistic Regression, Support Vector Machine, Naïve Bayes Classifier, and K-means Clustering. The aim of this paper is to predict the student career path using machine learning algorithms. We compare the efficiencies of different ML classification algorithms on a real dataset obtained from university students.
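A comparison of classification algorithms of the kind described above might look like the following sketch. It assumes a hypothetical feature matrix X and career-path labels y, and uses scikit-learn; it is not the authors' actual experimental setup.

```python
# Illustrative sketch of comparing classifiers' efficiency on a student dataset.
# X (features) and y (career-path labels) are hypothetical placeholders.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
}

def compare_classifiers(X, y, cv=5):
    """Return the mean cross-validated accuracy for each candidate classifier."""
    results = {}
    for name, clf in classifiers.items():
        model = make_pipeline(StandardScaler(), clf)
        scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        results[name] = scores.mean()
    return results
```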


Author(s):  
Meenal Joshi ◽  
Shiv Kumar

In the modern era, education is the key to achieving success in the future; it develops a person's personality, thoughts, and social skills. The purpose of this research work is to focus on educational data mining (EDM) through machine learning algorithms. EDM aims to discover hidden knowledge and patterns about students' performance. Machine learning can be useful to predict the learning outcomes of students. Over the last few years, several tools have been used to judge students' performance from different points of view, such as the student's level, objectives, techniques, algorithms, and different methods. In this paper, predicting and analyzing student performance in secondary school is conducted using data mining techniques and machine learning algorithms such as Naive Bayes, the Decision Tree algorithm J48, and Logistic Regression. For this, a dataset is collected from a secondary school and then filtered on the desired values using the WEKA tool.


2019 ◽  
Vol 8 (4) ◽  
pp. 7356-7360

Data Analytics is a scientific as well as an engineering tool used to investigate raw data and transform information into knowledge. It is normally associated with obtaining knowledge from reliable information sources, rapid information processing, and future prediction from the data analysis. Big Data analytics is evolving strongly along the dimensions of volume, velocity, and variety. Most organizations are now concentrating on analyzing information or raw data and are keen on deploying analytics to survive forthcoming issues and challenges. In this research, a prediction (intelligent) model is proposed that applies machine learning algorithms to the data set; the results are then interpreted and analyzed to obtain better forecast values. The major objective of this research work is to find the optimum prediction from the medical data set using machine learning techniques.


2021 ◽  
Vol 3 (3) ◽  
pp. 128-145
Author(s):  
R. Valanarasu

Recently, IoT has been used as a descriptive term for the idea that everything in the world should be connected to the internet. Healthcare and social goods, industrial automation, and energy are just a few of the areas where Internet of Things applications are widely used. Applications are becoming smarter, and connected devices enable their exploitation in every element of the Internet of Things (IoT). Machine Learning (ML) methods are used to improve an application's intelligence and capabilities by analysing large amounts of data. ML and IoT have been used for smart transportation, which has gained increasing research interest. This research covers a range of Internet of Things (IoT) applications that use suitable machine learning techniques to enhance efficiency and reliability in the intelligent automation sector. Furthermore, this research article examines and identifies various applications, such as energy, high-quality sensor-associated, and G-map-associated applications appropriate for IoT. In addition, the proposed research work includes comparisons and tabulations of several different machine learning algorithms for IoT applications.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Anisha P. Rodrigues ◽  
Roshan Fernandes ◽  
Adarsh Bhandary ◽  
Asha C. Shenoy ◽  
Ashwanth Shetty ◽  
...  

Twitter is a popular microblogging social media platform on which users can share useful information. Keeping track of user postings and common hashtags allows us to understand what is happening around the world and what people's opinions on it are. As such, Twitter trend analysis analyzes Twitter data and hashtags to determine which topics are being talked about the most on Twitter. Feature extraction and trend detection can be performed using machine learning algorithms. Big data tools and techniques are needed to extract relevant information from the continuous stream of data originating from Twitter. The objectives of this research work are to analyze the relative popularity of different hashtags and determine which field has the maximum share of voice; along with this, the common interests of the community can also be determined. Twitter trends play an important role in business, marketing, politics, sports, and entertainment activities. The proposed work implemented Twitter trend analysis using latent Dirichlet allocation (LDA), cosine similarity, K-means clustering, and Jaccard similarity techniques and compared the results with a Big Data Apache Spark implementation. The LDA technique for trend analysis resulted in an accuracy of 74% and the Jaccard technique in an accuracy of 83% for static data. The results showed that real-time tweets are analyzed comparatively faster in the Big Data Apache Spark tool than in the normal execution environment.
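Two of the techniques named above, LDA topic extraction and Jaccard similarity between hashtag sets, can be sketched as follows. The tweet texts are invented for illustration and the scikit-learn-based code is an assumption about one possible implementation, not the authors' pipeline.

```python
# Minimal sketch: discover trending topics in tweets with LDA, and compare hashtag sets
# with Jaccard similarity. Tweets below are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "Great match tonight #worldcup #football",
    "New phone launch announced #tech #gadgets",
    "Election results coming in #politics",
]

# Bag-of-words representation of the tweets (hashtags kept as ordinary tokens).
vectorizer = CountVectorizer(stop_words="english", token_pattern=r"#?\w+")
doc_term = vectorizer.fit_transform(tweets)

# Fit LDA and print the top words of each discovered topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {top_terms}")

# Jaccard similarity between two hashtag sets, used to compare trends.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0
```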


2021 ◽  
Vol 16 (10) ◽  
pp. 186-188
Author(s):  
A. Saran Kumar ◽  
R. Rekha

Drug-Drug Interaction (DDI) refers to a change in the effect of a drug when a person takes another drug at the same time. It is the main cause of avoidable adverse drug reactions, causing major issues for patients' health and for health information systems. Many computational techniques have been used to predict the adverse effects of drug-drug interactions; however, these methods do not provide adequate information required for the prediction of DDI. Machine learning algorithms provide a set of methods which can increase the accuracy and success rate for well-defined problems with abundant data. This study provides a comprehensive survey of the most popular machine learning and deep learning algorithms used by researchers to predict DDI. In addition, the advantages and disadvantages of various machine learning approaches are also discussed.


2021 ◽  
Vol 9 (2) ◽  
pp. 554-564
Author(s):  
Golmei Shaheamlung ◽  
Harshpreet Kaur

In the 21st century, the issue of liver disease has been increasing all over the world. As per the latest survey report, the liver disease death toll has risen to approximately 2 million per year worldwide, and liver disease accounts for 3.5% of deaths worldwide. Chronic liver disease is also considered to be one of the deadliest diseases, so early detection and treatment can help the patient recover more easily. Rapid advancement in Artificial Intelligence (AI), through machine learning algorithms such as SVM, K-means clustering, KNN, Random Forest, and Logistic Regression, can improve the life span of a patient suffering from Chronic Liver Disease (CLD) when it is detected in the early stages. Data can be obtained in large volumes due to the broad exploitation of bar codes for most marketable products, the automation of various business and government dealings, and developments in data collection tools. This research work is based on liver disease prediction using machine learning algorithms. Liver disease prediction involves several steps: pre-processing, feature extraction, and classification. In this research work, a hybrid classification method is proposed for liver disease prediction, with datasets collected from the Kaggle database of Indian liver patient records. The proposed model achieved an accuracy of 77.58%. The proposed technique is implemented in Python with the Spyder tool, and results are analyzed in terms of accuracy, precision, and recall.
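A classification pipeline of the kind described, evaluated with accuracy, precision, and recall, might look like the sketch below. The abstract does not specify the hybrid classifier, so a simple voting ensemble stands in for it here; the CSV filename and column names are assumptions based on the public Kaggle Indian Liver Patient dataset.

```python
# Illustrative liver-disease classification pipeline (not the authors' hybrid method).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

df = pd.read_csv("indian_liver_patient.csv")          # assumed filename for the Kaggle data
df["Gender"] = (df["Gender"] == "Male").astype(int)    # encode the categorical feature
df = df.dropna()
X = df.drop(columns=["Dataset"])                       # assumed label column: 1 = liver disease
y = (df["Dataset"] == 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A soft-voting ensemble as a stand-in for the unspecified hybrid classifier.
hybrid = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("svm", SVC(probability=True)),
], voting="soft")
hybrid.fit(X_train, y_train)
pred = hybrid.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
```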


2020 ◽  
Vol 1 (1) ◽  
pp. 18-36
Author(s):  
Sahar A. EL_Rahman ◽  
Reem Ahmed AlRashed ◽  
Duna Nasser AlZunaytan ◽  
Nada Jahz AlHarbi ◽  
Shroog Abdullah AlThubaiti ◽  
...  

This paper aims to improve the quality of patients' lives and provide them with the lifestyle they need. We intend to achieve this by creating a mobile application that analyzes the patient's data related to conditions such as diabetes, blood pressure, and kidney disease, and by implementing a system that diagnoses chronic diseases using machine learning techniques such as classification. It is hard for patients with chronic diseases to record their measurements on paper every time they measure their blood pressure, sugar level, or any other quantity that needs periodic measurement; the paper might be lost, which can lead the doctor to not fully understand the case. Therefore, the application records the measurements in a database. It is also difficult for patients to decide what to eat or how often they should exercise according to their situation, so our idea is to recommend a lifestyle for the patient and let the doctor participate by writing notes. In this paper, machine learning classifiers were used to predict whether a person is prone to certain chronic diseases; blood pressure, diabetes, and kidney disease are considered in this work. Orange3 from Anaconda-Navigator is the data mining tool used to test the machine learning algorithms. Blood pressure is the amount of force that blood exerts on the walls of the arteries as it flows through them; when this pressure reaches high levels, it can lead to serious health problems. For hypertension, the Tree algorithm showed 100% accuracy, which was the best result. Chronic Kidney Disease (CKD) is a significant public health concern with rising prevalence; a set of attributes such as specific gravity, albumin, serum creatinine, hemoglobin, packed cell volume, and hypertension was used to predict whether a person has kidney disease. For kidney disease, the Random Forest algorithm showed 100% accuracy, the best among the algorithms tested. Diabetes is a chronic disease that occurs when the pancreas cannot produce insulin, or when the body cannot use the insulin the pancreas produces. We considered attributes such as pregnancies, glucose, blood pressure, skin thickness, insulin, diabetes pedigree function, age, and BMI to diagnose whether a patient has diabetes based on specific diagnostic measurements. For diabetes, neural networks showed the best accuracy, at 76.3%.
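The kidney-disease classifier described above (Random Forest on the listed CKD attributes) could be reproduced roughly as in the sketch below. The paper used Orange3; this scikit-learn version, the CSV filename, and the column names are illustrative assumptions rather than the authors' exact workflow.

```python
# Sketch of a Random Forest CKD classifier on the attributes named in the abstract.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["specific_gravity", "albumin", "serum_creatinine",
            "hemoglobin", "packed_cell_volume", "hypertension"]

df = pd.read_csv("ckd.csv")                       # hypothetical CKD dataset and column names
df["hypertension"] = (df["hypertension"] == "yes").astype(int)
df = df.dropna(subset=FEATURES + ["class"])

X = df[FEATURES]
y = (df["class"] == "ckd").astype(int)            # 1 = chronic kidney disease

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```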


2021 ◽  
Vol 15 ◽  
Author(s):  
Diu K. Luu ◽  
Anh T. Nguyen ◽  
Ming Jiang ◽  
Jian Xu ◽  
Markus W. Drealan ◽  
...  

Previous literature shows that deep learning is an effective tool for decoding motor intent from neural signals obtained from different parts of the nervous system. However, deep neural networks are often computationally complex and not feasible to run in real time. Here we investigate the advantages and disadvantages of different approaches to enhance the efficiency of the deep learning-based motor decoding paradigm and inform its future implementation in real time. Our data are recorded from an amputee's residual peripheral nerves. While the primary analysis is offline, the nerve data are cut using a sliding window to create a "pseudo-online" dataset that resembles the conditions in a real-time paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the input data dimensionality, which later helps substantially lower the motor decoder's complexity, making it feasible for translation to a real-time paradigm. Next, we investigate two different strategies for deploying deep learning models: a one-step (1S) approach when big input data are available and a two-step (2S) approach when input data are limited. This research predicts five individual finger movements and four combinations of the fingers. The 1S approach, using a recurrent neural network (RNN) to concurrently predict all fingers' trajectories, generally gives better prediction results than all the machine learning algorithms that do the same task. This result reaffirms that deep learning is more advantageous than classic machine learning methods for handling a large dataset. However, when training on a smaller input data set in the 2S approach, which includes a classification stage to identify active fingers before predicting their trajectories, machine learning techniques offer a simpler implementation while ensuring decoding outcomes comparably good to the deep learning ones. In the classification step, both machine learning and deep learning models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models achieve mean squared error (MSE) and variance accounted for (VAF) scores comparable to those of the 1S approach. Our study outlines the trade-offs to inform the future implementation of real-time, low-latency, and high-accuracy deep learning-based motor decoders for clinical applications.
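Two pieces mentioned in this abstract can be illustrated concretely: cutting continuous nerve data into a sliding-window "pseudo-online" dataset, and the regression metrics (MSE and VAF). The window length, step size, and array shapes below are illustrative assumptions, not the study's actual parameters.

```python
# Sketch: sliding-window segmentation for a pseudo-online dataset, plus MSE and VAF metrics.
import numpy as np

def sliding_windows(signal, window, step):
    """Split a (time, channels) signal into overlapping windows of length `window`."""
    starts = range(0, signal.shape[0] - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def vaf(y_true, y_pred):
    """Variance accounted for: 1 - var(error) / var(true)."""
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

# Example: 10 s of 2-channel data at 1 kHz, 100 ms windows with 50 ms hop (assumed values).
nerve_data = np.random.randn(10_000, 2)
windows = sliding_windows(nerve_data, window=100, step=50)
print(windows.shape)   # (199, 100, 2)
```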


Machine learning is concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. A neural framework offers wide support for machine learning algorithms: it is an interface, library, or tool which allows developers to build machine learning models easily, without getting into the depth of the underlying algorithms. The nervous system is an exceptionally intricate part of a person that coordinates its activities and sensory information by transmitting signals to and from various parts of the body. Neural frameworks are applied here to perform object grasping and a grasp-planning task. Machine learning techniques have been applied to many sub-problems in robot perception, such as pattern recognition and self-organisation. Modern robot frameworks demand a complete specification of each movement of the robot; this work breaks the pick-and-place problem into approximately independent, computationally feasible sub-problems as a step toward a comprehensive task-level framework.

