Development of an Efficient Monitoring System Using Fog Computing and Machine Learning Algorithms on Healthcare 4.0

2022 ◽  
pp. 78-98
Author(s):  
Sowmya B. J. ◽  
Pradeep Kumar D. ◽  
Hanumantharaju R. ◽  
Gautam Mundada ◽  
Anita Kanavalli ◽  
...  

Disruptive innovations in data management and analytics have led to the development of patient-centric Healthcare 4.0 from hospital-centric Healthcare 3.0. This work presents an IoT-based monitoring system for patients with cardiovascular abnormalities. An IoT-enabled wearable ECG sensor module transmits readings in real time to fog nodes/a mobile app for continuous analysis. A deep learning/machine learning model automatically detects and predicts rhythmic anomalies in the data. The application alerts and notifies the physician and the patient of the rhythmic variations. Real-time detection aids early diagnosis of an impending heart condition and helps physicians make quick therapeutic decisions. The system is evaluated on the MIT-BIH arrhythmia ECG dataset and achieves an overall accuracy of 95.12% in classifying cardiac arrhythmia.
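The abstract does not describe the classifier's architecture. A minimal sketch of the beat-classification stage, assuming pre-segmented fixed-length MIT-BIH beats and a small 1-D convolutional network in Keras (both assumptions, not the authors' exact model), could look like this:

```python
# Sketch of the beat-classification stage (assumptions: beats are already
# segmented into fixed-length windows and labelled; the authors' actual
# architecture is not specified in the abstract).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 5      # e.g. five AAMI rhythm classes (assumed)
BEAT_LEN = 187       # samples per segmented beat (assumed)

def build_model():
    """Small 1-D CNN for single-beat arrhythmia classification."""
    return keras.Sequential([
        layers.Input(shape=(BEAT_LEN, 1)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train: (n_beats, BEAT_LEN, 1) float array, y_train: integer class labels.
# model.fit(X_train, y_train, epochs=10, validation_split=0.1)
# On the fog node, a non-normal prediction on an incoming window would
# trigger the alert/notification described above.
```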


Author(s):  
Ambika P.

Machine learning is a subfield of artificial intelligence that encompasses automated computation to make predictions. The key difference between a traditional program and a machine-learning model is that the model learns from data and makes its own decisions. It is one of the fastest growing areas of computing. The goal of this chapter is to explore the foundations of machine learning theory and the mathematical derivations that transform the theory into practical algorithms. The chapter also provides a comprehensive review of machine learning and its types, explains why machine learning is important in real-world applications, and surveys popular machine learning algorithms and their impact on fog computing. It concludes with further research directions for machine learning algorithms.


2016 ◽  
Author(s):  
Andrea K McIntosh ◽  
Abram Hindle

Machine learning is a popular method of learning functions from data to represent and to classify sensor inputs, multimedia, emails, and calendar events. Smartphone applications have been integrating more and more intelligence in the form of machine learning. Machine learning functionality now appears on most smartphones as voice recognition, spell checking, word disambiguation, face recognition, translation, spatial reasoning, and even natural language summarization. Excited app developers who want to use machine learning on mobile devices face one serious constraint that they did not face on desktop computers or cloud virtual machines: the end-user’s mobile device has limited battery life, thus computationally intensive tasks can harm end-user’s phone availability by draining batteries of their stored energy. How can developers use machine learning and respect the limited battery life of mobile devices? Currently there are few guidelines for developers who want to employ machine learning on mobile devices yet are concerned about software energy consumption of their applications. In this paper we combine empirical measurements of many different machine learning algorithms with complexity theory to provide concrete and theoretically grounded recommendations to developers who want to employ machine learning on smartphones.
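By way of illustration only, the snippet below times training and prediction for a few common scikit-learn models. The authors measured actual energy consumption on mobile devices, so wall-clock time on a desktop is merely a rough proxy for cost, and the model list is an assumption rather than their benchmark suite.

```python
# Rough proxy for the per-algorithm cost comparison the paper describes:
# time each model's training and prediction phases (real energy measurement
# on a phone would replace these timings in the actual study).
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X, y)            # training cost
    t1 = time.perf_counter()
    model.predict(X)           # inference cost over the whole set
    t2 = time.perf_counter()
    print(f"{name:22s} train {t1 - t0:6.3f}s  predict {t2 - t1:6.3f}s")
```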


Author(s):  
Juan C. Olivares-Rojas ◽  
Enrique Reyes-Archundia ◽  
Noel E. Rodriguez-Maya ◽  
Jose A. Gutierrez-Gnecchi ◽  
Ismael Molina-Moreno ◽  
...  

2019 ◽  
Vol 9 (6) ◽  
pp. 1154 ◽  
Author(s):  
Ganjar Alfian ◽  
Muhammad Syafrudin ◽  
Bohan Yoon ◽  
Jongtae Rhee

Radio frequency identification (RFID) is an automated identification technology that can be used to monitor product movements within a supply chain in real time. However, one problem that occurs during RFID data capture is false positives (i.e., tags that are accidentally detected by the reader but are not of interest to the business process). This paper investigates the use of machine learning algorithms to filter these false positives. Raw RFID data were collected from various tagged product movements, and statistical features were extracted from the received signal strength derived from the raw data. Because abnormal RFID readings or outliers may arise in real deployments, we used outlier detection models to remove outlier data. The experimental results showed that the machine learning-based models classified RFID readings with high accuracy, and that integrating outlier detection with the machine learning models further improved classification accuracy. We also demonstrated that the proposed classification model can be applied to real-time monitoring, ensuring that false positives are filtered out and hence not stored in the database. The proposed model is expected to improve warehouse management systems by monitoring products delivered to other supply chain partners.
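A minimal sketch of the two-stage idea follows, assuming per-read-cycle RSSI windows, an Isolation Forest for outlier removal, and a random forest classifier; the paper's exact features and models are not specified here, so these choices are illustrative.

```python
# Two-stage RFID filtering sketch (feature set, window handling and model
# choices are assumptions; the paper's exact pipeline may differ).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def rssi_features(rssi_window):
    """Statistical features extracted from one tag's RSSI readings."""
    w = np.asarray(rssi_window, dtype=float)
    return [w.mean(), w.std(), w.min(), w.max(), w.max() - w.min()]

# X: NumPy array with one feature row per tag read-cycle;
# y: 1 = moving product (true positive), 0 = stray/static tag (false positive).
def train_pipeline(X, y):
    # Stage 1: drop outlier training rows before fitting the classifier.
    outlier_model = IsolationForest(contamination=0.05, random_state=0)
    inliers = outlier_model.fit_predict(X) == 1
    # Stage 2: classify the remaining readings.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[inliers], y[inliers])
    return outlier_model, clf

def is_true_read(clf, rssi_window):
    """Real-time filter: store the reading only if predicted genuine."""
    x = np.array([rssi_features(rssi_window)])
    return clf.predict(x)[0] == 1
```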


Author(s):  
Jia Luo ◽  
Dongwen Yu ◽  
Zong Dai

It is not feasible to process huge amounts of structured and semi-structured data with manual methods. This study aims to solve that problem using machine learning algorithms. We collected text data on the company's public opinion through web crawlers, used the Latent Dirichlet Allocation (LDA) algorithm to extract keywords from the text, and applied fuzzy clustering to group the keywords into different topics. The topic keywords are then used as a seed dictionary for new word discovery. To verify the efficiency of machine learning in new word discovery, algorithms based on association rules, N-gram, PMI, and Word2vec were used in comparative tests. The experimental results show that the Word2vec-based machine learning model achieves the highest accuracy, recall, and F-measure.
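As an illustration of the LDA keyword-extraction step only, the sketch below fits a topic model and prints the top terms per topic that would seed the new-word dictionary. The crawler output, fuzzy clustering, and the Word2vec/PMI/N-gram comparison are omitted; the corpus and parameters are placeholders, not the study's data.

```python
# LDA topic-keyword extraction sketch (placeholder corpus and parameters).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "crawled public opinion text about the company",          # placeholder
    "another crawled article discussing the company brand",   # placeholder
    "social media comments about products and service quality",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"topic {topic_idx}: {top}")   # topic keywords -> seed dictionary
```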


2020 ◽  
pp. 426-429
Author(s):  
Devipriya A ◽  
Brindha D ◽  
Kousalya A

Eye state identification is a typical time-series classification problem and a hot topic in recent research. Electroencephalography (EEG) is widely used for eye state recognition to infer a person's perceptual state. Previous research has validated the feasibility of machine learning and statistical approaches for EEG eye state classification. This research proposes a novel approach to EEG eye state identification using Gradual Characteristic Learning (GCL) based on neural networks. GCL is a machine learning approach that gradually imports and trains features one by one. Previous studies have confirmed that such an approach is suitable for solving various pattern recognition problems; however, little of that work focused on its application to time-series problems. It therefore remains unclear whether GCL can be applied to time-series problems such as EEG eye state classification. Experimental results in this study show that, with proper feature extraction and feature ordering, GCL can not only cope efficiently with time-series classification problems, but also achieve better classification performance, in terms of classification error rates, than conventional and several other approaches. Eye state classification is also performed and discussed with a KNN classifier, the accuracy is improved, and finally eye state classification with an ensemble machine learning model is discussed.
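The abstract does not spell out the GCL procedure. Below is a minimal sketch of the gradual, one-feature-at-a-time idea with a KNN baseline; the mutual-information feature ordering, the MLP classifier, and retraining from scratch at each step are simplifying assumptions for illustration, not the chapter's exact method.

```python
# Gradual feature learning sketch with a KNN baseline (assumed ordering
# criterion and classifier; the chapter's exact GCL procedure may differ).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def gradual_feature_learning(X, y):
    """X: NumPy feature matrix of EEG-derived features, y: eye-state labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # Order features from most to least informative before importing them one by one.
    order = np.argsort(mutual_info_classif(X_tr, y_tr))[::-1]
    errors = []
    for k in range(1, len(order) + 1):
        cols = order[:k]
        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        net.fit(X_tr[:, cols], y_tr)
        errors.append(1.0 - net.score(X_te[:, cols], y_te))
    # KNN baseline on all features for comparison.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    knn_error = 1.0 - knn.score(X_te, y_te)
    return errors, knn_error
```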

