What should mobile app developers do about machine learning and energy?

Author(s):  
Andrea K McIntosh ◽  
Abram Hindle

Machine learning is a popular method of learning functions from data to represent and to classify sensor inputs, multimedia, emails, and calendar events. Smartphone applications have been integrating more and more intelligence in the form of machine learning, which now appears on most smartphones as voice recognition, spell checking, word disambiguation, face recognition, translation, spatial reasoning, and even natural language summarization. Excited app developers who want to use machine learning on mobile devices face one serious constraint that they did not face on desktop computers or cloud virtual machines: the end-user's mobile device has limited battery life, so computationally intensive tasks can harm the phone's availability by draining its battery. How can developers use machine learning while respecting the limited battery life of mobile devices? Currently there are few guidelines for developers who want to employ machine learning on mobile devices yet are concerned about the energy consumption of their applications. In this paper we combine empirical measurements of many different machine learning algorithms with complexity theory to provide concrete, theoretically grounded recommendations to developers who want to employ machine learning on smartphones.
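The core trade-off the paper studies, energy = power × time, can be sketched without any special hardware. The snippet below is a minimal illustration, not the paper's methodology: it times two toy inference routines with different asymptotic costs and converts runtime to joules under a hypothetical fixed average power draw (a real measurement would sample the device's battery instrumentation).

```python
# Sketch: energy cost of inference estimated as E = P_avg * t.
# ASSUMED_POWER_W is a made-up average draw; the two "models" are toys
# chosen only to contrast O(d) and O(d^2) inference cost.
import time

ASSUMED_POWER_W = 2.0  # hypothetical average power draw during computation


def linear_model(x, w):
    # O(d) inference cost, e.g. a linear classifier
    return sum(xi * wi for xi, wi in zip(x, w))


def pairwise_model(x, w):
    # O(d^2) inference cost, e.g. explicit quadratic feature expansion
    return sum(xi * wj for xi in x for wj in w)


def energy_joules(fn, x, w, reps=50):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(x, w)
    return ASSUMED_POWER_W * (time.perf_counter() - t0)


x = [0.5] * 128
w = [0.1] * 128
e_lin = energy_joules(linear_model, x, w)
e_quad = energy_joules(pairwise_model, x, w)
```

Under a fixed power model the quadratic-cost model always drains more energy per prediction, which is the kind of complexity-grounded recommendation the paper aims at.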

2016 ◽  


2022 ◽  
pp. 78-98
Author(s):  
Sowmya B. J. ◽  
Pradeep Kumar D. ◽  
Hanumantharaju R. ◽  
Gautam Mundada ◽  
Anita Kanavalli ◽  
...  

Disruptive innovations in data management and analytics have led to the development of patient-centric Healthcare 4.0 from hospital-centric Healthcare 3.0. This work presents an IoT-based monitoring system for patients with cardiovascular abnormalities. An IoT-enabled wearable ECG sensor module transmits the readings in real time to the fog nodes/mobile app for continuous analysis. A deep learning/machine learning model automatically detects and makes predictions on rhythmic anomalies in the data. The application alerts and notifies the physician and the patient of the rhythmic variations. Real-time detection aids early diagnosis of an impending heart condition and helps physicians make quick therapeutic decisions. The system is evaluated on the MIT-BIH arrhythmia dataset of ECG data and achieves an overall accuracy of 95.12% in classifying cardiac arrhythmia.
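The monitor-classify-alert flow described above can be sketched as follows. This is only an illustration of the data flow: the trained model on MIT-BIH data is replaced here by a hypothetical hand-written rule (flag a window whose RR-interval variability exceeds a threshold), and the window data and threshold are invented.

```python
# Sketch of the monitoring/alerting pipeline. is_anomalous() is a
# hypothetical stand-in for the deep learning classifier: it flags a
# window when the spread of RR intervals exceeds a made-up threshold.
from statistics import pstdev


def rr_intervals(beat_times_s):
    """Successive differences between R-peak timestamps (seconds)."""
    return [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]


def is_anomalous(window_beat_times, max_rr_stdev_s=0.08):
    """Hypothetical rule standing in for the trained model."""
    return pstdev(rr_intervals(window_beat_times)) > max_rr_stdev_s


def monitor(windows):
    """For each incoming window, decide whether to notify the physician."""
    return ["ALERT" if is_anomalous(w) else "ok" for w in windows]


regular = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]    # steady ~75 bpm
irregular = [0.0, 0.8, 1.1, 2.3, 2.6, 3.9]  # erratic RR intervals
alerts = monitor([regular, irregular])       # ['ok', 'ALERT']
```

In the deployed system this decision would run continuously on the fog node, with the alert pushed to the mobile app.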


Author(s):  
P. Priakanth ◽  
S. Gopikrishnan

The idea of an intelligent, independent learning machine has fascinated humans for decades. The philosophy behind machine learning is to automate the creation of analytical models in order to enable algorithms to learn continuously with the help of available data. Since IoT will be among the major sources of new data, data science will make a great contribution to making IoT applications more intelligent. Machine learning can be applied in cases where the desired outcome is known (guided learning), where the data is not known beforehand (unguided learning), or where the learning is the result of interaction between a model and the environment (reinforcement learning). This chapter answers the questions: How can machine learning algorithms be applied to IoT smart data? What is the taxonomy of machine learning algorithms that can be adopted in IoT? And what are the characteristics of real-world IoT data that require data analytics?
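The guided/unguided distinction can be made concrete on the same toy sensor readings. The sketch below is illustrative only: the temperature values, labels, and thresholds are invented, a nearest-centroid rule stands in for guided learning, and one simple 2-means loop stands in for unguided learning.

```python
# Guided learning: labels ("cool"/"hot") are known in advance, so class
# centroids can be fit directly. Unguided learning: only raw readings are
# available, so 2-means must discover the two groups itself.

def centroid(points):
    return sum(points) / len(points)


# --- guided: labelled readings (values and labels are made up) ---
labelled = [(18.0, "cool"), (19.5, "cool"), (30.2, "hot"), (31.0, "hot")]
centers = {
    lab: centroid([x for x, l in labelled if l == lab])
    for lab in {"cool", "hot"}
}


def classify(x):
    return min(centers, key=lambda lab: abs(x - centers[lab]))


# --- unguided: cluster the raw readings with no labels at all ---
def two_means(xs, iters=10):
    c0, c1 = min(xs), max(xs)  # simple initialisation
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        if not g0 or not g1:
            break
        c0, c1 = centroid(g0), centroid(g1)
    return c0, c1


readings = [18.0, 19.5, 30.2, 31.0, 18.7]
c0, c1 = two_means(readings)
```

Reinforcement learning, the third regime, would additionally need an environment that returns rewards, which does not fit a few-line sketch.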


Author(s):  
P. Priyanga ◽  
N. C. Naveen

This article describes how healthcare organizations are growing rapidly and are potential beneficiaries of the data that is generated and gathered. From hospitals to clinics, data and analytics can be a very powerful tool that can improve patient care and satisfaction with efficiency. In developing countries, cardiovascular diseases have a huge impact on increasing death rates, which are expected to keep rising through the end of 2020 in spite of the best clinical practices. Current machine learning (ML) algorithms are adapted to estimate heart disease risk in middle-aged patients. Hence, to predict heart disease, a detailed analysis is made in this research work by taking into account the angiographic heart disease status (i.e. ≥ 50% diameter narrowing). Deep Neural Network (DNN), Extreme Learning Machine (ELM), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) learning algorithms (with linear and polynomial kernel functions) are considered in this work. The accuracy and results of these algorithms are analyzed by comparing their effectiveness.
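Of the four algorithms compared, KNN is simple enough to sketch end to end. The two-feature records below (an age-like and a cholesterol-like value) and their 0/1 disease labels are invented purely to show the prediction and accuracy-evaluation loop; the article's actual features come from angiographic data.

```python
# Minimal k-nearest-neighbour classifier plus the accuracy metric used to
# compare algorithms. Records and labels are made-up toy data.
from collections import Counter


def knn_predict(train, x, k=3):
    """train: list of ((f1, f2), label); x: (f1, f2) query point."""
    dist = lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2
    nearest = sorted(train, key=dist)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]


def accuracy(train, test, k=3):
    hits = sum(knn_predict(train, x, k) == y for x, y in test)
    return hits / len(test)


train = [((45, 180), 0), ((50, 190), 0), ((48, 175), 0),
         ((60, 260), 1), ((62, 270), 1), ((58, 255), 1)]
test = [((47, 185), 0), ((61, 265), 1)]
acc = accuracy(train, test)  # both test points sit near their own class
```

The same `accuracy` function would score the DNN, ELM, and SVM predictions, making the comparison uniform.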




2018 ◽  
Vol 7 (4.6) ◽  
pp. 108
Author(s):  
Priyadarshini Chatterjee ◽  
Ch. Mamatha ◽  
T. Jagadeeswari ◽  
Katha Chandra Shekhar

Every 100th cancer case we come across is a breast cancer case, and it is becoming very common in women of all ages. Correct detection of these lesions in the breast is very important, and the goal is to reach the correct diagnosis with less human intervention. Not all cases of breast masses are harmful, but if the cases are not dealt with properly they might create panic among people. Human detection without machine intervention is not one hundred percent accurate; if machines can be deeply trained, they can do the same work of detection with much more accuracy. The Bayesian method has a vast area of application in the field of medical image processing as well as in machine learning, and this paper intends to use Bayesian probability in image segmentation as well as in machine learning. Machine learning in image processing means application in pattern recognition, and there are various machine learning algorithms that can classify an image at their best. In the proposed system, we first segment the image using the Bayesian method, then apply a machine learning algorithm to the segmented parts of the image to diagnose the mass or the growth.
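Bayesian segmentation assigns each pixel the class with the largest posterior, P(class | intensity) ∝ P(intensity | class) · P(class). The sketch below illustrates this on a single row of intensities; the two Gaussian class models (background vs. mass) and their priors are invented for the example, whereas a real system would estimate them from training scans.

```python
# Bayesian pixel labelling: pick argmax over class of likelihood * prior.
# Class parameters and the 1-D "image" row are illustrative inventions.
import math


def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))


CLASSES = {
    "background": {"mu": 40.0, "sigma": 12.0, "prior": 0.8},
    "mass":       {"mu": 170.0, "sigma": 20.0, "prior": 0.2},
}


def label_pixel(intensity):
    posterior = {
        name: gaussian(intensity, p["mu"], p["sigma"]) * p["prior"]
        for name, p in CLASSES.items()
    }
    return max(posterior, key=posterior.get)


row = [35, 42, 50, 160, 175, 180, 44]
segmented = [label_pixel(v) for v in row]
```

The classifier in the second stage would then run only on the pixels labelled "mass".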


2019 ◽  
Vol 2019 ◽  
pp. 1-17
Author(s):  
Ju-Young Shin ◽  
Yonghun Ro ◽  
Joo-Wan Cha ◽  
Kyu-Rang Kim ◽  
Jong-Chul Ha

Machine learning algorithms should be tested for use in quantitative precipitation estimation models of rain radar data in South Korea, because such an application can provide a more accurate estimate of rainfall than the conventional Z-R relationship-based model. The applicability of random forest, stochastic gradient boosting, and extreme learning machine methods to quantitative precipitation estimation was investigated using case studies with polarization radar data from the Gwangdeoksan radar station. Various combinations of input variable sets were tested, and the results showed that machine learning algorithms can be used to build a quantitative precipitation estimation model for polarization radar data in South Korea. The machine learning-based models performed better than the Z-R relationship-based models, particularly for heavy rainfall events, and the extreme learning machine performed best among the algorithms used, based on the evaluation criteria.
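The conventional baseline being compared against is the Z-R power law, Z = a·R^b, inverted to recover rain rate from reflectivity. The sketch below uses the classic Marshall-Palmer constants a = 200, b = 1.6; the radar-specific constants used operationally in South Korea may differ.

```python
# Conventional Z-R quantitative precipitation estimate:
#   Z = a * R**b  =>  R = (Z / a) ** (1 / b),
# with reflectivity given in dBZ (Z_linear = 10**(dBZ/10)).
# a=200, b=1.6 are the Marshall-Palmer constants.

def zr_rain_rate(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)     # dBZ -> linear reflectivity factor Z
    return (z / a) ** (1.0 / b)  # invert Z = a * R**b, in mm/h


rates = {dbz: zr_rain_rate(dbz) for dbz in (20, 30, 40)}
```

A machine learning QPE model replaces this fixed power law with a function learned from polarization radar variables, which is why it can adapt better to heavy rainfall events.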


Author(s):  
Z. Neili ◽  
M. Fezari ◽  
A. Redjati

The acquisition of breath sound (BS) signals from the human respiratory system with an electronic stethoscope provides prominent information that helps doctors diagnose and classify pulmonary diseases. Unfortunately, BS signals, like other biological signals, are non-stationary in nature owing to the variation of lung volume, and this nature makes it difficult to analyze them and discriminate between several diseases. In this study, we focused on comparing the ability of the extreme learning machine (ELM) and k-nearest neighbour (K-nn) machine learning algorithms to classify adventitious and normal breath sounds. To do so, empirical mode decomposition (EMD), a method rarely used in breath sound analysis, was applied to the BS signals. After the EMD decomposition of the signals into Intrinsic Mode Functions (IMFs), the Hjorth descriptor (Activity) and Permutation Entropy (PE) features were extracted from each IMF and combined for the classification stage. The study found that the combination of features (Activity and PE) yielded accuracies of 90.71% and 95% using ELM and K-nn respectively in binary classification (normal and abnormal breath sounds), and 83.57% and 86.42% in multiclass classification (five classes).
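The two features extracted from each IMF have compact definitions: Hjorth Activity is the signal's variance, and Permutation Entropy is the Shannon entropy of the ordinal patterns of embedded subsequences. A minimal sketch follows; the embedding order and delay are illustrative choices, and the two test signals are synthetic.

```python
# Hjorth Activity (variance) and Permutation Entropy of a 1-D signal.
# order/delay values and the example signals are illustrative.
import math
from collections import Counter
from statistics import pvariance


def activity(x):
    return pvariance(x)


def permutation_entropy(x, order=3, delay=1, normalize=True):
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + j * delay] for j in range(order)]
        # ordinal pattern = argsort of the window
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    pe = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    if normalize:
        pe /= math.log2(math.factorial(order))  # scale into [0, 1]
    return pe


ramp = list(range(50))                        # single ordinal pattern
wiggly = [(-1) ** i * (i % 7) for i in range(50)]  # many ordinal patterns
```

A monotonic signal has a single ordinal pattern and therefore zero PE, while an oscillating one scores higher; that contrast is what makes PE useful for separating normal from adventitious sounds.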


2021 ◽  
Author(s):  
Igor Miranda ◽  
Gildeberto Cardoso ◽  
Madhurananda Pahar ◽  
Gabriel Oliveira ◽  
Thomas Niesler

Predicting the need for hospitalization due to COVID-19 may help patients to seek timely treatment and assist health professionals to monitor cases and allocate resources. We investigate the use of machine learning algorithms to predict the risk of hospitalization due to COVID-19 using the patient's medical history and self-reported symptoms, regardless of the period in which they occurred. Three datasets containing information regarding 217,580 patients from three different states in Brazil have been used. Decision trees, neural networks, and support vector machines were evaluated, achieving accuracies between 79.1% and 84.7%. Our analysis shows that better performance is achieved in Brazilian states ranked more highly in terms of the official human development index (HDI), suggesting that health facilities with better infrastructure generate data that is less noisy. One of the models developed in this study has been incorporated into a mobile app that is available for public use.
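The simplest member of the decision-tree family evaluated above is a one-split stump over yes/no symptom flags, which is enough to show the fitting procedure. The symptom names and records below are invented for illustration; the study's real features come from medical history and self-reported symptoms.

```python
# Decision stump over boolean features: pick the single feature whose
# value (possibly inverted) best predicts the hospitalization label.
# Feature names and records are hypothetical.

def stump_fit(records):
    """records: list of (features: dict of bool, hospitalized: bool)."""
    best = None
    for feat in records[0][0]:
        hits = sum(r[feat] == y for r, y in records)      # feature agrees with label
        acc = max(hits, len(records) - hits) / len(records)
        pred_if_true = hits >= len(records) - hits        # invert if it helps
        if best is None or acc > best[1]:
            best = (feat, acc, pred_if_true)
    feat, _, pred_if_true = best
    return lambda r: r[feat] if pred_if_true else not r[feat]


data = [
    ({"fever": True,  "dyspnea": True},  True),
    ({"fever": True,  "dyspnea": False}, False),
    ({"fever": False, "dyspnea": True},  True),
    ({"fever": False, "dyspnea": False}, False),
]
predict = stump_fit(data)  # here dyspnea splits the toy labels perfectly
```

A full decision tree recurses this split selection on each branch; the study's models also weigh many more features than this two-flag toy.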


2015 ◽  
Vol 4 (1) ◽  
pp. 148
Author(s):  
Nahid Khorashadizade ◽  
Hassan Rezaei

<p>Hepatitis disease is caused by liver injury, and rapid diagnosis prevents its progression to cirrhosis of the liver. Data mining is a new branch of science that helps physicians with proper decision making. In data mining, feature reduction and machine learning algorithms are useful for reducing the complexity of the problem and for disease diagnosis, respectively. In this study, a new algorithm is proposed for hepatitis diagnosis based on Principal Component Analysis (PCA) and the Error Minimized Extreme Learning Machine (EMELM). The algorithm includes two stages: in the feature reduction phase, records with missing values were deleted and the hepatitis dataset was normalized to the [0,1] range, after which principal component analysis was applied for feature reduction. In the classification phase, the reduced dataset is classified using EMELM. To evaluate the algorithm, the hepatitis disease dataset from the UCI Machine Learning Repository (University of California) was selected. The features of this dataset were reduced from 19 to 6 using PCA, and the accuracy on the reduced dataset was obtained using EMELM. The results revealed that the proposed hybrid intelligent diagnosis system reached higher classification accuracy in a shorter time compared with other methods.</p>
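The two-stage pipeline (normalize, reduce features, then classify) can be sketched compactly. Two simplifications are made for illustration: variance ranking of normalized columns stands in for PCA, and the classifier is omitted; neither stand-in is the paper's actual method, and the toy columns are invented.

```python
# Stage 1 of a PCA-style pipeline: normalize feature columns to [0,1],
# then keep the most informative columns. Variance ranking here is a
# simplified stand-in for PCA; the example columns are made up.
from statistics import pvariance


def normalize(columns):
    """Scale each feature column to the [0,1] range."""
    out = []
    for col in columns:
        lo, hi = min(col), max(col)
        out.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col])
    return out


def reduce_features(columns, keep):
    """Keep the `keep` highest-variance columns (stand-in for PCA)."""
    ranked = sorted(range(len(columns)),
                    key=lambda i: pvariance(columns[i]), reverse=True)
    return [columns[i] for i in sorted(ranked[:keep])]


cols = [
    [5.0, 5.0, 5.0, 5.0],  # constant -> zero variance, dropped
    [1.0, 9.0, 2.0, 8.0],  # spread across the range, kept
    [3.0, 3.0, 3.0, 4.0],  # one outlier, lower variance, dropped
    [0.0, 4.0, 0.0, 4.0],  # two extremes, highest variance, kept
]
reduced = reduce_features(normalize(cols), keep=2)
```

Stage 2 would feed `reduced` to the EMELM classifier; the paper's full pipeline also deletes records with missing values before normalization.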

