Critical Healthcare Assessment using WBAN and SVM

Author(s):  
Tambe Sagar B ◽  
Patil Kunal A ◽  
Bhavare Pankaj C ◽  
Kendre Govind L ◽  
...  

Today, awareness of the need for good healthcare facilities is increasing in India. But as awareness increases, it also strains the current healthcare infrastructure, as patients expect more secure treatment around the clock. There is therefore a need for remote assessment of patient health at all times using IoT devices. However, these devices still need to be monitored by a health worker in a hospital, and because a human health worker cannot watch the devices 24x7, decisions can be delayed and errors can arise. To reduce this dependence on human decision-making, technologies such as WBAN, cloud computing, and machine learning have to be used together to make health decisions about a patient with less human interaction. We are therefore designing a project in which the healthcare of a patient can be monitored extensively using a WBAN. In the first part of the project, we design an IoT device using an Arduino and an ESP8266 Wi-Fi module. The sensors connected to the Arduino include a pulse sensor, a temperature sensor, etc. The sensors transfer data from the patient to a server over Wi-Fi via the ESP8266, forming the WBAN network. The server then applies an SVM machine learning algorithm to the sensor readings and classifies them into two categories, safe and unsafe. A custom-made training dataset is used to train the SVM. If unsafe readings are found, the system sends a message to the concerned doctor and uploads the readings to the cloud. On receiving the alert, the doctor can view the readings in the Android app designed for the project and take a decision on the condition of the patient. For the project we use the Google Cloud Platform as our cloud provider, which is free to use. Thus, using our project, a doctor can monitor a patient remotely from anywhere, and the system helps in making decisions on behalf of the doctor.
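As a rough illustration of the server-side classification step described above, the sketch below trains an SVM on a tiny, made-up set of pulse and temperature readings and labels a new reading as safe or unsafe. It assumes scikit-learn and invented threshold values; it is not the project's actual training dataset or code.

```python
# Minimal sketch: server-side SVM classification of WBAN sensor readings into
# "safe"/"unsafe". The feature layout and example values are illustrative
# assumptions, not the paper's actual training dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [pulse_bpm, body_temp_celsius]; labels: 0 = safe, 1 = unsafe
X_train = np.array([
    [72, 36.8], [80, 37.0], [65, 36.5], [90, 37.2],    # safe examples
    [130, 39.5], [45, 35.0], [140, 40.1], [50, 34.8],  # unsafe examples
])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

def classify_reading(pulse_bpm, temp_c):
    """Return 'unsafe' if the SVM flags the reading, else 'safe'."""
    label = model.predict([[pulse_bpm, temp_c]])[0]
    return "unsafe" if label == 1 else "safe"

# Example: a new reading arriving from the ESP8266 over Wi-Fi
print(classify_reading(125, 39.2))  # likely 'unsafe'
```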

Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1549
Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder ◽  
Aletha B. Carson ◽  
Christian Junge ◽  
David E. Allen ◽  
...  

Collar-mounted canine activity monitors can use accelerometer data to estimate dog activity levels, step counts, and distance traveled. With recent advances in machine learning and embedded computing, much more nuanced and accurate behavior classification has become possible, giving these affordable consumer devices the potential to improve the efficiency and effectiveness of pet healthcare. Here, we describe a novel deep learning algorithm that classifies dog behavior at sub-second resolution using commercial pet activity monitors. We built machine learning training databases from more than 5000 videos of more than 2500 dogs and ran the algorithms in production on more than 11 million days of device data. We then surveyed project participants representing 10,550 dogs, who provided 163,110 event responses to validate real-world detection of eating and drinking behavior. The resultant algorithm displayed a sensitivity and specificity of 0.949 and 0.999, respectively, for detecting drinking behavior, and of 0.988 and 0.983 for eating behavior. We also demonstrated detection of licking (0.772, 0.990), petting (0.305, 0.991), rubbing (0.729, 0.996), scratching (0.870, 0.997), and sniffing (0.610, 0.968). We show that the devices’ position on the collar had no measurable impact on performance. In production, users reported a true positive rate of 95.3% for eating (among 1514 users) and of 94.9% for drinking (among 1491 users). The study demonstrates the accurate detection of important health-related canine behaviors using a collar-mounted accelerometer. We trained and validated our algorithms on a large and realistic training dataset, and we assessed and confirmed accuracy in production via user validation.
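The reported sensitivity and specificity values follow the usual confusion-matrix definitions. The short sketch below shows the calculation on made-up counts; the numbers are purely illustrative and are not the study's validation data.

```python
# Illustrative computation of sensitivity and specificity from a confusion
# matrix. The counts below are invented for demonstration only.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Hypothetical drinking-detection counts
sens, spec = sensitivity_specificity(tp=949, fn=51, tn=9990, fp=10)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```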


2021 ◽  
Vol 11 (9) ◽  
pp. 3866
Author(s):  
Jun-Ryeol Park ◽  
Hye-Jin Lee ◽  
Keun-Hyeok Yang ◽  
Jung-Keun Kook ◽  
Sanghee Kim

This study aims to predict the compressive strength of concrete using a machine-learning algorithm with linear regression analysis and to evaluate its accuracy. The open-source software library TensorFlow was used to develop the machine-learning algorithm. In the machine-learning algorithm, a total of seven variables were set: water, cement, fly ash, blast furnace slag, sand, coarse aggregate, and coarse aggregate size. A total of 4297 concrete mixtures with measured compressive strengths were employed to train and test the machine-learning algorithm. Of these, 70% were used for training, and 30% were utilized for verification. For verification, the research was conducted by classifying the mixtures into three cases: the case where the machine-learning algorithm was trained using all the data (Case-1), the case where the machine-learning algorithm was trained while maintaining the same number of training data for each strength range (Case-2), and the case where the machine-learning algorithm was trained after creating subcases for each strength range (Case-3). The results indicated that the error percentages of Case-1 and Case-2 did not differ significantly. The error percentage of Case-3 was far smaller than those of Case-1 and Case-2. Therefore, it was concluded that the range of the training data for concrete compressive strength is as important as the amount of training data for accurately predicting concrete compressive strength using a machine-learning algorithm.
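For readers unfamiliar with the setup, a minimal sketch of a TensorFlow linear-regression model over the seven mixture variables is shown below. The placeholder data, scaling, and hyperparameters are assumptions, not the study's actual configuration.

```python
# Minimal sketch: a TensorFlow linear-regression model over seven mixture
# variables with a 70/30 train/verification split. Data and hyperparameters
# are illustrative placeholders.
import numpy as np
import tensorflow as tf

# X columns: [water, cement, fly_ash, slag, sand, coarse_agg, coarse_agg_size]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 7)).astype("float32")  # placeholder mixtures
y = X @ np.array([0.2, 0.5, 0.1, 0.15, 0.05, 0.3, 0.1], dtype="float32") + 10.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(7,)),
    tf.keras.layers.Dense(1),  # a single linear layer is linear regression
])
model.compile(optimizer="adam", loss="mse")

# 70% of the data for training, 30% held out for verification
split = int(0.7 * len(X))
model.fit(X[:split], y[:split], epochs=50, verbose=0)
print("verification MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```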


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


2021 ◽  
Author(s):  
Marian Popescu ◽  
Rebecca Head ◽  
Tim Ferriday ◽  
Kate Evans ◽  
Jose Montero ◽  
...  

Abstract This paper presents advancements in machine learning and cloud deployment that enable rapid and accurate automated lithology interpretation. A supervised machine learning technique is described that enables rapid, consistent, and accurate lithology prediction alongside quantitative uncertainty from large wireline or logging-while-drilling (LWD) datasets. To leverage supervised machine learning, a team of geoscientists and petrophysicists made detailed lithology interpretations of wells to generate a comprehensive training dataset. Lithology interpretations were based on applying deterministic cross-plotting by utilizing and combining various raw logs. This training dataset was used to develop a model and test a machine learning pipeline. The pipeline was applied to a dataset previously unseen by the algorithm, to predict lithology. A quality checking process was performed by a petrophysicist to validate new predictions delivered by the pipeline against human interpretations. Confidence in the interpretations was assessed in two ways. First, the prior probability was calculated, a measure of confidence that the input data are recognized by the model. Second, the posterior probability was calculated, which quantifies the likelihood that a specified depth interval comprises a given lithology. The supervised machine learning algorithm ensured that the wells were interpreted consistently by removing interpreter biases and inconsistencies. The scalability of cloud computing enabled a large log dataset to be interpreted rapidly; >100 wells were interpreted consistently in five minutes, yielding >70% lithological match to the human petrophysical interpretation. Supervised machine learning methods have strong potential for classifying lithology from log data because: 1) they can automatically define complex, non-parametric, multi-variate relationships across several input logs; and 2) they allow classifications to be quantified confidently. Furthermore, this approach captured the knowledge and nuances of an interpreter's decisions by training the algorithm using human-interpreted labels. In the hydrocarbon industry, the quantity of generated data is predicted to increase by >300% between 2018 and 2023 (IDC, Worldwide Global DataSphere Forecast, 2019–2023). Additionally, the industry holds vast legacy data. This supervised machine learning approach can unlock the potential of some of these datasets by providing consistent lithology interpretations rapidly, allowing resources to be used more effectively.
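As an illustration of how posterior lithology probabilities per depth interval can be obtained from a supervised classifier, the sketch below uses a random forest with predict_proba. The classifier choice, log names, and lithology labels are assumptions; the paper does not specify its exact model here.

```python
# Sketch: posterior lithology probabilities per depth interval from a
# supervised classifier. The random-forest choice, synthetic logs, and
# labels are illustrative assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data: rows = depth samples, columns = raw logs (e.g. GR, RHOB, NPHI)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 3))
y_train = rng.choice(["sandstone", "shale", "limestone"], size=500)  # human labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Posterior probability that each unseen depth interval is a given lithology
X_new = rng.normal(size=(5, 3))
posterior = clf.predict_proba(X_new)
for depth_idx, probs in enumerate(posterior):
    best = clf.classes_[np.argmax(probs)]
    print(f"interval {depth_idx}: {best} (p={probs.max():.2f})")
```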


Author(s):  
Ladly Patel ◽  
Kumar Abhishek Gaurav

In today's world, a huge amount of data is available. All the available data are analyzed to extract information, and this data is then used to train machine learning algorithms. Machine learning is a subfield of artificial intelligence in which machines are trained with data and then predict results. Machine learning is being used in healthcare, image processing, marketing, etc. The aim of machine learning is to reduce the programmer's workload of writing complex code and to decrease human interaction with systems. The machine learns from past data and then predicts the desired output. This chapter describes machine learning in brief, covering different machine learning algorithms with examples and machine learning frameworks such as TensorFlow and Keras. The limitations of machine learning and various applications of machine learning are discussed. This chapter also describes how to identify features in machine learning data.
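A minimal example of the train-then-predict workflow the chapter describes, using the Keras API it mentions, might look like the following. The data and architecture are invented for demonstration.

```python
# Minimal illustration of the train-then-predict workflow: a machine is
# trained on past data and then predicts outputs for new inputs. Data and
# architecture are made up for demonstration.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 4).astype("float32")   # past data (features)
y = (X.sum(axis=1) > 2.0).astype("float32")    # desired output (labels)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)          # the machine learns from past data

print(model.predict(X[:3], verbose=0))         # and then predicts on new inputs
```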


Author(s):  
Shanthi Thangam Manukumar ◽  
Vijayalakshmi Muthuswamy

With the development of edge devices and mobile devices, fast, authenticated access to networks is necessary and important. Fog computing is an efficient way to make edge and mobile devices smart and fast and to provide better quality of service (QoS). Fog computing provides a way to handle resource provisioning, support service providers, deliver fast response times, and offer a good solution for mobile network traffic. In this chapter, the proposed method handles fog resource management using an efficient offloading mechanism. Offloading decisions are made using machine learning prediction and the KNN algorithm, which identifies the nearest fog nodes to offload to, as sketched below. The proposed method minimizes energy consumption and latency and improves QoS for edge devices, IoT devices, and mobile devices.
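A minimal sketch of the nearest-fog-node lookup might look like the following, using scikit-learn's NearestNeighbors as a stand-in for the KNN step. The node coordinates and the 2-D feature space are illustrative assumptions.

```python
# Sketch: choosing the nearest fog node for offloading with a k-nearest-
# neighbour lookup. Node coordinates and the simple 2-D feature space are
# illustrative assumptions, not the chapter's actual model.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Known fog node locations (load, latency, etc. could be added as features)
fog_nodes = np.array([
    [0.0, 0.0],   # node 0
    [5.0, 1.0],   # node 1
    [2.0, 7.0],   # node 2
])

nn = NearestNeighbors(n_neighbors=1).fit(fog_nodes)

# An edge/mobile device at this position asks where to offload
device_pos = np.array([[4.2, 1.5]])
dist, idx = nn.kneighbors(device_pos)
print(f"offload to fog node {idx[0][0]} at distance {dist[0][0]:.2f}")
```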


2021 ◽  
Vol 11 (3) ◽  
pp. 7273-7278
Author(s):  
M. Anwer ◽  
M. U. Farooq ◽  
S. M. Khan ◽  
W. Waseemullah

Many researchers have examined the risks that Internet of Things (IoT) devices impose on big companies and smart towns. Due to the high adoption of IoT, its characteristics, inherent mobility, and standardization limitations, smart mechanisms capable of automatically detecting suspicious activity on IoT devices connected to local networks are needed. With the increase in IoT devices connected through the internet, the volume of web traffic has increased. Due to this change, attack detection through common methods and old data processing techniques is now obsolete. Detecting attacks on the IoT and identifying malicious traffic in the early stages is a very challenging problem due to the increasing size of network traffic. In this paper, a framework is recommended for the detection of malicious network traffic. The framework uses three popular classification-based malicious network traffic detection methods, namely Support Vector Machine (SVM), Gradient Boosted Decision Trees (GBDT), and Random Forest (RF), with the RF supervised machine learning algorithm achieving far better accuracy (85.34%). The NSL-KDD dataset was used in the recommended framework, and performance in terms of training time, prediction time, specificity, and accuracy was compared.
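A hedged sketch of the three-classifier comparison is shown below, using synthetic data in place of the preprocessed NSL-KDD features; the resulting accuracy values will not match the paper's.

```python
# Sketch of the SVM / GBDT / RF comparison described above. Synthetic data
# stands in for the preprocessed NSL-KDD features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(),
    "GBDT": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy={acc:.4f}")
```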


In a large distributed virtualized environment, predicting the alerting source from its text seems to be a daunting task. This paper explores the option of using a machine learning algorithm to solve this problem. Unfortunately, our training dataset is highly imbalanced: 96% of the alerting data is reported by 24% of the alerting sources. This is the expected dataset in any live distributed virtualized environment, where newer device versions will have relatively few alerts compared to older devices. Any classification effort with such an imbalanced dataset presents a different set of challenges compared to binary classification. This type of skewed data distribution makes conventional machine learning less effective, especially when predicting the minority device-type alerts. Our challenge is to build a robust model that can cope with this imbalanced dataset and achieve a relatively high level of prediction accuracy. This research work started with traditional regression and classification algorithms using a bag-of-words model. Then word2vec and doc2vec models were used to represent the words in vector form, which preserves the semantic meaning of the sentence. With this approach, alert texts with similar messages have similar vector representations. This vectorized alerting text was used with Logistic Regression for model building. This yields better accuracy, but the model is relatively complex and demands more computational resources. Finally, a simple neural network was used for this multi-class text classification problem using the Keras and TensorFlow libraries. A simple two-layer neural network yielded 99% accuracy, even though our training dataset was not balanced. This paper goes through a qualitative evaluation of the different machine learning algorithms and their respective results. Finally, the two-layer deep learning algorithm is selected as the final solution, since it takes relatively fewer resources and less time while giving better accuracy values.
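A minimal sketch of a two-layer Keras network for multi-class alert-text classification, in the spirit of the approach described above, follows. The sample alerts, vectorization, vocabulary size, and layer widths are illustrative assumptions.

```python
# Sketch: a simple two-layer Keras network for multi-class alert-text
# classification. The sample alerts, bag-of-words vectorization, and layer
# widths are illustrative assumptions, not the paper's actual setup.
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import CountVectorizer

alerts = ["disk latency high on datastore", "vm cpu usage exceeded threshold",
          "network packet drop detected", "disk space low on datastore"]
labels = np.array([0, 1, 2, 0])            # alerting-source classes
num_classes = 3

X = CountVectorizer().fit_transform(alerts).toarray().astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=30, verbose=0)
print(model.predict(X[:1], verbose=0))     # class probabilities for one alert
```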


2021 ◽  
Author(s):  
Jingyuan Wang ◽  
Xiujuan Chen ◽  
Yueshuai Pan ◽  
Kai Chen ◽  
Yan Zhang ◽  
...  

Abstract Purpose: To develop and verify an early prediction model of gestational diabetes mellitus (GDM) using machine learning algorithms. Methods: The dataset was collected from a pregnancy cohort study in eastern China from 2017 to 2019. It was randomly divided into 75% as the training dataset and 25% as the test dataset using the train_test_split function. Based on Python, four classic machine learning algorithms and a New-Stacking algorithm were first trained on the training dataset and then verified on the test dataset. The four models were Logistic Regression (LR), Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The sensitivity, specificity, accuracy, and area under the Receiver Operating Characteristic curve (AUC) were used to analyse the performance of the models. Results: Valid information from a total of 2811 pregnant women was obtained. The accuracies of the models ranged from 80.09% to 86.91% (RF), sensitivities ranged from 63.30% to 81.65% (SVM), specificities ranged from 79.38% to 97.53% (RF), and AUCs ranged from 0.80 to 0.82 (New-Stacking). Conclusion: This paper successfully constructed a New-Stacking model, which showed better performance in specificity, accuracy, and AUC. However, because the SVM model achieved the highest sensitivity, the SVM model is recommended as the prediction model for clinical use.
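The sketch below illustrates the 75/25 train_test_split and a generic stacking ensemble over the four named base models. Because the paper's New-Stacking design is not detailed here, scikit-learn's standard StackingClassifier is used as a stand-in, with synthetic data.

```python
# Sketch of the 75/25 split and a generic stacking ensemble over LR, RF, ANN,
# and SVM. Synthetic data and scikit-learn's standard StackingClassifier stand
# in for the cohort data and the paper's New-Stacking design.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2811, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("LR", LogisticRegression(max_iter=1000)),
        ("RF", RandomForestClassifier()),
        ("ANN", MLPClassifier(max_iter=1000)),
        ("SVM", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```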


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 279 ◽  
Author(s):  
Bambang Susilo ◽  
Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to the internet is increasing sharply. In particular, Internet of Things (IoT) devices have become a part of everyday human life. However, some challenges are increasing, and their solutions are not well defined. More and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model could increase accuracy so that the mitigation of attacks on an IoT network is as effective as possible.
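A minimal sketch of a deep-learning DoS detector in the style described is given below. Synthetic flow features stand in for the benchmark dataset, and the architecture is an assumption rather than the paper's model.

```python
# Minimal sketch of a deep-learning DoS detector. Synthetic flow features and
# a toy labeling rule stand in for a real benchmark dataset.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10)).astype("float32")     # flow-level features
y = (X[:, 0] + X[:, 3] > 1.0).astype("float32")       # 1 = DoS, 0 = benign (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=10, verbose=0)
print("test accuracy:", model.evaluate(X_te, y_te, verbose=0)[1])
```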

