Deep Physiological Model for Blood Glucose Prediction in T1DM Patients

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3896
Author(s):  
Mario Munoz-Organero

Accurate estimations of near-future blood glucose levels are crucial for Type 1 Diabetes Mellitus (T1DM) patients, who must react in time to avoid hypo- and hyperglycemic episodes. Accurate blood glucose predictions are also the basis for control algorithms in glucose-regulating systems such as the artificial pancreas. Numerous research studies have already been conducted to provide predictions of blood glucose levels, differing in the input signals and underlying models used. These models fall into two major families: those based on tuning physiological-metabolic glucose models and those based on learning glucose evolution patterns with machine learning techniques. This paper reviews the state of the art in blood glucose prediction for T1DM patients and proposes, implements, validates and compares a new hybrid model that decomposes a deep machine learning model so as to mimic the metabolic behavior of physiological blood glucose models. The differential equations for carbohydrate and insulin absorption in physiological models are modeled using a Recurrent Neural Network (RNN) implemented with Long Short-Term Memory (LSTM) cells. The results show Root Mean Square Error (RMSE) values under 5 mg/dL for simulated patients and under 10 mg/dL for real patients.
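
A minimal sketch (PyTorch; all names and shapes are illustrative assumptions, not the paper's code) of the hybrid idea: separate LSTM blocks stand in for the carbohydrate- and insulin-absorption compartments of a physiological model, and their states are fused with the current reading to predict near-future glucose.

```python
import torch
import torch.nn as nn

class PhysiologicalLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # One LSTM per "compartment": carbohydrate absorption and insulin absorption.
        self.carb_lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.insulin_lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Fuse compartment states with the latest CGM reading into a glucose estimate.
        self.head = nn.Linear(2 * hidden + 1, 1)

    def forward(self, carbs, insulin, last_glucose):
        # carbs, insulin: (batch, seq_len, 1); last_glucose: (batch, 1)
        _, (h_c, _) = self.carb_lstm(carbs)
        _, (h_i, _) = self.insulin_lstm(insulin)
        fused = torch.cat([h_c[-1], h_i[-1], last_glucose], dim=1)
        return self.head(fused)  # predicted glucose (mg/dL) at the horizon

model = PhysiologicalLSTM()
carbs = torch.rand(8, 24, 1)            # 24 five-minute steps of carb intake (synthetic)
insulin = torch.rand(8, 24, 1)          # matching insulin doses (synthetic)
glucose_now = torch.rand(8, 1) * 180
print(model(carbs, insulin, glucose_now).shape)  # torch.Size([8, 1])
```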

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate analysis and prediction of time-dependent data, focusing on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) and Support Vector Regression (SVR). By treating close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors, we show that prediction accuracy is improved.
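
A minimal sketch of this setup, assuming daily OHLCV data in a pandas DataFrame with the six predictor columns named above (the synthetic frame is a stand-in for Yahoo Finance data); it fits one of the three compared models (SVR) to predict the next day's close.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

cols = ["Open", "High", "Low", "Close", "Adj Close", "Volume"]
df = pd.DataFrame(np.random.rand(300, 6), columns=cols)  # stand-in for real quotes

X = df[cols].iloc[:-1].values               # today's features ...
y = df["Close"].shift(-1).iloc[:-1].values  # ... predict tomorrow's close

split = int(0.8 * len(X))                   # chronological split, no shuffling
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```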


Author(s):  
Yanxiang Yu ◽  
◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well tie. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock-physics relations. Starting on March 1, 2020, and concluding on May 7, 2020, the SPWLA PDDA SIG hosted a contest aiming to predict the DTC and DTS logs from seven "easy-to-acquire" conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, consisting of the conventional "easy-to-acquire" logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as two sonic logs (DTC and DTS) as the target. A separate data set of 11,089 samples from a fourth well was then used as the blind test set. The prediction performance of the model was evaluated using the root mean square error (RMSE) metric:

RMSE = \sqrt{\frac{1}{2m}\sum_{i=1}^{m}\left[\left(DTC_{pred}^{i}-DTC_{true}^{i}\right)^{2}+\left(DTS_{pred}^{i}-DTS_{true}^{i}\right)^{2}\right]}

In the benchmark model (Yu et al., 2020), we used a Random Forest regressor and applied minimal preprocessing to the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat our benchmark model's RMSE score by 27%. In this paper, we review these five solutions, including their preprocessing techniques and machine-learning models, among them neural networks, long short-term memory (LSTM), and ensemble trees. We found that data cleaning and clustering were critical for improving performance in all models.
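
A minimal sketch of the contest metric as reconstructed above: a single RMSE pooled over both target logs (DTC and DTS) across m samples. The synthetic arrays are placeholders sized like the blind test set.

```python
import numpy as np

def joint_rmse(dtc_pred, dtc_true, dts_pred, dts_true):
    """RMSE pooled over the two sonic targets, per the contest equation."""
    m = len(dtc_true)
    sq = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return np.sqrt(sq.sum() / (2 * m))

rng = np.random.default_rng(0)
dtc_true = rng.uniform(50, 120, 11089)   # blind-test size from the abstract
dts_true = rng.uniform(80, 250, 11089)
# A constant 5-unit error on both logs gives a joint RMSE of exactly 5.0:
print(joint_rmse(dtc_true + 5, dtc_true, dts_true + 5, dts_true))
```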


Author(s):  
KM Jyoti Rani

Diabetes is a chronic disease with the potential to cause a worldwide health care crisis. According to the International Diabetes Federation, 382 million people are living with diabetes worldwide; by 2035, this number is expected to nearly double to 592 million. Diabetes is caused by elevated levels of blood glucose, which produce the symptoms of frequent urination, increased thirst, and increased hunger. Diabetes is one of the leading causes of blindness, kidney failure, amputation, heart failure and stroke. When we eat, our body turns food into sugars, or glucose, and the pancreas is supposed to release insulin. Insulin serves as a key that opens our cells, allowing glucose to enter and be used for energy. With diabetes, this system does not work. Type 1 and type 2 diabetes are the most common forms of the disease, but there are also other kinds, such as gestational diabetes, which occurs during pregnancy. Machine learning is an emerging scientific field in data science dealing with the ways in which machines learn from experience. The aim of this project is to develop a system that can perform early prediction of diabetes for a patient with higher accuracy by combining the results of different machine learning techniques. The algorithms K-Nearest Neighbour, Logistic Regression, Random Forest, Support Vector Machine and Decision Tree are used; the accuracy of the model under each algorithm is calculated, and the one with the best accuracy is taken as the model for predicting diabetes.
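
A minimal sketch of the selection step described above, assuming a Pima-style diabetes table with an "Outcome" label column (the file path is hypothetical): the five named algorithms are compared by cross-validated accuracy and the best one is kept.

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("diabetes.csv")          # hypothetical path to the dataset
X, y = df.drop(columns="Outcome"), df["Outcome"]

models = {
    "KNN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
best = max(scores, key=scores.get)        # model with the highest mean accuracy
print(scores, "-> selected:", best)
```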


Author(s):  
Khaled Eskaf ◽  
Tim Ritchings ◽  
Osama Bedawy

Diabetes mellitus is one of the most common chronic diseases. The number of cases of diabetes in the world is likely to more than double in 30 years: from 115 million in 2000 to 284 million in 2030. This chapter is concerned with helping diabetic patients to manage themselves by developing a computer system that predicts their Blood Glucose Level (BGL) 30 minutes ahead on the basis of their current levels, so that they can administer insulin in time. This will enable the diabetic patient to continue living a normal daily life, as much as is possible. The prediction of BGLs from current levels has become feasible through the advent of Continuous Glucose Monitoring (CGM) systems, which sample patients' BGLs typically every 5 minutes, and computer systems that can process and analyse these samples. The approach taken in this chapter uses machine-learning techniques, specifically Genetic Algorithms (GA), to learn BGL patterns over an hour and the resulting value 30 minutes later, without questioning patients about their food intake and activities. The GAs were investigated using the raw BGLs as input, and using metadata derived from a Diabetic Dynamic Model of BGLs supplemented by the changes in patients' BGLs over the previous hour. The results obtained in a preliminary study, including 4 virtual patients taken from the AIDA diabetes simulation software and 3 volunteers using the DexCom SEVEN system, show that the metadata approach gives more accurate predictions. Online learning, whereby new BGL patterns were incorporated into the prediction system as they were encountered, improved the results further.
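
A minimal sketch of the windowing implied by the abstract: CGM samples arrive roughly every 5 minutes, so one hour of history is 12 readings and the prediction target sits 6 samples (30 minutes) ahead. The GA-based learner itself is not reproduced here; the CGM trace is synthetic.

```python
import numpy as np

def make_windows(bgl, history=12, horizon=6):
    """Turn a CGM trace into (one-hour pattern, BGL 30 minutes later) pairs."""
    X, y = [], []
    for start in range(len(bgl) - history - horizon + 1):
        X.append(bgl[start:start + history])
        y.append(bgl[start + history + horizon - 1])
    return np.array(X), np.array(y)

cgm = 120 + 30 * np.sin(np.linspace(0, 12, 288))  # one day of synthetic 5-min samples
X, y = make_windows(cgm)
print(X.shape, y.shape)  # (271, 12) (271,)
```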


Algorithms ◽  
2018 ◽  
Vol 11 (11) ◽  
pp. 170 ◽  
Author(s):  
Zhixi Li ◽  
Vincent Tam

Momentum and reversal effects are important phenomena in stock markets. In academia, relevant studies have been conducted for years. Researchers have attempted to analyze these phenomena using statistical methods and to give some plausible explanations. However, those explanations are sometimes unconvincing. Furthermore, it is very difficult to transfer the findings of these studies to real-world investment trading strategies due to the lack of predictive ability. This paper represents the first attempt to adopt machine learning techniques for investigating the momentum and reversal effects occurring in any stock market. In the study, various machine learning techniques, including the Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron Neural Network (MLP), and Long Short-Term Memory Neural Network (LSTM) were explored and compared carefully. Several models built on these machine learning approaches were used to predict the momentum or reversal effect on the stock market of mainland China, thus allowing investors to build corresponding trading strategies. The experimental results demonstrated that these machine learning approaches, especially the SVM, are beneficial for capturing the relevant momentum and reversal effects, and possibly building profitable trading strategies. Moreover, we propose the corresponding trading strategies in terms of market states to acquire the best investment returns.
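
A minimal sketch, with entirely hypothetical labels and synthetic returns: recent per-period returns are fed to an SVM that predicts whether the next period continues the trend (momentum) or reverses it, in the spirit of the comparison described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.02, (500, 20))   # 20 past-period returns per sample
trend = returns.mean(axis=1)
next_ret = rng.normal(0, 0.02, 500)
# Label 1 = next return continues the trend (momentum), 0 = it reverses.
labels = (np.sign(next_ret) == np.sign(trend)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(returns, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```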


Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 4 ◽  
Author(s):  
Jurgita Kapočiūtė-Dzikienė ◽  
Robertas Damaševičius ◽  
Marcin Woźniak

We describe the sentiment analysis experiments that were performed on the Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial—NBM and Support Vector Machine—SVM) and deep learning (Long Short-Term Memory—LSTM and Convolutional Neural Network—CNN) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on the balanced and full dataset versions. The best deep learning results (reaching 0.706 accuracy) were achieved on the full dataset with CNN applied on top of the FastText embeddings, with replaced emoticons and eliminated diacritics. The traditional machine learning approaches demonstrated the best performance (0.735 accuracy) on the full dataset with the NBM method, replaced emoticons, restored diacritics, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to small datasets.
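
A minimal sketch of the best-performing traditional setup: a multinomial Naive Bayes classifier over unigram counts. Lemmatization, emoticon replacement and diacritic restoration for Lithuanian are assumed to have happened upstream; the comments below are toy examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["geras filmas", "blogas aptarnavimas", "nieko ypatingo"]  # toy comments
labels = ["positive", "negative", "neutral"]

# Unigram counts ("lemma unigrams" if the inputs are lemmatized) + NBM.
nbm = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
nbm.fit(texts, labels)
print(nbm.predict(["labai geras aptarnavimas"]))
```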


2014 ◽  
Vol 4 (5) ◽  
pp. 20140042 ◽  
Author(s):  
Marie Csete ◽  
John Doyle

Blood glucose levels are controlled by well-known physiological feedback loops: high glucose levels promote insulin release from the pancreas, which in turn stimulates cellular glucose uptake. Low blood glucose levels promote pancreatic glucagon release, stimulating glycogen breakdown to glucose in the liver. In healthy people, this control system is remarkably good at maintaining blood glucose in a tight range despite many perturbations to the system imposed by diet and fasting, exercise, medications and other stressors. Type 1 diabetes mellitus (T1DM) results from loss of the insulin-producing cells of the pancreas, the beta cells. These cells serve as both sensor (of glucose levels) and actuator (insulin/glucagon release) in a physiological feedback control loop. Although the idea of rebuilding this feedback loop seems intuitively easy, considerable control mathematics involving multiple types of control schema was necessary to develop an artificial pancreas, which still does not function as well as evolved control mechanisms. Here, we highlight some tools from control engineering used to mimic normal glucose control in an artificial pancreas, and the constraints, trade-offs and clinical consequences inherent in various types of control schemes. T1DM can be viewed as a loss of normal physiologic controls, as can many other disease states. For this reason, we introduce basic concepts of control engineering applicable to understanding the pathophysiology of disease and the development of physiologically based control strategies for treatment.
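
A minimal sketch (not a physiological model; all constants are invented) of the negative feedback the passage describes: a proportional controller releases "insulin" above a setpoint and "glucagon" below it, pulling glucose back toward 90 mg/dL despite a meal disturbance.

```python
setpoint, k = 90.0, 0.05      # target glucose (mg/dL) and proportional gain
glucose = 90.0
trace = []
for t in range(240):                       # simulate 240 one-minute steps
    meal = 2.0 if 30 <= t < 60 else 0.0    # glucose influx from a meal
    control = -k * (glucose - setpoint)    # insulin-like above, glucagon-like below
    glucose += meal + control
    trace.append(glucose)
print(f"peak {max(trace):.0f} mg/dL, final {trace[-1]:.0f} mg/dL")  # rises, then settles near 90
```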


10.6036/10007 ◽  
2021 ◽  
Vol 96 (5) ◽  
pp. 528-533
Author(s):  
XAVIER LARRIVA NOVO ◽  
MARIO VEGA BARBAS ◽  
VICTOR VILLAGRA ◽  
JULIO BERROCAL

Cybersecurity has stood out in recent years with the aim of protecting information systems. Different methods, techniques and tools have been used to exploit the existing vulnerabilities in these systems. It is therefore essential to develop and improve new technologies, including intrusion detection systems, that allow possible threats to be detected. However, the use of these technologies requires highly qualified cybersecurity personnel to analyze the results and reduce the large number of false positives that these technologies present. This generates the need to research and develop new high-performance cybersecurity systems that allow efficient analysis and resolution of these results. This research presents the application of machine learning techniques to classify real traffic in order to identify possible attacks. The study was carried out using machine learning tools applying deep learning algorithms, namely the multilayer perceptron and long short-term memory (LSTM). Additionally, this document presents a comparison between the results obtained by the aforementioned algorithms and by non-deep-learning algorithms: random forest and decision tree. Finally, the results obtained are presented, showing that the LSTM algorithm provides the best results in terms of precision and logarithmic loss.
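
A minimal sketch of the comparison described, with synthetic stand-in traffic features: a deep model (multilayer perceptron) and a non-deep model (random forest) are scored on the same labeled data using precision and logarithmic loss. Feature extraction from real traffic is assumed to have happened upstream.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in traffic features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=500, random_state=0)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          "precision:", precision_score(y_te, pred),
          "log loss:", log_loss(y_te, clf.predict_proba(X_te)))
```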


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jian Jiang ◽  
Fen Zhang

As the planet watches in shock the evolution of the COVID-19 pandemic, new forms of sophisticated, versatile, and extremely difficult-to-detect malware expose society, and especially the global economy, to new risks. Machine learning techniques are playing an increasingly important role in the field of malware identification and analysis. However, due to the complexity of the problem, the training of intelligent systems often proves insufficient for recognizing advanced cyberthreats. The biggest challenge in securing information systems with machine learning methods is to understand the polymorphism and metamorphism mechanisms used by malware developers and how to address them effectively. This work presents an innovative Artificial Evolutionary Fuzzy LSTM Immune System which, by using a heuristic machine learning method that combines evolutionary intelligence, Long Short-Term Memory (LSTM), and fuzzy knowledge, proves able to adequately protect modern information systems from Portable Executable (PE) malware. The main innovation in the technical implementation of the proposed approach is that the machine learning system is trained only on the raw bytes of an executable file to determine whether the file is malicious. The performance of the proposed system was tested on a sophisticated dataset of high complexity, which emerged after extensive research on PE malware and offered a realistic representation of their operating states. The high accuracy of the developed model significantly supports the validity of the proposed method. The final evaluation was carried out with in-depth comparisons to corresponding machine learning algorithms, and it revealed the superiority of the proposed immune system.
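
A minimal sketch (PyTorch; architecture and sizes are illustrative assumptions) of the raw-byte idea: each byte of an executable is embedded and fed through an LSTM whose final state scores the file as malicious or benign. The evolutionary and fuzzy components of the proposed immune system are not reproduced here.

```python
import torch
import torch.nn as nn

class ByteLSTM(nn.Module):
    def __init__(self, embed=16, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(256, embed)   # one entry per possible byte value
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, bytes_batch):             # (batch, seq_len) of byte values 0-255
        _, (h, _) = self.lstm(self.embed(bytes_batch))
        return torch.sigmoid(self.out(h[-1]))   # P(malicious) per file

model = ByteLSTM()
sample = torch.randint(0, 256, (4, 1024))       # first 1 KiB of four (synthetic) files
print(model(sample).squeeze(1))                 # four malice scores in [0, 1]
```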

