Blood Pressure Estimation Using Photoplethysmography Only: Comparison between Different Machine Learning Approaches

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Syed Ghufran Khalid ◽  
Jufen Zhang ◽  
Fei Chen ◽  
Dingchang Zheng

Introduction. Blood pressure (BP) is a key risk factor for cardiovascular diseases, and BP measurement is one of the most useful parameters for their early diagnosis, prevention, and treatment. At present, BP measurement mainly relies on cuff-based techniques that cause inconvenience and discomfort to users. Although some current prototype cuffless BP measurement techniques achieve acceptable overall accuracy, they require both an electrocardiogram (ECG) and a photoplethysmogram (PPG), which makes them unsuitable for truly wearable applications. Therefore, developing a sufficiently accurate cuffless BP estimation algorithm based on a single PPG would be clinically and practically useful. Methods. The University of Queensland vital sign dataset (online database) was accessed to extract raw PPG signals and their corresponding reference BPs (systolic BP and diastolic BP). The database consisted of PPG waveforms from 32 cases, from which 8133 good-quality signal segments (5 s each) were extracted, preprocessed, and normalised in both width and amplitude. The three most significant pulse features (pulse area, pulse rising time, and pulse width at 25% amplitude), together with their corresponding reference BPs, were used to train and test three machine learning algorithms (regression tree, multiple linear regression (MLR), and support vector machine (SVM)). A 10-fold cross-validation was applied to obtain the overall BP estimation accuracy of each of the three algorithms. Their estimation accuracies were further analysed separately for three clinical BP categories (normotensive, hypertensive, and hypotensive). Finally, they were compared with the ISO standard for noninvasive BP device validation (mean difference no greater than 5 mmHg and SD no greater than 8 mmHg). Results.
In terms of overall estimation accuracy, the regression tree achieved the best results for SBP (mean ± SD of difference: −0.1 ± 6.5 mmHg) and DBP (−0.6 ± 5.2 mmHg). MLR and SVM achieved overall mean differences of less than 5 mmHg for both SBP and DBP, but their SDs of difference exceeded 8 mmHg. Regarding estimation accuracy within each BP category, only the regression tree met the ISO standard, and only in the normotensive category (SBP: −1.1 ± 5.7 mmHg; DBP: −0.03 ± 5.6 mmHg); MLR and SVM did not achieve acceptable accuracy in any BP category. Conclusion. This study developed and compared three machine learning algorithms to estimate BP using PPG only and revealed that the regression tree was the best approach, with overall accuracy acceptable under the ISO standard for BP device validation. Furthermore, the regression tree achieved acceptable accuracy only in the normotensive category, suggesting that future algorithm development for BP estimation should be tailored to different BP categories.
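The protocol described above (three pulse features, three regressors, 10-fold cross-validation, ISO-style mean ± SD summary) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature values and the SBP relationship are invented stand-ins for the extracted PPG pulse features.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in for the paper's feature matrix: one row per pulse segment,
# columns for pulse area, pulse rising time, and width at 25% amplitude.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y_sbp = 120 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=500)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "regression tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "MLR": LinearRegression(),
    "SVM": SVR(kernel="rbf"),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y_sbp, cv=cv)
    diff = pred - y_sbp
    # ISO-style summary: mean difference within 5 mmHg, SD within 8 mmHg.
    print(f"{name}: mean diff = {diff.mean():.2f} mmHg, SD = {diff.std():.2f} mmHg")
```

Per-category accuracy would follow by repeating the same summary on the normotensive, hypertensive, and hypotensive subsets of the predictions.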

2020 ◽  
Vol 9 (3) ◽  
pp. 34
Author(s):  
Giovanna Sannino ◽  
Ivanoe De Falco ◽  
Giuseppe De Pietro

Blood pressure is one of the most important physiological parameters of the cardiovascular circulatory system. Several diseases are related to long-term abnormal blood pressure, notably hypertension; therefore, the early detection and assessment of this condition are crucial. Identifying hypertension, and even more so evaluating its risk stratification, with wearable monitoring devices is now more realistic thanks to advances in the Internet of Things, increasingly miniaturised digital sensors, and the development of new signal processing and machine learning algorithms. In this scenario, a suitable biomedical signal is the photoplethysmography (PPG) signal. It can be acquired with a simple, cheap, wearable device and can be used to evaluate several aspects of the cardiovascular system, e.g., detection of abnormal heart rate, respiration rate, blood pressure, and oxygen saturation. In this paper, we consider the Cuff-Less Blood Pressure Estimation Data Set, which contains, among other signals, PPG recordings from a set of subjects together with their blood pressure values, i.e., their hypertension levels. Our aim is to investigate whether machine learning methods applied to these PPG signals can improve the non-invasive classification and evaluation of subjects' hypertension levels. To this end, we employed a wide set of machine learning algorithms, based on different learning mechanisms, and compared their results in terms of classification effectiveness.
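A comparison of classifiers with different learning mechanisms, as the abstract describes, can be sketched along these lines. The features, labels, and the particular three classifiers are illustrative assumptions, not the paper's actual pipeline or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical PPG-derived features and a binary hypertension label
# (0 = normotensive, 1 = hypertensive) on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# Three algorithms with distinct learning mechanisms: linear, ensemble, instance-based.
classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Multi-level risk stratification would use the same loop with a multi-class label in place of the binary one.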


Mathematics ◽  
2021 ◽  
Vol 9 (20) ◽  
pp. 2537
Author(s):  
Luis Rolando Guarneros-Nolasco ◽  
Nancy Aracely Cruz-Ramos ◽  
Giner Alor-Hernández ◽  
Lisbeth Rodríguez-Mazahua ◽  
José Luis Sánchez-Cervantes

Cardiovascular diseases (CVDs) are a leading cause of death globally. In CVDs, the heart is unable to deliver enough blood to other body regions. As effective and accurate diagnosis of CVDs is essential for their prevention and treatment, machine learning (ML) techniques can be used to reliably discern patients suffering from a CVD from those without any heart condition. In particular, machine learning algorithms (MLAs) play a key role in the diagnosis of CVDs through predictive models that identify the main risk factors influencing CVD development. In this study, we analyze the performance of ten MLAs on two datasets for CVD prediction and two for CVD diagnosis. Algorithm performance is analyzed on the top-two and top-four dataset attributes/features with respect to five performance metrics (accuracy, precision, recall, f1-score, and roc-auc) using the train-test split technique and k-fold cross-validation. Our study identifies the top-two and top-four attributes of the CVD datasets by comparing accuracy, determining that they are the most informative for predicting and diagnosing CVD. As our main findings, the ten ML classifiers achieved appropriate classification and predictive accuracy with the top-two attributes, and three main attributes for CVD diagnosis and prediction were identified, such as arrhythmia and tachycardia; hence, these classifiers can be implemented to improve current CVD diagnosis efforts and help patients around the world, especially in regions where medical staff is lacking.
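The evaluation scheme described (select top-k attributes, train-test split, score with five metrics) can be sketched as below. The dataset is synthetic and the single classifier stands in for the study's ten MLAs; the ANOVA-based feature ranking is an assumption, as the abstract does not name its selection method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic CVD-style dataset: 10 attributes, binary diagnosis label.
X, y = make_classification(n_samples=600, n_features=10, n_informative=4,
                           random_state=0)

for k in (2, 4):  # top-two and top-four attributes
    X_k = SelectKBest(f_classif, k=k).fit_transform(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(X_k, y, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    # The study's five metrics.
    print(f"top-{k}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f}")
```

Swapping `train_test_split` for `cross_val_score` with `cv=k` gives the k-fold variant of the same comparison.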


Hypertension ◽  
2020 ◽  
Vol 76 (2) ◽  
pp. 569-576 ◽  
Author(s):  
Kelvin K.F. Tsoi ◽  
Nicholas B. Chan ◽  
Karen K.L. Yiu ◽  
Simon K.S. Poon ◽  
Bryant Lin ◽  
...  

Visit-to-visit blood pressure variability (BPV) has been shown to be a predictor of cardiovascular disease. We aimed to classify BPV levels using different machine learning algorithms. Visit-to-visit blood pressure readings were extracted from the SPRINT study in the United States and the eHealth cohort in Hong Kong (HK cohort). Patients were clustered into low, medium, and high BPV levels using traditional quantile clustering and 5 machine learning algorithms including K-means. Clustering methods were assessed by the Stability Index, and clustering similarities by the Davies-Bouldin Index and Silhouette Index. Cox proportional hazards regression models were fitted to compare the risks of myocardial infarction, stroke, and heart failure. A total of 8133 participants had blood pressure measured an average of 14.7 times over 3.28 years in SPRINT, and 1094 participants had blood pressure measured an average of 165.4 times over 1.37 years in the HK cohort. Quantile clustering assigned one-third of participants to the high BPV level, whereas the machine learning methods assigned only 10% to 27%. Quantile clustering was the most stable method (Stability Index: 0.982 in SPRINT and 0.948 in the HK cohort) with some degree of clustering similarity (Davies-Bouldin Index: 0.752 and 0.764, respectively). K-means was the most stable of the machine learning algorithms (Stability Index: 0.975 and 0.911, respectively) with the lowest clustering similarities (Davies-Bouldin Index: 0.653 and 0.680, respectively). One in 7 participants was classified as having a high BPV level, and these participants showed a higher risk of stroke and heart failure. Machine learning methods can improve BPV classification for better prediction of cardiovascular diseases.
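The core contrast in this abstract (tertile-based quantile clustering versus K-means, scored with the Davies-Bouldin and Silhouette indices) can be sketched on synthetic data. The choice of the within-patient SD of readings as the BPV summary is an assumption for illustration; the study's exact BPV definition and Stability Index computation are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Synthetic visit-to-visit SBP readings: 300 patients x 15 visits, with
# patient-specific variability; BPV summarised as each patient's SD.
rng = np.random.default_rng(2)
readings = 130 + rng.normal(scale=rng.uniform(2, 18, size=(300, 1)),
                            size=(300, 15))
bpv = readings.std(axis=1).reshape(-1, 1)

# Traditional quantile clustering: tertiles -> low / medium / high BPV.
q_labels = np.digitize(bpv.ravel(), np.quantile(bpv, [1 / 3, 2 / 3]))

# K-means with k=3 on the same BPV values.
k_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(bpv)

for name, labels in [("quantile", q_labels), ("k-means", k_labels)]:
    print(f"{name}: DB={davies_bouldin_score(bpv, labels):.3f} "
          f"silhouette={silhouette_score(bpv, labels):.3f}")
```

Note that quantile clustering fixes the group sizes at one-third each, while K-means lets the data determine them, which is exactly the difference in high-BPV group size the study reports.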


2017 ◽  
Vol 7 (1.2) ◽  
pp. 43 ◽  
Author(s):  
K. Sreenivasa Rao ◽  
N. Swapna ◽  
P. Praveen Kumar

Data mining is the process of extracting useful information from large sets of data. It enables users to gain insight into data and make informed decisions from the knowledge mined from databases. The purpose of higher education organizations is to offer superior opportunities to their students. Building on data mining, Educational Data Mining (EDM) is now also considered a powerful tool in the field of education. It offers an effective method for mining a student's performance, based on various parameters, to predict and analyze whether that student will be recruited in the campus placement. Predictions are made using the machine learning algorithms J48, Naïve Bayes, Random Forest, and Random Tree in the Weka tool, and the Multiple Linear Regression, binomial logistic regression, Recursive Partitioning and Regression Tree (rpart), conditional inference tree (ctree), and Neural Network (nnet) algorithms in RStudio. The results obtained from each approach are then compared with respect to performance and accuracy levels through graphical analysis. Based on these results, higher education organizations can offer superior training to their students.
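The comparison the abstract describes can be sketched in Python with scikit-learn analogues of the Weka and R algorithms it names (DecisionTreeClassifier for J48/rpart, MLPClassifier for nnet, and so on). The student features and placement labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression  # binomial logistic regression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier     # analogue of nnet
from sklearn.tree import DecisionTreeClassifier      # analogue of J48 / rpart

# Hypothetical student attributes (marks, attendance, ...) and a binary
# placement outcome (1 = recruited, 0 = not recruited).
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, m in models.items():
    acc = cross_val_score(m, X, y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")
```

Plotting the resulting accuracies as a bar chart would give the graphical comparison the paper describes.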


2021 ◽  
Vol 13 (18) ◽  
pp. 3560
Author(s):  
Xiao Sun ◽  
Yunlin Zhang ◽  
Yibo Zhang ◽  
Kun Shi ◽  
Yongqiang Zhou ◽  
...  

Chromophoric dissolved organic matter (CDOM) is crucial in the biogeochemical and carbon cycles of aquatic environments. However, in inland waters, remotely sensed estimates of CDOM remain challenging due to CDOM's low optical signal and complex optical conditions. Therefore, developing efficient, practical, and robust models to estimate the CDOM absorption coefficient in inland waters is essential for successful water environment monitoring and management. We examined and improved different machine learning algorithms using extensive CDOM measurements and Landsat 8 images covering different trophic states to develop a robust CDOM estimation model. The algorithms were evaluated on 111 Landsat 8 images and 1708 field measurements covering the CDOM light absorption coefficient a(254) from 2.64 to 34.04 m−1. Overall, the four machine learning algorithms achieved more than 70% accuracy for CDOM absorption coefficient estimation. Based on model training, validation, and application to Landsat 8 OLI images, we found that Gaussian process regression (GPR) had higher stability and estimation accuracy (R2 = 0.74, mean relative error (MRE) = 22.2%) than the other models: R2 = 0.75 and MRE = 22.5% for the backpropagation (BP) neural network, R2 = 0.71 and MRE = 24.4% for random forest regression (RFR), and R2 = 0.71 and MRE = 24.4% for support vector regression (SVR). In contrast, the best three empirical models had estimation accuracies of R2 below 0.56. Model accuracies applied to Landsat images of Lake Qiandaohu (oligo-mesotrophic) were better than those of Lake Taihu (eutrophic) because of the more complex optical conditions in eutrophic lakes. Therefore, machine learning algorithms have great potential for CDOM monitoring in inland waters based on large datasets, and our study demonstrates that they can map CDOM spatio-temporal patterns in inland waters.
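A minimal sketch of the model comparison (GPR versus RFR and SVR, scored by R2 and mean relative error) is given below. The synthetic band-ratio predictors and the a(254) relationship are invented for illustration, and the hyperparameters are assumptions, not those tuned in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Hypothetical Landsat 8 OLI band reflectances as predictors of a(254) (m^-1).
rng = np.random.default_rng(4)
X = rng.uniform(0.1, 1.5, size=(400, 4))
y = 5.0 + 6.0 * X[:, 0] / X[:, 1] + rng.normal(scale=1.0, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "GPR": GaussianProcessRegressor(alpha=1.0, normalize_y=True),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(kernel="rbf"),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    # Mean relative error, as reported in the study, in percent.
    mre = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
    print(f"{name}: R2 = {r2_score(y_te, pred):.2f}, MRE = {mre:.1f}%")
```

The backpropagation neural network from the study could be added to the same loop via `sklearn.neural_network.MLPRegressor`.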


2020 ◽  
Vol 7 (10) ◽  
Author(s):  
Yangxiaoyue Liu ◽  
Xiaolin Xia ◽  
Ling Yao ◽  
Wenlong Jing ◽  
Chenghu Zhou ◽  
...  
