Restoration of Remote PPG Signal through Correspondence with Contact Sensor Signal

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5910
Author(s):  
So-Eui Kim ◽  
Su-Gyeong Yu ◽  
Na Hye Kim ◽  
Kun Ha Suh ◽  
Eui Chul Lee

Photoplethysmography (PPG) is an optical measurement technique that detects changes in blood volume in the microvascular layer caused by the pressure generated by the heartbeat. To overcome the inconvenience of contact PPG measurement, remote PPG technology was developed to measure PPG in a non-contact way using a camera. However, the remote PPG signal has a smaller pulsation component than the contact PPG signal, and its shape is blurred, so only heart rate information can be obtained. In this study, we aim to restore the remote PPG to the level of the contact PPG, not only to measure heart rate but also to obtain morphological information. Three models were used for training: support vector regression (SVR), a simple three-layer deep learning model, and a combined SVR + deep learning model. Cosine similarity and the Pearson correlation coefficient were used to evaluate the similarity of signals before and after restoration. The cosine similarity before restoration was 0.921; after restoration, it was 0.975, 0.975, and 0.977 for the SVR, deep learning, and SVR + deep learning models, respectively. The Pearson correlation coefficient was 0.778 before restoration and 0.936, 0.933, and 0.939, respectively, after restoration.
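As a quick illustration of the two similarity metrics used in this study, here is a minimal numpy sketch. The signals are synthetic stand-ins, not the study's data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D signals."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson_corr(a, b):
    """Pearson correlation coefficient between two 1-D signals."""
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical contact vs. remote PPG traces for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
contact = np.sin(2 * np.pi * 1.2 * t)                # ~72 bpm pulse wave
remote = 0.6 * contact + 0.1 * rng.standard_normal(500)  # weaker, noisier copy
print(cosine_similarity(contact, remote), pearson_corr(contact, remote))
```

Note that cosine similarity is sensitive to the signals' DC offsets while Pearson correlation mean-centers both inputs, which is why the paper reports the two metrics separately.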

Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 39
Author(s):  
Zhiyuan Xie ◽  
Shichang Du ◽  
Jun Lv ◽  
Yafei Deng ◽  
Shiyao Jia

Remaining useful life (RUL) prediction indicates the health status of sophisticated equipment, and it requires historical data because of its complexity. Environmental parameters such as vibration and temperature, in both number and complexity, can make the data highly non-linear and prediction tremendously difficult. Conventional machine learning models such as the support vector machine (SVM), random forest, and back-propagation neural network (BPNN), however, have limited capacity to predict accurately. In this paper, a two-phase deep learning model, the attention convolutional forget-gate recurrent network (AM-ConvFGRNET), is proposed for RUL prediction. The first phase, the forget-gate convolutional recurrent network (ConvFGRNET), is based on a one-dimensional analog of long short-term memory (LSTM) that removes all gates except the forget gate and uses chrono-initialized biases. The second phase is an attention mechanism, which enables the model to extract more specific features for generating an output, compensating for ConvFGRNET being a black-box model and improving interpretability. The performance and effectiveness of AM-ConvFGRNET for RUL prediction are validated by comparing it with other machine learning and deep learning methods on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset and a dataset from a ball screw experiment.
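The forget-gate-only recurrence and chrono-initialized biases can be sketched in plain numpy. This is a minimal illustration of the cell structure described above, not the paper's implementation; all weight shapes and the candidate-state form are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def chrono_bias(hidden, t_max):
    """Chrono initialization: forget-gate biases drawn as log(U(1, t_max - 1)),
    spreading the gate's effective memory horizons up to t_max steps."""
    return np.log(rng.uniform(1, t_max - 1, size=hidden))

def fgrnet_step(x, h, Wx, Wh, b_f, Ux):
    """One step of a forget-gate-only recurrent cell: the forget gate f
    decides how much of the previous state survives, and the candidate
    state fills the remainder (no separate input/output gates)."""
    f = 1.0 / (1.0 + np.exp(-(Wx @ x + Wh @ h + b_f)))  # forget gate
    c = np.tanh(Ux @ x)                                  # candidate state
    return f * h + (1.0 - f) * c

hidden, inp, t_max = 8, 4, 100
Wx = rng.standard_normal((hidden, inp)) * 0.1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
Ux = rng.standard_normal((hidden, inp)) * 0.1
b_f = chrono_bias(hidden, t_max)

h = np.zeros(hidden)
for _ in range(t_max):                 # run over a random input sequence
    h = fgrnet_step(rng.standard_normal(inp), h, Wx, Wh, b_f, Ux)
print(h.shape)
```

Positive biases push the sigmoid toward 1 at initialization, so each unit starts out retaining its state over a timescale sampled between 1 and t_max steps.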


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Leow Wei Qin ◽  
Muneer Ahmad ◽  
Ihsan Ali ◽  
Rafia Mumtaz ◽  
Syed Mohammad Hassan Zaidi ◽  
...  

Precision measurement is highly desired in the current industrial revolution, where a significant rise in living standards has increased municipal solid waste. Industry 4.0 standards require accurate and efficient edge computing sensors for solid waste classification, because waste that is not managed properly has an adverse impact on health, the economy, and the global environment. All stakeholders need to realize their roles and responsibilities in solid waste generation and recycling, and for recycling to succeed, waste must be correctly and efficiently separated. The performance of edge computing devices is directly tied to computational complexity in the context of non-organic waste classification. Existing research on waste classification used CNN architectures such as AlexNet, which contains about 62,378,344 parameters and requires over 729 million floating-point operations (FLOPs) to classify a single image; it is therefore too heavy for applications that demand low computational cost. This research proposes an enhanced lightweight deep learning model for solid waste classification built on MobileNetV2, suitable for lightweight applications including edge computing devices and other mobile applications. The proposed model outperforms existing similar models, achieving accuracies of 82.48% and 83.46% with Softmax and support vector machine (SVM) classifiers, respectively. Although MobileNetV2 may provide lower accuracy than larger and heavier CNN architectures, its accuracy is still comparable, and it is more practical for edge computing devices and mobile applications.
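MobileNetV2's lightness comes largely from replacing standard convolutions with depthwise-separable ones. A small arithmetic sketch of the parameter savings for one hypothetical layer:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise
    convolution, the building block behind MobileNet-style models."""
    return k * k * c_in + c_in * c_out

# Example layer (illustrative, not a layer from the paper):
# a 3x3 convolution mapping 128 channels to 256 channels.
std = standard_conv_params(3, 128, 256)   # 294,912 parameters
sep = separable_conv_params(3, 128, 256)  # 33,920 parameters
print(std, sep, round(std / sep, 1))      # roughly 8.7x fewer parameters
```

The same factorization reduces FLOPs by a similar ratio, which is why MobileNetV2 stays far below AlexNet's 62M-parameter, 729M-FLOP footprint.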


2020 ◽  
Author(s):  
Shaan Khurshid ◽  
Samuel Friedman ◽  
James P. Pirruccello ◽  
Paolo Di Achille ◽  
Nathaniel Diamant ◽  
...  

Background: Cardiac magnetic resonance (CMR) is the gold standard for left ventricular hypertrophy (LVH) diagnosis. CMR-derived LV mass can be estimated using proprietary algorithms (e.g., inlineVF), but their accuracy and availability may be limited.
Objective: To develop an open-source deep learning model to estimate CMR-derived LV mass.
Methods: Within participants of the UK Biobank prospective cohort undergoing CMR, we trained two convolutional neural networks to estimate LV mass. The first (ML4Hreg) performed regression informed by manually labeled LV mass (available in 5,065 individuals), while the second (ML4Hseg) performed LV segmentation informed by inlineVF contours. We compared ML4Hreg, ML4Hseg, and inlineVF against manually labeled LV mass within an independent holdout set using Pearson correlation and mean absolute error (MAE). We assessed associations between CMR-derived LVH and prevalent cardiovascular disease using logistic regression adjusted for age and sex.
Results: We generated CMR-derived LV mass estimates within 38,574 individuals. Among 891 individuals in the holdout set, ML4Hseg reproduced manually labeled LV mass more accurately (r=0.864, 95% CI 0.847-0.880; MAE 10.41 g, 95% CI 9.82-10.99) than ML4Hreg (r=0.843, 95% CI 0.823-0.861; MAE 10.51 g, 95% CI 9.86-11.15, p=0.01) and inlineVF (r=0.795, 95% CI 0.770-0.818; MAE 14.30 g, 95% CI 13.46-11.01, p<0.01). LVH defined using ML4Hseg demonstrated the strongest associations with hypertension (odds ratio 2.76, 95% CI 2.51-3.04), atrial fibrillation (1.75, 95% CI 1.37-2.20), and heart failure (4.53, 95% CI 3.16-6.33).
Conclusions: ML4Hseg is an open-source deep learning model providing automated quantification of CMR-derived LV mass. Deep learning models characterizing cardiac structure may facilitate broad cardiovascular discovery.
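A segmentation model like ML4Hseg yields mass via a standard post-processing step: count the labeled myocardial voxels, then multiply by voxel volume and tissue density. The sketch below illustrates that convention; the 1.05 g/mL density is the commonly used myocardial value, and the paper's exact pipeline is not shown here:

```python
import numpy as np

MYOCARDIAL_DENSITY_G_PER_ML = 1.05  # commonly used myocardial tissue density

def lv_mass_from_segmentation(mask, voxel_volume_ml):
    """Estimate LV mass (grams) from a binary myocardium segmentation:
    number of labeled voxels x voxel volume x tissue density."""
    n_voxels = int(np.count_nonzero(mask))
    return n_voxels * voxel_volume_ml * MYOCARDIAL_DENSITY_G_PER_ML

# Toy 3-D mask: 10,000 myocardial voxels of 0.01 mL each -> 105 g.
mask = np.zeros((100, 100, 10), dtype=bool)
mask.flat[:10000] = True
print(lv_mass_from_segmentation(mask, 0.01))  # 105.0
```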


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Hao Chao ◽  
Liang Dong ◽  
Yongli Liu ◽  
Baoyun Lu

Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) is first measured for all channel pairs. Subsequently, an MIC matrix is constructed according to the electrode arrangement rules and represented as an MIC gray image. Finally, a deep learning model designed with two principal component analysis convolutional layers and a nonlinear transformation operation extracts the spatial characteristics and global interchannel synchronization features from the constructed feature images, which are then input to support vector machines to perform the emotion recognition tasks. Experiments were conducted on the benchmark dataset for emotion analysis using EEG, physiological, and video signals. The experimental results demonstrate that the global synchronization features and spatial characteristics are beneficial for recognizing emotions and that the proposed deep learning model effectively mines and utilizes these two salient features.
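The channel-pair-matrix-as-image idea can be sketched as follows. Pearson correlation stands in for MIC here (MIC itself would come from a dedicated library such as minepy), and the electrode ordering is left as an optional argument since the paper's arrangement rules are not reproduced:

```python
import numpy as np

def dependence_gray_image(eeg, order=None):
    """Build a channel x channel dependence matrix from multichannel EEG
    (shape: channels x samples) and scale it to an 8-bit gray image.
    Pearson |r| stands in for MIC in this sketch."""
    if order is not None:              # electrode-arrangement reordering
        eeg = eeg[order]
    m = np.abs(np.corrcoef(eeg))       # pairwise dependence in [0, 1]
    return (m * 255).astype(np.uint8)  # one gray pixel per channel pair

rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 1024))  # e.g. 32 channels, 1024 samples
img = dependence_gray_image(eeg)
print(img.shape, img.dtype)            # (32, 32) uint8
```

The resulting image encodes global interchannel synchronization in a form a convolutional model can consume directly.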


2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Renzhou Gui ◽  
Tongjie Chen ◽  
Han Nie

A growing body of research has shown that machine learning is capable of diagnosing and studying major depressive disorder (MDD) in the brain. We propose a deep learning network with multiple branches and local residual feedback for four different types of functional magnetic resonance imaging (fMRI) data produced by depressed patients and control subjects listening to positive- and negative-emotion music. We use a large convolution kernel of the same size as the correlation matrix to match features and obtain feature-matching results for 264 regions of interest (ROIs). First, the four-dimensional fMRI data are used to generate a two-dimensional correlation matrix of one person's brain based on the ROIs, which is then thresholded with a value selected according to the characteristics of complex networks and small-world networks. The deep learning model is then compared for classification against the support vector machine (SVM), logistic regression (LR), k-nearest neighbors (kNN), a common deep neural network (DNN), and a deep convolutional neural network (CNN). Finally, we calculate the matched ROIs from the intermediate results of our deep learning model, which can help related fields further explore the pathogenesis of depression.
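The correlation-then-threshold preprocessing step can be sketched in a few lines of numpy. The time series are synthetic and the threshold value is an illustrative placeholder, not the one the paper derives from small-world criteria:

```python
import numpy as np

def thresholded_connectivity(roi_ts, threshold):
    """Binary functional-connectivity matrix from ROI time series
    (shape: n_rois x n_timepoints): correlate every ROI pair, then keep
    only edges whose |r| meets the chosen threshold."""
    corr = np.corrcoef(roi_ts)
    adj = (np.abs(corr) >= threshold).astype(np.uint8)
    np.fill_diagonal(adj, 0)           # drop trivial self-connections
    return corr, adj

rng = np.random.default_rng(2)
roi_ts = rng.standard_normal((264, 200))  # 264 ROIs, 200 volumes
corr, adj = thresholded_connectivity(roi_ts, 0.3)
print(corr.shape, int(adj.sum()))      # (264, 264) and the kept-edge count
```

In practice the threshold is tuned so the resulting graph exhibits the sparsity and clustering expected of a small-world brain network.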


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3363 ◽  
Author(s):  
Taylor Mauldin ◽  
Marc Canby ◽  
Vangelis Metsis ◽  
Anne Ngu ◽  
Coralys Rivera

This paper presents SmartFall, an Android app that uses accelerometer data collected from a commodity-based smartwatch Internet of Things (IoT) device to detect falls. The smartwatch is paired with a smartphone that runs the SmartFall application, which performs the computation necessary for the prediction of falls in real time without incurring latency in communicating with a cloud server, while also preserving data privacy. We experimented with both traditional (Support Vector Machine and Naive Bayes) and non-traditional (Deep Learning) machine learning algorithms for the creation of fall detection models using three different fall datasets (Smartwatch, Notch, Farseeing). Our results show that a Deep Learning model for fall detection generally outperforms more traditional models across the three datasets. This is attributed to the Deep Learning model's ability to automatically learn subtle features from the raw accelerometer data that are not available to Naive Bayes and Support Vector Machine, which are restricted to learning from a small set of manually specified, extracted features. Furthermore, the Deep Learning model exhibits a better ability to generalize to new users when predicting falls, an important quality of any model that is to be successful in the real world. We also present the three-layer open IoT system architecture used in SmartFall, which can be easily adapted for the collection and analysis of other sensor data modalities (e.g., heart rate, skin temperature, walking patterns) and enables remote monitoring of a subject's wellbeing.
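The "small set of manually specified features" that the traditional models rely on typically comes from a sliding window over the raw accelerometer stream. A minimal sketch with synthetic data (window size, step, and feature choice are illustrative assumptions, not SmartFall's actual configuration):

```python
import numpy as np

def window_features(accel, win, step):
    """Slide a fixed-size window over raw 3-axis accelerometer samples
    (shape: n x 3) and extract simple per-window features: the mean and
    max of the acceleration magnitude. A fall appears as a sharp
    magnitude spike within a window."""
    mag = np.linalg.norm(accel, axis=1)
    feats = []
    for start in range(0, len(mag) - win + 1, step):
        w = mag[start:start + win]
        feats.append([w.mean(), w.max()])
    return np.array(feats)

rng = np.random.default_rng(3)
accel = rng.standard_normal((300, 3)) * 0.05 + [0.0, 0.0, 1.0]  # ~1 g at rest
accel[150] = [2.5, 1.5, 3.0]           # simulated impact spike
feats = window_features(accel, win=50, step=25)
print(feats.shape)                      # one (mean, max) row per window
```

A deep model, by contrast, consumes the raw windows directly and learns its own features, which is the advantage the paper attributes to it.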


2021 ◽  
Vol 11 (4) ◽  
pp. 286-290
Author(s):  
Md. Golam Kibria ◽  
Mehmet Sevkli

The increase in credit card defaulters has forced companies to think carefully before approving credit applications. Credit card companies usually use their judgment to determine whether a credit card should be issued to a customer satisfying certain criteria, and some machine learning algorithms have also been used to support the decision. The main objective of this paper is to build a deep learning model, based on the UCI (University of California, Irvine) datasets, that can support the credit card approval decision. Secondly, the performance of the built model is compared with two traditional machine learning algorithms: logistic regression (LR) and the support vector machine (SVM). Our results show that the overall performance of our deep learning model is slightly better than that of the other two models.


2018 ◽  
Author(s):  
Yu Li ◽  
Zhongxiao Li ◽  
Lizhong Ding ◽  
Yuhui Hu ◽  
Wei Chen ◽  
...  

Motivation: In most biological data sets, the amount of data is regularly growing and the number of classes is continuously increasing. To deal with new data from new classes, one approach is to train a classification model, e.g., a deep learning model, from scratch on both the old and new data. This approach is highly computationally costly, and the extracted features are likely very different from those extracted by the model trained on the old data alone, which leads to poor model robustness. Another approach is to fine-tune the model trained on the old data using the new data. However, this approach often cannot learn new knowledge without forgetting previously learned knowledge, a phenomenon known as the catastrophic forgetting problem. To our knowledge, this problem has not been studied in the field of bioinformatics despite its existence in many bioinformatic problems.
Results: Here we propose a novel method, SupportNet, to solve the catastrophic forgetting problem efficiently and effectively. SupportNet combines the strengths of deep learning and the support vector machine (SVM): the SVM identifies the support data from the old data, which are fed to the deep learning model together with the new data for further training, so that the model can review the essential information of the old data while learning the new information. Two powerful consolidation regularizers are applied to ensure the robustness of the learned model. Comprehensive experiments on various tasks, including enzyme function prediction, subcellular structure classification, and breast tumor classification, show that SupportNet drastically outperforms state-of-the-art incremental learning methods and reaches performance similar to that of a deep learning model trained from scratch on both the old and new data.
Availability: Our program is accessible at https://github.com/lykaust15/SupportNet.
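The SVM-based selection of "support data" can be sketched with scikit-learn: the support vectors of an SVM fit on the old classes lie near the decision boundary and serve as a compact rehearsal set. This is a minimal illustration on synthetic data, not SupportNet's full pipeline (which adds the deep network and consolidation regularizers):

```python
import numpy as np
from sklearn.svm import SVC

def select_support_data(X_old, y_old):
    """Fit a linear SVM on the old-class data and keep only its support
    vectors as the rehearsal set to replay alongside new-class data
    during incremental training."""
    svm = SVC(kernel="linear").fit(X_old, y_old)
    idx = svm.support_                  # indices of the support vectors
    return X_old[idx], y_old[idx]

rng = np.random.default_rng(4)
X_old = np.vstack([rng.normal(-2, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y_old = np.array([0] * 200 + [1] * 200)
X_sup, y_sup = select_support_data(X_old, y_old)
print(len(X_sup), "of", len(X_old))     # the support set is much smaller
```

Replaying only these boundary examples keeps memory cost low while preserving the information most relevant to the old decision surface.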


Author(s):  
P. Nagaraj ◽  
P. Deepalakshmi

Diabetes, caused by a rise in blood glucose levels, can now be identified from blood samples by many modern devices. When it goes unnoticed, diabetes may lead to serious conditions such as heart attack and kidney disease. There is thus a need for solid research and improved learning models in the field of gestational diabetes identification and analysis. The support vector machine (SVM) is one of the most powerful classification models in machine learning, and the deep neural network (DNN) is similarly powerful among deep learning models. In this work, we applied an enhanced support vector machine and a deep neural network for diabetes prediction and screening. The proposed method feeds the output of the enhanced support vector machine into the deep neural network, combining the efficacy of both. The dataset we considered includes data from 768 patients with eight major features and a target column with the result "Positive" or "Negative". Experiments were run in Python, and the outcomes show that the deep learning model is more efficient for diabetes prediction.
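The stacked SVM-into-DNN design can be sketched with scikit-learn on synthetic data. The paper's "enhanced" SVM is not specified, so a standard RBF-kernel SVC stands in, and its decision-function output is appended to the raw features before the neural network sees them:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def stacked_svm_dnn(X_train, y_train, X_test):
    """Fit an SVM first, then feed its decision-function output
    (alongside the raw features) into a small neural network that
    makes the final prediction."""
    svm = SVC(kernel="rbf").fit(X_train, y_train)
    aug = lambda X: np.column_stack([X, svm.decision_function(X)])
    dnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(aug(X_train), y_train)
    return dnn.predict(aug(X_test))

# Synthetic two-class data with eight features, echoing the dataset shape.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
pred = stacked_svm_dnn(X, y, X[:10])
print(pred)
```

The SVM score acts as a learned summary feature, which is one common way to combine a margin-based classifier with a downstream network.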


2021 ◽  
Author(s):  
Walid M. Abdelmoula ◽  
Sylwia Stopka ◽  
Elizabeth C. Randall ◽  
Michael Regan ◽  
Jeffrey N. Agar ◽  
...  

Motivation: Mass spectrometry imaging (MSI) provides rich biochemical information in a label-free manner and therefore holds promise to substantially impact current practice in disease diagnosis. However, the complex nature of MSI data poses computational challenges in its analysis. The complexity of the data arises from its large size, high dimensionality, and spectral non-linearity. Preprocessing, including peak picking, has been used to reduce raw data complexity; however, peak picking is sensitive to parameter selection that, perhaps prematurely, shapes the downstream analysis for tissue classification and the ensuing biological interpretation.
Results: We propose a deep learning model, massNet, that provides the desired qualities of scalability, non-linearity, and speed in MSI data analysis. This deep learning model was used, without prior preprocessing or peak picking, to classify MSI data from a mouse brain harboring a patient-derived tumor. The massNet architecture enables automatic learning of predictive features, and automated methods were incorporated to identify peaks with potential for tumor delineation. The model's performance was assessed using cross-validation, and the results demonstrate higher accuracy and a 174-fold gain in speed compared to the established classical machine learning method, the support vector machine.
Availability and Implementation: The code is publicly available on GitHub.

