Classifying Elderly Pain Severity from Open-Source Automatic Video Clip Facial Action Unit Analysis: A Study from the Integrated Pain Artificial Intelligence Network (I-PAIN) Data Repository.

Author(s):  
Patama Gomutbutra ◽  
Adisak Kittisares ◽  
Atigorn Sanguansri ◽  
Noppon Choosri ◽  
Passakorn Sawaddiruk ◽  
...  

Abstract Background: There is increasing interest in monitoring pain severity in elderly individuals by applying machine learning models. In previous studies, OpenFace© - a well-known automated facial analysis algorithm - was used to detect facial action units (FAUs), which otherwise require long hours of human coding. However, OpenFace© was developed from datasets dominated by young Caucasians in whom pain was elicited in a laboratory setting. Therefore, this study aims to evaluate the accuracy and feasibility of models that use OpenFace© output to classify pain severity in elderly Asian patients in clinical settings. Methods: Data from 255 Thai individuals with chronic pain were collected at Chiang Mai Medical School Hospital. A phone camera recorded each face for 10 seconds at a 1-meter distance shortly after the patient provided a self-rated pain severity. For those unable to self-rate, the video was recorded just after the movement that elicited pain. A trained assistant rated each video clip using the Pain Assessment in Advanced Dementia (PAINAD) scale. Pain severity was classified as mild, moderate, or severe. OpenFace© processed each video clip into 18 FAUs. Six classification models were used: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM). Results: Among the models that included only the FAUs described in the literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27 and 45), the multilayer perceptron yielded the highest accuracy of 50%. Among the models using machine-learning-selected features, the SVM model with FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, 45, and gender yielded the best accuracy of 58%. Conclusion: Our experiment with open-source automatic video clip facial action unit analysis was not robust enough for classifying elderly pain. Retraining the facial action unit detection algorithm, enhancing frame selection strategies, and adding pain-related features may improve the accuracy and feasibility of the model.
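A minimal sketch of the classification step described above, assuming OpenFace© FAU intensities have already been exported per clip to a CSV file. The file name, column names (OpenFace writes per-frame AU intensity columns such as AU01_r), and the PAINAD-derived label column are illustrative placeholders, and scikit-learn's SVC stands in for the SVM model reported in the abstract rather than reproducing the authors' pipeline.

```python
# Sketch: classify pain severity (mild / moderate / severe) from per-clip FAU
# intensities with an SVM. File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# One row per video clip: mean AU intensities plus gender and a PAINAD-based label.
df = pd.read_csv("fau_features.csv")          # hypothetical export
feature_cols = ["AU01_r", "AU02_r", "AU04_r", "AU07_r", "AU09_r",
                "AU10_r", "AU12_r", "AU20_r", "AU25_r", "AU45_r", "gender"]
X, y = df[feature_cols], df["pain_severity"]  # labels: mild / moderate / severe

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardizing intensities before the SVM is a common choice because AU
# scales differ; it is not necessarily what the study's authors did.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```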

2017 ◽  
Author(s):  
Udit Arora ◽  
Sohit Verma ◽  
Sarthak Sahni ◽  
Tushar Sharma

Several ball tracking algorithms have been reported in the literature. However, most of them use high-quality video and multiple cameras, and the emphasis has been on coordinating the cameras or visualizing the tracking results. This paper aims to develop a system for assisting the umpire in the sport of Cricket in making decisions such as the detection of no-balls, wide balls, leg before wicket, and bouncers, with the help of a single smartphone camera. It involves the implementation of Computer Vision algorithms for object detection and motion tracking, as well as the integration of machine learning algorithms to optimize the results. Techniques like Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) are used for object classification and recognition. Frame subtraction, minimum enclosing circle, and contour detection algorithms are optimized and used for the detection of the cricket ball. These algorithms are applied using the open-source Python library OpenCV. Machine learning techniques, namely linear and quadratic regression, are used to track and predict the motion of the ball. The open-source Python library VPython is used for the visual representation of the results. The paper describes the design and structure of the approach undertaken in the system for analyzing and visualizing off-air, low-quality cricket videos.
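A compact sketch of the detection-and-regression idea outlined above, assuming an OpenCV 4.x installation. The video path, threshold, and radius filter are illustrative values, not the authors' tuned parameters.

```python
# Sketch: detect a moving ball by frame subtraction, contour detection, and a
# minimum enclosing circle (OpenCV), then fit a quadratic to the track with NumPy.
import cv2
import numpy as np

cap = cv2.VideoCapture("delivery.mp4")   # hypothetical low-quality clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
track = []                               # (frame_index, x, y) of detected ball

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)              # frame subtraction
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)       # assume largest blob is the ball
        (x, y), r = cv2.minEnclosingCircle(c)
        if 2 < r < 30:                               # crude size filter
            track.append((frame_idx, x, y))
    prev_gray = gray
    frame_idx += 1

# Quadratic regression of vertical position against time to extrapolate the path.
if len(track) >= 3:
    t, xs, ys = np.array(track).T
    coeffs = np.polyfit(t, ys, 2)
    print("predicted y at next frame:", np.polyval(coeffs, t[-1] + 1))
```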


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1317-1323

Muscular activity causes the activation of facial action units (AUs) when a facial expression is shown on a human face. This paper presents methods to recognize AUs using distance features between the facial points that the activated muscles move. The seven AUs involved are AU1, AU4, AU6, AU12, AU15, AU17 and AU25, which characterize happy and sad expressions. Recognition is performed for each AU according to rules defined on the distances between facial points. The facial distances are computed from twelve salient facial points and are then trained using a Support Vector Machine (SVM) and a Neural Network (NN). Classification results using the SVM are presented for several different kernels, while results using the NN are presented for the training, validation, and testing phases. Across all SVM kernels, the AUs corresponding to the sad expression are consistently recognized better than those of the happy expression. The highest average kernel performance across AUs is 93%, scored by the quadratic kernel. The best NN result across AUs is for AU25 (lips parted), with the lowest CE (0.38%) and 0% incorrect classification.
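A hedged sketch of one way the distance features and kernel comparison described above could be set up with scikit-learn. The landmark arrays, the AU25 labels, and the choice of all pairwise distances among the twelve salient points are assumptions for illustration, not the paper's exact rules.

```python
# Sketch: per-AU binary classification from distances between salient facial
# points, comparing SVM kernels. Input arrays are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def distance_features(landmarks):
    """landmarks: (n_samples, 12, 2) array of salient facial points.
    Returns all pairwise Euclidean distances as the feature vector."""
    n, k, _ = landmarks.shape
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return np.stack([np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=1)
                     for i, j in pairs], axis=1)

landmarks = np.load("salient_points.npy")   # hypothetical (n_samples, 12, 2)
labels = np.load("au25_labels.npy")         # hypothetical 0/1 label per sample

X = distance_features(landmarks)
for kernel in ("linear", "poly", "rbf"):
    svm = SVC(kernel=kernel, degree=2 if kernel == "poly" else 3)  # degree 2 = quadratic
    scores = cross_val_score(svm, X, labels, cv=5)
    print(f"{kernel:>6}: mean accuracy {scores.mean():.2f}")
```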


2018 ◽  
Author(s):  
Jeffrey M. Girard ◽  
Jeffrey F Cohn ◽  
László A Jeni ◽  
Simon Lucey ◽  
Fernando De la Torre

By systematically varying the number of subjects and the number of frames per subject, we explored the influence of training set size on appearance- and shape-based approaches to facial action unit (AU) detection. Digital video and expert coding of spontaneous facial activity from 80 subjects (over 350,000 frames) were used to train and test support vector machine classifiers. Appearance features were shape-normalized SIFT descriptors, and shape features were 66 facial landmarks. Ten-fold cross-validation was used in all evaluations. The number of subjects and the number of frames per subject differentially affected appearance- and shape-based classifiers. For appearance features, which are high-dimensional, increasing the number of training subjects from 8 to 64 incrementally improved performance, regardless of the number of frames taken from each subject (ranging from 450 through 3600). In contrast, for shape features, increases in the number of training subjects and frames were associated with mixed results. In summary, maximal performance was attained using appearance features from large numbers of subjects with as few as 450 frames per subject. These findings suggest that varying the number of subjects, rather than the number of frames per subject, yields the most efficient performance.
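A sketch of how the subject-count experiment described above might be probed, assuming per-frame appearance features, binary AU labels, and subject IDs are already available as arrays. The file names, the linear SVM, and the single held-out test split (in place of the paper's ten-fold cross-validation) are simplifications for illustration.

```python
# Sketch: vary the number of training subjects and measure AU-detection F1 on a
# fixed, subject-disjoint test set. All input arrays are hypothetical placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

X = np.load("sift_features.npy")        # hypothetical (n_frames, d) appearance features
y = np.load("au_labels.npy")            # hypothetical 0/1 AU label per frame
subjects = np.load("subject_ids.npy")   # hypothetical subject ID per frame

rng = np.random.default_rng(0)
unique_subjects = np.unique(subjects)
test_subjects = rng.choice(unique_subjects, size=16, replace=False)
pool = np.setdiff1d(unique_subjects, test_subjects)
test_mask = np.isin(subjects, test_subjects)

for n_train in (8, 16, 32, 64):
    train_subjects = rng.choice(pool, size=n_train, replace=False)
    train_mask = np.isin(subjects, train_subjects)
    clf = LinearSVC(C=0.1).fit(X[train_mask], y[train_mask])
    f1 = f1_score(y[test_mask], clf.predict(X[test_mask]))
    print(f"{n_train:2d} training subjects: F1 = {f1:.3f}")
```

Keeping subjects disjoint between training and test sets mirrors the subject-wise evaluation such studies rely on, so that performance gains reflect added subject variability rather than frame overlap.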


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e10083 ◽  
Author(s):  
Ashis Kumar Das ◽  
Shiba Mishra ◽  
Saji Saraswathy Gopalan

Background The recent pandemic of CoVID-19 has emerged as a threat to global health security. There are very few prognostic models on CoVID-19 using machine learning. Objectives To predict mortality among confirmed CoVID-19 patients in South Korea using machine learning and deploy the best performing algorithm as an open-source online prediction tool for decision-making. Materials and Methods Mortality for confirmed CoVID-19 patients (n = 3,524) between January 20, 2020 and May 30, 2020 was predicted using five machine learning algorithms (logistic regression, support vector machine, K nearest neighbor, random forest and gradient boosting). The performance of the algorithms was compared, and the best performing algorithm was deployed as an online prediction tool. Results The logistic regression algorithm was the best performer in terms of discrimination (area under ROC curve = 0.830) and calibration (Matthews Correlation Coefficient = 0.433; Brier Score = 0.036). The best performing algorithm (logistic regression) was deployed as the online CoVID-19 Community Mortality Risk Prediction tool named CoCoMoRP (https://ashis-das.shinyapps.io/CoCoMoRP/). Conclusions We describe the development and deployment of an open-source machine learning tool to predict mortality risk among CoVID-19 confirmed patients using publicly available surveillance data. This tool can be utilized by potential stakeholders such as health providers and policymakers to triage patients at the community level in addition to other approaches.
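A hedged sketch of the model-comparison step, using scikit-learn implementations of the five algorithms named above. The CSV, feature columns, and outcome column are placeholders, not the authors' pipeline.

```python
# Sketch: compare five classifiers on tabular surveillance data by ROC AUC.
# The CSV and the "died" outcome column are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("covid_cases.csv")                  # hypothetical
X, y = df.drop(columns=["died"]), df["died"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "support vector machine": make_pipeline(StandardScaler(), SVC(probability=True)),
    "k nearest neighbor": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:>24}: AUC = {auc:.3f}")
```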


Author(s):  
Ashis Kumar Das ◽  
Shiba Mishra ◽  
Saji Saraswathy Gopalan

Abstract Background The recent pandemic of CoVID-19 has emerged as a threat to global health security. There are very few prognostic models on CoVID-19 using machine learning. Objectives To predict mortality among confirmed CoVID-19 patients in South Korea using machine learning and deploy the best performing algorithm as an open-source online prediction tool for decision-making. Materials and Methods Mortality for confirmed CoVID-19 patients (n = 3,022) between January 20, 2020 and April 07, 2020 was predicted using five machine learning algorithms (logistic regression, support vector machine, K nearest neighbor, random forest and gradient boosting). The performance of the algorithms was compared, and the best performing algorithm was deployed as an online prediction tool. Results The gradient boosting algorithm was the best performer in terms of discrimination (area under ROC curve = 0.966), calibration (Matthews Correlation Coefficient = 0.656; Brier Score = 0.013) and predictive ability (accuracy = 0.987). The best performing algorithm (gradient boosting) was deployed as the online CoVID-19 Community Mortality Risk Prediction tool named CoCoMoRP (https://ashis-das.shinyapps.io/CoCoMoRP/). Conclusions We describe the framework for the rapid development and deployment of an open-source machine learning tool to predict mortality risk among CoVID-19 confirmed patients using publicly available surveillance data. This tool can be utilized by potential stakeholders such as health providers and policymakers to triage patients at the community level in addition to other approaches.
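For the discrimination and calibration metrics reported above, a short sketch on a synthetic, class-imbalanced dataset shows how ROC AUC, the Matthews correlation coefficient, and the Brier score can be computed for a gradient boosting classifier. The data are simulated purely for illustration and bear no relation to the study's results.

```python
# Sketch: compute AUC, MCC, and Brier score for a gradient boosting classifier
# on simulated, imbalanced data (mortality-like outcomes are rare events).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, matthews_corrcoef, brier_score_loss

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print("AUC  :", round(roc_auc_score(y_te, proba), 3))
print("MCC  :", round(matthews_corrcoef(y_te, model.predict(X_te)), 3))
print("Brier:", round(brier_score_loss(y_te, proba), 3))
```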


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Kamel Mansouri ◽  
Neal F. Cariello ◽  
Alexandru Korotcov ◽  
Valery Tkachenko ◽  
Chris M. Grulke ◽  
...  

Abstract Background The logarithmic acid dissociation constant pKa reflects the ionization of a chemical, which affects lipophilicity, solubility, protein binding, and ability to pass through the plasma membrane. Thus, pKa affects chemical absorption, distribution, metabolism, excretion, and toxicity properties. Multiple proprietary software packages exist for the prediction of pKa, but to the best of our knowledge no free and open-source programs exist for this purpose. Using a freely available data set and three machine learning approaches, we developed open-source models for pKa prediction. Methods The experimental strongest acidic and strongest basic pKa values in water for 7912 chemicals were obtained from DataWarrior, a freely available software package. Chemical structures were curated and standardized for quantitative structure–activity relationship (QSAR) modeling using KNIME, and a subset comprising 79% of the initial set was used for modeling. To evaluate different approaches to modeling, several datasets were constructed based on different processing of chemical structures with acidic and/or basic pKas. Continuous molecular descriptors, binary fingerprints, and fragment counts were generated using PaDEL, and pKa prediction models were created using three machine learning methods: (1) support vector machines (SVM) combined with k-nearest neighbors (kNN), (2) extreme gradient boosting (XGB), and (3) deep neural networks (DNN). Results The three methods delivered comparable performances on the training and test sets with a root-mean-squared error (RMSE) around 1.5 and a coefficient of determination (R2) around 0.80. Two commercial pKa predictors from ACD/Labs and ChemAxon were used to benchmark the three best models developed in this work, and performance of our models compared favorably to the commercial products. Conclusions This work provides multiple QSAR models to predict the strongest acidic and strongest basic pKas of chemicals, built using publicly available data, and provided as free and open-source software on GitHub.
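A brief sketch of one of the three modeling approaches named above (extreme gradient boosting) applied to a table of PaDEL-style descriptors and scored with RMSE and R², assuming the xgboost Python package is available. The descriptor CSV and target column name are illustrative placeholders, not the authors' curated datasets.

```python
# Sketch: QSAR-style regression of the strongest acidic pKa from molecular
# descriptors with extreme gradient boosting. File and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

df = pd.read_csv("padel_descriptors.csv")            # hypothetical descriptor export
X = df.drop(columns=["strongest_acidic_pKa"])
y = df["strongest_acidic_pKa"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE = {rmse:.2f}, R2 = {r2_score(y_te, pred):.2f}")
```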



