Abstract
Background: There is growing interest in using machine learning models to monitor pain severity in elderly individuals. Previous studies used OpenFace©, a well-known automated facial analysis algorithm, to detect facial action units (FAUs), which otherwise require long hours of human coding. However, OpenFace© was developed from datasets dominated by young Caucasian participants in whom pain was experimentally elicited in the laboratory. This study therefore aimed to evaluate the accuracy and feasibility of models trained on OpenFace© output for classifying pain severity in elderly Asian patients in clinical settings.

Methods: Data from 255 Thai individuals with chronic pain were collected at Chiang Mai Medical School Hospital. A phone camera recorded each patient's face for 10 seconds at a 1-meter distance shortly after the patient provided a self-rated pain severity. For patients unable to self-rate, the video was recorded immediately after a movement that elicited pain. A trained assistant rated each video clip with the Pain Assessment in Advanced Dementia (PAINAD) scale. Pain severity was classified as mild, moderate, or severe. OpenFace© processed each video clip into 18 FAUs. Six classification models were used: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM).

Results: Among models restricted to the FAUs described in the literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27, and 45), the multilayer perceptron yielded the highest accuracy at 50%. With machine-learning-based feature selection, the SVM model using FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, and 45 plus gender yielded the best accuracy at 58%.

Conclusion: Our open-source automatic facial action unit analysis of video clips was not robust for classifying pain in the elderly. Retraining facial action unit detection algorithms, enhancing frame selection strategies, and adding pain-related features may improve the accuracy and feasibility of the model.
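The classification step described above can be illustrated with a minimal sketch of one of the listed models, k-nearest neighbors, applied to FAU intensity vectors. All feature values, labels, and function names below are illustrative placeholders, not the study's actual data or implementation; in the study, features were OpenFace© FAU outputs and labels came from self-rating or PAINAD scoring.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line distance between two FAU intensity vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train_X, train_y, x, k=3):
    # Rank training samples by distance to x and vote among the k nearest.
    nearest = sorted(zip(train_X, train_y), key=lambda p: euclidean(p[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Synthetic FAU intensity vectors (e.g. FAUs 4, 6, 7) with severity labels.
train_X = [(0.1, 0.2, 0.1), (0.2, 0.1, 0.3), (1.5, 1.4, 1.6),
           (1.6, 1.5, 1.4), (3.0, 2.8, 3.1), (2.9, 3.1, 3.0)]
train_y = ["mild", "mild", "moderate", "moderate", "severe", "severe"]

print(knn_predict(train_X, train_y, (1.4, 1.5, 1.5)))  # -> moderate
```

The same feature matrix would feed the other five models (logistic regression, multilayer perceptron, naïve Bayes, decision tree, SVM) in a comparable train/evaluate loop.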