Lower limb motor function assessment based on TensorFlow convolutional neural network and kernel entropy component analysis–local tangent space alignment

2020 ◽  
Vol 12 (7) ◽  
pp. 168781402094265
Author(s):  
Yan Zhang ◽  
SiNing Li ◽  
Ying Zhou ◽  
Jian Liu

Motor function assessment of patients and the elderly is crucial to gait assessment and gait rehabilitation, yet its accuracy depends on the clinician's experience. To address this, this article proposes a motor function assessment index. A VICON system records video of subjects while they walk; the raw gait videos are pre-processed with the pixel-based adaptive segmenter, and features are extracted by a convolutional neural network. Kernel entropy component analysis and local tangent space alignment then reduce the dimensionality of the extracted features to obtain the motor function assessment index. Pearson correlation analysis shows that the motor function assessment index and the modified gait abnormality rating scale are significantly correlated, with a Pearson correlation coefficient of 0.92. These results demonstrate that the proposed method has considerable potential to inform the design of automatic motor function assessment for clinical rehabilitation research.
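A minimal sketch of the feature-reduction and validation stages described above, assuming scikit-learn and SciPy. KernelPCA stands in for kernel entropy component analysis (which scikit-learn does not provide), LocallyLinearEmbedding with method="ltsa" performs the local tangent space alignment step, and all features and clinical scores are synthetic placeholders rather than the authors' data.

```python
# Sketch: CNN gait features -> kernel reduction -> LTSA -> correlation with a
# clinical scale. KernelPCA is a stand-in for kernel entropy component analysis.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.manifold import LocallyLinearEmbedding
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(60, 256))   # hypothetical CNN gait features (subjects x dims)
mgars_scores = rng.normal(size=60)          # hypothetical modified GARS ratings

# Kernel-based reduction followed by local tangent space alignment (LTSA).
kpca = KernelPCA(n_components=20, kernel="rbf")
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=1, method="ltsa")
index = ltsa.fit_transform(kpca.fit_transform(cnn_features)).ravel()

# Validate the derived index against the clinical scale.
r, p = pearsonr(index, mgars_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```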

JOUTICA ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 484
Author(s):  
Resty Wulanningrum ◽  
Anggi Nur Fadzila ◽  
Danar Putra Pamungkas

Humans naturally use facial expressions to communicate and to show their emotions in social interaction. Facial expressions are a form of non-verbal communication that conveys a person's emotional state to an observer. This study uses Principal Component Analysis (PCA) to extract features from expression images and a Convolutional Neural Network (CNN) to classify emotions. Using the Facial Expression Recognition-2013 (FER-2013) dataset, training and testing were carried out to measure accuracy in facial emotion recognition. The final tests yielded an accuracy of 59.375% for the PCA method and 59.386% for the CNN method.
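For illustration, a hedged sketch of the two pipelines being compared, assuming TensorFlow/Keras and scikit-learn are available: PCA features paired with a simple classifier (logistic regression here, as a stand-in for whatever recognizer the authors used) versus a small end-to-end CNN. The 48x48 grayscale images are random placeholders, not FER-2013, and the layer sizes are illustrative.

```python
# Sketch: PCA + logistic regression versus a small CNN on placeholder 48x48 images.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.random((500, 48, 48, 1)).astype(np.float32)  # placeholder face images
y = rng.integers(0, 7, size=500)                     # 7 emotion classes

# PCA route: flatten, project to a low-dimensional space, then classify.
x_pca = PCA(n_components=50).fit_transform(x.reshape(len(x), -1))
pca_clf = LogisticRegression(max_iter=1000).fit(x_pca, y)

# CNN route: a small convolutional classifier trained end to end.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.fit(x, y, epochs=1, verbose=0)
```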


2021 ◽  
Author(s):  
James Chung Wai Cheung ◽  
Yiu Chow TAM ◽  
Lok Chun CHAN ◽  
Ping Keung CHAN ◽  
Chunyi WEN

Objectives: To develop a deep convolutional neural network (CNN) for the segmentation of the femur and tibia on plain X-ray radiographs, enabling automated measurement of joint space width (JSW) to predict the severity and progression of knee osteoarthritis (KOA). Methods: A CNN with a ResU-Net architecture was developed for knee X-ray image segmentation. Segmentation performance was evaluated with the Intersection over Union (IoU) score by comparing the outputs with the annotated contours of the distal femur and proximal tibia. Leveraging the segmentation, the minimal and multiple JSWs in the tibiofemoral joint were estimated and then validated against radiologists' measurements in the Osteoarthritis Initiative (OAI) dataset using Pearson correlation and Bland–Altman plots. The estimated JSWs were used to predict the radiographic severity and progression of KOA, defined by Kellgren–Lawrence (KL) grades, with an XGBoost model. Classification performance was assessed using the F1 score and the area under the receiver operating characteristic curve (AUC). Results: The network attained a segmentation IoU of 98.9%. The agreement (Pearson correlation) between the CNN-based estimation and the radiologists' measurement of minimal JSW reached 0.7801 (p < 0.0001). The 32-point multiple JSW obtained the highest AUC of 0.656 for classifying the KL grade of KOA, while the 64-point multiple JSW performed best in predicting KOA progression, defined by KL-grade change within 48 months, with an AUC of 0.621. The multiple JSWs outperform the commonly used minimal JSW, which achieved an AUC of 0.587 in KL-grade classification and 0.554 in progression prediction. Conclusion: Fine-grained characterization of the joint space width in KOA yields performance comparable to that of radiologists in assessing disease severity and progression, providing a fully automated and efficient radiographic assessment tool for KOA.
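Two of the evaluation steps described above can be sketched as follows, assuming xgboost and NumPy are available: an Intersection-over-Union score between a predicted and an annotated bone mask, and an XGBoost classifier mapping hypothetical multi-point JSW profiles to KL grades. All arrays are synthetic placeholders, not OAI data.

```python
# Sketch: IoU between binary masks, and XGBoost mapping JSW profiles to KL grades.
import numpy as np
from xgboost import XGBClassifier

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union for two binary segmentation masks."""
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 0.0

rng = np.random.default_rng(0)
pred_mask = rng.random((256, 256)) > 0.5   # placeholder predicted bone mask
true_mask = rng.random((256, 256)) > 0.5   # placeholder annotated contour mask
print(f"IoU = {iou(pred_mask, true_mask):.3f}")

# Hypothetical 32-point JSW profiles predicting KL grades 0-4.
jsw = rng.random((200, 32))
kl = rng.integers(0, 5, size=200)
model = XGBClassifier(n_estimators=100, max_depth=3).fit(jsw, kl)
```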


SINERGI ◽  
2019 ◽  
Vol 23 (3) ◽  
pp. 239
Author(s):  
Dwi Lydia Zuharah Astuti ◽  
Samsuryadi Samsuryadi ◽  
Dian Palupi Rini

Classification of facial expressions has become an essential part of computer systems and fast human-computer interaction. It is employed in various applications such as digital entertainment, customer service, driver monitoring, and emotional robots. Facial expression has also been studied with respect to how the face itself changes with the point of view: facial curves such as the eyebrows, nose, lips, and mouth change automatically with perspective. Most proposed methods are limited to frontal Facial Expression Recognition (FER), and their performance decreases when handling non-frontal and multi-view FER cases. This study combined two methods for the classification of facial expressions, namely Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN). The results proved to be more accurate than those of previous studies: the combination of PCA and CNN on the Static Facial Expressions in the Wild (SFEW) 2.0 dataset obtained an accuracy of 70.4%, whereas the CNN alone obtained only 60.9%.
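One plausible reading of combining PCA with a CNN (the abstract does not spell out the fusion) is to project each face onto its principal components, reconstruct the image, and train the CNN on the reconstruction. The sketch below follows that assumption with placeholder data rather than SFEW 2.0, and the architecture is illustrative only.

```python
# Sketch: PCA reconstruction as a preprocessing step before CNN classification.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
faces = rng.random((300, 64, 64, 1)).astype(np.float32)  # placeholder face crops
labels = rng.integers(0, 7, size=300)                    # 7 expression classes

# PCA step: reconstruct each face from its top principal components.
pca = PCA(n_components=40)
flat = faces.reshape(len(faces), -1)
recon = pca.inverse_transform(pca.fit_transform(flat)).reshape(faces.shape)

# CNN classification on the PCA-reconstructed faces.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(recon.astype(np.float32), labels, epochs=1, verbose=0)
```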


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Kai Hou

The recurrent convolutional neural network is an advanced neural network that integrates a deep structure with convolution calculations, and the feedforward neural network with convolution operations and a deep structure is an important method in deep learning. In this paper, the convolutional neural network and the recurrent neural network are combined to establish a recurrent convolutional neural network model built from LSTM (Long Short-Term Memory) and CNN components. The study combines this model with principal component analysis to predict and analyze the results of students' physical fitness standard tests. The innovation lies in introducing the recurrent convolutional network and using principal component analysis to study seven evaluation indicators that reflect three aspects of students' physical health. The results show strong correlations between some indicators, such as the standing long jump and the seated forward bend. The first principal component has the highest contribution rate and mainly reflects five indicators: standing long jump, seated forward bend, pull-up, 50 m sprint, and 1000 m long-distance run. This shows that these physical fitness indicators have a great impact on students' physical health and reflects the current state of students' physical fitness problems. The principal component analysis results are scientific and reasonable.
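A minimal sketch of the principal component analysis step, assuming scikit-learn. The seven indicator names are an assumption based on the five listed in the abstract plus two common fitness-standard measures, and the student scores are synthetic.

```python
# Sketch: contribution rate (explained variance ratio) of each principal component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = ["BMI", "vital capacity", "50 m sprint", "standing long jump",
              "seated forward bend", "pull-up", "1000 m run"]  # assumed indicator set
rng = np.random.default_rng(0)
scores = rng.normal(size=(150, len(indicators)))  # synthetic student test scores

pca = PCA().fit(StandardScaler().fit_transform(scores))
for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i} contribution rate: {ratio:.1%}")
```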


Symmetry ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 33
Author(s):  
Yin-Xin Bao ◽  
Quan Shi ◽  
Qin-Qin Shen ◽  
Yang Cao

Accurate traffic status prediction is of great importance for improving the security and reliability of intelligent transportation systems. However, urban traffic status prediction is a very challenging task due to the tight symmetry among Human–Vehicle–Environment (HVE). The recently proposed spatial–temporal 3D convolutional neural network (ST-3DNet) effectively extracts both spatial and temporal characteristics in HVE, but it ignores the essential long-term temporal characteristics and the symmetry of historical data. Therefore, a novel spatial–temporal 3D residual correlation network (ST-3DRCN) is proposed for urban traffic status prediction in this paper. The ST-3DRCN first applies the Pearson correlation coefficient method to select highly correlated traffic data. Then, a dynamic spatial feature extraction component is constructed using 3D convolution combined with residual units to capture dynamic spatial features. After that, based on the idea of long short-term memory (LSTM), a novel architectural unit is proposed to extract dynamic temporal features. Finally, the spatial and temporal features are fused to obtain the final prediction results. Experiments have been performed using two datasets, from Chengdu, China (TaxiCD) and California, USA (PEMS-BAY). Taking the root mean square error (RMSE) as the evaluation index, the prediction accuracy of ST-3DRCN on the TaxiCD dataset is 21.4%, 21.3%, 11.7%, 10.8%, 4.7%, 3.6% and 2.3% higher than LSTM, a convolutional neural network (CNN), 3D-CNN, the spatial–temporal residual network (ST-ResNet), the spatial–temporal graph convolutional network (ST-GCN), the dynamic global-local spatial–temporal network (DGLSTNet), and ST-3DNet, respectively.
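The correlation-screening idea can be sketched as follows, assuming NumPy only; the road-segment series, the target segment, and the 0.5 threshold are illustrative placeholders rather than values from the paper.

```python
# Sketch: keep only road segments whose series correlate strongly with the target.
import numpy as np

rng = np.random.default_rng(0)
speeds = rng.random((50, 288))   # 50 road segments x 288 five-minute intervals

corr = np.corrcoef(speeds)       # segment-by-segment Pearson correlation matrix
target = 0                       # segment whose status is to be predicted
threshold = 0.5                  # assumed correlation cut-off
related = np.where(np.abs(corr[target]) >= threshold)[0]
print(f"Segments retained as inputs for segment {target}: {related.tolist()}")
```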


2020 ◽  
Vol 2020 (9) ◽  
pp. 168-1-168-7
Author(s):  
Roger Gomez Nieto ◽  
Hernan Dario Benitez Restrepo ◽  
Roger Figueroa Quintero ◽  
Alan Bovik

Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. The framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from a 3-D C3D convolutional neural network and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminative deep features, and we extract features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. We extensively evaluated the perceptual quality prediction model, obtaining a final Pearson correlation of 0.7749 ± 0.0884 with Mean Opinion Scores, and showed that it achieves good video quality prediction, outperforming other state-of-the-art VQA models.
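A minimal sketch of the regression-and-evaluation stage, assuming scikit-learn and SciPy: pooled deep features stand in for the C3D activations, a Support Vector Regressor maps them to quality scores, and Pearson correlation against Mean Opinion Scores measures performance. Features, scores, and the train/test split are random placeholders.

```python
# Sketch: deep features -> SVR quality score -> Pearson correlation with MOS.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
features = rng.normal(size=(208, 512))  # placeholder pooled C3D features per video
mos = rng.uniform(0, 100, size=208)     # placeholder Mean Opinion Scores

train, test = slice(0, 160), slice(160, None)
svr = SVR(kernel="rbf", C=10.0).fit(features[train], mos[train])
pred = svr.predict(features[test])

plcc, _ = pearsonr(pred, mos[test])
print(f"Pearson correlation with MOS: {plcc:.4f}")
```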


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Priyanka Agarwal ◽  
Anna Shcherbina ◽  
Sharlene Day ◽  
Sara Saberi ◽  
Matthew E Mealiffe ◽  
...  

Introduction: Overall activity characteristics for patients with hypertrophic cardiomyopathy (HCM) have not been quantified previously. The relationship between physical activity quantified by accelerometry and biomarkers, exercise capacity, and quality of life in patients with HCM is also unknown. Methods: MAVERICK-HCM was a double-blind, placebo-controlled, 16-week study of mavacamten in 59 patients with symptomatic non-obstructive HCM. Patients were asked to wear ActiGraph GT9X Link wrist-worn monitors for ≥11 days between screening and day 1, and between weeks 12 and 16. Features derived from the raw accelerometry data included average daily accelerometer units (ADAU) and step count. Univariate Pearson correlation coefficients were calculated between accelerometry data and clinical parameters among all patients. A multi-task convolutional neural network (CNN) was trained on raw accelerometry datapoints to jointly predict clinical markers of HCM severity. Test and training sets were derived by randomly segmenting each patient's triaxial accelerometry data into non-overlapping minute intervals. Results: Fifty patients wore the accelerometer for ≥1 compliant day. Mean wear time was 12 days during screening and 10 days during treatment. Activity measures are summarized in the Table; average step count was 3,076 steps at baseline. Activity features correlated with peak oxygen uptake (pVO2), log NT-proBNP, and KCCQ score (Table). CNN predictions of clinical measures from activity data yielded Spearman correlations of 0.82 for pVO2, 0.92 for log NT-proBNP, 0.82 for KCCQ, and 0.79 for E/e'. Conclusions: HCM patients in the MAVERICK study averaged only about 3,000 steps/day. Markers of physical activity drawn from accelerometry are associated with standard clinical markers of HCM severity. Deep learning models can be constructed to predict markers of HCM severity from patients' raw accelerometry data.
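A hedged sketch of a multi-task CNN of the kind described, assuming TensorFlow/Keras and an assumed 30 Hz sampling rate: one-minute windows of raw triaxial accelerometry feed a shared 1-D convolutional trunk with separate regression heads for pVO2, log NT-proBNP, and KCCQ. All data are synthetic placeholders, and the architecture is illustrative rather than the authors' model.

```python
# Sketch: shared 1-D convolutional trunk over one-minute accelerometry windows
# with one regression head per clinical marker (multi-task learning).
import numpy as np
import tensorflow as tf
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
windows = rng.normal(size=(1000, 1800, 3)).astype(np.float32)  # 60 s x 30 Hz x 3 axes (assumed rate)
targets = {name: rng.normal(size=1000).astype(np.float32)
           for name in ("pvo2", "log_ntprobnp", "kccq")}

inputs = tf.keras.Input(shape=(1800, 3))
x = tf.keras.layers.Conv1D(32, 9, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(4)(x)
x = tf.keras.layers.Conv1D(64, 9, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = {name: tf.keras.layers.Dense(1, name=name)(x) for name in targets}

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(windows, targets, epochs=1, verbose=0)

preds = model.predict(windows, verbose=0)
rho, _ = spearmanr(preds["pvo2"].ravel(), targets["pvo2"])
print(f"Spearman rho for pVO2 (illustrative): {rho:.2f}")
```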

