Wearable Sensor-Based Gait Analysis for Age and Gender Estimation

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2424 ◽  
Author(s):  
Md Atiqur Rahman Ahad ◽  
Thanh Trung Ngo ◽  
Anindya Das Antar ◽  
Masud Ahmed ◽  
Tahera Hossain ◽  
...  

Wearable sensor-based systems and devices have expanded into different application domains, especially the healthcare arena. Automatic age and gender estimation has several important applications. Gait has been demonstrated to be a powerful motion cue for various applications. A gait-based age and gender estimation challenge was launched at the 12th IAPR International Conference on Biometrics (ICB), 2019. In this competition, 18 teams initially registered from 14 countries. The goal of this challenge was to find effective approaches to age and gender estimation from sensor-based gait data. For this purpose, we employed a large wearable sensor-based gait dataset, whose training set contains 745 subjects (357 females and 388 males) aged 2 to 78 years and whose test set contains 58 subjects (19 females and 39 males). The dataset covers several walking patterns. The gait data sequences were collected from three IMUZ sensors placed on a waist belt or at the top of a backpack. Ten teams submitted 67 solutions for age and gender estimation. This paper extensively analyzes the methods and results of the various approaches. Based on this analysis, we found that deep learning-based solutions led the competition compared with conventional handcrafted methods. The best result achieved a 24.23% prediction error for gender estimation and a 5.39 mean absolute error for age estimation by employing angle-embedded gait dynamic images and a temporal convolutional network.
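
A minimal sketch in Python (not the challenge's official scoring code) of the two reported evaluation measures: mean absolute error for age estimation and prediction error rate for gender estimation. The example values are hypothetical placeholders.

import numpy as np

def age_mae(true_ages, predicted_ages):
    # Mean absolute error, in years, between true and predicted ages.
    return np.mean(np.abs(np.asarray(true_ages) - np.asarray(predicted_ages)))

def gender_error_rate(true_labels, predicted_labels):
    # Fraction of gait sequences whose gender was predicted incorrectly.
    return np.mean(np.asarray(true_labels) != np.asarray(predicted_labels))

# Hypothetical predictions for a handful of test subjects.
print(age_mae([25, 60, 8], [22.5, 63.0, 10.0]))        # 2.5
print(gender_error_rate([0, 1, 1, 0], [0, 1, 0, 0]))   # 0.25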

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
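
The cardiothoracic ratio at the heart of these methods can be derived directly from the two segmentation masks. Below is a minimal Python sketch, assuming binary heart and lung masks are available, that computes the CTR as the maximal horizontal cardiac width divided by the maximal horizontal thoracic width; it illustrates the idea, not the authors' exact post-processing.

import numpy as np

def horizontal_extent(mask):
    # Width in pixels of the widest horizontal span covered by a binary mask.
    columns_with_structure = np.flatnonzero(np.any(mask > 0, axis=0))
    if columns_with_structure.size == 0:
        return 0
    return columns_with_structure[-1] - columns_with_structure[0] + 1

def cardiothoracic_ratio(heart_mask, lung_mask):
    # CTR = maximal cardiac width / maximal thoracic (lung field) width.
    return horizontal_extent(heart_mask) / horizontal_extent(lung_mask)

# Hypothetical 512 x 512 masks standing in for the segmentation network outputs.
heart = np.zeros((512, 512), dtype=np.uint8)
lungs = np.zeros((512, 512), dtype=np.uint8)
heart[200:350, 180:330] = 1   # 150 px wide
lungs[100:450, 100:420] = 1   # 320 px wide
print(cardiothoracic_ratio(heart, lungs))   # ~0.47, below the common 0.5 cut-off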


This paper presents a deep learning approach for age estimation of human beings using their facial images. Racial groups based on skin colour have been incorporated into the annotations of the images in the dataset, while ensuring an adequate distribution of subjects across the racial groups so as to achieve accurate Automatic Facial Age Estimation (AFAE). The principle of transfer learning is applied to the ResNet50 Convolutional Neural Network (CNN), initially pretrained for the task of object classification, fine-tuning its hyperparameters to propose an AFAE system that can estimate the ages of humans across multiple racial groups. A mean absolute error of 4.25 years is obtained, which demonstrates the effectiveness and superiority of the proposed method.
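
A minimal sketch of the transfer-learning recipe described above, assuming Keras/TensorFlow: a ResNet50 backbone pretrained on ImageNet object classification with a new regression head fine-tuned for age estimation. The input size, head width, and optimizer settings are illustrative assumptions rather than the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Pretrained backbone; global average pooling replaces the classification top.
backbone = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = True            # fine-tune the pretrained weights

model = models.Sequential([
    backbone,
    layers.Dense(128, activation="relu"),
    layers.Dense(1),                 # single output: predicted age in years
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mean_absolute_error",       # matches the reported MAE metric
              metrics=["mean_absolute_error"])
# model.fit(train_images, train_ages, validation_data=(val_images, val_ages))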


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kaori Ishii ◽  
Ryo Asaoka ◽  
Takashi Omoto ◽  
Shingo Mitaki ◽  
Yuri Fujino ◽  
...  

The purpose of the current study was to predict intraocular pressure (IOP) using color fundus photography with a deep learning (DL) model, or systemic variables with a multivariate linear regression model (MLM), along with least absolute shrinkage and selection operator regression (LASSO), support vector machine (SVM), and random forest (RF) models. The training dataset included 3883 examinations from 3883 eyes of 1945 subjects, and the testing dataset included 289 examinations from 289 eyes of 146 subjects. With the training dataset, the MLM was constructed to predict IOP using 35 systemic variables and 25 blood measurements. A DL model was developed to predict IOP from color fundus photographs. The prediction accuracy of each model was evaluated through the absolute error and the marginal R-squared (mR2), using the testing dataset. The mean absolute error with the MLM was 2.29 mmHg, which was significantly smaller than that with DL (2.70 mmHg). The mR2 with the MLM was 0.15, whereas that with DL was 0.0066. The mean absolute error (between 2.24 and 2.30 mmHg) and mR2 (between 0.11 and 0.15) with LASSO, SVM and RF were similar to or poorer than those with the MLM. A DL model to predict IOP using color fundus photography proved far less accurate than the MLM using systemic variables.
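
A minimal sketch, assuming scikit-learn and a tabular feature matrix, of how the four systemic-variable models compared above (MLM, LASSO, SVM, RF) can be fit and scored by mean absolute error. The synthetic features and IOP values below are hypothetical stand-ins for the study's 35 systemic variables and 25 blood measurements.

import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))                              # hypothetical systemic + blood features
y = 15 + 1.5 * X[:, 0] + rng.normal(scale=2.0, size=500)    # hypothetical IOP values in mmHg
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLM":   LinearRegression(),
    "LASSO": Lasso(alpha=0.1),
    "SVM":   SVR(),
    "RF":    RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f} mmHg")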


2021 ◽  
Vol 11 (10) ◽  
pp. 4549
Author(s):  
Md. Mahbubul Islam ◽  
Joong-Hwan Baek

The COVID-19 pandemic markedly changed human shopping behavior, necessitating a contactless shopping system to curb the spread of the contagious disease efficiently. Consequently, customers prefer stores where physical contact can be avoided and the shopping process shortened through extended services such as personalized product recommendations. Automatic age and gender estimation of a customer in a smart store strongly benefits the consumer by providing personalized advertisements and product recommendations; similarly, it helps the smart store proprietor promote sales and continually develop inventory for future retail. In this paper, we propose a deep learning-based enterprise solution for smart store customer relationship management (CRM), which predicts the age and gender from a customer’s face image taken in an unconstrained environment to facilitate the smart store’s extended services, as expected of a modern venture. For the age estimation problem, we mitigate the data sparsity problem of the large public IMDB-WIKI dataset by image enhancement from another dataset and perform data augmentation as required. We handle our classification tasks utilizing an empirically leading pre-trained convolutional neural network (CNN), the VGG-16 network, and incorporate batch normalization. Notably, the age estimation task is posed as a deep classification problem followed by a multinomial logistic regression first-moment refinement. We validate our system on two standard benchmarks, one for each task, and demonstrate state-of-the-art performance for both real age and gender estimation.
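
A minimal sketch of the first-moment refinement idea mentioned above: age estimation is posed as classification over discrete age bins, and the final prediction is the expected value (first moment) of the predicted probability distribution rather than the arg-max bin. The bin range and the softmax output below are hypothetical, not values from the paper.

import numpy as np

age_bins = np.arange(0, 101)                     # one class per year of age, 0..100

def refined_age(class_probabilities):
    # Expected age: sum_i p_i * age_i over the (re)normalized softmax output.
    p = np.asarray(class_probabilities, dtype=float)
    p = p / p.sum()
    return float(np.dot(p, age_bins))

# Hypothetical softmax output peaked around 30 years.
probabilities = np.exp(-0.5 * ((age_bins - 30) / 3.0) ** 2)
print(refined_age(probabilities))                # approximately 30.0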


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the use of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute.
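
A minimal sketch, assuming a predicted pulse (rPPG) waveform sampled at a known frame rate, of how a heart-rate value in beats per minute can be read off as the dominant spectral peak in the physiologically plausible band; reported MAE and MSE values then compare such estimates against reference readings. This illustrates a common evaluation step, not PhysNet's own code.

import numpy as np

def heart_rate_bpm(pulse_signal, sampling_rate_hz):
    # Dominant frequency of the signal within the plausible heart-rate band (0.7-4 Hz).
    signal = np.asarray(pulse_signal, dtype=float)
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sampling_rate_hz)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 4.0)        # 42-240 beats per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 30.0                                         # hypothetical 30 fps video
t = np.arange(0, 10, 1.0 / fs)
synthetic_pulse = np.sin(2 * np.pi * 1.2 * t)     # synthetic 72 bpm waveform
print(heart_rate_bpm(synthetic_pulse, fs))        # 72.0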


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. The features used to detect and evaluate the severity of the combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from the raw data: the weight of the rolling stock, the speed of the rolling stock, and three peak and three bottom accelerations from each of the two wheels of the rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For raw data, the time-domain accelerations are used directly to develop the CNN and RNN models without processing and data extraction. Hyperparameter tuning is performed to ensure that the performance of each model is optimized; grid search is used for this tuning. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint together, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of the defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and an MAE of 1.58 mm.
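
A minimal sketch, assuming scikit-learn, of the grid-search hyperparameter tuning mentioned above, applied to a small feed-forward network on 14 simplified features (weight, speed, and peak/bottom accelerations). The parameter grid and the synthetic feature values are illustrative assumptions, not the study's actual search space or data.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 14))            # hypothetical simplified features
y = rng.integers(0, 2, size=300)          # hypothetical defect / no-defect labels

param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32), (128, 64)],
    "learning_rate_init": [1e-2, 1e-3],
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)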


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii148-ii148
Author(s):  
Yoshihiro Muragaki ◽  
Yutaka Matsui ◽  
Takashi Maruyama ◽  
Masayuki Nitta ◽  
Taiichi Saito ◽  
...  

INTRODUCTION It is useful to know the molecular subtype of lower-grade gliomas (LGG) when deciding on a treatment strategy. This study aims to diagnose this preoperatively. METHODS A deep learning model was developed to predict the 3-group molecular subtype using multimodal data including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). The performance was evaluated using leave-one-out cross-validation with a dataset containing information from 217 LGG patients. RESULTS The model performed best when the dataset contained MRI, PET, and CT data. The model could predict the molecular subtype with an accuracy of 96.6% for the training dataset and 68.7% for the test dataset. The model achieved test accuracies of 58.5%, 60.4%, and 59.4% when the dataset contained only MRI, MRI and PET, and MRI and CT data, respectively. The conventional method, which predicts mutations in the isocitrate dehydrogenase (IDH) gene and the codeletion of chromosome arms 1p and 19q (1p/19q) sequentially, had an overall accuracy of 65.9%. This is 2.8 percentage points lower than the proposed method, which predicts the 3-group molecular subtype directly. CONCLUSIONS AND FUTURE PERSPECTIVE A deep learning model was developed to diagnose the molecular subtype preoperatively based on multi-modality data and to predict the 3-group classification directly. Cross-validation showed that the proposed model had an overall accuracy of 68.7% for the test dataset. This is the first model to more than double the expected chance-level accuracy for a 3-group classification problem when predicting the LGG molecular subtype. We plan to apply heat map and/or segmentation techniques to further increase prediction accuracy.
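
A minimal sketch, assuming scikit-learn, of the leave-one-out cross-validation protocol used above for a 3-group classification over 217 patients. The classifier and the random features below are simple stand-ins for the multimodal deep learning model and the MRI/PET/CT inputs.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 20))            # hypothetical imaging-derived features
y = rng.integers(0, 3, size=217)          # hypothetical 3-group molecular subtype labels

scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, cv=LeaveOneOut(), scoring="accuracy")
print("Leave-one-out accuracy:", scores.mean())   # chance level is about 0.33 for 3 groups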


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hua Zheng ◽  
Zhenglong Wu ◽  
Shiqiang Duan ◽  
Jiangtao Zhou

Due to the inevitable deviations between the results of theoretical calculations and physical experiments, flutter tests and flutter signal analysis often play significant roles in designing the aeroelasticity of a new aircraft. The measured structural responses from aeroelastic models in both wind tunnel tests and real flight flutter tests contain an abundance of structural information, but traditional methods tend to have limited ability to extract the features of concern. Inspired by deep learning concepts, a novel feature extraction method for flutter signal analysis was established in this study by combining the convolutional neural network (CNN) with empirical mode decomposition (EMD). It is widely hypothesized that when flutter occurs, the measured structural signals are harmonic or divergent in the time domain, and that the flutter mode (1) is singular and (2) its energy increases significantly in the frequency domain. A measured-signal feature extraction and flutter criterion framework was constructed accordingly. The measured signals from a wind tunnel test were manually labeled “flutter” and “no-flutter” as the foundational dataset for the deep learning algorithm. After normalized preprocessing, the intrinsic mode functions (IMFs) of the flutter test signals are obtained by the EMD method. The IMFs are then reshaped to a suitable size for input to the CNN. The CNN parameters are optimized through the training dataset, and the trained model is validated through the test dataset (i.e., cross-validation). The accuracy rate of the proposed method reached 100% on the test dataset. The trained model appears to effectively distinguish whether or not a structural response signal contains flutter. The combination of EMD and CNN provides effective feature extraction of time series signals in flutter test data. This research explores the connection between structural response signals and flutter from the perspective of artificial intelligence. The method allows for real-time, online prediction with low computational complexity.
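
A minimal sketch of the EMD + CNN pipeline described above: decompose a measured structural response into intrinsic mode functions, stack and zero-pad them into a fixed-size array, and feed that array to a small CNN that classifies “flutter” versus “no-flutter”. It assumes the PyEMD package for the decomposition and PyTorch for the network; the layer sizes, number of retained IMFs, and signal length are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from PyEMD import EMD

def signal_to_imf_tensor(signal, n_imfs=6):
    # Decompose the signal into IMFs and stack them into a (1, n_imfs, length) tensor.
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    stacked = np.zeros((n_imfs, len(signal)), dtype=np.float32)
    k = min(n_imfs, imfs.shape[0])
    stacked[:k] = imfs[:k]
    return torch.from_numpy(stacked).unsqueeze(0)   # add a batch dimension

class FlutterCNN(nn.Module):
    def __init__(self, n_imfs=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_imfs, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)           # two classes: flutter / no-flutter

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

t = np.linspace(0.0, 1.0, 1024)
response = np.sin(40 * np.pi * t) + 0.3 * np.random.randn(1024)  # synthetic structural response
logits = FlutterCNN()(signal_to_imf_tensor(response))
print(logits.shape)                                  # torch.Size([1, 2])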


2021 ◽  
Author(s):  
Hangsik Shin

BACKGROUND Arterial stiffness due to vascular aging is a major indicator for evaluating cardiovascular risk. OBJECTIVE In this study, we propose a method of estimating age by applying machine learning to the photoplethysmogram for non-invasive vascular age assessment. METHODS The machine learning-based age estimation model, which consists of three convolutional layers and two fully connected layers, was developed using photoplethysmograms segmented by pulse from a total of 752 adults aged 19–87 years. The performance of the developed model was quantitatively evaluated using the mean absolute error, root-mean-squared error, Pearson’s correlation coefficient, and coefficient of determination. Grad-CAM was used to explain the contribution of photoplethysmogram waveform characteristics to vascular age estimation. RESULTS A mean absolute error of 8.03, a root-mean-squared error of 9.96, a correlation coefficient of 0.62, and a coefficient of determination of 0.38 were obtained through 10-fold cross-validation. Grad-CAM, which determines the weight with which each part of the input signal contributes to the result, confirmed that the contribution of the photoplethysmogram segment to age estimation was high around the systolic peak. CONCLUSIONS The machine learning-based vascular aging analysis method using the PPG waveform showed performance comparable or superior to that of previous studies in evaluating vascular aging, without requiring complex feature detection. CLINICALTRIAL 2015-0104
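
A minimal sketch, assuming PyTorch and fixed-length PPG pulse segments, of a network with the structure described above: three convolutional layers followed by two fully connected layers regressing age. The kernel sizes, channel counts, and segment length are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class PPGAgeEstimator(nn.Module):
    def __init__(self, segment_length=256):
        super().__init__()
        # Three convolutional layers over the 1-D pulse segment.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Two fully connected layers producing a single age estimate in years.
        self.fc = nn.Sequential(
            nn.Linear(64 * (segment_length // 8), 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                 # x: (batch, 1, segment_length)
        return self.fc(self.conv(x).flatten(1))

model = PPGAgeEstimator()
pulse_segments = torch.randn(8, 1, 256)   # hypothetical per-pulse PPG segments
print(model(pulse_segments).shape)        # torch.Size([8, 1])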


Author(s):  
Thanh Trung Ngo ◽  
Md Atiqur Rahman Ahad ◽  
Anindya Das Antar ◽  
Masud Ahmed ◽  
Daigo Muramatsu ◽  
...  
