Quantifying normal and parkinsonian gait features from home movies: Practical application of a deep learning–based 2D pose estimator

2019 ◽  
Author(s):  
Kenichiro Sato ◽  
Yu Nagashima ◽  
Tatsuo Mano ◽  
Atsushi Iwata ◽  
Tatsushi Toda

Abstract
Objective: Gait movies recorded in daily clinical practice are usually not filmed with specific devices, which prevents neurologists from leveraging gait analysis technologies. Here we propose a novel unsupervised approach to quantifying gait features and extracting cadence from normal and parkinsonian gait movies recorded with a home video camera by applying OpenPose, a deep learning–based 2D pose estimator that can obtain joint coordinates from pictures or videos recorded with a monocular camera.
Methods: Our proposed method consisted of two distinct phases: obtaining sequential gait features from movies by extracting body joint coordinates with OpenPose; and estimating the cadence of periodic gait steps from the sequential gait features using a short-time pitch detection approach.
Results: Cadence estimation of gait in the coronal plane (frontally viewed gait), as is frequently filmed in the daily clinical setting, was successfully conducted in normal gait movies using the short-time autocorrelation function (ST-ACF). In cases of parkinsonian gait with prominent freezing of gait and involuntary oscillations, we quantified the periodicity of each gait sequence using ACF-based statistical distance metrics; this metric clearly corresponded with the subjects' baseline disease statuses.
Conclusion: The proposed method allows us to analyze gait movies that have been underutilized to date in a completely data-driven manner, and might broaden the range of movies for which gait analyses can be conducted.
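As a rough illustration of the short-time pitch detection idea, the sketch below estimates cadence from a single OpenPose keypoint trajectory (e.g., one ankle coordinate) by locating the dominant peak of a windowed autocorrelation function. The function name, window lengths, and lag bounds are assumptions for illustration, not the authors' code.

```python
import numpy as np

def st_acf_cadence(signal, fps, win_sec=3.0, min_lag_sec=0.25, max_lag_sec=2.0):
    """Estimate cadence (steps/min) from a joint-coordinate time series using
    a short-time autocorrelation function (ST-ACF).

    `signal` is assumed to be a 1D array of one OpenPose keypoint coordinate
    sampled at the video frame rate `fps`. Illustrative sketch only."""
    win = int(win_sec * fps)
    min_lag, max_lag = int(min_lag_sec * fps), int(max_lag_sec * fps)
    cadences = []
    for start in range(0, len(signal) - win, win // 2):      # 50% overlapping windows
        seg = signal[start:start + win]
        seg = seg - seg.mean()                                # remove DC offset / slow drift
        acf = np.correlate(seg, seg, mode="full")[win - 1:]   # one-sided autocorrelation
        acf /= acf[0] + 1e-12                                 # normalize so ACF(0) = 1
        lag = min_lag + int(np.argmax(acf[min_lag:max_lag]))  # lag of the dominant period
        cadences.append(60.0 * fps / lag)                     # frames per step -> steps/min
    return np.asarray(cadences)
```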

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods rely on a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, this paper compares the deep learning methods whose codes are publicly available. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results show that the deep learning method PhysNet produces the best heart rate measurement outcome among these methods, with a mean absolute error of 2.57 beats per minute and a mean square error of 7.56 beats per minute.
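For reference, the two kinds of error metric quoted above can be computed from per-clip heart rate estimates as in the short sketch below; the function name and evaluation protocol are assumptions, not the paper's code.

```python
import numpy as np

def hr_error_metrics(hr_pred, hr_true):
    """Error metrics commonly reported for remote heart-rate estimation,
    with inputs in beats per minute. Illustrative only; the paper's exact
    evaluation protocol on the UBFC dataset may differ."""
    hr_pred = np.asarray(hr_pred, dtype=float)
    hr_true = np.asarray(hr_true, dtype=float)
    mae = np.mean(np.abs(hr_pred - hr_true))            # mean absolute error
    rmse = np.sqrt(np.mean((hr_pred - hr_true) ** 2))   # root mean square error
    return mae, rmse
```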


1964 ◽  
Vol 36 (5) ◽  
pp. 1030-1030 ◽  
Author(s):  
A. M. Noll ◽  
M. R. Schroeder

Medical image registration has important value in actual clinical applications. From traditional, time-consuming iterative similarity optimization, through faster supervised deep learning, to today's unsupervised learning, the continuing refinement of registration strategies has made registration increasingly feasible in clinical applications. This survey mainly focuses on unsupervised learning methods and introduces the latest solutions for different registration relationships. Inter-modality registration is a more challenging topic, and the application of unsupervised learning to inter-modality registration is the focus of this article. In addition, the survey outlines ideas for future research methods to indicate directions for future work.
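To make the unsupervised setting concrete, the following PyTorch sketch shows a minimal VoxelMorph-style training step: a small network predicts a displacement field, the moving image is warped onto the fixed image, and the loss combines an image similarity term with a smoothness regularizer, so no ground-truth deformations are needed. All names, layer sizes, and the MSE similarity term are illustrative assumptions; an inter-modality setting would substitute a cross-modal similarity such as mutual information.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Tiny CNN predicting a dense 2D displacement field from a moving/fixed pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),          # two channels: (dx, dy)
        )
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Warp `image` with a pixel-unit displacement field via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().to(image.device)  # (H, W, 2), x-then-y
    new = grid + flow.permute(0, 2, 3, 1)                          # add predicted displacement
    new_x = 2 * new[..., 0] / (w - 1) - 1                          # normalize to [-1, 1]
    new_y = 2 * new[..., 1] / (h - 1) - 1
    return F.grid_sample(image, torch.stack([new_x, new_y], dim=-1), align_corners=True)

def smoothness(flow):
    """L2 penalty on spatial gradients of the displacement field."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

# One unsupervised training step on a dummy image pair: only image similarity
# plus regularization drive the optimization, no deformation labels.
model = RegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
moving, fixed = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
opt.zero_grad()
flow = model(moving, fixed)
loss = F.mse_loss(warp(moving, flow), fixed) + 0.01 * smoothness(flow)
loss.backward()
opt.step()
```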


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6299 ◽  
Author(s):  
Sutanu Bhowmick ◽  
Satish Nagarajaiah ◽  
Ashok Veeraraghavan

Immediate assessment of the structural integrity of important civil infrastructure, like bridges, hospitals, or dams, is of utmost importance after natural disasters. Currently, inspection is performed manually by engineers who look for local damage and its extent at significant locations of the structure to understand its implications for global stability. However, the whole process is time-consuming and prone to human error. Due to their size and extent, some regions of civil structures are hard to access for manual inspection. In such situations, a vision-based system of Unmanned Aerial Vehicles (UAVs) programmed with Artificial Intelligence algorithms may be an effective alternative for carrying out a health assessment of civil infrastructure in a timely manner. This paper proposes a framework for achieving the above-mentioned goal using computer vision and deep learning algorithms to detect cracks on a concrete surface from its image by carrying out image segmentation, i.e., classifying each pixel in an image of the concrete surface according to whether it belongs to a crack or not. The image segmentation, or dense pixel-level classification, is carried out using a deep neural network architecture named U-Net. Further, morphological operations on the segmented images yield dense measurements of crack geometry, like length, width, area, and orientation for the individual cracks present in the image. The efficacy and robustness of the proposed method as a viable real-life application were validated in a laboratory experiment, a four-point bending test on an 8-foot-long concrete beam, with video recorded both by a camera mounted on a UAV and by a still, ground-based video camera. Detection, quantification, and localization of damage on civil infrastructure using the proposed framework can directly be used in the prognosis of the structure's ability to withstand service loads.
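The geometry extraction step can be illustrated with a short post-processing sketch on a binary crack mask (e.g., a thresholded U-Net output); the function names and the width heuristic below are assumptions for illustration, not the paper's exact morphological pipeline.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def crack_geometry(mask, mm_per_pixel=1.0):
    """Derive per-crack geometry from a binary segmentation mask.
    Returns a list of dicts with area, centerline length, mean width, and
    orientation for each connected crack region. Illustrative sketch only."""
    results = []
    for region in regionprops(label(mask.astype(np.uint8))):
        crack = np.zeros_like(mask, dtype=bool)
        crack[tuple(region.coords.T)] = True          # isolate this crack region
        skel_len = skeletonize(crack).sum()           # centerline length in pixels
        area_px = region.area                         # crack area in pixels
        results.append({
            "area": area_px * mm_per_pixel ** 2,
            "length": skel_len * mm_per_pixel,
            "mean_width": (area_px / max(skel_len, 1)) * mm_per_pixel,  # area / length heuristic
            "orientation": region.orientation,        # major-axis angle (radians)
        })
    return results
```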


2020 ◽  
Vol 10 (14) ◽  
pp. 4744
Author(s):  
Hyukzae Lee ◽  
Jonghee Kim ◽  
Chanho Jung ◽  
Yongchan Park ◽  
Woong Park ◽  
...  

The arena fragmentation test (AFT) is one of the tests used to design an effective warhead. Conventionally, complex and expensive measuring equipment is used to test a warhead and to measure important factors such as the size, velocity, and spatial distribution of fragments as they penetrate steel target plates. In this paper, instead of using specific sensors and equipment, we proposed the use of a deep learning-based object detection algorithm to detect fragments in the AFT. To this end, we acquired many high-speed videos and built an AFT image dataset with bounding boxes of warhead fragments. Our method fine-tuned an existing object detection network, Faster R-CNN (region-based convolutional neural network), on this dataset with a modification of the network's anchor boxes. We also employed a novel temporal filtering method, previously demonstrated to be an effective non-fragment filtering scheme in our earlier image processing-based fragment detection approach, to capture only the first penetrating fragments among all detected fragments. We showed that the performance of the proposed method was comparable to that of a sensor-based system under the same experimental conditions. We also demonstrated, via a quantitative comparison, that using deep learning for the AFT task significantly enhances performance: the proposed method outperformed our earlier image processing-based method and produced outstanding results in terms of finding the exact fragment positions.
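A minimal sketch of fine-tuning a Faster R-CNN with modified anchor boxes, using the torchvision implementation, is shown below; the anchor sizes, the two-class head, and the variable names are assumptions for illustration rather than the configuration used in the paper.

```python
import torchvision
from torchvision.models.detection.rpn import AnchorGenerator, RPNHead
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained Faster R-CNN and adapt it toward small objects
# such as fragment traces; the anchor sizes below are illustrative guesses.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the anchor generator: one size tuple per FPN level.
anchor_gen = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)
model.rpn.anchor_generator = anchor_gen
# The RPN head must emit the matching number of anchors per location.
model.rpn.head = RPNHead(256, anchor_gen.num_anchors_per_location()[0])

# Two classes: background and fragment.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Fine-tuning then proceeds with a standard torchvision detection training
# loop over the AFT image dataset (images plus fragment bounding boxes).
```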


2020 ◽  
Vol 10 (9) ◽  
pp. 3097
Author(s):  
Dmitry Kaplun ◽  
Alexander Voznesensky ◽  
Sergei Romanov ◽  
Valery Andreev ◽  
Denis Butusov

This paper considers two approaches to hydroacoustic signal classification, taking the sounds made by whales as an example: a method based on harmonic wavelets and a technique involving deep learning neural networks. The study classifies hydroacoustic signals with a kNN algorithm using coefficients of the harmonic wavelet transform (fast computation), the short-time Fourier transform (spectrogram), and the Fourier transform. Classification quality metrics (precision, recall, and accuracy) are given for different signal-to-noise ratios, and ROC curves were also obtained. The use of a deep neural network for the classification of whale sounds is also considered. The effectiveness of using harmonic wavelets for the classification of complex non-stationary signals is demonstrated. A technique to reduce the feature space dimension using a 'modulo N reduction' method is proposed. A classification of 26 individual whales from the Whale FM Project dataset is presented. It is shown that the deep-learning-based approach provides the best results on the Whale FM Project dataset both for whale types and for individual whales.
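A toy sketch of the spectrogram-plus-kNN part of the pipeline is given below; the 'modulo N reduction' shown (folding spectral coefficients by index modulo N) is only one plausible reading of the paper's dimension-reduction step, and the data, sampling rate, and parameters are placeholders.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier

def stft_features(signal, fs, n_fold=64):
    """Spectrogram-based feature vector for one hydroacoustic recording.
    The modulo-N folding here is an assumed interpretation of the paper's
    'modulo N reduction'; the authors' definition may differ."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    spec = sxx.mean(axis=1)                      # average power spectrum over time
    idx = np.arange(spec.size) % n_fold          # fold coefficient indices modulo N
    return np.bincount(idx, weights=spec, minlength=n_fold)

# Illustrative kNN classification with placeholder random data standing in
# for labeled whale recordings from the Whale FM Project dataset.
fs = 4000
X_train = np.stack([stft_features(np.random.randn(fs * 5), fs) for _ in range(20)])
y_train = np.random.randint(0, 2, 20)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict(stft_features(np.random.randn(fs * 5), fs)[None, :]))
```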


2020 ◽  
Vol 12 (15) ◽  
pp. 2502 ◽  
Author(s):  
Bulent Ayhan ◽  
Chiman Kwan ◽  
Bence Budavari ◽  
Liyun Kwan ◽  
Yan Lu ◽  
...  

Land cover classification with a focus on chlorophyll-rich vegetation detection plays an important role in urban growth monitoring and planning, autonomous navigation, drone mapping, biodiversity conservation, etc. Conventional approaches usually apply the normalized difference vegetation index (NDVI) for vegetation detection. In this paper, we investigate the performance of deep learning and conventional methods for vegetation detection. Two deep learning methods, DeepLabV3+ and our customized convolutional neural network (CNN), were evaluated with respect to their detection performance when training and testing datasets originated from different geographical sites with different image resolutions. A novel object-based vegetation detection approach, which utilizes NDVI, computer vision, and machine learning (ML) techniques, is also proposed. The vegetation detection methods were applied to high-resolution airborne color images that consist of RGB and near-infrared (NIR) bands. RGB color images alone were also used with the two deep learning methods to examine their detection performance without the NIR band. The detection performance of the deep learning methods relative to the object-based detection approach is discussed, and sample images from the datasets are used for demonstration.
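For the conventional baseline, NDVI-based vegetation masking can be sketched as follows; the 0.3 threshold and the function name are illustrative assumptions rather than the values used in the paper's object-based pipeline.

```python
import numpy as np

def ndvi_vegetation_mask(red, nir, threshold=0.3):
    """Compute NDVI = (NIR - Red) / (NIR + Red) and threshold it to obtain a
    binary vegetation mask. The threshold is a common illustrative choice,
    not the value used in the paper."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    ndvi = (nir - red) / (nir + red + 1e-6)   # small epsilon avoids division by zero
    return ndvi, ndvi > threshold
```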

