Activities of daily living with bionic arm improved by combination training and latching filter in prosthesis control comparison

2020 ◽  
Author(s):  
Michael D. Paskett ◽  
Mark R. Brinton ◽  
Taylor C. Hansen ◽  
Jacob A. George ◽  
Tyler S. Davis ◽  
...  

Abstract
Background: Advanced prostheses can restore function and improve quality of life for individuals with amputations. Unfortunately, most commercial control strategies do not fully utilize the rich control information from residual nerves and musculature. Continuous decoders can provide more intuitive prosthesis control using multi-channel neural or electromyographic recordings. Three components influence continuous decoder performance: the data used to train the algorithm, the algorithm itself, and the smoothing filters applied to the algorithm's output. Because individual groups often focus on a single decoder, very few studies compare different decoders under otherwise similar experimental conditions.
Methods: We completed a two-phase, head-to-head comparison of 12 continuous decoders using activities of daily living. In phase one, we compared two training types and a smoothing filter with three algorithms (modified Kalman filter (mKF), multi-layer perceptron, and convolutional neural network) in a clothespin relocation task. We compared training types that included only individual digit and wrist movements vs. combination movements (e.g., simultaneous grasp and wrist flexion). We also compared raw vs. nonlinearly smoothed algorithm outputs. In phase two, we compared the three algorithms in fragile egg, zipping, pouring, and folding tasks using the combination training and smoothing found beneficial in phase one. In both phases, we collected objective, performance-based measures (e.g., success rate) and subjective, user-focused measures (e.g., preference).
Results: Phase one showed that combination training improved prosthesis control accuracy and speed, and that the nonlinear smoothing improved accuracy but generally reduced speed. Importantly, phase one also showed that simultaneous movements were used in the task, and that the modified Kalman filter and multi-layer perceptron predicted more simultaneous movements than the convolutional neural network. In phase two, user-focused metrics favored the convolutional neural network and modified Kalman filter, whereas performance-based metrics were generally similar among all algorithms.
Conclusions: These results confirm that state-of-the-art algorithms, whether linear or nonlinear in nature, benefit functionally from training on more complex data and from output smoothing. These studies will be used to select a decoder for a long-term take-home trial with implanted neuromyoelectric devices. Overall, clinical considerations may favor the mKF, as it is similar in performance, faster to train, and computationally less expensive than the neural networks.
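
As a concrete illustration of the third component above, output smoothing, the sketch below shows one plausible form of the nonlinear "latching" filter named in the title: an exponential smoother whose gain shrinks for small output changes (so noise near a held position is "latched") and grows for large changes (so deliberate movements pass with little lag). The gain limits and the linear gain schedule are illustrative assumptions, not the authors' published formulation.

```python
import numpy as np

def latching_filter(decoded, alpha_min=0.05, alpha_max=1.0):
    """Nonlinear exponential smoother for one decoded degree of freedom:
    small changes are heavily smoothed (the output 'latches'), large changes
    pass through quickly. Illustrative sketch only."""
    decoded = np.asarray(decoded, dtype=float)
    smoothed = np.empty_like(decoded)
    prev = smoothed[0] = decoded[0]
    for t in range(1, len(decoded)):
        delta = min(abs(decoded[t] - prev), 1.0)         # change, clipped to [0, 1]
        alpha = alpha_min + (alpha_max - alpha_min) * delta
        prev = prev + alpha * (decoded[t] - prev)        # variable-gain update
        smoothed[t] = prev
    return smoothed

# Example: noise around a held position stays latched near zero, while a
# deliberate ramp toward full flexion is tracked with little added lag.
t = np.linspace(0, 2, 200)
raw = np.clip(np.where(t < 1, 0.02 * np.random.randn(200), t - 1), -1, 1)
print(latching_filter(raw)[:5])
```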


2020 ◽  
Vol 11 (1) ◽  
pp. 10
Author(s):  
Muchun Su ◽  
Diana Wahyu Hayati ◽  
Shaowu Tseng ◽  
Jiehhaur Chen ◽  
Hsihsien Wei

Health care for independently living elders is more important than ever. Automatic recognition of their Activities of Daily Living (ADL) is the first step toward solving the health care issues faced by seniors in an efficient way. This paper describes a Deep Neural Network (DNN)-based recognition system aimed at facilitating smart care, which combines ADL recognition, image/video processing, movement calculation, and a DNN. An algorithm is developed for processing skeletal data, filtering noise, and recognizing patterns to identify the 10 most common ADLs, including standing, bending, squatting, sitting, eating, hand holding, hand raising, sitting plus drinking, standing plus drinking, and falling. The evaluation results show that this DNN-based system is a suitable method for ADL recognition, with an accuracy rate of over 95%. The findings support the feasibility of the system, which is efficient enough for both practical and academic applications.
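
As a rough sketch of what the recognition stage might look like, the PyTorch snippet below maps a single denoised skeleton frame to one of the ten ADL classes listed above. The feature layout (25 joints × 3 coordinates) and the layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Ten ADL classes named in the abstract.
ADL_CLASSES = ["standing", "bending", "squatting", "sitting", "eating",
               "hand holding", "hand raising", "sitting plus drinking",
               "standing plus drinking", "falling"]

class ADLClassifier(nn.Module):
    def __init__(self, n_features=75, n_classes=len(ADL_CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),          # raw logits; softmax at inference
        )

    def forward(self, x):
        return self.net(x)

model = ADLClassifier()
frame = torch.randn(1, 75)                     # one denoised skeleton frame
print(ADL_CLASSES[model(frame).argmax(dim=1).item()])
```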


Author(s):  
Jovin Angelico ◽  
Ken Ratri Retno Wardani

The ability of computers to detect human beings through computer vision is still being improved, both in accuracy and in computation time. In low-lighting conditions, detection accuracy is usually low. This research uses additional information beyond the RGB channels, namely a depth map that gives each object's distance relative to the camera. It integrates a Cascade Classifier (CC) to localize potential objects, a Convolutional Neural Network (CNN) to distinguish human from non-human images, and a Kalman filter to track human movement. For training and testing, two RGB-D datasets with different points of view and lighting conditions are used. Both datasets have been filtered to remove images containing heavy noise and occlusion so that the training process is better directed. Using these integrated techniques, detection and tracking accuracy reaches 77.7%, and the Kalman filter increases computational efficiency by 41%.
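
A minimal sketch of such a detect-verify-track pipeline, built from OpenCV's stock cascade classifier and Kalman filter, is shown below. The cascade file, the Kalman noise settings, and the `is_human` CNN placeholder are illustrative assumptions rather than the configuration used in the study.

```python
import cv2
import numpy as np

# Stage 1: cascade classifier proposes candidate regions.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

def is_human(rgb_patch, depth_patch):
    """Placeholder for the trained CNN that accepts/rejects a candidate."""
    return True  # a real implementation would run the RGB-D patch through a CNN

# Stage 3: constant-velocity Kalman filter over the detection centroid
# with state (x, y, vx, vy) and measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track_frame(rgb, depth):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    prediction = kf.predict()                    # prediction keeps the track alive
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
        if is_human(rgb[y:y+h, x:x+w], depth[y:y+h, x:x+w]):
            centroid = np.array([[x + w / 2], [y + h / 2]], np.float32)
            kf.correct(centroid)                 # fuse the CNN-verified detection
            break
    return prediction[:2].ravel()                # smoothed (x, y) estimate
```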


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 972 ◽  
Author(s):  
Xingchen Liu ◽  
Qicai Zhou ◽  
Jiong Zhao ◽  
Hehong Shen ◽  
Xiaolei Xiong

Deep learning methods have been widely used in the field of intelligent fault diagnosis due to their powerful feature learning and classification capabilities. However, deep models are prone to overfitting because of the large number of parameters introduced by their multilayer structure. As a result, methods with excellent performance under experimental conditions may degrade severely in the noisy environments that are ubiquitous in practical industrial applications. In this paper, a novel method combining a one-dimensional (1-D) denoising convolutional autoencoder (DCAE) and a 1-D convolutional neural network (CNN) is proposed to address this problem: the former reduces noise in the raw vibration signals, and the latter performs fault diagnosis on the denoised signals. The DCAE model is trained with noisy input for denoising learning. In the CNN model, a global average pooling layer, instead of fully connected layers, is applied as the classifier to reduce the number of parameters and the risk of overfitting. In addition, randomly corrupted signals are adopted as training samples to improve the anti-noise diagnosis ability. The proposed method is validated on bearing and gearbox datasets mixed with Gaussian noise. The experimental results show that the proposed DCAE model is effective at denoising and causes almost no loss of input information, while the use of global average pooling and input-corruption training improves the anti-noise ability of the CNN model. As a result, the combination of the DCAE and CNN models achieves high-accuracy diagnosis even in noisy environments.
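
The Keras sketch below illustrates the two-model scheme: a 1-D denoising autoencoder trained to reconstruct clean vibration segments from noisy ones, followed by a 1-D CNN whose classifier head is a global average pooling layer rather than fully connected layers. Segment length, kernel sizes, filter counts, and the noise level are assumptions for illustration, not the values used in the paper.

```python
from tensorflow.keras import layers, models

SIG_LEN = 1024  # length of one raw vibration segment (assumed)

# 1-D denoising convolutional autoencoder: trained on noise-corrupted inputs
# to reconstruct the clean signal.
dcae = models.Sequential([
    layers.Input((SIG_LEN, 1)),
    layers.Conv1D(16, 16, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(8, 16, padding="same", activation="relu"),
    layers.UpSampling1D(2),
    layers.Conv1D(1, 16, padding="same", activation="linear"),
])
dcae.compile(optimizer="adam", loss="mse")

# 1-D CNN classifier: global average pooling replaces fully connected layers,
# keeping the parameter count (and overfitting risk) low.
cnn = models.Sequential([
    layers.Input((SIG_LEN, 1)),
    layers.Conv1D(32, 64, strides=8, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(10, activation="softmax"),    # e.g., 10 fault classes
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training idea from the abstract: add Gaussian noise to the inputs while the
# DCAE targets stay clean, then diagnose from the denoised signals:
#   dcae.fit(noisy_x, clean_x, ...)
#   cnn.fit(dcae.predict(noisy_x), labels, ...)
```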


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6932
Author(s):  
Matthew Burns ◽  
Federico Cruciani ◽  
Philip Morrow ◽  
Chris Nugent ◽  
Sally McClean

For those in need of 24/7 care, the desire to remain living in one's own home rather than in a care home requires an understanding of the actions of an environment's inhabitants. This can potentially be accomplished by recognising Activities of Daily Living (ADLs); however, this research focuses first on producing an unobtrusive solution for pose recognition in which the preservation of privacy is a primary aim. With an accurate means of predicting an inhabitant's poses, their interactions with objects within the environment, and therefore the activities they are performing, can begin to be understood. This research implements a Convolutional Neural Network (CNN), designed with an original architecture derived from the popular AlexNet, to predict poses from thermal imagery captured by thermopile infrared sensors (TISs). Five TISs have been deployed within the smart kitchen at Ulster University, where each provides input to a corresponding trained CNN. The approach is evaluated using an original dataset, and an F1-score of 0.9920 was achieved with all five TISs. The limitations of using a ceiling-based TIS are investigated, and each possible permutation of corner-based TISs is evaluated to satisfy a trade-off between the number of TISs, the total sensor cost, and performance. These tests are also promising, as F1-scores of 0.9266, 0.9149, and 0.8468 were achieved with the isolated use of four, three, and two corner TISs, respectively.
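
For illustration, the sketch below outlines one way such a per-sensor CNN could be set up in Keras for low-resolution thermal frames, with one independent model per TIS. The 32 × 32 input size, the pose labels, and the layer sizes are assumptions; the paper's AlexNet-derived architecture differs in its details.

```python
from tensorflow.keras import layers, models

POSES = ["standing", "sitting", "bending", "absent"]   # hypothetical label set

def build_pose_cnn(input_shape=(32, 32, 1), n_classes=len(POSES)):
    """Small CNN for one thermal sensor's frames (illustrative sketch)."""
    model = models.Sequential([
        layers.Input(input_shape),
        layers.Conv2D(32, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Five TISs, each feeding its own trained CNN, as described in the abstract;
# per-frame predictions can then be combined, e.g., by majority vote.
models_per_sensor = {f"TIS_{i}": build_pose_cnn() for i in range(1, 6)}
```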


2020 ◽  
Author(s):  
Maria Kaselimi ◽  
Nikolaos Doulamis ◽  
Demitris Delikaraoglou

Knowledge of the ionospheric electron density is essential for a wide range of applications, e.g., telecommunications, satellite positioning and navigation, and Earth observation from space. Therefore, considerable effort has been concentrated on modeling this ionospheric parameter of interest. Ionospheric electron density is characterized by high complexity and is space- and time-varying, as it is highly dependent on local time, latitude, longitude, season, solar cycle and activity, and geomagnetic conditions. Daytime disturbances cause periodic changes in total electron content (diurnal variation), and additionally there are multi-day periodicities, seasonal variations, latitudinal variations, and even ionospheric perturbations that cause fluctuations in signal transmission.

Because of their multiple frequency bands, current Global Navigation Satellite Systems (GNSS) offer an excellent example of how ionospheric conditions can be inferred from their effect on the radio signals of different GNSS frequency bands. Thus, GNSS techniques provide a way of directly measuring the electron density in the ionosphere. The main advantage of such techniques is the provision of integrated electron content measurements along the satellite-to-receiver line of sight at a large number of sites over a large geographic area.

Deep learning techniques are essential to reveal accurate ionospheric conditions and create representations at high levels of abstraction. These methods can successfully deal with non-linearity and complexity and are capable of identifying complex data patterns, achieving accurate ionosphere modeling. One application that has recently attracted considerable attention within the geodetic community is the possibility of applying these techniques to model ionospheric delays based on GNSS satellite signals.

This paper deals with a modeling approach suitable for predicting the ionospheric delay at different locations of the IGS network stations using an adaptive Convolutional Neural Network (CNN). As experimental data, we used actual GNSS observations from selected stations of the global IGS network participating in the still-ongoing MGEX project, which provides various satellite signals from the currently available multiple navigation satellite systems. Slant TEC (STEC) data were obtained using the undifferenced and unconstrained PPP technique; the STEC data were produced with the GAMP software and converted to VTEC values. The proposed CNN uses the following basic information: GNSS signal azimuth and elevation angle, and GNSS satellite position (x and y). The adaptive CNN then combines these inputs with the VTEC values predicted by the first CNN for the previous observation epochs. Topics to be discussed in the paper include the design of the CNN network structure, the training strategy, data analysis, and preliminary test results for the predicted ionospheric delays as compared with the IGS ionosphere products.
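
A minimal sketch of one possible realization of such a predictor is given below: a 1-D CNN that slides over a short window of past observation epochs, each carrying the inputs named in the abstract (azimuth, elevation, satellite x and y position) together with the VTEC predicted for earlier epochs. The window length, layer sizes, and single-value output head are assumptions for illustration only.

```python
from tensorflow.keras import layers, models

WINDOW = 12          # number of past epochs fed to the network (assumed)
N_FEATURES = 5       # azimuth, elevation, sat_x, sat_y, previous VTEC

vtec_cnn = models.Sequential([
    layers.Input((WINDOW, N_FEATURES)),
    layers.Conv1D(32, 3, padding="same", activation="relu"),
    layers.Conv1D(32, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                 # predicted VTEC for the next epoch (TECU)
])
vtec_cnn.compile(optimizer="adam", loss="mse")
# vtec_cnn.fit(windows, next_epoch_vtec, ...)  # PPP-derived VTEC as targets
```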


Author(s):  
Mohammad Javad Shooshtari ◽  
Hossein Etemadfard ◽  
Rouzbeh Shad

The widespread deployment of social media has helped researchers access an enormous amount of data in various domains, including the pandemic caused by the spread of COVID-19. This study presents a heuristic approach to classifying Commercial Instagram Posts (CIPs) and explores how the businesses around the Holy Shrine – a sacred complex in Mashhad, Iran, surrounded by numerous shopping centers – were impacted by the pandemic. Two datasets of Instagram posts (one gathered from March 14th to April 10th, 2020, when the Holy Shrine and nearby shops were closed, and one extracted for the same period in 2019), two word embedding models – used to vectorize the caption associated with each post – and two neural networks – a multi-layer perceptron (MLP) and a convolutional neural network – were employed to classify the 2019 CIPs. Among the scenarios defined for the 2019 CIP classification, the combination of MLP and CBoW achieved the best performance and was then used for the 2020 CIP classification. It was found that the fraction of CIPs among all Instagram posts increased from 5.58% in 2019 to 8.08% in 2020, meaning that business owners were using Instagram to increase their sales and continue their commercial activities to compensate for the closure of their stores during the pandemic. Moreover, the share of non-commercial Instagram posts (NCIPs) in total posts decreased from 94.42% in 2019 to 91.92% in 2020, implying that since the Holy Shrine was closed, Mashhad citizens and tourists could not visit it and take photos to post on their Instagram accounts.
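
A compact sketch of the winning MLP-plus-CBoW combination is given below: captions are embedded with a CBoW Word2Vec model, averaged into fixed-length vectors, and classified with an MLP. The toy captions, tokenization, vector size, and hidden-layer sizes are illustrative assumptions; only the MLP + CBoW pairing follows the abstract.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

# Hypothetical tokenized captions and labels for illustration only.
captions = [["discount", "on", "all", "scarves", "dm", "to", "order"],
            ["beautiful", "evening", "near", "the", "shrine"]]
labels = [1, 0]  # 1 = commercial post (CIP), 0 = non-commercial (NCIP)

cbow = Word2Vec(sentences=captions, vector_size=50, window=3,
                min_count=1, sg=0)               # sg=0 selects the CBoW model

def caption_vector(tokens, model):
    """Average the word vectors of a caption into one fixed-length vector."""
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.stack([caption_vector(c, cbow) for c in captions])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, labels)
print(clf.predict(X))                            # predicted CIP / NCIP labels
```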

