Deep Learning-Based Approaches for Decoding Motor Intent from Peripheral Nerve Signals

2021 ◽  
Author(s):  
Diu Khue Luu ◽  
Anh Tuan Nguyen ◽  
Ming Jiang ◽  
Jian Xu ◽  
Markus W. Drealan ◽  
...  

Abstract: The ultimate goal of an upper-limb neuroprosthesis is to achieve dexterous and intuitive control of individual fingers. Previous literature shows that deep learning (DL) is an effective tool to decode motor intent from neural signals obtained from different parts of the nervous system. However, it still requires complicated deep neural networks that are inefficient and not feasible to run in real time. Here we investigate different approaches to enhance the efficiency of the DL-based motor decoding paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the input data dimensionality. Next, we investigate two different strategies for deploying DL models: a one-step (1S) approach when big input data are available and a two-step (2S) approach when input data are limited. With the 1S approach, a single regression stage predicts the trajectories of all fingers. With the 2S approach, a classification stage identifies the fingers in motion, followed by a regression stage that predicts those active digits’ trajectories. The addition of feature extraction substantially lowers the motor decoder’s complexity, making it feasible for translation to a real-time paradigm. The 1S approach using a recurrent neural network (RNN) generally gives better prediction results than all the machine learning (ML) algorithms, with mean squared error (MSE) ranging from 10⁻³ to 10⁻⁴ for all fingers, while variance accounted for (VAF) scores are above 0.8 for most degrees of freedom (DOF). This result reaffirms that DL is more advantageous than classic ML methods for handling a large dataset. However, when training on a smaller input data set as in the 2S approach, ML techniques offer a simpler implementation while ensuring decoding outcomes comparably good to the DL ones. In the classification step, both ML and DL models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models yield MSE and VAF scores comparable to those of the 1S approach. Our study outlines the trade-offs to inform the implementation of real-time, low-latency, and high-accuracy DL-based motor decoders for clinical applications.
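Below is a minimal sketch of the two-step (2S) idea described in the abstract: a classification stage flags windows with finger motion, then a regression stage predicts trajectories only for those windows. The synthetic feature matrix, labels, and the choice of scikit-learn models (random forest, SVR) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))            # extracted nerve features (windows x features)
moving = rng.integers(0, 2, size=2000)     # step 1 label: is any finger in motion?
traj = rng.normal(size=(2000, 5))          # step 2 target: trajectories of 5 digits

# Step 1: classification stage identifies windows containing finger motion
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, moving)

# Step 2: regression stage predicts trajectories only for "active" windows
active = clf.predict(X) == 1
reg = MultiOutputRegressor(SVR()).fit(X[active], traj[active])
print(reg.predict(X[active][:3]).shape)    # (3, 5): trajectories for 5 digits
```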

2021 ◽  
Vol 15 ◽  
Author(s):  
Diu K. Luu ◽  
Anh T. Nguyen ◽  
Ming Jiang ◽  
Jian Xu ◽  
Markus W. Drealan ◽  
...  

Previous literature shows that deep learning is an effective tool to decode motor intent from neural signals obtained from different parts of the nervous system. However, deep neural networks are often computationally complex and not feasible to run in real time. Here we investigate the advantages and disadvantages of different approaches to enhance the efficiency of the deep learning-based motor decoding paradigm and inform its future real-time implementation. Our data are recorded from the amputee's residual peripheral nerves. While the primary analysis is offline, the nerve data are cut using a sliding window to create a “pseudo-online” dataset that resembles the conditions of a real-time paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the input data dimensionality, which later helps substantially lower the motor decoder's complexity, making it feasible for translation to a real-time paradigm. Next, we investigate two different strategies for deploying deep learning models: a one-step (1S) approach when big input data are available and a two-step (2S) approach when input data are limited. This research predicts five individual finger movements and four combinations of the fingers. The 1S approach, which uses a recurrent neural network (RNN) to concurrently predict all fingers' trajectories, generally gives better prediction results than all the machine learning algorithms performing the same task. This result reaffirms that deep learning is more advantageous than classic machine learning methods for handling a large dataset. However, when training on a smaller input data set in the 2S approach, which includes a classification stage to identify active fingers before predicting their trajectories, machine learning techniques offer a simpler implementation while ensuring decoding outcomes comparably good to the deep learning ones. In the classification step, both machine learning and deep learning models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models yield mean squared error (MSE) and variance accounted for (VAF) scores comparable to those of the 1S approach. Our study outlines the trade-offs to inform the future implementation of real-time, low-latency, and high-accuracy deep learning-based motor decoders for clinical applications.
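A brief sketch of the “pseudo-online” preparation mentioned above: a sliding window cuts a continuous recording into overlapping segments, and a simple per-window feature reduces the input dimensionality. The window length, step size, channel count, and the mean-absolute-value feature are assumptions for illustration only.

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Cut a (samples x channels) recording into overlapping windows."""
    starts = range(0, signal.shape[0] - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

recording = np.random.randn(20_000, 8)          # synthetic 8-channel nerve recording
windows = sliding_windows(recording, window=512, step=64)
print(windows.shape)                            # (num_windows, 512, 8)

# A simple per-window feature (mean absolute value) shrinks the decoder input,
# in the spirit of the feature-extraction step described above.
mav = np.abs(windows).mean(axis=1)              # (num_windows, 8)
```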


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes the shortcomings of traditional online and offline teaching, but it still has deficiencies in the real-time extraction of teacher and student features. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent classroom video teaching images, extract classroom task features in real time, and send them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed. To mitigate premature convergence in the PSO search, this paper combines the algorithm with useful attributes of other algorithms to increase particle diversity, enhance the particles' global search ability, and achieve effective feature extraction. The research indicates that the proposed method has practical value and can provide a theoretical reference for subsequent related research.
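For reference, a generic particle swarm optimization loop is sketched below; the paper's improved multi-swarm variant adds diversity mechanisms borrowed from other algorithms that are not reproduced here. The inertia and acceleration coefficients, search bounds, and test objective are illustrative assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                       # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda x: float(np.sum(x ** 2)), dim=10)     # toy sphere objective
print(val)
```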


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1547
Author(s):  
Jian Sha ◽  
Xue Li ◽  
Man Zhang ◽  
Zhong-Liang Wang

Accurate real-time water quality prediction is of great significance for local environmental managers to deal with upcoming events and emergencies and to develop best management practices. In this study, the performances of real-time water quality forecasting based on different deep learning (DL) models with different input data pre-processing methods were compared. Three popular DL models were considered: the convolutional neural network (CNN), the long short-term memory neural network (LSTM), and the hybrid CNN–LSTM. Two types of input data were applied: the original one-dimensional time series and the two-dimensional grey image based on the complete ensemble empirical mode decomposition algorithm with adaptive noise (CEEMDAN). Each type of input data was used in each DL model to forecast the real-time monitored water quality parameters of dissolved oxygen (DO) and total nitrogen (TN). The results showed that (1) the performance of CNN–LSTM was superior to that of the standalone CNN and LSTM models; (2) the models that used CEEMDAN-based input data performed much better than those that used the original input data, and the improvements for the non-periodic parameter TN were much greater than those for the periodic parameter DO; and (3) model accuracy gradually decreased as the number of prediction steps increased, with accuracy on the original input data decaying faster than on the CEEMDAN-based input data, and the non-periodic parameter TN decaying faster than the periodic parameter DO. Overall, input data preprocessed by the CEEMDAN method could effectively improve the forecasting performance of deep learning models, and this improvement was especially significant for the non-periodic parameter TN.
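A hedged sketch of a hybrid CNN–LSTM regressor of the kind compared above is shown below. The input shape assumes each sample stacks CEEMDAN IMF components over a short time window, and the layer sizes are illustrative, not the authors' exact architecture.

```python
from tensorflow.keras import layers, models

n_steps, n_imfs = 24, 8   # assumed: 24 time steps x 8 CEEMDAN IMF components

model = models.Sequential([
    layers.Input(shape=(n_steps, n_imfs)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),  # local patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),          # temporal dependencies across the window
    layers.Dense(1),          # next-step DO or TN value
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```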


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to be aware of the environments in which they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One well-known machine training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a high cost in time and computational resources. Collecting large input data sets, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and a light-weight deep neural network architecture that is capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, where 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
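The following sketch shows the general shape of a light-weight CNN classifier with simple on-the-fly augmentation and plain (sequential/stochastic) gradient descent, in the spirit of the approach above. The image size, augmentations, and layer widths are assumptions, not the authors' design; only the ten-class output follows the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Simple on-the-fly augmentation (assumed, not the authors' exact pipeline)
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input(shape=(96, 96, 3)),
    augment,
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),   # ten object classes
])

# Plain stochastic gradient descent, as mentioned in the abstract
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```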


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose: Current popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny defect detection, which conflicts with the strict real-time and accuracy requirements and the limited computing and storage resources of industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve the above problems. Design/methodology/approach: On the one hand, this study performs multi-dimensional compression on the feature extraction network of YOLOv4 to simplify the model and improves the model's feature extraction ability through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves the detection performance for tiny defects. Findings: The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method can greatly improve recognition efficiency and accuracy while reducing the size and computation cost of the model. Originality/value: This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection, which is conducive to application in industrial scenarios with limited storage and computing resources and meets the requirements of high real-time performance and precision.
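As a general illustration of the knowledge distillation mentioned above, the sketch below blends hard-label cross-entropy with softened teacher targets. The paper distills a detector's feature extraction network, so this classification-style loss is only an analogy, and the temperature and weighting are assumptions.

```python
import tensorflow as tf

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft teacher targets (generic form)."""
    hard = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True))
    soft = tf.keras.losses.KLDivergence()(
        tf.nn.softmax(teacher_logits / T),
        tf.nn.softmax(student_logits / T)) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Tiny demonstration with random logits for a 5-class problem
labels = tf.constant([1, 0])
student = tf.random.normal((2, 5))
teacher = tf.random.normal((2, 5))
print(distillation_loss(student, teacher, labels).numpy())
```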


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6762
Author(s):  
Jung Hyuk Lee ◽  
Geon Woo Lee ◽  
Guiyoung Bong ◽  
Hee Jeong Yoo ◽  
Hong Kook Kim

Autism spectrum disorder (ASD) is a developmental disorder with a life-span disability. While diagnostic instruments have been developed and qualified based on how accurately they discriminate children with ASD from typically developing (TD) children, the stability of such procedures can be disrupted by limitations related to time expenses and the subjectivity of clinicians. Consequently, automated diagnostic methods have been developed to acquire objective measures of autism. Across various fields of research, vocal characteristics have not only been reported by clinicians as distinctive, but have also shown promising performance in several studies that use deep learning models to automatically discriminate children with ASD from children with TD. However, difficulties still exist in terms of the characteristics of the data, the complexity of the analysis, and the lack of well-arranged data caused by the low accessibility of diagnosis and the need to secure anonymity. To address these issues, we introduce a pre-trained feature extraction auto-encoder model and a joint optimization scheme, which achieve robustness to widely distributed and unrefined data in a deep-learning-based autism detection method that utilizes various models. By adopting this auto-encoder-based feature extraction and joint optimization on the extended version of the Geneva minimalistic acoustic parameter set (eGeMAPS) speech feature data set, we obtain improved performance in detecting ASD in infants compared to using the raw data set.
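A minimal sketch of the auto-encoder feature-extraction idea is given below, assuming 88-dimensional eGeMAPS vectors as input; the bottleneck size is illustrative and the joint optimization scheme itself is not shown.

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(88,))                  # one eGeMAPS feature vector (assumed 88-dim)
code = layers.Dense(16, activation="relu")(inputs)  # compressed representation
outputs = layers.Dense(88)(code)                    # reconstruction

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()

# After pre-training on unlabeled speech features, encoder(x) would feed an
# ASD/TD classifier; the joint optimization of both parts is not shown here.
```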


2019 ◽  
Vol 84 ◽  
pp. 24-34 ◽  
Author(s):  
Marco Maggipinto ◽  
Alessandro Beghi ◽  
Seán McLoone ◽  
Gian Antonio Susto

Author(s):  
Riya John ◽  
Akhilesh. s ◽  
Gayathri Geetha Nair ◽  
Jeen Raju ◽  
Krishnendhu. B

Attendance management is an important procedure in educational institutions as well as in business organizations. Most of the available methods are time consuming and prone to manipulation. The traditional method of attendance management is carried out in handwritten registers. Other than the manual method, there are biometric methods such as fingerprint and retinal scans, RFID tags, etc. All of these methods have disadvantages; therefore, to avoid these difficulties, we introduce a new method for attendance management using deep learning technology. Using deep learning we can easily train a data set. Real-time face recognition algorithms are used to recognize students' faces while they attend lectures. This system aims to be less time-consuming than the existing system of marking attendance. The program runs on a Flask server in an Anaconda environment. A real-time image is captured using a mobile phone camera. The faces of the persons in the image are then recognized and attendance is marked in an Excel file.
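A hedged sketch of the core recognition-and-marking step described above, using the open-source face_recognition package and pandas for the Excel output; the file names and student list are placeholders, not part of the authors' system.

```python
import face_recognition
import pandas as pd

# Reference photos of enrolled students (file names are placeholders)
known = {"alice": "alice.jpg", "bob": "bob.jpg"}
known_encodings = {
    name: face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for name, path in known.items()
}

# Image captured during the lecture, e.g., from a mobile phone camera
frame = face_recognition.load_image_file("classroom.jpg")
present = set()
for enc in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known_encodings.values()), enc)
    for name, is_match in zip(known_encodings, matches):
        if is_match:
            present.add(name)

# Mark attendance in an Excel sheet
pd.DataFrame({"student": list(known),
              "present": [n in present for n in known]}).to_excel(
    "attendance.xlsx", index=False)
```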

