Intuitive real-time control strategy for high-density myoelectric hand prosthesis using deep and transfer learning

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Simon Tam ◽  
Mounir Boukadoum ◽  
Alexandre Campeau-Lecours ◽  
Benoit Gosselin

Abstract: Myoelectric hand prostheses offer a way for upper-limb amputees to recover gesture and prehensile abilities, easing rehabilitation and daily-life activities. However, studies with prosthesis users found that a lack of intuitiveness and ease of use in the human-machine control interface is among the main factors driving the low user acceptance of these devices. This paper proposes a highly intuitive, responsive and reliable real-time myoelectric hand prosthesis control strategy, with an emphasis on the demonstration and reporting of real-time evaluation metrics. The presented solution leverages surface high-density electromyography (HD-EMG) and a convolutional neural network (CNN) to adapt itself to each unique user and their specific voluntary muscle contraction patterns. Furthermore, a transfer learning approach is presented to drastically reduce the training time and allow for easy installation and calibration. The CNN-based gesture recognition system was evaluated in real time with a group of 12 able-bodied users. A real-time test over 6 classes/grip modes resulted in mean and median positive predictive values (PPV) of 93.43% and 100%, respectively. Each gesture state is instantly accessible from any other state, with no mode switching required, for increased responsiveness and natural, seamless control. The system outputs a correct prediction with a latency of less than 116 ms. A PPV of 100% was attained in many trials and is realistically achievable on a consistent basis with user practice and/or a thresholded majority-vote inference. Using transfer learning, these results are achievable after a sensor installation, data recording and network training/fine-tuning routine taking less than 10 min to complete, an 89.4% reduction in setup time compared with the traditional, non-transfer-learning approach.
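
A hypothetical sketch of the user-adaptation step described above: a CNN pre-trained on HD-EMG data from previous users is fine-tuned on a short calibration recording from a new user, and predictions are stabilized with a thresholded majority vote. Layer names, the 8x16 electrode-grid shape and all hyperparameters are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class EmgCnn(nn.Module):
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(            # shared feature extractor
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)  # user-specific head

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def fine_tune(model, calib_loader, epochs=5, lr=1e-3):
    """Freeze the pre-trained feature extractor and retrain only the head
    on the new user's short calibration session (transfer learning)."""
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for emg_patch, gesture in calib_loader:   # emg_patch: (B, 1, 8, 16)
            opt.zero_grad()
            loss_fn(model(emg_patch), gesture).backward()
            opt.step()
    return model

def majority_vote(window_logits, threshold=0.6):
    """Thresholded majority vote over a sliding window of per-frame logits:
    emit a gesture only if it wins at least `threshold` of the votes,
    otherwise return None (keep the previous state)."""
    votes = torch.stack(window_logits).argmax(dim=1)
    winner = votes.mode().values.item()
    share = (votes == winner).float().mean().item()
    return winner if share >= threshold else None
```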

Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 272
Author(s):  
Khajamoinuddin Syed ◽  
William Sleeman ◽  
Michael Hagan ◽  
Jatinder Palta ◽  
Rishabh Kapoor ◽  
...  

The Radiotherapy Incident Reporting and Analysis System (RIRAS) receives incident reports from radiation oncology facilities across the US Veterans Health Affairs (VHA) enterprise and Virginia Commonwealth University (VCU). In this work, we propose a computational pipeline for the analysis of radiation oncology incident reports. Our pipeline uses machine learning (ML) and natural language processing (NLP) methods to predict the severity of the incidents reported in the RIRAS platform from the textual description of the reported incidents. Incidents in RIRAS are reviewed by a radiation oncology subject matter expert (SME), who initially triages some incidents based on the salient elements in the incident report. To automate this triage process, we used data from the VHA treatment centers and the VCU radiation oncology department. We used NLP combined with traditional ML algorithms, including a support vector machine (SVM) with a linear kernel, and compared this against a transfer learning approach based on the universal language model fine-tuning (ULMFiT) algorithm. In RIRAS, severities are divided into four categories: A, B, C, and D, with A being the most severe and D the least. In this work, we built models to predict High (A & B) vs. Low (C & D) severity instead of all four categories. Models were evaluated with macro-averaged precision, recall, and F1-score. The traditional ML approach (linear SVM) did well on the VHA dataset with a 0.78 F1-score but performed poorly on the VCU dataset with a 0.5 F1-score. The transfer learning approach did well on both datasets, with a 0.81 F1-score on the VHA dataset and a 0.68 F1-score on the VCU dataset. Overall, our methods show promise in automating the triage and severity determination process for radiotherapy incident reports.
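
A minimal sketch of the traditional ML baseline described above: a linear SVM over features of the free-text incident narrative, predicting High (A/B) vs. Low (C/D) severity. The TF-IDF representation and all parameter values are assumptions; the abstract does not specify the exact text features used.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def build_severity_classifier():
    # Bag-of-words/TF-IDF text features feeding a linear-kernel SVM.
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
        ("svm", LinearSVC(C=1.0)),
    ])

# Hypothetical usage with free-text incident descriptions and binary labels
# (1 = High severity A/B, 0 = Low severity C/D):
# clf = build_severity_classifier()
# clf.fit(train_texts, train_labels)
# print(f1_score(test_labels, clf.predict(test_texts), average="macro"))
```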


2021 ◽  
Author(s):  
Geoffrey F. Schau ◽  
Hassan Ghani ◽  
Erik A. Burlingame ◽  
Guillaume Thibault ◽  
Joe W. Gray ◽  
...  

Abstract: Accurate diagnosis of metastatic cancer is essential for prescribing optimal control strategies to halt the further spread of metastasizing disease. While pathological inspection aided by immunohistochemistry staining provides a valuable gold standard for clinical diagnostics, deep learning methods have emerged as powerful tools for identifying features of whole-slide histology relevant to a tumor's metastatic origin. Although deep learning models require significant training data to learn effectively, transfer learning paradigms circumvent limited training data by first training a model on related data prior to fine-tuning on smaller data sets of interest. In this work, we propose a transfer learning approach that trains a convolutional neural network to infer the metastatic origin of tumor tissue from whole-slide images of hematoxylin and eosin (H&E) stained tissue sections, and we illustrate the advantages of pre-training the network on whole-slide images of primary tumor morphology. We further characterize the statistical dissimilarity between primary and metastatic tumors of various indications on patch-level images to highlight limitations of our indication-specific transfer learning approach. Using a primary-to-metastatic transfer learning approach, we achieved a mean class-specific area under the receiver operating characteristic curve (AUROC) of 0.779, which outperformed comparable models trained only on images of primary tumors (mean AUROC of 0.691) or only on images of metastatic tumors (mean AUROC of 0.675), supporting the use of large-scale primary tumor imaging data in developing computer vision models to characterize the metastatic origin of tumor lesions.
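
A hedged sketch of the primary-to-metastatic transfer learning idea: a patch-level classifier is first trained on abundant primary-tumor H&E patches and then fine-tuned on the smaller set of metastatic patches. The ResNet backbone, patch size and hyperparameters are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_backbone(n_origins: int):
    net = models.resnet18(weights=None)              # patch-level classifier
    net.fc = nn.Linear(net.fc.in_features, n_origins)
    return net

def train(net, loader, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, origin in loader:               # patches: (B, 3, 224, 224)
            opt.zero_grad()
            loss_fn(net(patches), origin).backward()
            opt.step()
    return net

# Stage 1: pre-train on primary-tumor patches (hypothetical primary_loader).
# net = train(make_backbone(n_origins=4), primary_loader)
# Stage 2: fine-tune the same weights on the smaller metastatic set,
# typically with a lower learning rate.
# net = train(net, metastatic_loader, lr=1e-5)
```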


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maria Colomba Comes ◽  
Annarita Fanizzi ◽  
Samantha Bove ◽  
Vittorio Didonna ◽  
Sergio Diotaiuti ◽  
...  

Abstract: Dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC), even at an early stage, through the prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict whether a patient achieved pCR or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the I-SPY1 TRIAL public database. First, low-level features, i.e., features related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), avoiding manual feature extraction. Next, an optimal set of the most stable features was detected and then used to design an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to perform the optimal choice of features. A second subset not involved in the feature selection process was employed as an independent test set (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with clinical features, i.e., ER, PgR, HER2 and molecular subtype, accuracies of 91.4% and 92.3% and AUC values of 0.93 and 0.90 were obtained on the fine-tuning dataset and the independent test set, respectively. Overall, the low-level CNN features play an important role in the early evaluation of NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for an early prediction of pCR to NAC.
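
An illustrative sketch of the pipeline described above: low-level features are taken from an early layer of a pre-trained CNN, a subset is selected, and an SVM is trained on the fine-tuning set and validated on the independent test set. The VGG16 backbone, the layer cut-off and the SelectKBest criterion are stand-in assumptions; the paper's own stability-based selection is not reproduced here.

```python
import torch
from torchvision import models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline

# Early convolutional blocks of a pre-trained network act as a generic
# extractor of local image structure (edges, blobs, texture).
vgg = models.vgg16(weights="IMAGENET1K_V1").features[:10].eval()

def extract_features(mri_batch):
    """mri_batch: (B, 3, H, W) DCE-MRI slices replicated to 3 channels."""
    with torch.no_grad():
        fmap = vgg(mri_batch)                       # (B, C, h, w)
        return fmap.mean(dim=(2, 3)).numpy()        # global average per channel

# Feature selection + SVM classifier, fit on the fine-tuning dataset and
# evaluated once on the held-out independent test set.
clf = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),       # proxy for "most stable" features
    ("svm", SVC(kernel="rbf", probability=True)),
])
# clf.fit(extract_features(finetune_images), finetune_pcr_labels)
# probs = clf.predict_proba(extract_features(test_images))[:, 1]
```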


Author(s):  
Zhengxu Yu ◽  
Zhongming Jin ◽  
Long Wei ◽  
Jishun Guo ◽  
Jianqiang Huang ◽  
...  

Model fine-tuning is a widely used transfer learning approach in person re-identification (ReID) applications, in which a pre-trained feature extraction model is fine-tuned for the target scenario instead of training a model from scratch. It is challenging due to the significant variations inside the target scenario, e.g., different camera viewpoints, illumination changes, and occlusion. These variations result in a gap between the distribution of each mini-batch and the distribution of the whole dataset when using mini-batch training. In this paper, we study model fine-tuning from the perspective of the aggregation and utilization of the global information of the dataset when using mini-batch training. Specifically, we introduce a novel network structure called the Batch-related Convolutional Cell (BConv-Cell), which progressively collects the global information of the dataset into a latent state and uses this latent state to rectify the extracted features. Based on BConv-Cells, we further propose the Progressive Transfer Learning (PTL) method to facilitate model fine-tuning by jointly training the BConv-Cells and the pre-trained ReID model. Empirical experiments show that our proposal can greatly improve the performance of the ReID model on the MSMT17, Market-1501, CUHK03 and DukeMTMC-reID datasets. The code will be released at https://github.com/ZJULearning/PTL
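
A simplified, hypothetical reading of the latent-state idea behind BConv-Cells: a persistent buffer accumulates batch-level statistics across mini-batches and is used to rectify the current feature map. This is not the authors' exact formulation (see the PTL repository for that); the shapes, gating scheme and momentum update are assumptions.

```python
import torch
import torch.nn as nn

class LatentRectifier(nn.Module):
    def __init__(self, channels: int, momentum: float = 0.1):
        super().__init__()
        self.momentum = momentum
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)
        # Latent state: a running, dataset-level summary of the features.
        self.register_buffer("state", torch.zeros(1, channels, 1, 1))

    def forward(self, feat):                          # feat: (B, C, H, W)
        if self.training:
            batch_summary = feat.mean(dim=(0, 2, 3), keepdim=True)
            # Progressively fold the mini-batch summary into the latent state.
            self.state = (1 - self.momentum) * self.state \
                         + self.momentum * batch_summary.detach()
        # Rectify the mini-batch features using the global latent state.
        return feat + torch.sigmoid(self.gate(self.state)) * feat
```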


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5209 ◽  
Author(s):  
Andrea Gonzalez-Rodriguez ◽  
Jose L. Ramon ◽  
Vicente Morell ◽  
Gabriel J. Garcia ◽  
Jorge Pomares ◽  
...  

The main goal of this study is to evaluate how to optimally select the best vibrotactile pattern to be used in closed-loop control of upper-limb myoelectric prostheses as feedback of the exerted force. To that end, we assessed both the selection of actuation patterns and the effects of the frequency and amplitude parameters on the ability to discriminate between different feedback levels. A single vibrotactile actuator was used to deliver the vibrations to the subjects participating in the experiments. The results show no difference between pattern shapes in terms of feedback perception. Similarly, changes in amplitude level do not yield a significant improvement compared to changes in frequency. However, decreasing the number of feedback levels increases the accuracy of feedback perception, and subject-specific variation is high for particular participants, showing that fine-tuning of the parameters is necessary for a real-time application to upper-limb prosthetics. In future work, the effects of training, location, and number of actuators will be assessed. This optimized selection will be tested in real-time proportional myocontrol of a prosthetic hand.
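
An illustrative sketch of the kind of parameter mapping evaluated in the study: the exerted force is discretized into a small number of feedback levels and mapped to either the frequency or the amplitude of the vibrotactile drive signal. The specific frequencies, amplitudes, burst duration and number of levels below are example values, not those used by the authors.

```python
import numpy as np

def force_to_level(force_norm: float, n_levels: int = 3) -> int:
    """Map a normalized force in [0, 1] to a discrete feedback level."""
    return min(int(force_norm * n_levels), n_levels - 1)

def vibration_pattern(level: int, mode: str = "frequency",
                      duration_s: float = 0.2, fs: int = 8000) -> np.ndarray:
    """Generate the drive waveform for one feedback burst.
    mode="frequency": constant amplitude, level-dependent frequency.
    mode="amplitude": constant frequency, level-dependent amplitude."""
    t = np.arange(0, duration_s, 1.0 / fs)
    if mode == "frequency":
        freq = [120, 180, 240][level]           # Hz, example values
        return np.sin(2 * np.pi * freq * t)
    amp = [0.4, 0.7, 1.0][level]                # example amplitude scaling
    return amp * np.sin(2 * np.pi * 180 * t)
```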


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4850 ◽  
Author(s):  
Carlos S. Pereira ◽  
Raul Morais ◽  
Manuel J. C. S. Reis

Frequently, the vineyards in the Douro Region present multiple grape varieties per parcel and even per row. An automatic algorithm for grape variety identification was proposed as an integrated software component that can be applied, for example, to a robotic harvesting system. However, some issues and constraints in its development were highlighted, namely, images captured in a natural environment, a low volume of images, high similarity among images of different grape varieties, leaf senescence, and significant changes in the grapevine leaf and bunch images across harvest seasons, mainly due to adverse climatic conditions, diseases, and the presence of pesticides. In this paper, the performance of transfer learning and fine-tuning techniques based on the AlexNet architecture was evaluated when applied to the identification of grape varieties. Two natural vineyard image datasets were captured in different geographical locations and harvest seasons. To generate different datasets for training and classification, several image processing methods, including a proposed four-corners-in-one image warping algorithm, were used. The AlexNet-based transfer learning scheme, trained on the image dataset pre-processed with the four-corners-in-one method, achieved a test accuracy of 77.30%. Applying this classifier model, an accuracy of 89.75% was reached on the popular Flavia leaf dataset. The results obtained by the proposed approach are promising and encouraging in helping Douro wine growers with the automatic task of identifying grape varieties.
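
A minimal sketch of AlexNet-based transfer learning for grape variety identification as described above: load ImageNet weights, replace the final fully connected layer, and fine-tune on the vineyard images. The number of varieties, the choice of frozen layers and the optimizer settings are assumptions; the four-corners-in-one warping pre-processing is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_grape_classifier(n_varieties: int, freeze_features: bool = True):
    net = models.alexnet(weights="IMAGENET1K_V1")
    if freeze_features:
        for p in net.features.parameters():          # keep pre-trained conv filters
            p.requires_grad = False
    # Replace the last classifier layer with one sized for the grape varieties.
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, n_varieties)
    return net

# Hypothetical fine-tuning setup over images pre-processed with the
# four-corners-in-one warping step (pre-processing not shown here):
# net = build_grape_classifier(n_varieties=6)
# opt = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()),
#                       lr=1e-3, momentum=0.9)
```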

