Monte Carlo Dropout for Uncertainty Estimation and Motor Imagery Classification

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7241
Author(s):  
Daily Milanés-Hermosilla ◽  
Rafael Trujillo Codorniú ◽  
René López-Baracaldo ◽  
Roberto Sagaró-Zamora ◽  
Denis Delisle-Rodriguez ◽  
...  

Motor Imagery (MI)-based Brain–Computer Interfaces (BCIs) have been widely used as an alternative communication channel for patients with severe motor disabilities, achieving high classification accuracy through machine learning techniques. Recently, deep learning techniques have advanced the state of the art of MI-based BCIs. However, these techniques still lack strategies to quantify predictive uncertainty and may produce overconfident predictions. In this work, methods to enhance the performance of existing MI-based BCIs are proposed in order to obtain a more reliable system for real application scenarios. First, the Monte Carlo dropout (MCD) method is applied to MI deep neural models to improve classification and provide uncertainty estimation. This approach was implemented using a Shallow Convolutional Neural Network (SCNN-MCD) and an ensemble model (E-SCNN-MCD). As a further contribution, a threshold approach is introduced and tested on both SCNN-MCD and E-SCNN-MCD to reject MI task predictions of high uncertainty. The BCI Competition IV Datasets 2a and 2b were used to evaluate the proposed methods under both subject-specific and non-subject-specific strategies, obtaining encouraging results for MI recognition.
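The core idea of Monte Carlo dropout, keeping dropout active at inference and averaging many stochastic forward passes, can be sketched with a toy two-layer network in NumPy. The network, dropout rate, and entropy threshold below are illustrative assumptions, not the authors' SCNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "network": one hidden layer; weights are random stand-ins.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))   # 4 MI classes, e.g. left/right hand, feet, tongue

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > p_drop       # dropout stays ON at inference
    h = h * mask / (1.0 - p_drop)
    return softmax(h @ W2)

def mc_dropout_predict(x, T=100):
    probs = np.stack([stochastic_forward(x) for _ in range(T)])
    mean_probs = probs.mean(axis=0)           # averaged predictive distribution
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum()
    return mean_probs, entropy

x = rng.normal(size=8)
mean_probs, entropy = mc_dropout_predict(x)
pred = int(mean_probs.argmax())

# Threshold approach: abstain when predictive uncertainty is too high.
threshold = 1.2
decision = pred if entropy < threshold else None  # None = "reject trial"
```

The predictive entropy of the averaged distribution serves as the uncertainty score that the thresholding step acts on.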

2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
Dieter Devlaminck ◽  
Bart Wyns ◽  
Moritz Grosse-Wentrup ◽  
Georges Otte ◽  
Patrick Santens

Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time-consuming to collect. To reduce the amount of calibration data needed for a new subject, one can apply multitask (here called multisubject) machine learning techniques to the preprocessing phase. In this setting, the goal of multisubject learning is to learn a spatial filter for a new subject based on that subject's own data and the data of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
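The standard single-subject CSP step that this paper extends can be sketched as a generalised eigendecomposition of the two classes' average trial covariance matrices. The trial shapes and synthetic data below are illustrative, not the paper's datasets:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: (n_trials, n_channels, n_samples) EEG for each class."""
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))     # trace-normalised covariance
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalised eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Keep filters from both ends of the spectrum: they maximise variance
    # for one class while minimising it for the other.
    idx = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, idx].T                    # (2*n_filters, n_channels)

# Synthetic example: 10 trials, 6 channels, 200 samples per class.
trials_a = rng.normal(size=(10, 6, 200))
trials_b = rng.normal(size=(10, 6, 200))
W = csp_filters(trials_a, trials_b)
features = np.log(np.var(W @ trials_a[0], axis=1))  # log-variance features
```

The multisubject variant described in the paper regularises this per-subject solution toward filters learned from other subjects; the sketch above shows only the supervised baseline that makes subject-specific calibration necessary.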


10.2196/16344 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e16344 ◽  
Author(s):  
Giacomo Valle

Decades of technological development have populated the field of brain-machine interfaces and neuroprosthetics with several replacement strategies, neural modulation treatments, and rehabilitation techniques to improve the quality of life of patients affected by sensory and motor disabilities. This field is now expanding quickly thanks to advances in neural interfaces, machine learning techniques, and robotics. Despite many clinical successes and multiple innovations in animal models, brain-machine interfaces remain mainly confined to sophisticated laboratory environments, indicating that a step forward in the technology used is necessary. Interestingly, Elon Musk and Neuralink have recently presented a new brain-machine interface platform with thousands of channels, fast implantation, and advanced signal processing. Here, their work is discussed in the context of the restoration of sensory-motor functions through neuroprostheses.


2020 ◽  
Author(s):  
Diego Fabian Collazos Huertas ◽  
Andres Marino Alvarez Meza ◽  
German Castellanos Dominguez

Interpretation of brain activity responses using Motor Imagery (MI) paradigms is vital for medical diagnosis and monitoring. When assessed by machine learning techniques, identification of imagined actions is hindered by substantial intra- and inter-subject variability. Here, we develop a Convolutional Neural Network (CNN) architecture with enhanced interpretation of the spatial brain neural patterns that contribute most to the classification of MI tasks. Two methods of 2D feature extraction from EEG data are contrasted: Power Spectral Density and Continuous Wavelet Transform. To preserve the spatial interpretation of the extracted EEG patterns, we project the multi-channel data using a topographic interpolation. In addition, we include a spatial dropping algorithm to remove learned weights that reflect localities not engaged with the elicited brain response. Results obtained on a bi-task MI database show that the thresholding strategy in combination with the Continuous Wavelet Transform improves accuracy and enhances the interpretability of the CNN architecture, showing that the highest contribution clusters over the sensorimotor cortex, with differentiated behavior between the μ and β rhythms.
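Of the two 2D feature extraction routes contrasted here, the Power Spectral Density path can be sketched as per-channel band-power estimation in the μ and β bands. The sampling rate, channel count, and use of Welch's method below are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 250                                  # assumed sampling rate (Hz)
eeg = rng.normal(size=(22, fs * 4))       # assumed 22 channels, 4 s trial

def band_power(signal, fs, band):
    # Welch PSD per channel; average the power over the requested band.
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., sel].mean(axis=-1)

mu_power = band_power(eeg, fs, (8, 13))    # mu rhythm, one value per channel
beta_power = band_power(eeg, fs, (13, 30)) # beta rhythm, one value per channel
# Each channel's band power is then placed at its scalp coordinate and
# topographically interpolated into a 2D image for the CNN, as the
# abstract describes.
```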


2007 ◽  
Vol 106 (3) ◽  
pp. 495-500 ◽  
Author(s):  
Elizabeth A. Felton ◽  
J. Adam Wilson ◽  
Justin C. Williams ◽  
P. Charles Garell

Brain–computer interface (BCI) technology can offer individuals with severe motor disabilities greater independence and a higher quality of life. BCI systems translate recorded brain signals into real-time actions for improved communication, movement, or perception. Four patient participants with a clinical need for intracranial electrocorticography (ECoG) took part in this study. The participants were trained over multiple sessions to use motor and/or auditory imagery to modulate their brain signals and thereby control the movement of a computer cursor. Participants with electrodes over motor and/or sensory areas were able to achieve cursor control over 2 to 7 days of training. These findings indicate that sensory and other brain areas not previously considered ideal for ECoG-based control can provide additional channels of control that may be useful for a motor BCI.


2021 ◽  
Vol 11 (23) ◽  
pp. 11440
Author(s):  
Alexander Paz ◽  
Gustavo A. Orozco ◽  
Rami K. Korhonen ◽  
José J. García ◽  
Mika E. Mononen

Osteoarthritis (OA) is a degenerative disease that affects the synovial joints, especially the knee joint, diminishing the ability of patients to perform daily physical activities. Unfortunately, there is no cure for this nearly irreversible musculoskeletal disorder. Nowadays, many researchers aim for in silico-based methods to simulate personalized risks for the onset and progression of OA and evaluate the effects of different conservative preventative actions. Finite element analysis (FEA) has been considered a promising method to be developed for knee OA management. The FEA pipeline consists of three well-established phases: pre-processing, processing, and post-processing. Currently, these phases are time-consuming, making the FEA workflow cumbersome for the clinical environment. Hence, in this narrative review, we overviewed present-day trends towards clinical methods for subject-specific knee OA studies utilizing FEA. We reviewed studies focused on understanding mechanisms that initiate knee OA and expediting the FEA workflow applied to the whole-organ level. Based on the current trends we observed, we believe that forthcoming knee FEAs will provide nearly real-time predictions for the personalized risk of developing knee OA. These analyses will integrate subject-specific geometries, loading conditions, and estimations of local tissue mechanical properties. This will be achieved by combining state-of-the-art FEA workflows with automated approaches aided by machine learning techniques.


2010 ◽  
pp. 334-352 ◽  
Author(s):  
Thierry Bertin-Mahieux ◽  
Douglas Eck ◽  
Michael Mandel

Recently there has been a great deal of attention paid to the automatic prediction of tags for music and audio in general. Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of "Web 2.0" recommender systems. There have been many attempts at automatically applying tags to audio for different purposes: database management, music recommendation, improved human-computer interfaces, estimating similarity among songs, and so on. Many published results show that this problem can be tackled using machine learning techniques; however, no method so far has been proven to be particularly suited to the task. First, it seems that no one has yet found an appropriate algorithm to solve this challenge. Second, the task definition itself is problematic. In an effort to better understand the task and to help new researchers bring their insights to bear on this problem, this chapter provides a review of the state-of-the-art methods for automatic tagging of audio. It is divided into the following sections: goal, framework, audio representation, labeled data, classification, evaluation, and future directions. This division helps clarify the commonalities and strengths of the different methods that have been proposed.
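One simple machine-learning baseline for autotagging, offered here as an illustrative sketch rather than a method from the chapter, is nearest-neighbour tag propagation: score each tag on a new track by how often its k most similar labelled tracks carry that tag. All feature dimensions, tag names, and data below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
tags = ["rock", "jazz", "electronic", "vocal"]   # hypothetical tag vocabulary

# Toy setup: 50 labelled tracks with 12-dim audio features and
# binary tag vectors; one unlabelled query track.
X = rng.normal(size=(50, 12))
Y = (rng.random(size=(50, 4)) > 0.7).astype(float)
query = rng.normal(size=12)

def propagate_tags(query, X, Y, k=5):
    dists = np.linalg.norm(X - query, axis=1)   # similarity = Euclidean distance
    nearest = np.argsort(dists)[:k]
    return Y[nearest].mean(axis=0)              # per-tag score in [0, 1]

scores = propagate_tags(query, X, Y)
predicted = [t for t, s in zip(tags, scores) if s >= 0.5]
```

Thresholding the per-tag scores turns the multi-label scoring problem into a concrete tag assignment, which is the shape of the task the chapter's "classification" section surveys.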


Author(s):  
Vladimir Cherepanov ◽  
Elzbieta Richter-Was ◽  
Zbigniew Andrzej Was

The status of the τ lepton decay Monte Carlo generator TAUOLA and its main recent applications are reviewed. It is underlined that, in recent efforts on the development of new hadronic currents, the multi-dimensional nature of the distributions of the experimental data must be taken into account with great care. Studies of H → ττ, τ → hadrons indeed demonstrate that the multi-dimensional nature of the distributions is important and available for the evaluation of observables where τ leptons are used to constrain experimental data. For that part of the presentation, use of the TAUOLA program for the phenomenology of H and Z decays at the LHC is discussed, in particular in the context of Higgs boson parity measurements using Machine Learning techniques. Some additions relevant for QED lepton pair emission and electroweak corrections are mentioned as well.

