Cognitively Motivated Novelty Detection in Video Data Streams

Author(s):  
James M. Kang ◽  
Muhammad Aurangzeb Ahmad ◽  
Ankur Teredesai ◽  
Roger Gaborski

Author(s):  
Kemilly Dearo Garcia ◽  
Mannes Poel ◽  
Joost N. Kok ◽  
André C. P. L. F. de Carvalho

Author(s):  
Yi Wang ◽  
Yi Ding ◽  
Xiangjian He ◽  
Xin Fan ◽  
Chi Lin ◽  
...  

2015 ◽  
Vol 27 (11) ◽  
pp. 2961-2973 ◽  
Author(s):  
Elaine Ribeiro de Faria ◽  
Isabel Ribeiro Goncalves ◽  
João Gama ◽  
André Carlos Ponce de Leon Ferreira Carvalho

2018 ◽  
Vol 32 (6) ◽  
pp. 1597-1633 ◽  
Author(s):  
Mohamed-Rafik Bouguelia ◽  
Slawomir Nowaczyk ◽  
Amir H. Payberah

2015 ◽  
Vol 30 (3) ◽  
pp. 640-680 ◽  
Author(s):  
Elaine Ribeiro de Faria ◽  
André Carlos Ponce de Leon Ferreira Carvalho ◽  
João Gama

2009 ◽  
Vol 13 (3) ◽  
pp. 405-422 ◽  
Author(s):  
Eduardo J. Spinosa ◽  
André Ponce de Leon F. de Carvalho ◽  
João Gama

2021 ◽  
Author(s):  
Steven M. Peterson ◽  
Rajesh P. N. Rao ◽  
Bingni W. Brunton

Abstract:  
Recent advances in neural decoding have accelerated the development of brain-computer interfaces aimed at assisting users with everyday tasks such as speaking, walking, and manipulating objects. However, current approaches for training neural decoders commonly require large quantities of labeled data, which can be laborious or infeasible to obtain in real-world settings. One intriguing alternative uses self-supervised models that share self-generated pseudo-labels between two data streams; such models have shown exceptional performance on unlabeled audio and video data, but it remains unclear how well they extend to neural decoding. Here, we learn neural decoders without labels by leveraging multiple simultaneously recorded data streams, including neural, kinematic, and physiological signals. Specifically, we apply cross-modal, self-supervised deep clustering to decode movements from brain recordings; these decoders are compared to supervised and unimodal, self-supervised models. We find that sharing pseudo-labels between two data streams during training substantially increases decoding performance compared to unimodal, self-supervised models, with accuracies approaching those of supervised decoders trained on labeled data. Next, we develop decoders trained on three modalities that match or slightly exceed the performance of supervised models, achieving state-of-the-art neural decoding accuracy. Cross-modal decoding is a flexible, promising approach for robust, adaptive neural decoding in real-world applications without any labels.
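The core idea in the abstract is that two simultaneously recorded streams can supervise each other: clusters found in one modality become pseudo-labels for training a decoder on the other. The sketch below illustrates that cross-modal pseudo-label exchange in a deliberately simplified form, using k-means pseudo-labels and logistic-regression heads in place of the paper's deep clustering networks; the synthetic data, stream names, dimensions, and number of clusters are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-modal pseudo-label sharing between two unlabeled
# streams. NOT the paper's method: deep encoders are replaced by linear
# classifiers and deep clustering by k-means, purely for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_neural, d_kin, k = 500, 64, 6, 4  # hypothetical sample/feature/cluster sizes

# Stand-ins for simultaneously recorded, unlabeled streams.
X_neural = rng.normal(size=(n, d_neural))  # e.g., neural band-power features
X_kin = rng.normal(size=(n, d_kin))        # e.g., wrist kinematics

clf_neural = LogisticRegression(max_iter=1000)
clf_kin = LogisticRegression(max_iter=1000)

for it in range(3):  # alternate the pseudo-label exchange for a few rounds
    # 1. Cluster the kinematic stream to generate pseudo-labels...
    y_from_kin = KMeans(n_clusters=k, n_init=10, random_state=it).fit_predict(X_kin)
    # 2. ...and train the neural decoder against them (cross-modal supervision).
    clf_neural.fit(X_neural, y_from_kin)
    # 3. Symmetrically, cluster the neural stream to supervise the kinematic model.
    y_from_neural = KMeans(n_clusters=k, n_init=10, random_state=it).fit_predict(X_neural)
    clf_kin.fit(X_kin, y_from_neural)

# At deployment time only the neural decoder is needed: it maps brain
# recordings to the movement clusters discovered without any labels.
print(clf_neural.predict(X_neural[:5]))
```

On real paired recordings, the clusters recovered from the kinematic stream would correspond to movement types, which is why training the neural decoder on them can approach supervised performance; with random data as above, the code only demonstrates the training loop's structure.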

