feature subspace
Recently Published Documents


TOTAL DOCUMENTS: 80 (five years: 26)
H-INDEX: 7 (five years: 3)

2021, Vol. 15
Author(s): Alexander Kuc, Sergey Korchagin, Vladimir A. Maksimenko, Natalia Shusharina, Alexander E. Hramov

Incorporating brain-computer interfaces (BCIs) into daily life requires reducing decoding algorithms' reliance on calibration, or enabling calibration with minimal burden on the user. A potential solution is a pre-trained decoder that achieves reasonable accuracy for naive operators. Addressing this issue, we considered an ambiguous-stimulus classification task and trained an artificial neural network to classify brain responses to stimuli of low and high ambiguity. We built a pre-trained classifier using time-frequency features that correspond to fundamental neurophysiological processes shared between subjects. To extract these features, we statistically contrasted electroencephalographic (EEG) spectral power between the classes in a representative group of subjects. As a result, the pre-trained classifier achieved 74% accuracy on data from newly recruited subjects. Analysis of the literature suggests that such a pre-trained classifier could help naive users start using a BCI without prior training and could further increase accuracy during the feedback session. Thus, our results contribute to the use of BCIs during paralysis or limb amputation, when there is no explicit user-generated kinematic output with which to properly train a decoder. In machine learning, our approach may facilitate the development of transfer learning (TL) methods for the cross-subject problem: it extracts an interpretable feature subspace from the source data (the representative group of subjects) that is relevant to the target data (a naive user), preventing negative transfer in cross-subject tasks.
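As a hedged illustration of the group-level contrast idea described above (not the authors' code), the sketch below computes band-power features with Welch's method, keeps the features whose spectral power differs significantly between classes across a simulated representative group, and trains a logistic-regression decoder that is then applied to a "naive" subject without calibration. The sampling rate, frequency bands, synthetic data, and choice of classifier are all assumptions.

```python
"""Illustrative sketch of a pre-trained, cross-subject EEG classifier built
from group-level spectral-power contrasts. Shapes and bands are assumed."""
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression

FS = 250                                            # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., idx].mean(axis=-1))   # mean power in each band
    return np.concatenate(feats, axis=-1)

# Simulated "representative group": 10 subjects, 2 classes (low/high ambiguity)
rng = np.random.default_rng(0)
group_X, group_y = [], []
for _ in range(10):
    epochs = rng.standard_normal((60, 32, 2 * FS))  # 60 trials, 32 channels, 2 s
    labels = rng.integers(0, 2, 60)
    epochs[labels == 1, :8] *= 1.3                  # inject a class difference
    group_X.append(band_power_features(epochs))
    group_y.append(labels)
X, y = np.vstack(group_X), np.concatenate(group_y)

# Statistically contrast spectral power between classes to pick shared features
t, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
mask = p < 0.05                                     # interpretable feature subspace

clf = LogisticRegression(max_iter=1000).fit(X[:, mask], y)

# Apply the pre-trained decoder to a "naive" subject without calibration
new_epochs = rng.standard_normal((40, 32, 2 * FS))
new_labels = rng.integers(0, 2, 40)
new_epochs[new_labels == 1, :8] *= 1.3
X_new = band_power_features(new_epochs)
print("naive-subject accuracy:", clf.score(X_new[:, mask], new_labels))
```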


Author(s): Sanjay Kumar Sonbhadra, Sonali Agarwal, P. Nagabhushan

Existing dimensionality reduction (DR) techniques such as principal component analysis (PCA) and its variants are not suitable for target-class mining because they neglect the unique statistical properties of class-of-interest (CoI) samples. Conventionally, these approaches use the higher- or lower-eigenvalued principal components (PCs) for data transformation; but higher-eigenvalued PCs may split the target class, lower-eigenvalued PCs contribute little information, and a wrong choice of PCs degrades performance. Considering these facts, the present research offers a novel target-class-guided feature extraction method. In this approach, eigendecomposition is first performed on the variance–covariance matrix of the target-class samples only; the highest- and lowest-valued eigenvectors are rejected via statistical analysis, and the selected eigenvectors are used to extract the most promising feature subspace. The extracted feature subset gives a tighter description of the CoI, with enhanced associativity among target-class samples, and ensures strong separation from nontarget-class samples. A one-class support vector machine (OCSVM) is evaluated to validate the performance of the learned features. To obtain optimized values of the OCSVM hyperparameters, a novel [Formula: see text]-ary search-based autonomous method is also proposed. Exhaustive experiments with a wide variety of datasets are performed in feature space (original and reduced) and eigenspace (obtained from original and reduced features) to validate the proposed approach in terms of accuracy, precision, specificity and sensitivity.
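A minimal sketch of the target-class-guided extraction idea follows, under assumptions rather than as the paper's implementation: it uses scikit-learn's digits dataset as a stand-in, fixed rejection fractions in place of the statistical eigenvector selection, and fixed OCSVM hyperparameters in place of the proposed search-based tuning.

```python
"""Target-class-guided feature extraction sketch: eigendecompose the covariance
of class-of-interest samples only, drop the highest- and lowest-eigenvalue
directions, project, and validate the reduced space with a one-class SVM."""
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import OneClassSVM

X, y = load_digits(return_X_y=True)
target = 3                                   # class of interest (CoI), assumed
X_coi = X[y == target]

# Eigendecomposition of the CoI variance-covariance matrix only
cov = np.cov(X_coi, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]

# Reject the top and bottom eigenvectors (fixed fractions here; the paper
# selects them via statistical analysis)
d = eigvecs.shape[1]
keep = eigvecs[:, int(0.1 * d): int(0.7 * d)]

Z_coi = X_coi @ keep                         # extracted feature subspace
Z_all = X @ keep

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(Z_coi)
pred = ocsvm.predict(Z_all)                  # +1 = target-like, -1 = outlier
is_target = (y == target)
sensitivity = np.mean(pred[is_target] == 1)
specificity = np.mean(pred[~is_target] == -1)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```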


Author(s): Hao Sun, Jing Jin, Ren Xu, Andrzej Cichocki

Motor imagery (MI)-based brain–computer interfaces help patients with movement disorders regain the ability to control external devices. Common spatial pattern (CSP) is a popular algorithm for feature extraction when decoding MI tasks. However, because of noise and nonstationarity in electroencephalography (EEG), it is not optimal to directly combine the features obtained from the traditional CSP algorithm. In this paper, we designed a novel CSP feature selection framework that combines a filter method and a wrapper method. We first evaluated the importance of every CSP feature with the infinite latent feature selection method. Meanwhile, we calculated the Wasserstein distance between the distributions of the same feature under different tasks. Then, we redefined the importance of every CSP feature based on these two indicators and eliminated half of the CSP features according to the new importance indicator, creating a new CSP feature subspace. Finally, we designed an improved binary gravitational search algorithm (IBGSA) by rebuilding its transfer function and applied IBGSA to the new CSP feature subspace to find the optimal feature set. To validate the proposed method, we conducted experiments on three public BCI datasets and performed a numerical analysis of the proposed algorithm for MI classification. The accuracies were comparable to those reported in related studies, and the presented model outperformed other methods in the literature on the same underlying data.
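The Wasserstein-distance filter step lends itself to a short sketch. The code below is an illustration under assumptions, not the authors' framework: it ranks stand-in CSP log-variance features by the Wasserstein distance between their class-conditional distributions and keeps the top half; the infinite latent feature selection score and the IBGSA wrapper stage are omitted.

```python
"""Filter-style CSP feature screening sketch: rank features by the Wasserstein
distance between their per-class distributions and keep the best half."""
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

# Stand-in CSP log-variance features: (n_trials, n_csp_features)
n_trials, n_feats = 200, 24
X = rng.standard_normal((n_trials, n_feats))
y = rng.integers(0, 2, n_trials)
X[y == 1, :6] += 0.8                         # make a few features informative

# Wasserstein distance between the two class distributions of each feature
dist = np.array([
    wasserstein_distance(X[y == 0, j], X[y == 1, j]) for j in range(n_feats)
])

# Keep the half of the CSP features with the largest distance
keep = np.argsort(dist)[::-1][: n_feats // 2]
X_sub = X[:, keep]                           # reduced CSP feature subspace

# Quick sanity check of the reduced subspace with a simple classifier
clf = LinearDiscriminantAnalysis().fit(X_sub[:150], y[:150])
print("held-out accuracy:", clf.score(X_sub[150:], y[150:]))
```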


Electronics, 2020, Vol. 9 (12), pp. 2125
Author(s): Xiaoyu Wu, Tiantian Wang, Shengjin Wang

Text-video retrieval faces a major challenge in the semantic gap between cross-modal information. Some existing methods transform the text and video into the same subspace to measure their similarity. However, these methods do not impose a semantic consistency constraint when associating the semantic encodings of the two modalities, and the resulting associations are poor. In this paper, we propose a multi-modal retrieval algorithm based on semantic association and multi-task learning. First, multi-level features of the video and text are extracted with multiple deep learning networks so that the information of the two modalities is fully encoded. Then, in the common feature space into which both modalities are mapped, we propose a multi-task learning framework that combines semantic similarity measurement and semantic consistency classification based on the text-video features. The semantic consistency classification task constrains the learning of the semantic association task, so multi-task learning guides better feature mapping of the two modalities and optimizes the construction of the unified feature subspace. Finally, experimental results on the Microsoft Video Description (MSVD) and MSR-Video to Text (MSR-VTT) datasets surpass existing work, demonstrating that our algorithm improves cross-modal retrieval performance.
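A toy PyTorch sketch of the multi-task idea may help; it is an assumption-laden illustration, not the paper's architecture. Two small encoders map text and video features into a shared subspace, a contrastive similarity loss aligns matched pairs, and a shared classification head enforces semantic consistency across the two modalities. The feature dimensions, number of classes, temperature, and loss weighting are placeholders.

```python
"""Multi-task text-video retrieval sketch: shared-subspace encoders trained
with a similarity loss plus a semantic-consistency classification loss."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSpaceModel(nn.Module):
    def __init__(self, text_dim=300, video_dim=2048, joint_dim=256, n_classes=20):
        super().__init__()
        self.text_enc = nn.Sequential(nn.Linear(text_dim, joint_dim), nn.ReLU(),
                                      nn.Linear(joint_dim, joint_dim))
        self.video_enc = nn.Sequential(nn.Linear(video_dim, joint_dim), nn.ReLU(),
                                       nn.Linear(joint_dim, joint_dim))
        # Shared head: both modalities must predict the same semantic category
        self.classifier = nn.Linear(joint_dim, n_classes)

    def forward(self, text, video):
        t = F.normalize(self.text_enc(text), dim=-1)
        v = F.normalize(self.video_enc(video), dim=-1)
        return t, v

def multitask_loss(t, v, labels, classifier, temperature=0.07, alpha=0.5):
    # Task 1: semantic similarity - matched text/video pairs should align
    logits = t @ v.T / temperature
    targets = torch.arange(len(t))
    sim_loss = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.T, targets)) / 2
    # Task 2: semantic consistency - both embeddings predict the same class
    cls_loss = (F.cross_entropy(classifier(t), labels) +
                F.cross_entropy(classifier(v), labels)) / 2
    return alpha * sim_loss + (1 - alpha) * cls_loss

# One training step on random stand-in features
model = JointSpaceModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
text = torch.randn(32, 300)        # e.g., pooled sentence embeddings
video = torch.randn(32, 2048)      # e.g., pooled CNN frame features
labels = torch.randint(0, 20, (32,))

t, v = model(text, video)
loss = multitask_loss(t, v, labels, model.classifier)
opt.zero_grad(); loss.backward(); opt.step()
print("loss:", loss.item())
```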

