Subcortical Brain Segmentation Based on a Novel Discriminative Dictionary Learning Method and Sparse Coding

IEEE Access, 2019, Vol. 7, pp. 149785-149796
Author(s): Xiang Li, Ying Wei, Yunlong Zhou, Bin Hong

NeuroImage, 2013, Vol. 76, pp. 11-23
Author(s): Tong Tong, Robin Wolz, Pierrick Coupé, Joseph V. Hajnal, Daniel Rueckert

2016, Vol. 2016, pp. 1-15
Author(s): Zhongrong Shi

Discriminative dictionary learning, which plays a critical role in sparse-representation-based classification, has led to state-of-the-art classification results. Among existing discriminative dictionary learning methods, two approaches have been studied: the shared dictionary, which associates each atom with all classes, and the class-specific dictionary, which associates each atom with a single class. The shared dictionary is compact but lacks discriminative information; the class-specific dictionary is discriminative but contains redundant atoms across the per-class dictionaries. To combine the advantages of both, we propose a new weighted block dictionary learning method built on two components: a proto dictionary and class dictionaries. The proto dictionary is a base dictionary carrying no label information. Each class dictionary is a class-specific, weighted copy of the proto dictionary, where the weight of each block indicates how much that proto dictionary block contributes to the class dictionary. These weights can be computed conveniently because they are designed to adapt to the sparse coefficients. Different class dictionaries have different weight vectors but share the same proto dictionary, which yields higher discriminative power and lower redundancy. Experimental results demonstrate that the proposed algorithm achieves better classification results than several existing dictionary learning algorithms.
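A minimal sketch of the construction (not the authors' implementation): the proto dictionary blocks are shared by all classes, each class scales the blocks by its own weight vector, and a test sample is assigned to the class whose weighted dictionary reconstructs it best. The sizes, the random proto dictionary, and the random weights are placeholders for learned quantities, and the sparse coder here is scikit-learn's orthogonal matching pursuit, which the paper does not necessarily use.

import numpy as np
from sklearn.linear_model import orthogonal_mp  # generic sparse-coding solver

rng = np.random.default_rng(0)
d, B, atoms_per_block, C = 64, 8, 4, 3   # hypothetical sizes
K = B * atoms_per_block                  # total number of proto atoms

# Proto dictionary: a shared, label-free base dictionary with unit-norm atoms.
D_proto = rng.standard_normal((d, K))
D_proto /= np.linalg.norm(D_proto, axis=0)

# One weight vector per class, one weight per proto block
# (these would be learned jointly with the dictionary; random stand-ins here).
W = rng.random((C, B))

def class_dictionary(c):
    """Scale each proto block by its class weight to form the class dictionary."""
    w_atoms = np.repeat(W[c], atoms_per_block)  # expand block weights to atoms
    return D_proto * w_atoms

def classify(x, n_nonzero=5):
    """Assign x to the class whose weighted dictionary reconstructs it best."""
    errors = []
    for c in range(C):
        D_c = class_dictionary(c)
        alpha = orthogonal_mp(D_c, x, n_nonzero_coefs=n_nonzero)
        errors.append(np.linalg.norm(x - D_c @ alpha))
    return int(np.argmin(errors))

print("predicted class:", classify(rng.standard_normal(d)))

Because every class stores only a B-dimensional weight vector on top of the shared proto atoms, storage grows far more slowly with the number of classes than it would with fully separate class-specific dictionaries, which is the redundancy saving the abstract describes.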


2020, Vol. 191, pp. 105233
Author(s): Xin Zheng, Luyue Lin, Bo Liu, Yanshan Xiao, Xiaoming Xiong

Author(s): Yuki Takashima, Toru Nakashika, Tetsuya Takiguchi, Yasuo Ariki

Voice conversion (VC) is a technique for converting only the speaker-specific information in source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because it produces a more natural-sounding voice than conventional Gaussian mixture model-based VC. In conventional NMF-VC, however, the models are trained on parallel data, so the speech data require elaborate pre-processing to generate the parallel exemplars. NMF-VC also tends to yield a large model, since the dictionary matrix holds many parallel exemplars, leading to a high computational cost. In this study, a parallel dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The method decomposes an input observation into a set of mode matrices and a single core tensor, and the resulting NTD-based dictionary estimate supplies the dictionary matrix for NMF-VC without using parallel data. Experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
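To make the baseline concrete, below is a minimal numpy sketch of conventional exemplar-based NMF-VC (all sizes and the random "spectrograms" are placeholders): non-negative activations are estimated against a fixed source dictionary via standard multiplicative updates, then reused with the column-aligned target dictionary to produce the converted spectrogram. The closing lines only gesture at the NTD idea using tensorly's non_negative_tucker; the paper's own estimation procedure may differ.

import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

rng = np.random.default_rng(0)
F, T, K = 257, 100, 50                  # freq bins, frames, exemplar pairs

# Parallel exemplar dictionaries: column k of A_src and A_tgt come from the
# same time-aligned frame of the source and target speakers (assumed given).
A_src = rng.random((F, K)) + 1e-6
A_tgt = rng.random((F, K)) + 1e-6
V = rng.random((F, T)) + 1e-6           # source magnitude spectrogram

# Estimate non-negative activations H with the source dictionary held fixed,
# using multiplicative updates for the objective ||V - A_src @ H||_F^2.
H = rng.random((K, T))
for _ in range(200):
    H *= (A_src.T @ V) / (A_src.T @ A_src @ H + 1e-12)

# Conversion step of exemplar-based NMF-VC: reuse the activations with the
# target dictionary, exploiting the column alignment of the two dictionaries.
V_converted = A_tgt @ H
print(V_converted.shape)                # (F, T)

# NTD idea (sketch only): decompose a (speaker x freq x time) tensor into a
# core tensor and per-mode factor matrices, from which dictionaries can be
# derived without the frame-aligned parallel exemplars used above.
X = tl.tensor(np.stack([V, V_converted]))
core, factors = non_negative_tucker(X, rank=[2, 32, 16], n_iter_max=50)

The conversion step works only because A_src and A_tgt are column-aligned; removing that parallel-data requirement is precisely what the NTD-based dictionary learning is designed to achieve.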

