Object Recognition Using Sparse Representation of Overcomplete Dictionary

Author(s): Chu-Kiong Loo, Ali Memariani

2013, Vol. 2013, pp. 1-11
Author(s): Zunyi Tang, Shuxue Ding, Zhenni Li, Linlin Jiang

Sparse representation of signals over an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. In some applications, such as multispectral data analysis, both the signals and the dictionary must be nonnegative, and conventional dictionary learning methods with a nonnegativity constraint simply imposed on them may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such cases. We pose the sparse representation of nonnegative signals as a nonnegative matrix factorization (NMF) problem with a sparsity constraint. By employing a coordinate descent strategy and extending it to the multivariable case so that updates can be processed in parallel, we develop a parallel coordinate descent dictionary learning (PCDDL) algorithm, which alternates between two optimization problems: learning the dictionary and estimating the coefficients that represent the signals. Numerical experiments demonstrate that the proposed algorithm outperforms the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other baseline algorithms, while its computational cost is markedly lower than that of the compared algorithms.
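The abstract describes the two alternating subproblems only at a high level. The following is a minimal sketch of sparse nonnegative dictionary learning by atom-wise coordinate descent, assuming an L1 penalty on the coefficients and unit-norm atoms; it illustrates the generic NMF-with-sparsity formulation, not the exact PCDDL update rules, and the function name `sparse_nmf_cd` and its parameters are placeholders.

```python
import numpy as np

def sparse_nmf_cd(X, n_atoms, lam=0.1, n_iter=50, seed=0):
    """Nonnegative dictionary learning by alternating, atom-wise coordinate
    descent on 0.5*||X - W H||_F^2 + lam*||H||_1, with W >= 0 and H >= 0.
    Generic HALS-style sketch; not the paper's exact PCDDL updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = np.abs(rng.standard_normal((m, n_atoms)))    # dictionary atoms in columns
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    H = np.abs(rng.standard_normal((n_atoms, n)))    # sparse coefficient matrix

    for _ in range(n_iter):
        R = X - W @ H                                # current residual
        for k in range(n_atoms):
            Rk = R + np.outer(W[:, k], H[k, :])      # residual without atom k
            # coefficient row: closed-form nonnegative soft-threshold
            hk = np.maximum(W[:, k] @ Rk - lam, 0.0)
            hk /= max(W[:, k] @ W[:, k], 1e-12)
            # dictionary atom: nonnegative least squares, then renormalize
            wk = np.maximum(Rk @ hk, 0.0)
            norm = np.linalg.norm(wk)
            if norm > 1e-12:
                wk /= norm
            W[:, k], H[k, :] = wk, hk
            R = Rk - np.outer(wk, hk)                # restore full residual
    return W, H
```

For an overcomplete dictionary, `n_atoms` would be chosen larger than the signal dimension, e.g. `W, H = sparse_nmf_cd(X, n_atoms=2 * X.shape[0])`.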


2019, Vol. 2019, pp. 1-9
Author(s): Wang Wei, Tang Can, Wang Xin, Luo Yanhong, Hu Yongle, ...

This paper proposes an image object recognition approach based on deep features and adaptive weighted joint sparse representation (D-AJSR). D-AJSR is a data-lightweight classification framework that can classify and recognize objects well with few training samples. In D-AJSR, a convolutional neural network (CNN) extracts deep features from the training and test samples. The objects are then identified with an adaptive weighted joint sparse representation, in which the feature vectors are reconstructed by computing a contribution weight for each of them. To handle the high dimensionality of the deep features, principal component analysis (PCA) is applied for dimensionality reduction. Finally, using the joint sparse model, shared (public) features and class-specific (private) features are extracted from the training feature set to construct a joint feature dictionary, and a sparse representation-based classifier (SRC) built on this dictionary recognizes the objects. Experiments on face images and remote sensing images show that D-AJSR is superior to the traditional SRC method and several other advanced methods.
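As a rough illustration of the PCA-plus-SRC portion of this pipeline, the sketch below classifies pre-extracted deep feature vectors; the CNN feature extraction, the adaptive contribution weights, and the joint public/private feature dictionary of D-AJSR are omitted, and the helper name `src_classify` and its parameters are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def src_classify(train_feats, train_labels, test_feats, n_components=128, alpha=0.01):
    """PCA reduction followed by sparse representation-based classification (SRC).
    train_feats/test_feats: rows are deep feature vectors (e.g. taken from a CNN
    layer). Sketch only; D-AJSR's adaptive weighting and joint dictionary are omitted."""
    train_labels = np.asarray(train_labels)
    n_comp = min(n_components, train_feats.shape[0], train_feats.shape[1])
    pca = PCA(n_components=n_comp)
    D = pca.fit_transform(train_feats)               # training dictionary (rows)
    Y = pca.transform(test_feats)
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
    Y = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)

    classes = np.unique(train_labels)
    preds = []
    for y in Y:
        # sparse code of the test sample over all training samples
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(D.T, y)                            # columns of D.T are atoms
        x = coder.coef_
        # assign the class whose atoms give the smallest reconstruction residual
        residuals = [np.linalg.norm(y - D[train_labels == c].T @ x[train_labels == c])
                     for c in classes]
        preds.append(classes[int(np.argmin(residuals))])
    return np.array(preds)
```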


2015, Vol. 27 (9), pp. 1951-1982
Author(s): Zhenni Li, Shuxue Ding, Yujie Li

We present a fast, efficient algorithm for learning an overcomplete dictionary for sparse representation of signals. The whole problem is formulated as minimizing the approximation error with a coherence penalty on the dictionary atoms and a sparsity regularization on the coefficient matrix. Because this minimization is nonconvex and nonsmooth, it cannot be solved efficiently by ordinary optimization methods. We propose a decomposition scheme and an alternating optimization that turn the problem into a set of piecewise-quadratic subproblems, each in a single vector variable: either one dictionary atom or one coefficient vector. Although the subproblems remain nonsmooth, they become much simpler, and a closed-form solution can be found by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to a problem with an incoherence term and obtaining the optimal dictionary atoms in closed form via a proximal operator technique have not previously been studied. As suggested by our analysis and simulation study, the main advantages of the proposed algorithm are lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, in real applications it shows good performance and significant reductions in computational time.
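As an illustration of the proximal-operator idea, the sketch below solves one L1-regularized coefficient-vector subproblem by a standard proximal gradient loop whose inner step is the closed-form soft-thresholding operator; it does not reproduce the paper's decomposition scheme or its coherence-penalized atom update, and the function names are placeholders.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: shrinks each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def coeff_subproblem(D, x, lam=0.1, n_iter=100):
    """Solve one coefficient-vector subproblem
        min_c 0.5*||x - D c||_2^2 + lam*||c||_1
    by proximal gradient descent (ISTA). Illustrates the closed-form proximal
    step only; not the paper's exact subproblem decomposition or its
    coherence-penalized dictionary update."""
    L = max(np.linalg.norm(D, 2) ** 2, 1e-12)   # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)                # gradient of the smooth data term
        c = soft_threshold(c - grad / L, lam / L)   # closed-form proximal step
    return c
```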

