learned dictionary
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 22)

H-INDEX: 11 (FIVE YEARS: 2)

2021
Author(s): Naghmeh Farhangkhah, Sadegh Samadi, Mohammad R. Khosravi, Reza Mohseni

2021 · Vol 12
Author(s): Tongguang Ni, Yuyao Ni, Jing Xue, Suhong Wang

The brain-computer interface (BCI) interprets the physiological information of the human brain during conscious activity, building a direct information transmission channel between the brain and the outside world. As the most common non-invasive BCI modality, the electroencephalogram (EEG) plays an important role in BCI-based emotion recognition; however, due to the individual variability and non-stationarity of EEG signals, constructing EEG-based emotion classifiers that generalize across subjects, sessions, and devices is an important research direction. Domain adaptation utilizes data or knowledge from more than one domain and focuses on transferring knowledge from the source domain (SD) to the target domain (TD), where the EEG data may be collected from different subjects, sessions, or devices. In this study, a new domain adaptation sparse representation classifier (DASRC) is proposed to address cross-domain EEG-based emotion classification. To reduce the differences in domain distribution, a local-information-preserving criterion is exploited to project the samples from the SD and TD into a shared subspace. A common domain-invariant dictionary is learned in the projected subspace so that an inherent connection can be built between the SD and TD. In addition, both principal component analysis (PCA) and Fisher criteria are exploited to promote the recognition ability of the learned dictionary, and an optimization method is proposed to alternately update the subspace and the dictionary. Comparative experiments demonstrate the feasibility and competitive performance of the proposed method for cross-subject and cross-dataset EEG-based emotion classification problems.
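A minimal sketch of the sparse-representation classification step that this family of methods builds on: a test sample is coded over a dictionary whose atoms carry class labels, and the class with the smallest class-wise reconstruction residual wins. The dictionary, labels, and the ISTA solver below are illustrative assumptions only; the domain-adaptation projection and dictionary learning of DASRC are not reproduced here.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2                    # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L                # gradient step on the data misfit
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def src_predict(D, atom_labels, x, lam=0.1):
    """Classify x by the class whose atoms reconstruct it best."""
    a = ista(D, x, lam)
    classes = np.unique(atom_labels)
    residuals = [np.linalg.norm(x - D[:, atom_labels == c] @ a[atom_labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# toy usage: 64-dim features, 40 unit-norm atoms spread over 4 emotion classes
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 40))
D /= np.linalg.norm(D, axis=0)
atom_labels = np.repeat(np.arange(4), 10)
x = D[:, 3] + 0.05 * rng.standard_normal(64)
print(src_predict(D, atom_labels, x))                # expected: class 0
```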


2021 · Vol 15 (6) · pp. 1-22
Author(s): Shaoning Zeng, Bob Zhang, Jianping Gou, Yong Xu, Wei Huang

Dictionary-based classification has been promising in knowledge discovery from image data, due to its good performance and interpretable theoretical foundation. Dictionary learning effectively supports both small- and large-scale datasets, while its robustness and performance depend, most of the time, on the atoms of the dictionary. Empirically, using a large number of atoms helps to obtain a robust classification, while robustness cannot be ensured with a small number of atoms. However, learning a huge dictionary dramatically slows down classification, which is especially severe on large-scale datasets. To address this problem, we propose a Fast and Robust Dictionary-based Classification (FRDC) framework, which fully utilizes the learned dictionary for classification by staging ℓ1- and ℓ2-norms to obtain a robust sparse representation. The new objective function, on the one hand, introduces an additional ℓ2-norm term upon the conventional ℓ1-norm optimization, which yields a more robust classification. On the other hand, the optimization based on both ℓ1- and ℓ2-norms is solved in two stages, which is much easier and faster than current solutions. In this way, even with a limited dictionary size, which keeps classification very fast, the method still gains higher robustness for multiple types of image data. The optimization is then theoretically analyzed in a new formulation, close to but distinct from the elastic net, to prove it is crucial for improving performance under the premise of robustness. According to our extensive experiments conducted on four image datasets for face and object classification, FRDC keeps generating a robust classification whether using a small or large number of atoms. This guarantees a fast and robust dictionary-based image classification. Furthermore, when simply using deep features extracted via some popular pre-trained neural networks, it outperforms many state-of-the-art methods on the specific datasets.
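A hedged sketch of the two-stage staging idea described above: stage one codes the sample with a cheap, closed-form ℓ2 (ridge/collaborative-representation) solve, and stage two refines only the largest coefficients with an ℓ1 (lasso) solve, giving an elastic-net-like effect at a fraction of the cost. This illustrates the staging concept under those assumptions; it is not the authors' FRDC objective.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_code(D, x, ridge=1e-2, keep=20, lam=1e-3):
    """Stage 1: l2-regularized coding (closed form). Stage 2: l1 refinement
    restricted to the atoms that received the largest stage-1 coefficients."""
    n_atoms = D.shape[1]
    # stage 1: ridge / collaborative representation
    a_l2 = np.linalg.solve(D.T @ D + ridge * np.eye(n_atoms), D.T @ x)
    # stage 2: lasso over the top-`keep` atoms only
    support = np.argsort(-np.abs(a_l2))[:keep]
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(D[:, support], x)
    a = np.zeros(n_atoms)
    a[support] = lasso.coef_
    return a

def residual_classify(D, atom_labels, a, x):
    """Assign x to the class whose atoms give the smallest reconstruction error."""
    classes = np.unique(atom_labels)
    errs = [np.linalg.norm(x - D[:, atom_labels == c] @ a[atom_labels == c])
            for c in classes]
    return classes[int(np.argmin(errs))]
```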


2021 · Vol 14 (1) · pp. 203-211
Author(s): Nouf Alotaibi

Noise may affect images in many ways during different processes, such as acquisition, transmission, processing, or compression. The Sparse Representation (SR) algorithm is one of the best strategies for noise reduction, and Particle Swarm Optimization (PSO) is a meta-heuristic algorithm. This research demonstrates excellent noise-reduction results with a fast PSO variant that combines sparse representations with meta-heuristic optimization. The method, known as FPSO-MP, is based on the Matching Pursuit (MP) algorithm. In this study, a Dynamic Multi-Swarm (DMS) method and a pre-learned dictionary (FPSO-MP) are presented to reduce the time spent on dictionary-learning calculations. The runtime of the QPSO-MP denoising algorithm depends on dictionary learning because of the dictionary size and the increased number of patches; the Non-locally Estimated Sparse Coefficient (NESC) step is another explanation for the low efficiency of the original algorithm. Compared to the original PSO-MP method, these enhancements achieve substantial gains in computational efficiency of approximately 92% without sacrificing image quality. After modification, the proposed FPSO-MP technique is compared with the original PSO-MP method. The findings demonstrate that the FPSO-MP algorithm is much more efficient and faster than the original algorithm without affecting image quality, since it follows the original technique while reducing runtime. The results of this study show that the best denoised images can always be obtained from the pre-learned dictionary rather than from a dictionary learned over the noisy image at runtime. We constructed an image dataset from the BSD500 collection and performed a statistical test on these images. The findings reveal that the suggested method is excellent for noise reduction and highly efficient at runtime, and that the proposed FPSO-MP approach achieves good quantitative and visual image-quality outcomes in comparison with existing denoising algorithms.
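The core claim, that a pre-learned (or fixed analytic) dictionary avoids costly per-image dictionary learning, can be illustrated with a standard patch-based denoising loop: extract overlapping patches, sparse-code each one with matching pursuit over a fixed overcomplete DCT dictionary, and reassemble the reconstructions. This is a generic OMP/DCT sketch, not the FPSO-MP algorithm; the swarm-optimization part is omitted.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.linear_model import orthogonal_mp

def overcomplete_dct_dictionary(patch_size=8, n_atoms_1d=11):
    """Separable overcomplete 2-D DCT dictionary, a common fixed 'pre-learned' choice."""
    base = dct(np.eye(n_atoms_1d), norm='ortho')[:patch_size]   # (patch_size, n_atoms_1d)
    D = np.kron(base, base)                                     # (patch_size^2, n_atoms_1d^2)
    return D / np.linalg.norm(D, axis=0)

def denoise(image, patch_size=8, n_nonzero=5):
    """Patch-based denoising with a fixed dictionary and orthogonal matching pursuit."""
    patches = extract_patches_2d(image, (patch_size, patch_size))
    X = patches.reshape(len(patches), -1)
    means = X.mean(axis=1, keepdims=True)                       # remove patch DC level
    D = overcomplete_dct_dictionary(patch_size)
    A = orthogonal_mp(D, (X - means).T, n_nonzero_coefs=n_nonzero)  # sparse codes
    X_hat = (D @ A).T + means
    return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), image.shape)
```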


2020 · Vol 36 (4) · pp. 347-363
Author(s): Nguyen Hoang Vu, Tran Quoc Cuong, Tran Thanh Phong

Dictionary learning (DL) for sparse coding has been widely applied in the field of computer vision. Many DL approaches have been developed recently to solve pattern classification problems and have achieved promising performance. In this paper, to improve the discriminability of the popular dictionary pair learning (DPL) algorithm, we propose a new method called discriminative dictionary pair learning (DDPL) for image classification. To achieve the goal of signal representation and discrimination, we impose incoherence constraints on the synthesis dictionary and a low-rank regularization on the analysis dictionary. The DDPL method ensures that the learned dictionary has powerful discriminative ability and that the signals are more separable after coding. We evaluate the proposed method on benchmark image databases in comparison with existing DL methods. The experimental results demonstrate that our method outperforms many recently proposed dictionary learning approaches.
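For orientation, in dictionary pair learning each class k has a synthesis dictionary D_k and an analysis dictionary P_k, and a test sample x is assigned to the class minimizing ||x - D_k P_k x||. Below is a hedged sketch of that classification rule with randomly initialized (untrained) pairs; the incoherence and low-rank constraints that DDPL adds during training are not shown.

```python
import numpy as np

def dpl_classify(x, synthesis, analysis):
    """Dictionary-pair classification: class k reconstructs x as D_k @ (P_k @ x);
    the class with the smallest reconstruction error wins."""
    errors = [np.linalg.norm(x - D_k @ (P_k @ x))
              for D_k, P_k in zip(synthesis, analysis)]
    return int(np.argmin(errors))

# toy usage with random (untrained) pairs: 100-dim samples, 3 classes, 30 atoms each
rng = np.random.default_rng(1)
d, m, n_classes = 100, 30, 3
synthesis = [rng.standard_normal((d, m)) for _ in range(n_classes)]   # D_k
analysis  = [rng.standard_normal((m, d)) for _ in range(n_classes)]   # P_k
x = rng.standard_normal(d)
print(dpl_classify(x, synthesis, analysis))
```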


2020 · Vol 64 (1-4) · pp. 129-136
Author(s): Wei Guan, Longlei Dong, Jinxiong Zhou

As engineering structures become more complicated, it is difficult to obtain complete measurement responses with a limited number of sensors, so underdetermined modal identification has practical engineering value. In this paper, a new approach for underdetermined blind modal identification based on dictionary learning in the framework of compressed sensing (CS) is proposed. The principal idea is to estimate the mode shapes using a clustering technique and to recover the modal responses by combining the estimated mode-shape matrix with the learned dictionary. Experimental results on a typical cantilever beam structure illustrate that the proposed method can accurately identify the dynamic parameters in both the underdetermined and the determined cases.
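A hedged sketch of the mode-shape estimation step: in sparse component analysis, samples where a single mode dominates cluster along the columns of the mode-shape (mixing) matrix, so normalizing the multichannel samples and running k-means recovers the mode shapes up to sign and scale. The energy threshold below is an illustrative assumption, and the compressed-sensing recovery of the modal responses over the learned dictionary is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_mode_shapes(Y, n_modes):
    """Estimate the mode-shape matrix from sensor data Y (n_sensors, n_samples)
    by clustering direction-normalized samples, as in sparse component analysis."""
    # keep samples with enough energy, normalize each to unit length
    energy = np.linalg.norm(Y, axis=0)
    V = Y[:, energy > 0.1 * energy.max()]
    V = V / np.linalg.norm(V, axis=0)
    # resolve the sign ambiguity: v and -v point along the same mode shape
    V = V * np.sign(V[0, :])
    km = KMeans(n_clusters=n_modes, n_init=10, random_state=0).fit(V.T)
    Phi = km.cluster_centers_.T                       # (n_sensors, n_modes)
    return Phi / np.linalg.norm(Phi, axis=0)          # unit-norm mode shapes
```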


Tecnura · 2020 · Vol 24 (66) · pp. 62-75
Author(s): Edwin Vargas, Kevin Arias, Fernando Rojas, Henry Arguello

Objective: Hyperspectral (HS) imaging systems are commonly used in a diverse range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of those tasks. In recent years, fusing the information of an HS image with a high-spatial-resolution multispectral (MS) or panchromatic (PAN) image has been widely studied to enhance the spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image that is assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem.

Methodology: The dictionaries are learned from the estimated abundance data, taking advantage of the correlation between abundance maps and the non-local self-similarity over the spatial domain. Then, conditionally on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm.

Results: Experimental results with real data show that the proposed method outperforms state-of-the-art methods under different quantitative assessments.

Conclusions: In this work, we propose a hyperspectral and multispectral image fusion method based on a non-local centralized sparse representation on abundance maps. This model allows us to include the non-local redundancy of abundance maps in the fusion problem using spectral unmixing, improving the performance of sparsity-based fusion approaches.
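For context, a minimal sketch of the fusion forward model such methods invert: the HS observation is a spectrally rich but spatially blurred and downsampled view of the target cube, and the MS observation is a spatially sharp but spectrally compressed view. The Gaussian blur, decimation ratio, and spectral response matrix below are generic assumptions used only to show the data-fidelity term that a sparse prior would regularize; this is not the authors' non-local centralized sparse model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_to_hs(X, blur_sigma=2.0, ratio=4):
    """HS observation: spatial blur + downsampling of the target cube X (H, W, bands)."""
    blurred = gaussian_filter(X, sigma=(blur_sigma, blur_sigma, 0))
    return blurred[::ratio, ::ratio, :]

def degrade_to_ms(X, R):
    """MS observation: spectral compression by the response matrix R (ms_bands, hs_bands)."""
    return X @ R.T

def data_misfit(X, Y_hs, Y_ms, R, blur_sigma=2.0, ratio=4):
    """Least-squares data fidelity that regularized fusion methods minimize
    together with a (sparsity) prior on the target cube X."""
    e_hs = degrade_to_hs(X, blur_sigma, ratio) - Y_hs
    e_ms = degrade_to_ms(X, R) - Y_ms
    return 0.5 * (np.sum(e_hs**2) + np.sum(e_ms**2))
```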


2020 · Vol 2020 · pp. 1-17
Author(s): Chao Bi, Yugen Yi, Lei Zhang, Caixia Zheng, Yanjiao Shi, ...

Recently, dictionary learning has become an active topic. However, the majority of dictionary learning methods directly employ original or predefined handcrafted features to describe the data, which ignores the intrinsic relationship between the dictionary and the features. In this study, we present a method called jointly learning the discriminative dictionary and projection (JLDDP) that can simultaneously learn the discriminative dictionary and projection for both image-based and video-based face recognition. The dictionary can realize a tight correspondence between atoms and class labels. Simultaneously, the projection matrix can extract discriminative information from the original samples. By adopting the Fisher discrimination criterion, the proposed framework enables a better fit between the learned dictionary and the projection. Using both the representation error and the coding coefficients, the classification scheme further improves the discriminative ability of our method. An iterative optimization algorithm is proposed, and its convergence is proved mathematically. Extensive experimental results on seven image-based and video-based face databases demonstrate the validity of JLDDP.
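A hedged sketch of a classification rule that uses both the representation error and the coding coefficients, as the abstract describes: project the sample, sparse-code it, then score each class by a weighted sum of its reconstruction residual and the distance of the code to that class's mean training code (an FDDL-style rule; the exact JLDDP scheme may differ). The `solve_code` callable and `class_mean_coefs` mapping are assumptions for illustration.

```python
import numpy as np

def projected_residual_classify(x, P, D, atom_labels, class_mean_coefs, solve_code, w=0.5):
    """Project x with P, sparse-code it over D (via any coder, e.g. the ISTA sketch
    earlier on this page), and score each class by residual + w * coefficient distance."""
    z = P @ x                       # discriminative projection
    a = solve_code(D, z)            # sparse code of the projected sample
    classes = np.unique(atom_labels)
    scores = []
    for c in classes:
        mask = atom_labels == c
        residual = np.linalg.norm(z - D[:, mask] @ a[mask])
        coef_dist = np.linalg.norm(a - class_mean_coefs[c])   # dict keyed by class label
        scores.append(residual + w * coef_dist)
    return classes[int(np.argmin(scores))]
```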


2020 · Vol 10 (12) · pp. 4395
Author(s): Jongsu Yoon, Yoonsik Choe

Retinex theory represents the human visual system by showing the relative reflectance of an object under various illumination conditions. A feature of this human visual system is color constancy, and the Retinex theory is designed in consideration of this feature. The Retinex algorithms have been popularly used to effectively decompose the illumination and reflectance of an object. The main aim of this paper is to study image enhancement using convolution sparse coding and sparse representations of the reflectance component in the Retinex model over a learned dictionary. To realize this, we use the convolutional sparse coding model to represent the reflectance component in detail. In addition, we propose that the reflectance component can be reconstructed using a trained general dictionary by using convolutional sparse coding from a large dataset. We use singular value decomposition in limited memory to construct a best reflectance dictionary. This allows the reflectance component to provide improved visual quality over conventional methods, as shown in the experimental results. Consequently, we can reduce the difference in perception between humans and machines through the proposed Retinex-based image enhancement.
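For orientation, a minimal single-scale Retinex decomposition sketch: the illumination is approximated by a wide Gaussian smoothing of the image, and the log-domain residual is taken as the reflectance. The paper then models this reflectance component with convolutional sparse coding over a learned dictionary, which is not reproduced here; the sigma value below is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(image, sigma=30.0, eps=1e-6):
    """Single-scale Retinex: estimate illumination with a wide Gaussian blur,
    take the log-domain residual as the reflectance component."""
    img = image.astype(np.float64) + eps
    illumination = gaussian_filter(img, sigma=sigma) + eps
    log_reflectance = np.log(img) - np.log(illumination)
    return illumination, log_reflectance
```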


2020 · Vol 222 (3) · pp. 1846-1863
Author(s): Yangkang Chen, Shaohuan Zu, Wei Chen, Mi Zhang, Zhe Guan

SUMMARY Deblending plays an important role in preparing high-quality seismic data from modern blended simultaneous-source seismic acquisition. State-of-the-art deblending is based on sparsity-constrained iterative inversion. Inversion-based deblending assumes that the ambient noise level is low and that the data misfit during iterative inversion accounts for the random ambient noise. The traditional method becomes problematic when the random ambient noise is extremely strong and the inversion iteratively fits the random noise instead of the signal and blending interference. We propose a constrained inversion model that takes the strong random noise into consideration and can achieve satisfactory results even when strong random noise exists. The principle of the new method is that we use sparse dictionaries to learn the blending spikes, so that the learned dictionary atoms are able to distinguish between blending spikes and random noise. The separated signal and blending spikes can then be better fitted by the iterative inversion framework. Synthetic and field data examples are used to demonstrate the performance of the new approach.
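A hedged sketch of the generic sparsity-constrained iterative inversion that deblending methods like this build on: iterate a gradient step on the data misfit for d = Gamma m (with Gamma a matrix-form stand-in for the blending operator) and a soft-thresholding step that enforces sparsity. In practice the sparsity is applied in a transform or learned-dictionary domain, and the noise-aware constraints proposed by the authors are not shown.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise shrinkage used as the sparsity projection."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def deblend_ist(Gamma, d, n_iter=200, thresh=0.05):
    """Iterative soft thresholding for d = Gamma @ m with a sparsity constraint on m.
    Gamma is a generic (matrix-form) blending operator; real deblending enforces the
    sparsity in a transform/dictionary domain rather than directly on m."""
    step = 1.0 / np.linalg.norm(Gamma, 2) ** 2        # safe step size from the spectral norm
    m = np.zeros(Gamma.shape[1])
    for _ in range(n_iter):
        m = m + step * Gamma.T @ (d - Gamma @ m)      # gradient step on the data misfit
        m = soft_threshold(m, thresh * step)          # enforce sparsity (deblending prior)
    return m
```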

