A Study of Three Dictionary Learning Algorithms

2021 ◽  
Author(s):  
Zhiliang Xing

Sparse representation is a topic that has been gaining popularity in recent years due to its efficiency, performance, and applications in communication and data extraction. A number of algorithms can be used to implement sparse coding in different fields, including K-SVD, ODL, and OMP. In this project, one of the most popular sparse coding algorithms, Orthogonal Matching Pursuit (OMP), is investigated in depth. Since OMP cannot guarantee the global optimum, a Top-Down Search (TDS) algorithm is proposed in this project that achieves better results at the cost of longer execution time. Another contribution of this project is an investigation of the properties of the dictionary, carried out by modifying the frequencies and shifting the phases of a standard Discrete Cosine Transform (DCT) dictionary. The results of this project show that the performance of sparse coding algorithms still has room for improvement using new techniques.
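
As a rough illustration of the pipeline this abstract describes, the following minimal NumPy sketch builds an overcomplete DCT dictionary and runs OMP against it. The signal length (64), atom count (128), and sparsity level (3) are illustrative assumptions, not values taken from the project.

```python
import numpy as np

def overcomplete_dct_dictionary(n, n_atoms):
    """Each atom is a sampled cosine of a different frequency; atoms beyond
    DC are mean-removed and every atom is normalized to unit l2 norm."""
    D = np.empty((n, n_atoms))
    t = np.arange(n)
    for k in range(n_atoms):
        atom = np.cos(np.pi * k * t / n_atoms)
        if k > 0:
            atom -= atom.mean()
        D[:, k] = atom / np.linalg.norm(atom)
    return D

def omp(D, y, sparsity):
    """Greedy Orthogonal Matching Pursuit: pick the atom most correlated with
    the residual, re-fit all chosen atoms by least squares, repeat."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        corr = D.T @ residual
        support.append(int(np.argmax(np.abs(corr))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coeffs[support] = sol
    return coeffs

# Illustrative usage: recover a 3-sparse signal over a 64x128 DCT dictionary.
rng = np.random.default_rng(0)
D = overcomplete_dct_dictionary(64, 128)
true = np.zeros(128)
true[rng.choice(128, 3, replace=False)] = rng.standard_normal(3)
y = D @ true
x_hat = omp(D, y, sparsity=3)
print("reconstruction error:", np.linalg.norm(D @ x_hat - y))
```

Because the greedy selection is only locally optimal, a search strategy such as the TDS proposed above can trade extra running time for a better support set.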


Author(s):  
Dirk Doyle ◽  
Lawrence Benedict ◽  
Fritz Christian Awitan

Abstract Novel techniques to expose substrate-level defects are presented in this paper. New techniques such as inter-layer dielectric (ILD) thinning, high-keV imaging, and XeF2 poly etch overflow are introduced. We describe these techniques as applied to two different defect types at FEOL. In the first case, by using ILD thinning and high-keV imaging, coupled with focused ion beam (FIB) cross-sectioning and scanning transmission electron microscopy (STEM), we were able to judge where to sample for TEM from a top-down perspective while simultaneously obtaining top-down images, giving both perspectives on the same sample. In the second case, we show retention of the poly Si short after removal of the CoSi2 formed on the poly. Removing the CoSi2 exposes the poly Si so that XeF2 can be used to remove the poly without damaging the gate oxide, revealing pinhole defects in the gate oxide. Overall, using these techniques has led to 1) increased chances of successfully finding the defects, 2) better characterization of the defects by providing a planar-view perspective, and 3) reduced time to localize defects compared to performing cross sections alone.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3586
Author(s):  
Wenqing Wang ◽  
Han Liu ◽  
Guo Xie

The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects the pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and an adaptive coupled dictionary is proposed in this paper. First, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of the WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated through the GRSC algorithm. In addition, an adaptive coupled dictionary pair for each task is constructed to effectively represent the subsets. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix with the corresponding dictionary. A variety of experiments are conducted on WorldView-2 data, and the results demonstrate that the proposed method outperforms existing pansharpening algorithms in both subjective analysis and objective evaluation.
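
For readers unfamiliar with the final synthesis step, the sketch below shows, in highly simplified form, how sparse codes estimated against the low-resolution side of a coupled dictionary pair are multiplied by the high-resolution dictionary to form the output patches. Plain OMP stands in for the paper's GRSC, and the clustering, per-task splitting, and graph penalty are omitted; the function and its arguments (D_low, D_high, lr_patches) are placeholders, not the authors' implementation.

```python
import numpy as np

def reconstruct_hr_patches(D_low, D_high, lr_patches, sparsity=5):
    """Estimate sparse codes of low-resolution patches against D_low, then
    synthesize high-resolution patches from D_high with the same codes.
    Plain OMP is used here in place of graph regularized sparse coding."""
    codes = np.zeros((D_low.shape[1], lr_patches.shape[1]))
    for j in range(lr_patches.shape[1]):
        y = lr_patches[:, j]
        residual, support = y.copy(), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(D_low.T @ residual))))
            sol, *_ = np.linalg.lstsq(D_low[:, support], y, rcond=None)
            residual = y - D_low[:, support] @ sol
        codes[support, j] = sol
    return D_high @ codes  # columns are the estimated high-resolution patches

# Illustrative shapes only: 16-dim LR patches, 64-dim HR patches, 128 atoms.
rng = np.random.default_rng(0)
D_low, D_high = rng.standard_normal((16, 128)), rng.standard_normal((64, 128))
hr = reconstruct_hr_patches(D_low, D_high, rng.standard_normal((16, 10)))
print(hr.shape)  # (64, 10)
```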


Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm includes the following steps. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created from the overlapped blocks. To calculate the sparse vector, the orthogonal matching pursuit algorithm is used. Then, the dictionary is updated by means of the PCA algorithm to achieve the sparsest representation of the vectors. Since, after transforming into the PCA domain, the signal energy, unlike the noise energy, is concentrated in a small number of components, the signal and noise can be well distinguished. The proposed algorithm was implemented in a MATLAB environment, and its performance was evaluated on standard grayscale images under different standard deviations of white Gaussian noise by means of peak signal-to-noise ratio, structural similarity index, and visual inspection. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement compared to the dual-tree complex discrete wavelet transform and K-singular value decomposition image denoising methods. It also obtains results competitive with the block-matching and 3D filtering method, which is the current state of the art for image denoising.
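
The PCA dictionary update described above can be sketched as follows: stack the overlapping blocks as columns, centre them, and take the leading principal components (via SVD) as the new atoms, since the signal energy concentrates in the leading components while the noise does not. The 8x8 block size, step, and atom count below are illustrative assumptions only; the block size is not specified in the abstract text above.

```python
import numpy as np

def extract_blocks(image, b=8, step=4):
    """Collect overlapping b-by-b blocks (column-vectorized) from a 2-D image."""
    h, w = image.shape
    cols = [image[i:i + b, j:j + b].ravel()
            for i in range(0, h - b + 1, step)
            for j in range(0, w - b + 1, step)]
    return np.stack(cols, axis=1)            # shape (b*b, n_blocks)

def pca_dictionary(blocks, n_atoms=64):
    """PCA-style dictionary update: the leading left singular vectors of the
    centred block matrix become the (orthonormal) dictionary atoms."""
    centered = blocks - blocks.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_atoms]

# Illustrative usage on a random stand-in for a noisy image.
rng = np.random.default_rng(0)
noisy = rng.standard_normal((64, 64))
D = pca_dictionary(extract_blocks(noisy))
print(D.shape)                               # (64, 64)
```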


2021 ◽  
Vol 336 ◽  
pp. 08013
Author(s):  
Zhaosheng Xu

Based on the author's research, this paper studies a software credibility algorithm based on deep convolutional sparse coding. First, it summarizes convolutional sparse coding and the trust classification system; it then constructs the algorithm from two aspects: factor processing based on a deep convolutional neural network and trust classification based on sparse representation.


2019 ◽  
Vol 5 (11) ◽  
pp. 85 ◽  
Author(s):  
Ayan Chatterjee ◽  
Peter W. T. Yuen

This paper proposes a simple yet effective method for improving the efficiency of sparse coding dictionary learning (DL), with the implication of enhancing the ultimate usefulness of compressive sensing (CS) technology for practical applications such as hyperspectral imaging (HSI) scene reconstruction. CS is the technique which allows sparse signals to be decomposed into a sparse representation "a" over a dictionary D_u. The goodness of the learnt dictionary has a direct impact on the quality of the end results, e.g., the HSI scene reconstructions. This paper proposes the construction of a concise and comprehensive dictionary by using the cluster centres of the input dataset, after which a greedy approach is adopted to learn all elements within this dictionary. The proposed method consists of an unsupervised clustering algorithm (K-Means) coupled with a sparse coding dictionary (SCD) method, orthogonal matching pursuit (OMP), for the dictionary learning. The effectiveness of the proposed K-Means Sparse Coding Dictionary (KMSCD) is illustrated through the reconstructions of several publicly available HSI scenes. The results show that the proposed KMSCD achieves ~40% greater accuracy, converges 5 times faster, and is twice as robust as the classic Sparse Coding Dictionary (C-SCD) method, which adopts random sampling of data for the dictionary learning. Over the five data sets employed in this study, the proposed KMSCD reconstructs the scenes with mean accuracies approximately 20–500% better than all competing algorithms adopted in this work. Furthermore, the reconstruction efficiency for trace materials in the scene has been assessed: the KMSCD recovers them ~12% better than the C-SCD. These results suggest that constructing the dictionary with a simple clustering method substantially enhances the scene reconstruction. When the proposed KMSCD is incorporated with the fast non-negative orthogonal matching pursuit (FNNOMP) to constrain the maximum number of materials coexisting in a pixel to four, experiments have shown that it performs approximately ten times better than when constrained by the widely employed TMM algorithm. This may suggest that the proposed DL method using KMSCD together with FNNOMP is well suited to serve as the material allocation module of HSI scene simulators such as the CameoSim package.
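
A minimal sketch of the KMSCD idea, assuming plain K-Means and standard OMP from scikit-learn: the cluster centres of the input spectra become the dictionary atoms, and each pixel spectrum is coded against them with at most four non-zero coefficients, mirroring the per-pixel material limit discussed above. The non-negativity constraint of FNNOMP is not implemented here, and the data shapes and atom count are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import OrthogonalMatchingPursuit

def kmeans_dictionary(pixels, n_atoms=64):
    """Build the dictionary from the cluster centres of the input spectra
    (the KMSCD idea), with each atom normalized to unit l2 norm."""
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(pixels)
    D = km.cluster_centers_.T
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def sparse_code(D, spectrum, max_materials=4):
    """Greedy coding of one pixel spectrum against the dictionary, capped at
    four non-zero coefficients (plain OMP, not FNNOMP)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=max_materials,
                                    fit_intercept=False)
    omp.fit(D, spectrum)
    return omp.coef_

# Illustrative usage with synthetic "hyperspectral" pixels (200 bands).
rng = np.random.default_rng(0)
pixels = rng.random((500, 200))              # rows: pixels, columns: bands
D = kmeans_dictionary(pixels, n_atoms=64)    # shape (200, 64)
code = sparse_code(D, pixels[0])
print(np.count_nonzero(code))                # at most 4
```

Replacing random dictionary initialization with cluster centres, as sketched here, is what the abstract credits for the faster convergence and higher reconstruction accuracy of KMSCD over C-SCD.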

