Variable Length K-SVD: A New Dictionary Learning Approach and Multi-Stage OMP Method for Sparse Representation

2021 ◽  
Author(s):  
Mahdi Marsousi

The field of sparse representation and its applications have grown rapidly over the past 15 years, and the use of overcomplete dictionaries in sparse representation has attracted extensive attention. Sparse representation was followed by the concept of adapting dictionaries to the input data (dictionary learning). K-SVD is a well-known dictionary learning approach that is widely used in many applications. In this thesis, a novel enhancement to the K-SVD algorithm is proposed that learns a dictionary whose number of atoms is adapted to the input data set. To increase the efficiency of the orthogonal matching pursuit (OMP) method, a new sparse representation method is proposed that applies a multi-stage strategy to reduce computational cost. A new phase-included DCT (PI-DCT) dictionary is also proposed, which significantly reduces the blocking artifacts that arise with the conventional DCT dictionary. The accuracy and efficiency of the proposed methods are compared with recent approaches, demonstrating their promising performance.
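As a rough illustration of the baseline this abstract builds on, here is a minimal NumPy sketch of standard orthogonal matching pursuit (not the proposed multi-stage variant, whose details are not given in the abstract); all names and toy dimensions are illustrative assumptions:

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: select up to k atoms of dictionary D (columns,
    assumed unit-norm) to approximate signal y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        # least-squares refit on the selected support
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy check: recover a 2-sparse signal in a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, 2)
```

The multi-stage strategy described in the thesis would wrap a loop like this, refining the sparsity level in stages rather than solving at full cost in one pass.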


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. V263-V282 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner ◽  
Leiv Gelius

Dictionary learning (DL) methods are effective tools to automatically find a sparse representation of a data set. They train a set of basis vectors on the data to capture the morphology of the redundant signals. The basis vectors are called atoms, and the set is referred to as the dictionary. This dictionary can be used to represent the data in a sparse manner with a linear combination of a few of its atoms. In conventional DL, the atoms are unstructured and are only numerically defined over a grid that has the same sampling as the data. Consequently, the atoms are unknown away from this sampling grid, and a sparse representation of the data in the dictionary domain is not sufficient information to interpolate the data. To overcome this limitation, we have developed a DL method called parabolic DL, in which each learned atom is constrained to represent an elementary waveform that has a constant amplitude along a parabolic traveltime moveout. The parabolic structure is consistent with the physics inherent to the seismic wavefield and can be used to easily interpolate or extrapolate the atoms. Hence, we have developed a parabolic DL-based process to interpolate and regularize seismic data. Briefly, it consists of learning a parabolic dictionary from the data, finding a sparse representation of the data in the dictionary domain, interpolating the dictionary atoms over the desired grid, and, finally, taking the sparse representation of the data in the interpolated dictionary domain. We examine three characteristics of this method, namely, the parabolic structure, the sparsity promotion, and the adaptation to the data, and we conclude that they strengthen robustness to noise and aliasing and increase the accuracy of the interpolation. For both synthetic and field data sets, we achieved successful seismic wavefield reconstructions across the streamers for typical 3D acquisition geometries.
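The key property exploited here is that a parabolic atom is defined analytically by its moveout parameters, so it can be evaluated on any receiver grid. A minimal sketch of that idea, with a Ricker wavelet standing in for the learned waveform and all parameter values chosen arbitrarily for illustration:

```python
import numpy as np

def parabolic_atom(wavelet_fn, t0, p, q, offsets, t_axis):
    """Build an atom whose waveform has constant amplitude along a
    parabolic traveltime moveout t(x) = t0 + p*x + q*x**2."""
    atom = np.zeros((len(t_axis), len(offsets)))
    for ix, x in enumerate(offsets):
        shift = t0 + p * x + q * x**2
        atom[:, ix] = wavelet_fn(t_axis - shift)
    return atom

def ricker(t, f=25.0):
    """Ricker wavelet (an assumption; the paper learns the waveform)."""
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

t_axis = np.arange(0, 1.0, 0.004)      # 4 ms time sampling
coarse = np.arange(0, 500, 50.0)       # recorded receiver grid (m)
fine = np.arange(0, 500, 12.5)         # desired interpolation grid (m)
atom_coarse = parabolic_atom(ricker, 0.3, 1e-4, 5e-7, coarse, t_axis)
atom_fine = parabolic_atom(ricker, 0.3, 1e-4, 5e-7, fine, t_axis)
```

Because the moveout is parametric, `atom_fine` agrees exactly with `atom_coarse` wherever the two grids coincide; sparse coefficients found on the coarse grid can then be reused with the interpolated atoms.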


2015 ◽  
Vol 24 (1) ◽  
pp. 135-143 ◽  
Author(s):  
Omer F. Alcin ◽  
Abdulkadir Sengur ◽  
Jiang Qian ◽  
Melih C. Ince

Extreme learning machine (ELM) is a recent scheme for single hidden layer feedforward networks (SLFNs). It has attracted much interest in the machine intelligence and pattern recognition fields, with numerous real-world applications. The ELM structure has several advantages, such as its adaptability to various problems, a rapid learning rate, and low computational cost. However, it has shortcomings in the following aspects. First, it suffers from irrelevant variables in the input data set. Second, choosing the optimal number of neurons in the hidden layer is not well defined. When the number of hidden nodes exceeds the number of training samples, the ELM may encounter the singularity problem, and its solution may become unstable. To overcome these limitations, several methods have been proposed within the regularization framework. In this article, we considered a greedy method for sparse approximation of the output weight vector of the ELM network. More specifically, the orthogonal matching pursuit (OMP) algorithm is embedded into the ELM. This new technique is named OMP-ELM. OMP-ELM has several advantages over regularized ELM methods, such as lower complexity and immunity to the singularity problem. Experiments on nine commonly used regression problems confirm these advantages. Moreover, OMP-ELM is compared with the ELM method, the regularized ELM scheme, and artificial neural networks.
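The core idea is easy to sketch: keep the random ELM hidden layer, but replace the pseudo-inverse solution for the output weights with an OMP selection of a few hidden neurons. The following NumPy toy (activation choice, sparsity level, and data are all illustrative assumptions, not the paper's setup) shows the shape of the computation:

```python
import numpy as np

def elm_hidden(X, W, b):
    """Random-feature hidden layer of an ELM (tanh activation assumed)."""
    return np.tanh(X @ W + b)

def omp_weights(H, y, k):
    """Sparse output weights: greedily pick up to k hidden neurons
    (columns of H) and least-squares refit, instead of the usual
    pseudo-inverse over all neurons."""
    residual, support = y.copy(), []
    beta = np.zeros(H.shape[1])
    Hn = H / np.linalg.norm(H, axis=0)   # normalize for atom selection
    for _ in range(k):
        j = int(np.argmax(np.abs(Hn.T @ residual)))
        if j in support:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)
        residual = y - H[:, support] @ coef
    beta[support] = coef
    return beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]          # toy regression target
W = rng.standard_normal((2, 100))                # random input weights
b = rng.standard_normal(100)                     # random biases
H = elm_hidden(X, W, b)
beta = omp_weights(H, y, 15)                     # only 15 of 100 neurons
mse = np.mean((H @ beta - y) ** 2)
```

Because at most `k` columns of `H` enter the least-squares fit, the normal equations stay well-conditioned even when the hidden layer is wider than the training set, which is the singularity situation the abstract mentions.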


This research proposal addresses dimension reduction algorithms in deep learning (DL) for hyperspectral imaging (HSI) classification. To reduce the size of the training data set and to extract features, independent component analysis (ICA) is adopted. The proposed algorithm is evaluated on a real HSI data set. ICA gives the most promising performance: based on the non-Gaussianity assumption of independent sources, it discards features occupying only a small portion of all pixels and distinguishes them from noisy bands, finding the independent components that address the challenge. A DL-based method, which has attracted great attention in the HSI research field, is then adopted. It is evaluated with a sequence prediction architecture that includes a recurrent neural network, the LSTM architecture, together with CNN layers for feature extraction from the input data sets, achieving better accuracy with minimal computational cost.
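The ICA step rests on the non-Gaussianity assumption stated above. A compact NumPy sketch of symmetric FastICA (a standard ICA algorithm; the proposal does not specify which variant it uses) on a two-source toy mixture:

```python
import numpy as np

def whiten(X):
    """Center and whiten (n_samples, n_features) data."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    eigval, eigvec = np.linalg.eigh(cov)
    K = eigvec / np.sqrt(eigval)         # whitening matrix
    return Xc @ K

def fastica(X, n_iter=200):
    """Symmetric FastICA with the tanh contrast (minimal sketch)."""
    Z = whiten(X)
    n = Z.shape[1]
    W = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(Z @ W)
        Gp = 1 - G ** 2
        W_new = Z.T @ G / len(Z) - W * Gp.mean(axis=0)
        # symmetric decorrelation: W (W^T W)^(-1/2) via SVD
        U, _, Vt = np.linalg.svd(W_new, full_matrices=False)
        W = U @ Vt
    return Z @ W

# Toy demo: unmix two independent non-Gaussian sources
rng = np.random.default_rng(42)
s = np.c_[np.sign(rng.standard_normal(2000)), rng.uniform(-1, 1, 2000)]
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix
X = s @ A.T
S_hat = fastica(X)
```

For HSI, the same machinery is applied band-wise: the hundreds of spectral bands play the role of mixed observations, and a few independent components replace them as a compact input to the CNN-LSTM classifier.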


Author(s):  
M. Peng ◽  
W. Wan ◽  
Z. Liu ◽  
K. Di

The multi-source DEMs generated using the images acquired in the descent and landing phase and after landing contain supplementary information, and this makes it possible and beneficial to produce a higher-quality DEM through fusing the multi-scale DEMs. The proposed fusion method consists of three steps. First, source DEMs are split into small DEM patches, then the DEM patches are classified into a few groups by local density peaks clustering. Next, the grouped DEM patches are used for sub-dictionary learning by stochastic coordinate coding. The trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to achieve sparse representation. We use the real DEMs generated from Chang’e-3 descent images and navigation camera (Navcam) stereo images to validate the proposed method. Through the experiments, we have reconstructed a seamless DEM with the highest resolution and the largest spatial coverage among the input data. The experimental results demonstrated the feasibility of the proposed method.
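The final fusion step uses simultaneous orthogonal matching pursuit, which forces a group of signals (here, co-located DEM patches from different sources) to share one support in the dictionary. A minimal NumPy sketch of SOMP on synthetic jointly sparse data (the dictionary and dimensions are illustrative, not the trained sub-dictionaries of the paper):

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: one shared support for all columns of Y.
    D: (m, n) dictionary with unit-norm columns; Y: (m, s) signals."""
    R = Y.copy()
    support = []
    for _ in range(k):
        # atom with the largest total correlation across all signals
        j = int(np.argmax(np.sum(np.abs(D.T @ R), axis=1)))
        if j in support:
            break
        support.append(j)
        C, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ C
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = C
    return X

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
# four signals sharing the same 3-atom support (jointly sparse)
support_true = [5, 40, 90]
coeffs = rng.standard_normal((3, 4))
Y = D[:, support_true] @ coeffs
X_hat = somp(D, Y, 3)
```

The shared support is what makes the fused DEM seamless: all source patches covering the same area are reconstructed from the same atoms, with only their coefficients differing.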


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. KS155-KS172
Author(s):  
Jie Shao ◽  
Yibo Wang ◽  
Yi Yao ◽  
Shaojiang Wu ◽  
Qingfeng Xue ◽  
...  

Microseismic data usually have a low signal-to-noise ratio, necessitating the application of an effective denoising method. Most conventional denoising methods treat each component of multicomponent data separately, e.g., denoising methods with sparse representation. However, microseismic data are often acquired with a 3C receiver, especially in borehole monitoring cases. Independent denoising ignores the relative amplitudes and vector relationships between different components. We have developed a new simultaneous denoising method for 3C microseismic data based on joint sparse representation. The three components are represented by different dictionary atoms; the dictionary can be fixed or adaptive depending on the dictionary learning method that is used. Our method adds an extra time consistency constraint with simultaneous transformation of 3C data. The joint sparse optimization problem is solved using the extended orthogonal matching pursuit. Synthetic microseismic data with a double-couple source mechanism and two field downhole microseismic data sets were used for testing. Independent denoising of 1C data with the fixed dictionary method and simultaneous denoising of 3C data with the fixed dictionary and dictionary learning (3C-DL) methods were compared. The results indicate that among the three methods, the 3C-DL method is the most effective in suppressing random noise, preserving weak signals, and restoring polarization information; this is achieved by combining the time consistency constraint and dictionary learning.
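The time consistency constraint can be sketched as a joint pursuit in which the same temporal atoms must serve all three components at once; only the per-component amplitudes differ. A toy NumPy version with a fixed overcomplete DCT dictionary (the fixed-dictionary option mentioned in the abstract; the atom-selection rule and all parameters here are illustrative assumptions, not the paper's extended OMP):

```python
import numpy as np

def dct_dictionary(n, n_atoms):
    """Overcomplete 1D cosine dictionary with unit-norm atoms."""
    k = np.arange(n_atoms)
    t = np.arange(n)[:, None]
    D = np.cos(np.pi * t * k / n_atoms)
    return D / np.linalg.norm(D, axis=0)

def joint_omp_3c(D, Y, k):
    """Jointly select atoms shared by all three components (a time
    consistency constraint): rank atoms by summed correlation energy
    over components, then least-squares fit all components together."""
    R, support = Y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.sum((D.T @ R) ** 2, axis=1)))
        if j in support:
            break
        support.append(j)
        C, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ C
    return support, Y - R        # shared support and denoised 3C signal

n = 128
D = dct_dictionary(n, 256)
rng = np.random.default_rng(7)
# 3C signal: the same two atoms on all components, different amplitudes
clean = D[:, [10, 33]] @ np.array([[1.0, 0.7, -0.4], [0.5, -1.2, 0.9]])
noisy = clean + 0.05 * rng.standard_normal((n, 3))
support, denoised = joint_omp_3c(D, noisy, 2)
```

Noise that is incoherent across components contributes little to the summed selection criterion, which is why the joint constraint helps preserve weak but polarization-consistent arrivals.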


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. V169-V183 ◽  
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Rushan Wu ◽  
Maocai Jiang ◽  
Yangkang Chen

In recent years, sparse representation is seeing increasing application to fundamental signal and image-processing tasks. In sparse representation, a signal can be expressed as a linear combination of a dictionary (atom signals) and sparse coefficients. Dictionary learning has a critical role in obtaining a state-of-the-art sparse representation. A good dictionary should capture the representative features of the data. The whole signal can be used as training patches to learn a dictionary. However, this approach suffers from high computational costs, especially for a 3D cube. A common method is to randomly select some patches from given data as training patches to accelerate the learning process. However, random selection without any prior information will damage the signal if the training patches are inappropriately chosen (e.g., patches drawn from a simple structure that are then used to recover a complex structure). We have developed a dip-oriented dictionary learning method, which incorporates an estimation of the dip field into the selection procedure of training patches. In the proposed approach, patches with a large dip value are selected for the training. However, it is not easy to estimate an accurate dip field from the noisy data directly. Hence, we first apply a curvelet-transform noise reduction method to remove some fine-scale components that presumably contain mostly random noise, and we then calculate a more reliable dip field from the preprocessed data to guide the patch selection. Numerical tests on synthetic shot records and field seismic image examples demonstrate that the proposed method can obtain a result similar to that of the method trained on the entire data set and a better denoised result than the random selection method. We also compare the performance of the proposed method with that of methods based on curvelet thresholding and rank reduction on a synthetic shot record.
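The patch-selection idea can be illustrated with a crude gradient-based dip estimate (a structure-tensor-style proxy; the paper's curvelet preprocessing and dip estimator are not reproduced here, and all names and parameters below are assumptions):

```python
import numpy as np

def local_dip(patch):
    """Rough dip proxy: dominant gradient orientation of the patch
    from its 2x2 structure tensor; returns |dx/dt| of that direction."""
    gt, gx = np.gradient(patch)            # time- and space-derivatives
    J = np.array([[np.sum(gt * gt), np.sum(gt * gx)],
                  [np.sum(gt * gx), np.sum(gx * gx)]])
    eigval, eigvec = np.linalg.eigh(J)
    v = eigvec[:, -1]                      # direction of max gradient energy
    return abs(v[1] / (v[0] + 1e-12))      # slope magnitude (0 for flat)

def select_training_patches(data, patch=16, keep=0.25):
    """Tile the section into non-overlapping patches and keep the
    fraction with the largest dip values as training patches."""
    coords, patches, dips = [], [], []
    nt, nx = data.shape
    for i in range(0, nt - patch + 1, patch):
        for j in range(0, nx - patch + 1, patch):
            p = data[i:i + patch, j:j + patch]
            coords.append((i, j))
            patches.append(p)
            dips.append(local_dip(p))
    order = np.argsort(dips)[::-1][:max(1, int(keep * len(patches)))]
    return [coords[k] for k in order], [patches[k] for k in order]

# Toy section: flat events on the left half, dipping events on the right
t = np.arange(128)[:, None]
x = np.arange(128)[None, :]
flat = np.sin(2 * np.pi * t / 16) * (x < 64)
dipping = np.sin(2 * np.pi * (t - 0.8 * x) / 16) * (x >= 64)
section = flat + dipping
sel_coords, sel_patches = select_training_patches(section)
```

On this toy section every selected patch comes from the dipping half, which is the behavior the method relies on: structurally rich patches dominate the training set instead of a random sample.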

