IPGM: Inertial Proximal Gradient Method for Convolutional Dictionary Learning

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 3021
Author(s):  
Jing Li ◽  
Xiao Wei ◽  
Fengpin Wang ◽  
Jinjia Wang

Inspired by the recent success of the proximal gradient method (PGM) and recent efforts to develop inertial algorithms, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) that jointly optimizes an ℓ2-norm data-fidelity term and a sparsity term enforcing an ℓ1 penalty. In contrast to other CDL methods, the proposed approach updates both the dictionary and the needles with an inertial term via the PGM. We derive a novel formula for the derivatives of the data-fidelity term with respect to the needles and the dictionary, and design a gradient descent step that incorporates the inertial term. The proximal step applies a thresholding operation to the needles and projects the dictionary atoms onto the unit-norm sphere. We prove convergence of the proposed IPGM algorithm under a backtracking line search. Simulation results show that the proposed IPGM outperforms the PGM and slice-based methods that share the same structure but are optimized using the alternating-direction method of multipliers (ADMM).
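
As a minimal sketch of the update scheme described above (not the authors' implementation), the following Python fragment performs one inertial proximal gradient step on the sparse codes ("needles") for a matrix, non-convolutional dictionary: an inertial extrapolation, a gradient step on the ℓ2 data-fidelity term, soft-thresholding as the ℓ1 proximal operator, and projection of dictionary atoms onto the unit-norm sphere. The function names, the fixed step size, and the inertial weight beta are illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the l1 penalty: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def project_unit_norm(D):
    # Project each dictionary atom (column) onto the unit-norm sphere.
    norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D / norms

def ipgm_needle_step(D, y, x_k, x_prev, step, lam, beta):
    # One inertial proximal gradient update for the needles:
    # minimizes 0.5*||D @ x - y||^2 + lam*||x||_1 with inertial weight beta.
    z = x_k + beta * (x_k - x_prev)          # inertial extrapolation
    grad = D.T @ (D @ z - y)                 # gradient of the data-fidelity term
    return soft_threshold(z - step * grad, step * lam)

# Toy usage on a random problem (purely illustrative).
rng = np.random.default_rng(0)
D = project_unit_norm(rng.standard_normal((64, 128)))
y = rng.standard_normal(64)
x_prev = np.zeros(128)
x = np.zeros(128)
step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
for _ in range(50):
    x, x_prev = ipgm_needle_step(D, y, x, x_prev, step, lam=0.1, beta=0.5), x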

Algorithms ◽  
2018 ◽  
Vol 12 (1) ◽  
pp. 7
Author(s):  
Di Guo ◽  
Zhangren Tu ◽  
Jiechao Wang ◽  
Min Xiao ◽  
Xiaofeng Du ◽  
...  

Images may be corrupted by salt-and-pepper impulse noise during acquisition or transmission. Although promising denoising performance has recently been obtained with sparse representations, restoring high-quality images remains an open challenge. In this work, image sparsity is enhanced with fast multiclass dictionary learning, and both the sparsity regularization and the robust data fidelity are formulated as L0-L0 norm minimizations for salt-and-pepper impulse noise removal. A modified alternating-direction minimization algorithm is then derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art methods in preserving image details and achieves higher objective evaluation scores.
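
The model above couples an L0 sparsity term with an L0 data-fidelity term. As a small illustration of two of its ingredients, rather than the paper's algorithm, the Python sketch below shows a salt-and-pepper corruption model and hard-thresholding, which is the proximal map of an L0 penalty. The function names and the 0/255 pepper/salt values (8-bit images) are assumptions.

import numpy as np

def add_salt_pepper(img, p, rng=None):
    # Corrupt a fraction p of pixels with salt (255) or pepper (0) values.
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape) < p           # which pixels are corrupted
    salt = rng.random(img.shape) < 0.5         # salt vs. pepper, 50/50
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

def hard_threshold(v, tau):
    # Hard-thresholding: keep entries with |v| > tau, zero the rest.
    # This is the proximal map of an L0 penalty with weight tau**2 / 2.
    out = v.copy()
    out[np.abs(v) <= tau] = 0.0
    return out

# Toy usage: corrupt a flat 8-bit image and sparsify a coefficient vector.
clean = np.full((32, 32), 128, dtype=np.uint8)
noisy = add_salt_pepper(clean, p=0.1)
coeffs = hard_threshold(np.random.default_rng(0).standard_normal(100), tau=1.0)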


2015 ◽  
Vol 56 ◽  
pp. 160
Author(s):  
Jueyou Li ◽  
Changzhi Wu ◽  
Zhiyou Wu ◽  
Qiang Long ◽  
Xiangyu Wang

Author(s):  
Patrick Knöbelreiter ◽  
Thomas Pock

Abstract: In this work, we propose a learning-based method to denoise and refine disparity maps. The proposed variational network arises naturally from unrolling the iterates of a proximal gradient method applied to a variational energy defined in a joint disparity, color, and confidence image space. Our method allows us to learn a robust collaborative regularizer that leverages the joint statistics of the color image, the confidence map, and the disparity map. Due to the variational structure of our method, the individual steps can be easily visualized, making the method interpretable. We can therefore provide interesting insights into how our method refines and denoises disparity maps. To this end, we visualize and interpret the learned filters and activation functions and demonstrate the increased reliability of the predicted pixel-wise confidence maps. Furthermore, the optimization-based structure of our refinement module allows us to compute eigen disparity maps, which reveal structural properties of the refinement module. The efficiency of our method is demonstrated on the publicly available stereo benchmarks Middlebury 2014 and KITTI 2015.
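
As a structural sketch of the unrolling idea (not the learned network itself), the Python fragment below alternates a gradient step on a confidence-weighted data term with a placeholder proximal step standing in for the learned collaborative regularizer. The confidence weighting, step size, number of stages, and smoothing placeholder are assumptions made for illustration.

import numpy as np

def data_gradient(x, d0, conf):
    # Gradient of a confidence-weighted fidelity 0.5 * sum(conf * (x - d0)**2).
    return conf * (x - d0)

def learned_prox_placeholder(x, weight=0.25):
    # Stand-in for the learned collaborative regularizer: a fixed 4-neighbour
    # smoothing step. In the actual variational network this map is
    # parameterized by learned filters and activation functions.
    padded = np.pad(x, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    return (1 - weight) * x + weight * neighbours / 4.0

def unrolled_refinement(d0, conf, steps=8, step_size=0.5):
    # Unrolled proximal-gradient refinement of an initial disparity map d0;
    # each loop iteration corresponds to one stage of the unrolled network.
    x = d0.copy()
    for _ in range(steps):
        x = x - step_size * data_gradient(x, d0, conf)   # gradient step on the data term
        x = learned_prox_placeholder(x)                  # proximal / regularizer step
    return x

# Toy usage on random inputs (purely illustrative).
rng = np.random.default_rng(1)
d0 = rng.random((32, 32))      # noisy initial disparity
conf = rng.random((32, 32))    # per-pixel confidence in [0, 1]
refined = unrolled_refinement(d0, conf)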

