Proximal Gradient Method
Recently Published Documents

TOTAL DOCUMENTS: 80 (five years: 41)
H-INDEX: 11 (five years: 2)

Electronics, 2021, Vol. 10 (23), pp. 3021
Author(s): Jing Li, Xiao Wei, Fengpin Wang, Jinjia Wang

Inspired by the recent success of the proximal gradient method (PGM) and by recent efforts to develop inertial algorithms, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) that jointly optimizes an ℓ2-norm data-fidelity term and a sparsity term enforcing an ℓ1 penalty. In contrast to other CDL methods, in the proposed approach the dictionary and needles are updated by the PGM with an inertial force. We obtain a novel derivative formula for the needles and the dictionary with respect to the data-fidelity term. At the same time, a gradient-descent step is designed to add an inertial term. The proximal operation applies thresholding to the needles and projects the dictionary onto the unit-norm sphere. We prove convergence of the proposed IPGM algorithm in the backtracking case. Simulation results show that the proposed IPGM achieves better performance than PGM and slice-based methods that possess the same structure and are optimized using the alternating-direction method of multipliers (ADMM).
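To make the iteration concrete, here is a minimal sketch of a generic inertial proximal gradient step with an ℓ1 prox (soft thresholding) on a toy sparse-recovery problem. It illustrates only the extrapolation-then-forward-backward pattern, not the authors' full CDL updates; all names (soft_threshold, inertial_pgm, the extrapolation weight beta) are ours.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def inertial_pgm(grad_f, x0, step, lam, beta=0.9, n_iter=200):
    """Inertial proximal gradient for min_x f(x) + lam * ||x||_1,
    where grad_f returns the gradient of the smooth data-fidelity term f."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)                # inertial extrapolation
        x_prev = x
        x = soft_threshold(y - step * grad_f(y),   # forward (gradient) step ...
                           step * lam)             # ... then backward (prox) step
    return x

# Toy usage: sparse recovery, min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)  # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad f
x_hat = inertial_pgm(lambda x: A.T @ (A @ x - b), np.zeros(100), step, lam=0.1)
```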


Author(s): Jianyu Miao, Tiejun Yang, Jun-Wei Jin, Lijun Sun, Lingfeng Niu, ...

The Broad Learning System (BLS) has proven to be one of the most important techniques for classification and regression in machine learning and data mining. BLS directly collects all the features from the feature and enhancement nodes as input to the output layer, which introduces vast amounts of redundant information and often leads to inefficiency and overfitting. To resolve this issue, we propose a sparse regularization-based compact broad learning system (CBLS) framework that can simultaneously remove redundant nodes and weights. More specifically, we use group sparse regularization based on the [Formula: see text] norm to promote competition between different nodes and thereby remove redundant nodes, and a class of nonconvex sparsity regularization to promote competition between different weights and thereby remove redundant weights. To optimize the resulting problem of the proposed CBLS, we develop an efficient alternating optimization algorithm based on the proximal gradient method and analyze its computational complexity. Finally, extensive experiments on classification tasks are conducted on public benchmark datasets to verify the effectiveness and superiority of the proposed CBLS.
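The node-removal mechanism rests on the proximal operator of the group (row-wise) sparsity penalty, which is block soft thresholding: it zeroes entire rows of the weight matrix, and a zeroed row corresponds to a pruned node. Below is a minimal sketch assuming the group norm is the common ℓ2,1 row norm (the abstract's formula placeholder does not name it); prox_group_l21 is our name.

```python
import numpy as np

def prox_group_l21(W, tau):
    """Prox of tau * sum_i ||W[i, :]||_2 (block soft thresholding).
    Rows whose l2 norm falls below tau are set exactly to zero, so the
    corresponding node contributes nothing and can be pruned."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Usage: rows with small norm vanish entirely
W = np.array([[0.05, -0.02], [1.0, 2.0], [0.01, 0.03]])
print(prox_group_l21(W, tau=0.1))  # first and third rows become all-zero
```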


Author(s): Carolin Natemeyer, Daniel Wachsmuth

Abstract: We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus on control cost functionals that promote sparsity, which includes functionals of $L^p$-type for $p \in [0,1)$. We prove stationarity properties of weak limit points of the method. These properties are weaker than those provided by Pontryagin's maximum principle and weaker than L-stationarity.
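For intuition, the p = 0 endpoint of this family has a simple closed-form proximal map: hard thresholding, which keeps an entry only when keeping it costs less than the penalty saved by zeroing it. A minimal sketch follows (for general p in (0,1) the prox has no elementary closed form); the name prox_l0 and the parameterization are ours.

```python
import numpy as np

def prox_l0(u, alpha, step):
    """Prox of step * alpha * ||u||_0 at u: hard thresholding.
    For each entry, setting it to zero costs u_i**2 / 2 in the quadratic
    term but saves step*alpha in the penalty, so zero wins whenever
    u_i**2 <= 2 * step * alpha."""
    return np.where(u**2 > 2.0 * step * alpha, u, 0.0)

print(prox_l0(np.array([0.1, -0.5, 2.0]), alpha=0.5, step=0.1))
# -> [ 0.  -0.5  2. ]  (threshold sqrt(2*0.05) ~ 0.316)
```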


Author(s): Patrick Knöbelreiter, Thomas Pock

Abstract: In this work, we propose a learning-based method to denoise and refine disparity maps. The proposed variational network arises naturally from unrolling the iterates of a proximal gradient method applied to a variational energy defined in a joint disparity, color, and confidence image space. Our method allows us to learn a robust collaborative regularizer that leverages the joint statistics of the color image, the confidence map, and the disparity map. Due to the variational structure of our method, the individual steps can be easily visualized, making the method interpretable. We can therefore provide interesting insights into how our method refines and denoises disparity maps. To this end, we visualize and interpret the learned filters and activation functions and demonstrate the increased reliability of the predicted pixel-wise confidence maps. Furthermore, the optimization-based structure of our refinement module allows us to compute eigen disparity maps, which reveal structural properties of the module. The efficiency of our method is demonstrated on the publicly available stereo benchmarks Middlebury 2014 and KITTI 2015.
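The unrolling pattern itself is easy to state: each proximal gradient iteration becomes one network layer with its own learnable step size and a learned mapping in place of the prox. The skeleton below shows only this generic pattern with plain callables; the authors' actual network operates on the joint disparity/color/confidence space with learned filters, which we do not reproduce.

```python
import numpy as np

def unrolled_pgm(x0, grad_f, learned_prox, steps):
    """K unrolled PGM iterations = K 'layers'; each layer pairs a gradient
    step (step size steps[k]) with a learned proximal mapping."""
    x = x0
    for t, prox_t in zip(steps, learned_prox):
        x = prox_t(x - t * grad_f(x))   # gradient step, then learned prox
    return x

# Toy usage: 3 layers, with soft thresholding standing in for the learned prox
soft = lambda tau: (lambda x: np.sign(x) * np.maximum(np.abs(x) - tau, 0.0))
grad = lambda x: x - 1.0                # gradient of 0.5 * ||x - 1||^2
out = unrolled_pgm(np.zeros(5), grad, [soft(0.1), soft(0.05), soft(0.01)], [0.5] * 3)
```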


Frequenz, 2021, Vol. 0 (0)
Author(s): Diksha Thakur, Vikas Baghel, Salman Raju Talluri

Abstract: The Capon beamformer has excellent resolution and interference-suppression capability, but its performance deteriorates under practical conditions such as inaccurate or insufficient information about the source, the transmission medium, and the antenna array. Various efforts have been devoted to enhancing its performance, and one effective method is presented here. In this paper, a novel and efficient proximal-gradient-based robust Capon beamformer (PGRCB) is devised; robustness is achieved by remodeling the optimization problem of the standard Capon beamformer (SCB). In the proposed PGRCB, the proximal gradient method is used to formulate a new optimization problem whose solution yields the optimum weights of the robust beamformer. The proposed method achieves better performance than some recent methods in the literature, and its effectiveness is verified by simulation results.
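For reference, the standard Capon (MVDR) weights that the proposed method makes robust have the well-known closed form w = R⁻¹a / (aᴴR⁻¹a), which minimizes the output power wᴴRw subject to the distortionless constraint wᴴa = 1. A minimal sketch follows; the toy covariance and steering vectors are our own illustration, and the paper's proximal-gradient reformulation is not reproduced here.

```python
import numpy as np

def capon_weights(R, a):
    """Standard Capon (MVDR) weights: w = R^{-1} a / (a^H R^{-1} a),
    i.e. minimize w^H R w subject to w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy usage: 8-element half-wavelength ULA, look direction at broadside,
# one strong interferer at 40 degrees plus unit-power noise.
n = 8
steer = lambda deg: np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(deg)))
R = np.eye(n) + 10.0 * np.outer(steer(40.0), steer(40.0).conj())
w = capon_weights(R, steer(0.0))
print(abs(w.conj() @ steer(0.0)))   # distortionless response: equals 1
print(abs(w.conj() @ steer(40.0)))  # interferer direction is suppressed
```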


Author(s): Dmitry Grishchenko, Franck Iutzeler, Jérôme Malick

Many applications in machine learning and signal processing involve nonsmooth optimization problems. This nonsmoothness induces a low-dimensional structure in the optimal solutions. In this paper, we propose a randomized proximal gradient method that harnesses this underlying structure. We introduce two key components: (i) a random subspace proximal gradient algorithm; and (ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored.
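Here is a minimal sketch of the first component, assuming a composite objective f(x) + λ‖x‖₁: each iteration applies a proximal gradient step on a random size-k coordinate subset only. The identification-based sampling of component (ii), which biases the subspaces toward the support identified by the iterates, is omitted; all names are ours.

```python
import numpy as np

def rand_subspace_pgm(grad_f, x0, step, lam, k, n_iter=500, seed=0):
    """Random-subspace proximal gradient for min f(x) + lam * ||x||_1:
    each iteration takes a prox-gradient step on a random size-k
    coordinate subset and leaves the other coordinates untouched."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iter):
        idx = rng.choice(x.size, size=k, replace=False)  # sampled subspace
        # For clarity the full gradient is computed and sliced; a real
        # implementation would evaluate only the needed partial gradient.
        u = x[idx] - step * grad_f(x)[idx]
        x[idx] = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)
    return x
```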

