Nonconvex Regularization
Recently Published Documents


TOTAL DOCUMENTS

50
(FIVE YEARS 27)

H-INDEX

10
(FIVE YEARS 3)

2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Zhijun Luo ◽  
Zhibin Zhu ◽  
Benxin Zhang

This paper proposes a nonconvex model (called LogTVSCAD) for deblurring images corrupted by impulsive noise, using the log-function penalty as the regularizer and the smoothly clipped absolute deviation (SCAD) function as the data-fitting term. The proposed nonconvex model can effectively overcome the poor performance of the classical TVL1 model under high-level impulsive noise. A difference-of-convex-functions algorithm (DCA) is proposed to solve the nonconvex model, and the convex subproblem within the DCA is solved by the alternating direction method of multipliers (ADMM). Global convergence is discussed based on the Kurdyka–Łojasiewicz property. Experimental results show the advantages of the proposed nonconvex model over existing models.
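The abstract does not reproduce the DCA update, but the difference-of-convex idea it invokes can be illustrated on a toy problem. The sketch below (plain Python/NumPy; all names and parameter values are illustrative, not from the paper) applies DCA to elementwise denoising with the same log penalty: the penalty is split as a convex l1 term minus a convex remainder, the remainder is linearized at the current iterate, and each convex subproblem reduces to soft thresholding. The paper itself solves its subproblem with ADMM on the full deblurring model.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding: prox of t * |x|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dca_log_denoise(y, lam=0.5, eps=0.1, iters=50):
    """Minimize 0.5*(x - y)**2 + lam*log(1 + |x|/eps) elementwise by DCA.

    DC split: g(x) = 0.5*(x - y)**2 + (lam/eps)*|x|   (convex)
              h(x) = (lam/eps)*|x| - lam*log(1 + |x|/eps)  (convex)
    Each DCA step linearizes h at the current iterate, leaving a convex
    subproblem whose solution is a soft-threshold step.
    """
    x = y.copy()
    for _ in range(iters):
        # a subgradient of h at x (taking 0 at x = 0 is valid)
        s = np.sign(x) * (lam / eps - lam / (eps + np.abs(x)))
        # argmin g(x) - s*x  has the closed form below
        x = soft(y + s, lam / eps)
    return x

y = np.array([3.0, 0.05, -1.2])
print(dca_log_denoise(y))  # large entries survive, tiny ones are zeroed
```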


2021 ◽  
pp. 1-23
Author(s):  
Xiao-Juan Yang ◽  
Jin Jing

In this paper, we propose a variational model that combines a wavelet tight frame with nonconvex shrinkage penalties for compressed sensing recovery. We address the proposed optimization problem by introducing an adjustable parameter and a firm thresholding operation. Numerical results show that the proposed method outperforms some existing methods in terms of convergence speed and reconstruction error. Mathematics Subject Classification: 68U10, 65K10, 90C25, 62H35. Keywords: Compressed Sensing, Nonconvex, Firm thresholding, Wavelet tight frame.
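The firm thresholding operation named in the abstract is a standard operator (due to Gao and Bruce) that interpolates between soft and hard thresholding; a minimal NumPy sketch of its usual form follows (the paper's exact parameterization may differ):

```python
import numpy as np

def firm_threshold(x, lam, mu):
    """Firm thresholding with 0 < lam < mu.

    Zeroes coefficients below lam, keeps coefficients above mu intact,
    and shrinks the ones in between linearly. As mu -> inf it tends to
    soft thresholding; as mu -> lam it tends to hard thresholding.
    """
    ax = np.abs(x)
    return np.where(
        ax <= lam, 0.0,
        np.where(ax <= mu, np.sign(x) * mu * (ax - lam) / (mu - lam), x),
    )

coeffs = np.array([-2.5, 0.3, 1.1, 0.8])
print(firm_threshold(coeffs, lam=0.5, mu=2.0))
# [-2.5, 0.0, 0.8, 0.4]: kept, killed, and two shrunken mid-range values
```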


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Shuo Wang ◽  
Zhibin Zhu ◽  
Ruwen Zhao ◽  
Benxin Zhang

Hyperspectral images (HSIs) can deliver more reliable representations of real scenes than traditional images and enhance the performance of many computer vision tasks. In real cases, however, an HSI is often degraded by a mixture of noise types, including Gaussian noise and impulse noise. In this paper, we propose a logarithmic nonconvex regularization model for HSI mixed-noise removal. The logarithmic penalty function approximates the tensor fibered rank more accurately than convex surrogates and shrinks singular values of different magnitudes differently. An alternating direction method of multipliers (ADMM) algorithm is presented to solve the optimization problem, and each subproblem within ADMM is proven to have a closed-form solution. Experimental results demonstrate the effectiveness of the proposed method.
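The abstract states that each ADMM subproblem has a closed-form solution. For the logarithmic low-rank term, the relevant closed form is a singular-value shrinkage in which each singular value solves a scalar problem with the log penalty, via the roots of a quadratic. The sketch below shows the matrix-slice analogue in NumPy; the paper works with the tensor fibered rank, so this illustrates only the scalar prox, and the function names and the choice eps = 1.0 are mine:

```python
import numpy as np

def prox_log_penalty(sigma, lam, eps):
    """argmin_{x >= 0} 0.5*(x - sigma)**2 + lam*log(eps + x).

    Stationary points solve x**2 + (eps - sigma)*x + (lam - sigma*eps) = 0;
    we take the larger root when it exists and compare against x = 0.
    """
    disc = (sigma + eps) ** 2 - 4.0 * lam
    cand = np.maximum(((sigma - eps) + np.sqrt(np.maximum(disc, 0.0))) / 2.0, 0.0)

    def q(x):  # the scalar objective, evaluated elementwise
        return 0.5 * (x - sigma) ** 2 + lam * np.log(eps + x)

    x = np.zeros_like(sigma)
    better = (disc >= 0) & (q(cand) <= q(np.zeros_like(sigma)))
    x[better] = cand[better]
    return x

def log_svt(M, lam, eps=1.0):
    """Singular-value shrinkage with the log penalty (matrix case)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(prox_log_penalty(s, lam, eps)) @ Vt

M = np.random.default_rng(0).standard_normal((8, 6))
print(np.linalg.matrix_rank(log_svt(M, lam=2.0), tol=1e-6))  # rank drops
```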


Author(s):  
Ming Han ◽  
Jing Qin Wang ◽  
Jing Tao Wang ◽  
Jun Ying Meng

The energy functionals of the CV and LBF models take a single, fixed form, so the evolving curve easily becomes trapped in a local minimum, which leads to inaccurate segmentation of images with nonuniform grayscale and nonsmooth edges. The proposed algorithm, based on local entropy fitting under the constraint of a nonconvex regularization term, addresses these problems. In this algorithm, global information and local entropy are fitted jointly to keep the segmentation from falling into a local optimum, and a nonconvex regularization term is introduced as a constraint to preserve edge smoothness. First, global information is used to evolve an approximate contour of the segmentation target. Then, a local energy functional built on local entropy information is constructed to prevent the segmentation from falling into a local minimum and to segment the image precisely. Finally, nonconvex regularization terms in the energy functional preserve the smoothness of edge information during segmentation. The experimental results clearly indicate that the new algorithm can effectively resist noise, precisely segment images with nonuniform grayscale, and reach the global optimum.
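The abstract does not give the fitting energy explicitly; the building block it relies on is a local entropy map of the image, which is low in flat regions and high near texture and edges. A plain (and deliberately unoptimized) NumPy sketch of such a map, with hypothetical parameter choices, follows:

```python
import numpy as np

def local_entropy(img, radius=3, bins=16):
    """Shannon entropy of the gray-level histogram in a (2r+1)x(2r+1)
    window around each pixel. Flat regions give low entropy; textured
    or edge regions give high entropy."""
    # quantize to a small number of gray levels for stable histograms
    lo, hi = img.min(), img.max()
    q = np.floor(bins * (img - lo) / (hi - lo + 1e-12)).clip(0, bins - 1)
    pad = np.pad(q, radius, mode="reflect")
    H = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            p = np.bincount(win.astype(int).ravel(), minlength=bins) / win.size
            p = p[p > 0]
            H[i, j] = -(p * np.log(p)).sum()
    return H

img = np.random.default_rng(0).random((32, 32))
print(local_entropy(img).mean())
```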


Author(s):  
Kevin Bui ◽  
Fredrick Park ◽  
Shuai Zhang ◽  
Yingyong Qi ◽  
Jack Xin

Convolutional neural networks (CNNs) have recently been hugely successful, achieving superior accuracy and performance in imaging applications such as classification, object detection, and segmentation. However, a highly accurate CNN model requires millions of parameters to be trained and stored, and even a slight performance gain can demand significantly more parameters through additional layers or more filters per layer. In practice, many of these weight parameters turn out to be redundant, so the original dense model can be replaced by a compressed version obtained by imposing inter- and intra-group sparsity on the layer weights during training. In this paper, we propose a nonconvex family of sparse group lasso that blends nonconvex regularization (e.g., transformed ℓ1, ℓ1−ℓ2, and ℓ0), which induces sparsity on the individual weights, with ℓ2,1 regularization on the output channels of a layer. We apply variable splitting to the proposed regularization to develop an algorithm consisting of two steps per iteration: gradient descent and thresholding. Numerical experiments on various CNN architectures showcase the effectiveness of the nonconvex family of sparse group lasso in network sparsification, with test accuracy on par with the current state of the art.
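The two-step iteration described in the abstract (a gradient step on the training loss followed by thresholding) has a simple prototype for the convex sparse group lasso, whose proximal map is known to be soft thresholding followed by group (channel-wise) shrinkage. The NumPy sketch below shows that prototype with plain ℓ1 in place of the paper's nonconvex penalties (transformed ℓ1, ℓ1−ℓ2, ℓ0), whose thresholding maps differ; all names and values are illustrative:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding: prox of t * |x|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def group_soft(W, t):
    """Row-wise l2,1 shrinkage: each row holds one output channel's weights."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def prox_grad_step(W, grad, lr, lam1, lam2):
    """One 'gradient descent + thresholding' iteration of the kind the
    abstract describes: a loss-gradient step, then the prox of
    lam1*||W||_1 + lam2*||W||_{2,1} (soft threshold, then group shrink)."""
    W = W - lr * grad                # gradient step on the training loss
    W = soft(W, lr * lam1)           # elementwise sparsity on weights
    W = group_soft(W, lr * lam2)     # channel-level (group) sparsity
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 10))     # 4 output channels, 10 weights each
W = prox_grad_step(W, grad=0.1 * W, lr=0.5, lam1=0.2, lam2=1.0)
print(np.linalg.norm(W, axis=1))     # per-channel norms after one step
```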


2021 ◽  
Author(s):  
Duo Qiu ◽  
Minru Bai ◽  
Michael Ng ◽  
Xiongjun Zhang
