Sparse Group Lasso
Recently Published Documents


TOTAL DOCUMENTS: 80 (FIVE YEARS: 41)
H-INDEX: 13 (FIVE YEARS: 2)

Author(s): Wei Liu, Hanwen Xu, Cheng Fang, Lei Yang, Weidong Jiao

2021
Author(s): Changkun Han, Wei Lu, Pengxin Wang, Liuyang Song, Huaqing Wang

2021
Vol 2021, pp. 1-13
Author(s): Lei Wang, Juntao Li, Juanfang Liu, Mingming Chang

To address the challenges that group Lasso penalty methods face in multicancer microarray data analysis, such as the need to divide genes into groups in advance and limited biological interpretability, we propose a robust adaptive multinomial regression with sparse group Lasso penalty (RAMRSGL) model. Adopting an overlapping clustering strategy, affinity propagation clustering is employed to obtain the gene groups of each cancer subtype, which explores the group structure within each subtype and merges the groups across all subtypes. In addition, data-driven weights based on noise are added to the sparse group Lasso penalty and combined with the multinomial log-likelihood function to perform multiclass classification and adaptive group gene selection simultaneously. Experimental results on acute leukemia data verify the effectiveness of the proposed method.
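The penalty structure described in this abstract can be sketched as follows. The snippet is a minimal illustration, assuming a proximal-gradient treatment of the penalized multinomial model; the function names, the hyperparameters lam1 and lam2, and the placeholder group weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy/scikit-learn sketch of the ingredients described above; this is
# not the authors' released code. Group formation uses affinity propagation on
# the gene dimension, and the data-driven weights (w_g below) are left as an
# input placeholder rather than the paper's specific noise-based scheme.
import numpy as np
from sklearn.cluster import AffinityPropagation

def group_genes(X):
    """Cluster genes (columns of X, samples x genes) with affinity propagation.

    Returns a list of index arrays, one per gene group.
    """
    labels = AffinityPropagation(random_state=0).fit(X.T).labels_
    return [np.where(labels == g)[0] for g in np.unique(labels)]

def prox_sparse_group_lasso(W, groups, lam1, lam2, group_weights, step):
    """One proximal (thresholding) step for the sparse group Lasso penalty.

    W is the genes x classes coefficient matrix of the multinomial model.
    """
    # element-wise soft thresholding enforces within-group (individual gene) sparsity
    V = np.sign(W) * np.maximum(np.abs(W) - step * lam1, 0.0)
    # block-wise shrinkage enforces group sparsity, scaled by the adaptive weights
    for g, w_g in zip(groups, group_weights):
        norm_g = np.linalg.norm(V[g, :])
        if norm_g > 0.0:
            V[g, :] *= max(0.0, 1.0 - step * lam2 * w_g / norm_g)
    return V
```

In such a proximal scheme, each iteration would take a gradient step on the multinomial log-likelihood and then apply prox_sparse_group_lasso, so groups whose adaptive weight is large are shrunk more aggressively and can drop out entirely.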


Author(s): Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Convolutional neural networks (CNNs) have recently achieved superior accuracy and performance in various imaging applications, such as classification, object detection, and segmentation. However, a highly accurate CNN model requires millions of parameters to be trained and stored, and even a slight improvement in performance can demand significantly more parameters from additional layers and/or more filters per layer. Many of these weight parameters turn out to be redundant, so the original dense model can be replaced by a compressed version obtained by imposing inter- and intra-group sparsity on the layer weights during training. In this paper, we propose a nonconvex family of sparse group lasso that blends nonconvex regularization (e.g., transformed ℓ1, ℓ1−ℓ2, and ℓ0), which induces sparsity on the individual weights, with ℓ2,1 regularization on the output channels of a layer. We apply variable splitting to the proposed regularization to develop an algorithm whose iterations consist of two steps: gradient descent and thresholding. Numerical experiments on various CNN architectures demonstrate the effectiveness of the nonconvex family of sparse group lasso in network sparsification, with test accuracy on par with the current state of the art.
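The two-step iteration (a gradient step on the smooth data loss followed by thresholding induced by the regularizer) can be sketched as below. For brevity, the element-wise step uses ordinary ℓ1 soft-thresholding in place of the nonconvex penalties (transformed ℓ1, ℓ1−ℓ2, ℓ0) the paper blends, and the ℓ2,1 term is applied per output channel of a convolutional weight tensor; all names and parameters are illustrative assumptions rather than the authors' algorithm.

```python
# Illustrative NumPy sketch of one training iteration, assuming a convolutional
# weight tensor W of shape (out_channels, in_channels, kH, kW). The element-wise
# l1 soft threshold stands in for the paper's nonconvex penalties.
import numpy as np

def threshold_step(W, step, lam_elem, lam_group):
    """Thresholding step: element-wise sparsity plus output-channel (l2,1) sparsity."""
    # element-wise soft thresholding on individual weights
    W = np.sign(W) * np.maximum(np.abs(W) - step * lam_elem, 0.0)
    # group soft thresholding: shrink each output channel as a block
    for c in range(W.shape[0]):
        norm_c = np.linalg.norm(W[c])
        if norm_c > 0.0:
            W[c] = W[c] * max(0.0, 1.0 - step * lam_group / norm_c)
    return W

def train_step(W, grad_loss, step, lam_elem, lam_group):
    """One iteration: gradient descent on the data loss, followed by thresholding."""
    W = W - step * grad_loss(W)
    return threshold_step(W, step, lam_elem, lam_group)
```

Channels whose block norm falls below the group threshold are zeroed out, which is what removes whole filters and yields the inter-group (structured) sparsity, while the element-wise step prunes individual weights within the surviving channels.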


2021
Vol 6 (58), pp. 3024
Author(s): Adam Richie-Halford, Manjari Narayan, Noah Simon, Jason Yeatman, Ariel Rokem

2021
Vol 36 (1), pp. A-JB1_1-11
Author(s): Yasutoshi Ida, Yasuhiro Fujiwara, Hisashi Kashima
