Network Sparsification
Recently Published Documents

Total documents: 10 (five years: 5)
H-index: 2 (five years: 0)

2021 ◽ Vol 6 (1) ◽ Author(s): Amin Kaveh, Matteo Magnani, Christian Rohner

Abstract: Sparsification is the process of decreasing the number of edges in a network while preserving one or more topological properties. For probabilistic networks, sparsification has so far been studied only with the goal of preserving the expected degree of the nodes. In this work we introduce a sparsification method that preserves ego betweenness. Moreover, we study the effect of backboning and density on the resulting sparsified networks. Our experimental results show that sparsified versions of high-density networks can be used to efficiently and accurately estimate measures of the original network, with the choice of backboning algorithm only partially affecting the result.
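
The sketch below is an illustration of the quantities involved rather than the authors' method: ego betweenness is simply betweenness centrality computed inside a node's ego network, and a naive probabilistic sparsifier might drop the lowest-probability edges and rescale the survivors so that the total expected degree is roughly preserved. The `prob` edge attribute and the names `ego_betweenness` and `sparsify_probabilistic` are assumptions made for this example.

```python
# Illustrative sketch only (not the method from the paper): ego betweenness on a
# graph, plus a naive sparsifier for a probabilistic graph whose edges carry a
# hypothetical "prob" attribute.
import random
import networkx as nx

def ego_betweenness(G: nx.Graph, node) -> float:
    """Betweenness centrality of `node` computed inside its own ego network."""
    ego = nx.ego_graph(G, node)
    return nx.betweenness_centrality(ego)[node]

def sparsify_probabilistic(G: nx.Graph, keep_fraction: float) -> nx.Graph:
    """Keep the highest-probability edges and rescale the survivors so that the
    total expected degree (sum of edge probabilities) is roughly preserved."""
    edges = sorted(G.edges(data=True), key=lambda e: e[2]["prob"], reverse=True)
    kept = edges[: max(1, int(len(edges) * keep_fraction))]
    scale = (sum(d["prob"] for *_, d in edges) /
             sum(d["prob"] for *_, d in kept))
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from((u, v, {"prob": min(1.0, d["prob"] * scale)})
                     for u, v, d in kept)
    return H

# Tiny usage example on a random probabilistic graph.
G = nx.gnp_random_graph(40, 0.2, seed=0)
for u, v in G.edges():
    G[u][v]["prob"] = random.random()
H = sparsify_probabilistic(G, keep_fraction=0.5)
print("edges:", G.number_of_edges(), "->", H.number_of_edges())
print("expected degree sum:",
      round(2 * sum(d["prob"] for *_, d in G.edges(data=True)), 2), "->",
      round(2 * sum(d["prob"] for *_, d in H.edges(data=True)), 2))
print("ego betweenness of node 0 (topology only):",
      round(ego_betweenness(G, 0), 3), "->", round(ego_betweenness(H, 0), 3))
```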


Author(s): Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Convolutional neural networks (CNNs) have recently been hugely successful, achieving superior accuracy and performance in various imaging applications such as classification, object detection, and segmentation. However, a highly accurate CNN model requires millions of parameters to be trained and stored, and even a slight performance improvement can require significantly more parameters through additional layers and/or more filters per layer. Many of these weight parameters turn out to be redundant, so the original dense model can be replaced by a compressed version obtained by imposing inter- and intra-group sparsity on the layer weights during training. In this paper, we propose a nonconvex family of sparse group lasso that blends nonconvex regularization (e.g., transformed ℓ1, ℓ1−ℓ2, and ℓ0), which induces sparsity in the individual weights, with ℓ2,1 regularization on the output channels of a layer. We apply variable splitting to the proposed regularization to develop an algorithm that consists of two steps per iteration: gradient descent and thresholding. Numerical experiments on various CNN architectures demonstrate the effectiveness of the nonconvex family of sparse group lasso in network sparsification, with test accuracy on par with the current state of the art.
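
As an illustration of the two-step iteration described above (and not the paper's exact algorithm), the sketch below applies one gradient step followed by two thresholding steps to a dummy convolutional weight tensor: element-wise soft thresholding stands in for the nonconvex sparsity-inducing penalties, and group soft thresholding over output channels implements the ℓ2,1 term. All function names and hyperparameter values are assumptions.

```python
# Illustrative sketch only: one "gradient step + thresholding step" iteration of
# the kind described in the abstract, applied with NumPy to a dummy convolutional
# weight tensor.  Plain l1 soft thresholding is used here for clarity; the paper's
# nonconvex penalties (transformed l1, l1-l2, l0) would replace that formula.
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1 (element-wise soft thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def group_threshold(w, t):
    """Proximal operator of t * l2,1 over output channels (axis 0 of w)."""
    out = w.copy()
    for c in range(w.shape[0]):
        norm = np.linalg.norm(w[c])
        out[c] = 0.0 if norm <= t else w[c] * (1.0 - t / norm)
    return out

def sparse_group_step(w, grad, lr=0.1, lam1=0.01, lam2=0.05):
    """One iteration: gradient descent on the data loss, then thresholding."""
    w = w - lr * grad                   # step 1: gradient descent
    w = soft_threshold(w, lr * lam1)    # step 2a: sparsify individual weights
    w = group_threshold(w, lr * lam2)   # step 2b: prune whole output channels
    return w

# Dummy conv weights (out_channels, in_channels, kH, kW) and a fake gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))
grad = rng.normal(size=w.shape)
w_new = sparse_group_step(w, grad)
print("zero weights:", int((w_new == 0).sum()), "of", w_new.size)
print("pruned channels:", int(sum(np.all(w_new[c] == 0) for c in range(8))))
```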


2020 ◽ pp. 1-1 ◽ Author(s): Hao Wang, Xiangyu Yang, Yuanming Shi, Jun Lin

2018 ◽ Vol 14 (4) ◽ pp. 1-73 ◽ Author(s): Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski, Erik Jan Van Leeuwen

Author(s): Aristides Gionis, Polina Rozenshtein, Nikolaj Tatti, Evimaria Terzi
