Adaptive Compression — Recently Published Documents

Total documents: 168 (last five years: 30)
H-index: 13 (last five years: 1)

Author(s): Yannis Foufoulas, Lefteris Sidirourgos, Eleftherios Stamatogiannakis, Yannis Ioannidis
2021
Author(s): Yufei Cui, Ziquan Liu, Qiao Li, Antoni B. Chan, Chun Jason Xue

Sensors, 2021, Vol. 21 (6), p. 1943
Author(s): Bingjun Guo, Yazhi Liu, Chunyang Zhang

Running Deep Neural Networks (DNNs) on distributed Internet of Things (IoT) nodes is a promising way to enhance the performance of IoT systems. However, because IoT nodes have limited computing and communication resources, the communication efficiency of distributed DNN training urgently needs improvement. In this paper, an adaptive compression strategy based on gradient partitioning is proposed to reduce the high communication overhead between nodes during distributed training. First, a neural network is trained to predict the gradient distribution of its parameters. Based on the characteristics of this distribution, the gradient is divided into a key region and a sparse region. Then, combining this partition with the information entropy of the gradient distribution, a suitable threshold is selected for each region, and only gradient values greater than the threshold are transmitted and updated, which reduces traffic and improves distributed training efficiency. By exploiting gradient sparsity, the strategy achieves a maximum compression ratio of 37.1×, improving training efficiency to a certain extent.
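The core idea of the abstract — transmit only the gradient values whose magnitude exceeds a chosen threshold — can be illustrated with a minimal sketch. Note the assumptions: the paper's entropy-based, partition-aware threshold selection is replaced here by a simple top-k magnitude threshold, and the function names (`sparsify`, `keep_ratio`) are hypothetical, not from the paper.

```python
import random

def sparsify(grad, keep_ratio=0.03):
    """Keep only the largest-magnitude gradient values.

    A hedged sketch of threshold-based gradient sparsification: the
    threshold is set to the k-th largest absolute value, so roughly
    keep_ratio of the entries survive and are sent as (index, value)
    pairs; the rest are dropped (or, in practice, accumulated locally).
    """
    k = max(1, int(len(grad) * keep_ratio))
    threshold = sorted((abs(g) for g in grad), reverse=True)[k - 1]
    # Transmit only the entries at or above the threshold.
    sparse = [(i, g) for i, g in enumerate(grad) if abs(g) >= threshold]
    ratio = len(grad) / len(sparse)  # achieved compression ratio
    return sparse, ratio

random.seed(0)
grad = [random.gauss(0.0, 1.0) for _ in range(1000)]
sparse, ratio = sparsify(grad, keep_ratio=0.03)
print(f"kept {len(sparse)} of {len(grad)} values, ratio {ratio:.1f}x")
```

With a 3% keep ratio on 1000 values, about 30 entries are transmitted, giving a compression ratio near 33×, in the same regime as the 37.1× reported in the abstract.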

