Adaptive Aggregation
Recently Published Documents


TOTAL DOCUMENTS: 67 (FIVE YEARS: 17)
H-INDEX: 10 (FIVE YEARS: 2)

2022 · Vol 59 (2) · pp. 102868
Author(s): Zhi Yu, Jiaming Pei, Mingpeng Zhu, Jiwei Zhang, Jinhai Li

2021 · Vol 2021 · pp. 1-13
Author(s): Xianxian Li, Yanxia Gong, Yuan Liang, Li-e Wang

Heterogeneous data and models pose critical challenges for federated learning. The traditional federated learning framework, which trains a global model by exchanging model parameters, has major limitations: it requires all participants to use the same model architecture, and the resulting global model is not guaranteed to make accurate predictions on each participant's personal data. To address these problems, we propose a new federated framework named personalized federated learning with semisupervised distillation (pFedSD), which preserves the privacy of participants' model architectures and improves communication efficiency by transmitting the model's predicted class distributions rather than model parameters. First, the server applies an adaptive aggregation method that down-weights low-quality predictions among the predicted class distributions uploaded by the clients, improving the quality of the aggregated class distribution. The server then sends the aggregated distribution back to the clients, which use it for local training to obtain personalized models. Experiments on the MNIST, FMNIST, and CIFAR10 datasets show that pFedSD outperforms the latest federated distillation algorithms.


2021
Author(s): Jianjun Bao, Haibo Wang, Haixiang Li, Ke Luo, Xiaolin Shen
