model aggregation
Recently Published Documents


TOTAL DOCUMENTS

87
(FIVE YEARS 36)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Yuwei Sun ◽  
Hideya Ochiai

Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of life, such as medical image classification and network intrusion detection. However, it necessitates a central parameter server for model aggregation, which brings about delayed model communication and vulnerability to adversarial attacks. A fully decentralized architecture like Swarm Learning allows peer-to-peer communication among distributed nodes, without the central server. One of the most challenging issues in decentralized deep learning is that data owned by each node are usually non-independent and identically distributed (non-IID), causing slow convergence of model training. To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism. In HL, training is performed on the node selected for each round, and that node's trained model is sent to the next selected node at the end of the round. Notably, for the selection, the self-attention mechanism leverages reinforcement learning to observe a node's inner state and the state of its surrounding environment, and to determine which node should be selected to optimize the training. We evaluate our method in various scenarios on two different image classification tasks. The results suggest that HL achieves better performance than standalone learning and greatly reduces both the total training rounds, by 50.8%, and the communication cost, by 74.6%, for decentralized learning with non-IID data.
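The round-by-round hand-off and node selection described above can be sketched as a small simulation. Everything here is illustrative: `score_fn` is a hypothetical stand-in for HL's reinforcement-learning-based self-attention policy, and the node states are plain numbers rather than real model and environment observations.

```python
def select_next_node(current, node_states, score_fn):
    """Score every candidate from the current node's view and pick the best.

    `score_fn(own_state, other_state)` rates how promising a hand-off is;
    in HL proper this role is played by the learned selection policy.
    """
    candidates = [n for n in node_states if n != current]
    return max(candidates,
               key=lambda n: score_fn(node_states[current], node_states[n]))

def run_rounds(node_states, score_fn, start, rounds):
    """Sequential decentralized loop: each round the model is (notionally)
    trained on `current`, then handed to the node the policy selects."""
    path, current = [start], start
    for _ in range(rounds):
        current = select_next_node(current, node_states, score_fn)
        path.append(current)
    return path
```

With three nodes whose states are scalars and a toy policy that simply prefers the candidate with the highest state, `run_rounds({0: 0.1, 1: 0.9, 2: 0.5}, lambda own, other: other, 0, 3)` visits nodes 0, 1, 2, 1.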




2021 ◽  
Author(s):  
Ion Chiţescu ◽  
Mădălina Giurgescu Manea ◽  
Titi Paraschiv

Abstract This paper introduces a mathematical model describing how EEG-type waves are processed in order to characterize the level of anxiety. The electroencephalogram (EEG) is a recording of the electrical activity of the brain. The main frequency bands of the human EEG are Delta, Theta, Alpha (Low Alpha and High Alpha), Beta (Low Beta and High Beta), and Gamma. Psychologists' studies show that there is an interactive relationship between anxiety and two factors in the Big Five theory, namely extraversion and neuroticism. Specialists in psychology state that anxiety is characterized by the Low Alpha, High Alpha, Low Beta, and High Beta waves. In this regard, we developed a mathematical model through which EEG waves are processed in order to determine the level of anxiety. Our main idea is to use the Choquet integral with respect to a suitable monotone measure to characterize the level of anxiety. This measure was obtained from measurements of the EEG waves of 70 subjects, together with the corresponding levels of anxiety of these subjects as established by psychologists. To verify our mathematical model (aggregation tool), we used it to determine the level of anxiety of 10 other subjects, comparing our results with those provided by psychologists; the comparison validated our results.
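The aggregation tool at the heart of this model, the discrete Choquet integral, can be sketched in a few lines. The band names and measure values below are illustrative placeholders, not the measure fitted from the 70 subjects.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` w.r.t. a monotone measure `mu`.

    `scores` maps each criterion (e.g. an EEG band) to a value in [0, 1];
    `mu` maps frozensets of criteria to measure values, with the full set
    measuring 1. Criteria are sorted by ascending score, and each score
    increment is weighted by the measure of the set of criteria whose
    scores are at least that high.
    """
    ordered = sorted(scores.items(), key=lambda kv: kv[1])
    remaining = set(scores)
    total, prev = 0.0, 0.0
    for criterion, value in ordered:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.remove(criterion)
    return total
```

For instance, with two bands scored 0.2 and 0.5 and a measure assigning 1.0 to the pair and 0.6 to the higher-scoring band alone, the integral is 0.2 * 1.0 + 0.3 * 0.6 = 0.38.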


2021 ◽  
Vol 199 ◽  
pp. 108468
Author(s):  
Juncai Liu ◽  
Jessie Hui Wang ◽  
Chenghao Rong ◽  
Yuedong Xu ◽  
Tao Yu ◽  
...  

2021 ◽  
Author(s):  
Chamath Palihawadana ◽  
Nirmalie Wiratunga ◽  
Anjana Wijekoon ◽  
Harsha Kalutarage
Keyword(s):  

2021 ◽  
Author(s):  
Vineeth S

Federated learning is a distributed learning paradigm in which a centralized model is trained on data distributed over a large number of clients, each with unreliable and relatively slow network connections. The client connections typically have limited bandwidth available to them when using networks such as 2G, 3G, or WiFi. As a result, communication often becomes a bottleneck. Currently, the communication between the clients and the server is mostly based on the TCP protocol. In this paper, we explore using the UDP protocol for the communication between the clients and the server. In particular, we develop UDP-based algorithms for gradient aggregation-based federated learning and model aggregation-based federated learning, and we propose methods to construct model updates in the case of packet loss with the UDP protocol. We present a scalable framework for practical federated learning. We conduct experiments over WiFi and observe that the UDP-based protocols can lead to faster convergence than the TCP-based protocol, especially on poor networks. Code is available at the repository: https://github.com/vineeths96/Federated-Learning.
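One simple way a model update can survive UDP packet loss is to send it as sequence-numbered chunks and fill any chunk lost in transit from the previous round's model. This is a minimal sketch under our own assumptions, not the reconstruction methods the paper proposes; both function names are hypothetical.

```python
def packetize(update, chunk_size):
    """Split a flat model update into sequence-numbered chunks,
    ready to be sent as individual UDP datagrams."""
    n = (len(update) + chunk_size - 1) // chunk_size
    return {i: update[i * chunk_size:(i + 1) * chunk_size] for i in range(n)}

def reassemble(received, num_chunks, previous_chunks):
    """Rebuild the update on the receiver: chunks that never arrived are
    filled in from the previous round's model (one possible fallback;
    zero-filling or requesting retransmission are alternatives)."""
    out = []
    for i in range(num_chunks):
        out.extend(received.get(i, previous_chunks[i]))
    return out
```

If chunk 1 of a two-chunk update is dropped, `reassemble({0: [1, 2]}, 2, packetize([9, 9, 9, 9], 2))` yields `[1, 2, 9, 9]`: the first half is this round's data, the second half carries over from the previous round.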


2021 ◽  
Author(s):  
Hui Zeng ◽  
Tongqing Zhou ◽  
Yeting Guo ◽  
Zhiping Cai ◽  
Fang Liu

2021 ◽  
Author(s):  
Qiming Cao ◽  
Xing Zhang ◽  
Yushun Zhang ◽  
Yongdong Zhu
