multiple clients
Recently Published Documents

Total documents: 59 (five years: 21) · H-index: 9 (five years: 1)

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6791
Author(s):  
Yunji Yang ◽  
Yonggi Hong ◽  
Jaehyun Park

In this paper, efficient gradient updating strategies are developed for federated learning when distributed clients are connected to the server via a wireless backhaul link. Specifically, a common convolutional neural network (CNN) module is shared by all the distributed clients and is trained through federated learning over the wireless backhaul connected to the main server. During the training phase, however, local gradients need to be transferred from multiple clients to the server over the wireless backhaul link and can be distorted by wireless channel fading. To overcome this, an efficient gradient updating method is proposed in which the gradients are combined such that the effective SNR is maximized at the server. In addition, when the backhaul links for all clients simultaneously have small channel gains, the server may receive severely distorted gradient vectors. Accordingly, we also propose a binary gradient updating strategy based on thresholding, in which rounds where all channels have small channel gains are excluded from federated learning. Because each client has limited transmission power, it is more effective to allocate power to the channel slots carrying the most important information than to allocate power equally across all channel resources (equivalently, slots). Accordingly, we also propose an adaptive power allocation method in which each client allocates its transmit power in proportion to the magnitude of its gradient information, since, when training a deep learning model, gradient elements with large values imply large changes of the weights needed to decrease the loss function.
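A minimal sketch, not the authors' implementation, of the ideas described above: each client spreads its transmit-power budget across channel slots in proportion to the magnitudes of its gradient elements, the server combines the noisy received gradients with SNR-weighted averaging, and a round is skipped when every link is below a threshold. All names, noise models, and numbers are illustrative assumptions.

```python
import numpy as np

def allocate_power(gradient, total_power):
    """Split a client's power budget across slots in proportion to |gradient|."""
    mags = np.abs(gradient)
    if mags.sum() == 0:
        return np.full_like(gradient, total_power / gradient.size)
    return total_power * mags / mags.sum()

def snr_weighted_combine(received_grads, snrs):
    """Server-side combining: weight each client's gradient by its link SNR."""
    snrs = np.asarray(snrs, dtype=float)
    weights = snrs / snrs.sum()
    return np.tensordot(weights, np.stack(received_grads), axes=1)

rng = np.random.default_rng(0)
true_grad = rng.normal(size=8)
snrs = [12.0, 3.0, 0.5]        # per-client effective SNRs (illustrative)
snr_threshold = 1.0            # skip the round if every link is below this

received = []
for snr in snrs:
    power = allocate_power(true_grad, total_power=1.0)
    # Gradient distorted by channel fading/noise; weaker slots get noisier.
    noise = rng.normal(scale=1.0 / np.sqrt(snr * power + 1e-9), size=true_grad.shape)
    received.append(true_grad + noise)

if max(snrs) < snr_threshold:
    print("all links weak: exclude this round (binary thresholding)")
else:
    print("combined gradient:", snr_weighted_combine(received, snrs))
```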


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-20
Author(s):  
Kai Hu ◽  
Yaogen Li ◽  
Min Xia ◽  
Jiasheng Wu ◽  
Meixia Lu ◽  
...  

Federated learning (FL) is a distributed machine learning (ML) framework. In FL, multiple clients collaborate to solve traditional distributed ML problems under the coordination of a central server without sharing their local private data with others. This paper surveys FL methods based on machine learning and deep learning. First, it introduces the development process, definition, architecture, and classification of FL, and explains the concept of FL by comparing it with traditional distributed learning. Then, it describes the typical problems of FL that need to be solved. On the basis of classical FL algorithms, several federated machine learning algorithms are briefly introduced, with emphasis on deep learning, and those algorithms are classified and compared. Finally, this paper discusses possible future developments of FL based on deep learning.
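As a concrete illustration of the classical FL setting described above, the sketch below implements a bare-bones federated averaging loop in the spirit of FedAvg: clients compute local updates on private data and the server only ever sees model parameters. The linear model, dataset, and function names are illustrative and not taken from the paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server aggregates client models weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:                    # raw (X, y) never leaves the client
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                              # three clients with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("learned weights:", w)                    # approaches [2, -1]
```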


Author(s):  
Jinjin Xu ◽  
Yaochu Jin ◽  
Wenli Du

Data-driven optimization has found many successful applications in the real world and has received increased attention in the field of evolutionary optimization. Most existing algorithms assume that the data used for optimization are always available on a central server for the construction of surrogates. This assumption, however, may fail to hold when the data must be collected in a distributed way and are subject to privacy restrictions. This paper proposes a federated data-driven evolutionary multi-/many-objective optimization algorithm. To this end, we leverage federated learning for surrogate construction so that multiple clients collaboratively train a radial basis function network as the global surrogate. Then a new federated acquisition function is proposed for the central server to approximate the objective values using the global surrogate and to estimate the uncertainty level of the approximated objective values based on the local models. The performance of the proposed algorithm is verified on a series of multi-/many-objective benchmark problems by comparing it with two state-of-the-art surrogate-assisted multi-objective evolutionary algorithms.
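A rough sketch of the surrogate idea only, not the authors' algorithm: each client fits a local radial basis function model on its own samples with shared centers, the averaged weights act as a global surrogate, and candidates are scored by the global prediction minus an uncertainty term taken from disagreement among the local models. The acquisition form, center choice, and all names are assumptions.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF features with respect to a set of shared centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_local_rbf(X, y, centers):
    """A client fits RBF weights on its private samples (ridge least squares)."""
    Phi = rbf_features(X, centers)
    return np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ y)

def acquisition(cands, centers, local_ws, beta=1.0):
    """Averaged prediction minus an uncertainty bonus estimated from
    disagreement among local models (lower is better, for minimization)."""
    Phi = rbf_features(cands, centers)
    preds = np.stack([Phi @ w for w in local_ws])     # (clients, candidates)
    return preds.mean(0) - beta * preds.std(0)

rng = np.random.default_rng(2)
f = lambda X: ((X - 0.3) ** 2).sum(1)                 # toy objective
centers = rng.uniform(-1, 1, size=(15, 2))            # shared RBF centers
local_ws = []
for _ in range(3):                                    # three clients, private samples
    X = rng.uniform(-1, 1, size=(20, 2))
    local_ws.append(fit_local_rbf(X, f(X), centers))

cands = rng.uniform(-1, 1, size=(200, 2))             # candidate solutions to screen
best = cands[np.argmin(acquisition(cands, centers, local_ws))]
print("selected candidate:", best)                    # near [0.3, 0.3]
```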


2021 ◽  
Vol 12 (4) ◽  
pp. 1-23
Author(s):  
Anbu Huang ◽  
Yang Liu ◽  
Tianjian Chen ◽  
Yongkai Zhou ◽  
Quan Sun ◽  
...  

From facial recognition to autonomous driving, Artificial Intelligence (AI) will transform the way we live and work over the next couple of decades. Existing AI approaches for urban computing suffer from various challenges, including synchronizing and processing the vast amount of data generated by edge devices, as well as protecting the privacy and security of individual users, including their biometrics, locations, and itineraries. Traditional centralized approaches require data from each organization to be uploaded to a central database, which may be prohibited by data protection acts such as GDPR and CCPA. To decouple model training from the need to store the data in the cloud, a new training paradigm called Federated Learning (FL) has been proposed. FL enables multiple devices to collaboratively learn a shared model while keeping the training data local on the devices, which can significantly mitigate the risk of privacy leakage. However, in urban computing scenarios, data are often communication-heavy, high-frequency, and asynchronous, posing new challenges to FL implementation. To handle these challenges, we propose a new hybrid federated learning architecture called StarFL. By combining a Trusted Execution Environment (TEE), Secure Multi-Party Computation (MPC), and (Beidou) satellites, StarFL enables safe key distribution, encryption, and decryption, and provides a verification mechanism for each participant to ensure the security of the local data. In addition, StarFL provides accurate timestamp matching to facilitate the synchronization of multiple clients. These improvements make StarFL more applicable to security-sensitive scenarios in the next generation of urban computing.
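The abstract gives no implementation details, so the sketch below only illustrates the timestamp-matching idea in the plainest possible form: asynchronously arriving client updates tagged with a shared (e.g., satellite-derived) timestamp are grouped into synchronization windows before aggregation. The window length, field names, and data layout are all assumptions.

```python
from collections import defaultdict

def group_by_window(updates, window_ms=500):
    """Bucket asynchronously arriving client updates into aggregation rounds
    based on a shared timestamp (stand-in for a satellite time source)."""
    rounds = defaultdict(list)
    for u in updates:
        rounds[u["timestamp_ms"] // window_ms].append(u)
    return dict(sorted(rounds.items()))

updates = [
    {"client": "cam-01", "timestamp_ms": 1010, "grad_norm": 0.42},
    {"client": "cam-02", "timestamp_ms": 1320, "grad_norm": 0.51},
    {"client": "bus-17", "timestamp_ms": 1740, "grad_norm": 0.37},
]

for window, batch in group_by_window(updates).items():
    clients = [u["client"] for u in batch]
    print(f"round window {window}: aggregate updates from {clients}")
```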


2021 ◽  
Author(s):  
Mefta Sadat

The same defect may be rediscovered by multiple clients, causing unplanned outages and reducing customer satisfaction. One solution is forcing clients to install a fix for every defect. However, this approach is economically infeasible, because it requires extra resources and increases downtime. Moreover, it may lead to regression of functionality, as new fixes may break existing functionality. Our goal is to find a way to proactively predict defects that a client may rediscover in the future. We build a predictive model by leveraging recommender algorithms. We evaluate our approach with rediscovery data extracted from four groups of large-scale open source software projects (namely, Eclipse, Gentoo, KDE, and Libre) and one enterprise software product. The datasets contain information about approximately 1.33 million unique defect reports over a period of 18 years (1999-2017). Our proposed approach may help in understanding the defect rediscovery phenomenon, leading to improvements in software quality and customer satisfaction.
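A minimal, hypothetical sketch of the recommender-style formulation described above: past rediscoveries form a binary client-by-defect matrix, and item-based cosine similarity ranks the defects a client is most likely to rediscover next. The matrix contents and the choice of item-based collaborative filtering are illustrative assumptions, not the thesis's exact model.

```python
import numpy as np

# Rows: clients, columns: defects; 1 means the client already rediscovered it.
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between defect columns."""
    norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-12
    Rn = R / norms
    return Rn.T @ Rn

def recommend(R, client, top_k=2):
    """Score unseen defects by similarity to defects the client already hit."""
    sim = item_similarity(R)
    scores = R[client] @ sim                 # aggregate similarity to known defects
    scores[R[client] > 0] = -np.inf          # do not re-recommend known defects
    return np.argsort(scores)[::-1][:top_k]

print("defects client 1 is likely to rediscover:", recommend(R, client=1))
```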


Author(s):  
Sheyda Kiani Mehr ◽  
Prasad Jogalekar ◽  
Deep Medhi

Objective Quality of Experience (QoE) for Dynamic Adaptive Streaming over HTTP (DASH) video streaming has received considerable attention in recent years. While there are a number of objective QoE models, a limitation of the current models is that the QoE is provided only after the entire video has been delivered; also, the models work on a per-client basis. For content service providers, it is important to monitor the observed QoE to understand ensemble performance during streaming, such as for live events or when multiple clients stream concurrently. For this purpose, we propose Moving QoE (MQoE, in short) models to measure QoE periodically during video streaming for multiple simultaneous clients. Our first model, MQoE_RF, is a nonlinear model considering the bitrate gain and the sensitivity to bitrate switching frequency. Our second model, MQoE_SD, is a linear model that focuses on capturing the standard deviation of the bitrate switching magnitude among segments along with the bitrate gain. We then study the effectiveness of both models in a multi-user mobile client environment, with mobility patterns based on traces from a train, a car, and a ferry. We implemented the study on the GENI testbed. Our study shows that our MQoE models are more accurate in capturing QoE behavior during transmission than static QoE models. Furthermore, our MQoE_RF model captures the sensitivity to bitrate switching frequency more effectively, while MQoE_SD captures the sensitivity to the magnitude of the bitrate switching. Either model is suitable for content service providers to monitor video streaming, depending on their preference.
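The abstract does not reproduce the MQoE_RF and MQoE_SD formulas, so the sketch below only illustrates the kind of per-window, per-client computation they describe: a score that rewards average bitrate and penalizes switching frequency (MQoE_RF-like) or the spread of switching magnitudes (MQoE_SD-like). The functional forms, weights, and traces are placeholders, not the published models.

```python
import numpy as np

def mqoe_rf_like(bitrates, alpha=1.0, beta=0.5):
    """Placeholder 'bitrate gain minus switching-frequency penalty' for one window."""
    b = np.asarray(bitrates, dtype=float)
    switches = np.count_nonzero(np.diff(b))            # how often the bitrate changed
    return alpha * b.mean() - beta * (switches / max(len(b) - 1, 1)) * b.mean()

def mqoe_sd_like(bitrates, alpha=1.0, gamma=1.0):
    """Placeholder 'bitrate gain minus switching-magnitude spread' for one window."""
    b = np.asarray(bitrates, dtype=float)
    return alpha * b.mean() - gamma * np.std(np.diff(b))

# Per-client bitrate traces (Mbps per segment), scored every 4-segment window.
clients = {"train-user": [4, 4, 2, 4, 1, 4, 4, 2], "car-user": [3, 3, 3, 3, 3, 3, 2, 3]}
for name, trace in clients.items():
    for start in range(0, len(trace), 4):
        window = trace[start:start + 4]
        print(name, "window", start // 4,
              "RF-like:", round(mqoe_rf_like(window), 2),
              "SD-like:", round(mqoe_sd_like(window), 2))
```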


In software engineering, software maintenance is the process of correcting, updating, and improving software products after they have been handed over to the customer. Through offshore software maintenance outsourcing (OSMO), clients can gain advantages such as reduced cost, saved time, and improved quality. In most cases, the OSMO vendor generates considerable revenue. However, selecting an appropriate proposal among multiple clients is one of the critical problems for OSMO vendors. The purpose of this paper is to suggest an effective machine learning technique that OSMO vendors can use to assess or predict an OSMO client's proposal. The dataset is generated through a survey of OSMO vendors working in a developing country. The results showed that supervised learning-based classifiers, namely Naïve Bayes, SMO, and Logistic, achieved 69.75%, 81.81%, and 87.27% testing accuracy, respectively. This study concludes that supervised learning is the most suitable technique to predict an OSMO client's proposal.
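A hedged sketch of the kind of supervised comparison the study reports, using scikit-learn stand-ins: Gaussian Naïve Bayes, an RBF-kernel SVM (Weka's SMO is an SVM trainer), and logistic regression evaluated on held-out data. The synthetic dataset is a fabricated placeholder for the vendor survey; only the evaluation pattern mirrors the paper.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for OSMO proposal features (budget, duration, domain, ...).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SMO-style SVM": SVC(kernel="rbf"),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: testing accuracy = {model.score(X_te, y_te):.2%}")
```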


2021 ◽  
Vol 13 (1) ◽  
pp. 1-17
Author(s):  
Zyanya Cordoba ◽  
Riddhi Rana ◽  
Giovanna Rendon ◽  
Justin Thunell ◽  
Abdelrahman Elleithy

The mass adoption of WiFi (IEEE 802.11) technology has increased the number of devices simultaneously attempting to use high-bandwidth applications, such as video streaming, in a finite portion of the frequency spectrum. This growth can be seen in the deployment of highly dense wireless environments, in which performance can suffer from intensified challenges such as co-channel interference (CCI). Mechanisms are in place to try to avoid sources of interference from non-WiFi devices, but CCI caused by legitimate WiFi traffic can be equally or even more disruptive, and although some tools and protocols try to address CCI, they are no longer sufficient for this type of environment. Therefore, this paper investigates the effect that transmit power and direction have on CCI in a high-density environment consisting of multiple access points (APs) and multiple clients. We suggest improvements over existing publicly documented power control algorithms and techniques by proposing a cooperative approach that incorporates feedback from the receiver to the transmitter, allowing the transmitter to reduce its power level where possible. This minimizes the range of CCI for nearby clients without compromising coverage for the most distant ones.
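A simplified sketch, not the paper's algorithm, of the cooperative idea: an AP lowers its transmit power step by step while receiver feedback confirms that every associated client's RSSI stays above a coverage threshold, shrinking the CCI footprint without dropping distant clients. The path-loss model, thresholds, and step size are assumptions.

```python
import math

def rssi_dbm(tx_power_dbm, distance_m, path_loss_exp=3.0, ref_loss_db=40.0):
    """Very rough log-distance path-loss model (assumed, for illustration only)."""
    return tx_power_dbm - ref_loss_db - 10 * path_loss_exp * math.log10(max(distance_m, 1.0))

def tune_ap_power(client_distances_m, start_dbm=20.0, min_dbm=1.0,
                  step_db=1.0, coverage_threshold_dbm=-70.0):
    """Lower AP power while every associated client still reports adequate RSSI."""
    power = start_dbm
    while power - step_db >= min_dbm:
        candidate = power - step_db
        # Receiver feedback: would all clients still be covered at the lower power?
        if all(rssi_dbm(candidate, d) >= coverage_threshold_dbm for d in client_distances_m):
            power = candidate
        else:
            break
    return power

print("chosen AP power (dBm):", tune_ap_power([5.0, 12.0, 18.0]))
```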


Author(s):  
Ahmed El-Yahyaoui ◽  
Mohamed Daifr Ech-Cherif El Kettani

Fully homomorphic encryption (FHE) schemes are a type of encryption algorithm dedicated to data security in cloud computing. They allow computations to be performed over ciphertexts. In addition to this characteristic, a verifiable FHE scheme allows an end user to verify the correctness of the computations performed by a cloud server on his encrypted data. Since FHE schemes are known to be greedy in terms of processing consumption and slow in runtime execution, it is very useful to look for techniques and tools that improve FHE performance. Parallelizing computations is among the best tools available for FHE improvement. Batching is a kind of parallelization of computations; when applied to an FHE scheme, it gives the scheme the capacity to encrypt and homomorphically process a vector of plaintexts as a single ciphertext. This is used in cloud computing to perform a known function on several ciphertexts for multiple clients at the same time. The advantage here lies in optimizing resources on the cloud side and improving the quality of services provided by cloud computing. In this article, the authors present a detailed survey of the FHE improvement techniques in the literature and apply the batching technique to a promising verifiable FHE (VFHE) scheme recently presented by the authors at the WINCOM17 conference.
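To make the batching idea concrete without depending on any specific FHE library, the sketch below simulates SIMD slots in plaintext: values from several clients are packed into one vector, a single slot-wise evaluation of the agreed function serves all of them, and each client reads back only its own slot. In a real batched FHE scheme the packed vector would be one ciphertext and the slot-wise operations would be homomorphic; nothing here performs actual encryption.

```python
import numpy as np

def pack(values):
    """Pack one value per client into the 'slots' of a single vector
    (stand-in for encoding a plaintext vector into one ciphertext)."""
    return np.array(values, dtype=np.int64)

def eval_poly_slotwise(slots, coeffs):
    """Evaluate the agreed polynomial f(x) = c0 + c1*x + c2*x^2 on every slot
    at once: one evaluation serving all packed clients."""
    c0, c1, c2 = coeffs
    return c0 + c1 * slots + c2 * slots * slots

clients = {"alice": 3, "bob": 7, "carol": 11}
ct = pack(list(clients.values()))           # one packed 'ciphertext' for all clients
result = eval_poly_slotwise(ct, (1, 2, 1))  # f(x) = (x + 1)^2, computed once
for slot, name in enumerate(clients):
    print(name, "gets", result[slot])       # each client unpacks only its own slot
```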

