Lyapunov-based Optimization of Edge Resources for Energy-Efficient Adaptive Federated Learning

2021 ◽  
Author(s):  
Claudio Battiloro ◽  
Paolo Di Lorenzo ◽  
Mattia Merluzzi ◽  
Sergio Barbarossa

The aim of this paper is to propose a novel dynamic resource allocation strategy for energy-efficient adaptive federated learning at the wireless network edge, with latency and learning performance guarantees. We consider a set of devices collecting local data and uploading processed information to an edge server, which runs stochastic gradient-based algorithms to perform continuous learning and adaptation. Hinging on Lyapunov stochastic optimization tools, we dynamically optimize radio parameters (e.g., set of transmitting devices, transmit powers, bits, and rates) and computation resources (e.g., CPU cycles at devices and at server) in order to strike the best trade-off between power, latency, and performance of the federated learning task. The framework admits both a model-based implementation, where the learning performance metrics are available in closed-form, and a data-driven approach, which works with online estimates of the learning performance of interest. The method is then customized to the case of federated least mean squares (LMS) estimation, and federated training of deep convolutional neural networks. Numerical results illustrate the effectiveness of our strategy to perform energy-efficient, low-latency, adaptive federated learning at the wireless network edge.
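To make the drift-plus-penalty idea behind such Lyapunov-based resource allocation concrete, the sketch below implements a toy per-slot scheduling rule: a virtual queue tracks violations of an average-latency target, and each device transmits only when the weighted power cost is outweighed by the latency it saves. All quantities (the trade-off parameter V, the latency target, the channel and cost models) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# A minimal drift-plus-penalty sketch (not the authors' exact algorithm):
# each slot, a virtual queue Q tracks how far the average latency exceeds
# a target, and a device is scheduled only if the weighted power cost is
# outweighed by the latency reduction it buys.

rng = np.random.default_rng(0)
N_DEVICES, SLOTS = 5, 200
V = 10.0                      # power/latency trade-off parameter (assumed)
LATENCY_TARGET = 0.5          # average latency constraint in seconds (assumed)

Q = 0.0                       # virtual queue for the latency constraint
for t in range(SLOTS):
    # hypothetical per-slot channel gains and per-device costs
    gains = rng.rayleigh(scale=1.0, size=N_DEVICES)
    tx_power = 0.1 / gains                     # power needed to hit a fixed rate
    latency_if_tx = 0.2 / gains                # upload latency if the device transmits
    latency_if_skip = 1.0                      # stale-update penalty if it does not

    # greedy per-device decision minimizing V*power + Q*latency
    transmit = V * tx_power + Q * latency_if_tx < Q * latency_if_skip
    slot_latency = np.where(transmit, latency_if_tx, latency_if_skip).mean()

    # virtual queue update: grows whenever the latency target is violated
    Q = max(Q + slot_latency - LATENCY_TARGET, 0.0)

print(f"final virtual queue backlog: {Q:.3f}")
```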


2021 ◽  
Author(s):  
Priyavrat Misra ◽  
Niranjan Panigrahi

Abstract With the ongoing COVID-19 global pandemic, the research community still struggles to develop early and reliable prediction and detection mechanisms for this infectious disease. The commonly used RT-PCR test is not readily available in areas with limited testing facilities, and it lags in performance and timeliness. This paper proposes a deep transfer learning-based approach to predict and detect COVID-19 from digital chest radiographs. In this study, three pre-trained convolutional neural network models (VGG16, ResNet18, and DenseNet121) are fine-tuned to detect COVID-19-infected patients from chest X-rays (CXRs). The most efficient model is then used to identify the affected regions with an unsupervised gradient-based localization technique. The proposed system thus performs four-class classification (normal vs. COVID-19 vs. pneumonia vs. lung opacity) with the three supervised classifiers, followed by gradient-based localization. Training, validation, and testing are performed using 21,165 CXR images (10,192 normal, 1,345 pneumonia, 3,616 COVID-19, and 6,012 lung opacity). Simulation and evaluation results are reported using standard performance metrics, viz., accuracy, sensitivity, and specificity.
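A minimal sketch of the fine-tuning step described above, using DenseNet121 from torchvision with its head replaced by a four-class classifier; the frozen backbone, learning rate, and input size are assumptions rather than the paper's exact training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch: fine-tune a pre-trained DenseNet121 for four
# CXR classes (normal, COVID-19, pneumonia, lung opacity). Hyperparameters
# are assumed, not taken from the paper.

NUM_CLASSES = 4

model = models.densenet121(weights="DEFAULT")       # ImageNet weights
for p in model.features.parameters():
    p.requires_grad = False                          # freeze the backbone
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of (N, 3, 224, 224) CXR tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# smoke test with random tensors standing in for real CXR data
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,)))
print(f"batch loss: {loss:.4f}")
```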


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Balaji Sesha Sarath Pokuri ◽  
Sambuddha Ghosal ◽  
Apurva Kokate ◽  
Soumik Sarkar ◽  
Baskar Ganapathysubramanian

Abstract The microstructure determines the photovoltaic performance of a thin-film organic semiconductor. The relationship between microstructure and performance is usually highly non-linear and expensive to evaluate, which makes microstructure optimization challenging. Here, we present a data-driven approach for mapping the microstructure to photovoltaic performance using deep convolutional neural networks. We characterize this approach in terms of two critical metrics: its generalizability (has it learnt a reasonable map?) and its interpretability (can it identify meaningful microstructure characteristics that drive its prediction?). A surrogate model that exhibits both generalizability and interpretability is particularly useful for subsequent design exploration. We illustrate this by using the surrogate model for both manual exploration (which verifies known domain insight) and automated microstructure optimization. We envision such approaches to be widely applicable to a broad range of microstructure-sensitive design problems.
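For illustration, a surrogate along these lines can be as simple as a small CNN regressor from a 2D morphology image to a scalar performance value; the architecture and image size below are assumptions, not the model used in the paper.

```python
import torch
import torch.nn as nn

# A minimal structure-to-property surrogate sketch: a small CNN that maps a
# single-channel morphology image to a scalar performance value. The layer
# sizes and the 64x64 input are illustrative assumptions.

class MicrostructureSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # predicted photovoltaic performance

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MicrostructureSurrogate()
morphology = torch.rand(4, 1, 64, 64)   # stand-in for donor/acceptor morphology maps
print(model(morphology).shape)          # torch.Size([4, 1])
```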


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 229
Author(s):  
Xianzhong Tian ◽  
Juan Zhu ◽  
Ting Xu ◽  
Yanjun Li

Recent advances in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to execute DNNs in the central cloud, but this requires large amounts of data to be transferred over the wireless network and also incurs long latency. To address this problem, offloading part of the DNN computation to edge clouds has been proposed, enabling collaborative execution between mobile devices and edge clouds. However, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces the total latency, improves DNN performance, and adjusts well to different network conditions.
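To illustrate the latency objective that partition offloading optimizes, the toy search below brute-forces the split point of a chain-topology DNN between device and edge; the per-layer FLOPs, activation sizes, and bandwidth are made-up numbers, and the code does not model user mobility as MDPO does.

```python
# Toy partition-point search for a chain-topology DNN: layers [0, split) run
# on the device, the intermediate activation is uploaded, and the remaining
# layers run on the edge. All numbers below are illustrative assumptions.

layer_flops = [2.0, 4.0, 8.0, 8.0, 1.0]        # GFLOPs per layer
layer_out_mb = [6.0, 3.0, 1.5, 0.4, 0.01]      # size of each layer's output (MB)
DEVICE_GFLOPS, EDGE_GFLOPS = 5.0, 100.0        # compute speeds (GFLOP/s)
UPLINK_MBPS = 10.0                             # wireless uplink bandwidth (MB/s)
INPUT_MB = 5.0                                 # raw input size if nothing runs locally

def total_latency(split):
    """End-to-end latency when layers [0, split) run on-device, the rest on the edge."""
    device_t = sum(layer_flops[:split]) / DEVICE_GFLOPS
    upload_mb = INPUT_MB if split == 0 else layer_out_mb[split - 1]
    edge_t = sum(layer_flops[split:]) / EDGE_GFLOPS
    return device_t + upload_mb / UPLINK_MBPS + edge_t

best = min(range(len(layer_flops) + 1), key=total_latency)
print(f"best split after layer {best}, latency {total_latency(best):.2f} s")
```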

