Deep Neural Network Task Partitioning and Offloading for Mobile Edge Computing

Author(s):
Mingjin Gao
Wenqi Cui
Di Gao
Rujing Shen
Jun Li
...
2021
Vol 2021
pp. 1-13
Author(s):
Abdullah Numani
Zaiwar Ali
Ziaul Haq Abbas
Ghulam Abbas
Thar Baker
...

Limited battery life and poor computational resources of mobile terminals are challenging problems for present and future computation-intensive mobile applications. Wireless powered mobile edge computing is one solution, in which wireless energy transfer technology and cloud-server capabilities are brought to the edge of cellular networks. In wireless powered mobile edge computing systems, mobile terminals charge their batteries through radio frequency signals and offload their applications to a nearby hybrid access point in the same time slot, minimizing their energy consumption and ensuring uninterrupted connectivity with the hybrid access point. However, intelligently dividing an application into k subtasks, as well as partitioning the time slot between energy harvesting and data offloading, is a complex problem. In this paper, we propose a novel deep-learning-based offloading and time allocation policy (DOTP) for training a deep neural network that divides the computation application into an optimal number of subtasks, decides whether each subtask is offloaded or executed locally (offloading policy), and divides the time slot between data offloading and energy harvesting (time allocation policy). DOTP takes into account the current battery level, energy consumption, and time delay of the mobile terminal. A comprehensive cost function is formulated, which uses all the aforementioned metrics to calculate the cost for all k subtasks. We propose an algorithm that selects the optimal number of subtasks, the partial offloading policy, and the time allocation policy, and use it to generate a large dataset for training a deep neural network, thereby avoiding the heavy computational overhead of online partial offloading. Simulation results are compared with the benchmark schemes of total offloading, local execution, and partial offloading.
It is evident from the results that the proposed algorithm outperforms the other schemes in terms of battery life, time delay, and energy consumption, with 75% accuracy of the trained deep neural network. The achieved decrease in total energy consumption of the mobile terminal through DOTP is 45.74%, 36.69%, and 30.59% compared to the total offloading, partial offloading, and local execution schemes, respectively.
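The cost-driven search over the number of subtasks and per-subtask offloading decisions can be sketched as follows. The energy/delay models, all parameter values, and the equal-split assumption are illustrative only, not the paper's exact formulation, and the time-allocation dimension is omitted for brevity:

```python
# Hypothetical sketch of the comprehensive cost idea: for each candidate
# number of subtasks k and each offloading policy, sum a weighted
# energy + delay cost and keep the minimum. All models and constants
# below are illustrative assumptions.

def subtask_cost(bits, offload, f_local=1e9, cycles_per_bit=1000,
                 kappa=1e-27, rate=1e6, p_tx=0.1):
    """Energy (J) and delay (s) of one subtask, offloaded or local."""
    if offload:
        t = bits / rate                      # transmission delay
        e = p_tx * t                         # transmission energy
    else:
        cycles = bits * cycles_per_bit
        t = cycles / f_local                 # local execution delay
        e = kappa * f_local ** 2 * cycles    # dynamic CPU energy model
    return e, t

def best_policy(app_bits, k_max, w_e=0.5, w_t=0.5):
    """Brute-force search over k and per-subtask offload decisions,
    returning the (cost, k, policy) with minimum weighted cost."""
    best = None
    for k in range(1, k_max + 1):
        bits = app_bits / k                  # equal split (assumption)
        for mask in range(2 ** k):           # all 2^k offloading policies
            policy = [(mask >> i) & 1 for i in range(k)]
            e = t = 0.0
            for off in policy:
                ei, ti = subtask_cost(bits, off)
                e += ei
                t += ti
            cost = w_e * e + w_t * t
            if best is None or cost < best[0]:
                best = (cost, k, policy)
    return best
```

For example, `best_policy(1e6, 3)` evaluates a 1 Mb application for up to three subtasks; under these toy parameters offloading dominates local execution, so an all-offload policy is returned. In the paper, such exhaustive evaluations label the training set for the deep neural network, which then replaces the search at run time.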


2021
Vol 11 (23)
pp. 11530
Author(s):
Pangwei Wang
Xiao Liu
Yunfeng Wang
Tianren Wang
Juan Zhang

Real-time and reliable short-term traffic state prediction is one of the most critical technologies in intelligent transportation systems (ITS). However, in existing studies the traffic state is generally perceived by a single sensor, which makes it difficult to satisfy the requirement of real-time prediction in complex traffic networks. In this paper, a short-term traffic prediction model based on a complex neural network is proposed under the environment of vehicle-to-everything (V2X) communication systems. Firstly, a traffic perception system of multi-source sensors based on V2X communication is designed. A mobile edge computing (MEC)-assisted architecture is then introduced into the V2X network to enhance the perceptual and computational abilities of the system. Moreover, the graph convolutional network (GCN), the gated recurrent unit (GRU), and the soft-attention mechanism are combined to extract spatiotemporal features of the traffic state and integrate them for future prediction. Finally, an intelligent roadside test platform is demonstrated for perception and computation of the real-time traffic state. Comparison experiments show that the proposed method significantly improves prediction accuracy over existing neural network models that consider only one of the spatiotemporal features. In particular, the root mean squared error (RMSE) of the traffic state prediction is reduced by up to 39.53%, the greatest reduction in error observed against the GCN and GRU models over the 5, 10, 15, and 30 minute prediction horizons.
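The spatiotemporal pipeline described above (graph convolution for spatial features, a recurrent unit for temporal dynamics) can be sketched minimally in numpy. Layer sizes, weights, and the normalization are illustrative assumptions, and the soft-attention stage is omitted; this is not the paper's architecture:

```python
import numpy as np

# Minimal sketch: a GCN layer aggregates each road node's features over
# the traffic-network adjacency, and a GRU step carries the aggregated
# features forward in time. All shapes and weights are illustrative.

def gcn_layer(A, X, W):
    """One graph-convolution step with self-loops and degree normalization."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree
    return np.tanh(D_inv @ A_hat @ X @ W)     # aggregate then transform

def gru_step(h, x, Wz, Wr, Wh):
    """One GRU step over the concatenation [h, x] (bias terms omitted)."""
    hx = np.concatenate([h, x])
    z = 1 / (1 + np.exp(-(Wz @ hx)))          # update gate
    r = 1 / (1 + np.exp(-(Wr @ hx)))          # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))
    return (1 - z) * h + z * h_tilde
```

At each timestep the GCN output for all nodes is flattened and fed as `x` into `gru_step`, so the hidden state accumulates both spatial and temporal structure before a final prediction head.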


2021
Author(s):
Laha Ale
Scott King
Ning Zhang
Abdul Sattar
Janahan Skandaraniyam

Mobile Edge Computing (MEC) has been regarded as a promising paradigm to reduce service latency for data processing in the Internet of Things by provisioning computing resources at the network edge. In this work, we jointly optimize task partitioning and computational power allocation for computation offloading in a dynamic environment with multiple IoT devices and multiple edge servers. We formulate the problem as a Markov decision process with a constrained hybrid action space, which cannot be well handled by existing deep reinforcement learning (DRL) algorithms. Therefore, we develop a novel DRL algorithm called Dirichlet Deep Deterministic Policy Gradient (D3PG), built on Deep Deterministic Policy Gradient (DDPG), to solve the problem. The developed model can learn to solve multi-objective optimization, including maximizing the number of tasks processed before expiration and minimizing the energy cost and service latency. More importantly, D3PG can effectively deal with a constrained distribution-continuous hybrid action space, where the distribution variables are for task partitioning and offloading, while the continuous variables are for computational frequency control. Moreover, D3PG can address many similar issues in MEC and general reinforcement learning problems. Extensive simulation results show that the proposed D3PG outperforms state-of-the-art methods.
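One way to picture the distribution-continuous hybrid action space is an actor that outputs Dirichlet concentration parameters for the task-partitioning fractions together with an unbounded scalar squashed into the feasible CPU-frequency range. The names, bounds, and sampling scheme below are assumptions for illustration, not the paper's D3PG implementation:

```python
import numpy as np

# Illustrative sketch of a constrained hybrid action: a Dirichlet-
# distributed partitioning vector (fractions of a task assigned to each
# target, e.g. local execution plus each edge server) and a continuous
# CPU frequency bounded to [f_min, f_max]. Names/bounds are assumptions.

def hybrid_action(alpha, freq_raw, f_min=0.5e9, f_max=2.0e9, rng=None):
    """Sample one constrained hybrid action.

    alpha    : positive Dirichlet concentration params (one per target)
    freq_raw : unbounded actor output controlling CPU frequency
    """
    rng = rng if rng is not None else np.random.default_rng()
    partition = rng.dirichlet(alpha)          # non-negative, sums to 1
    # squash the raw output into the feasible frequency range (sigmoid)
    freq = f_min + (f_max - f_min) / (1 + np.exp(-freq_raw))
    return partition, freq
```

The Dirichlet parameterization enforces the simplex constraint (fractions are non-negative and sum to one) by construction, so the partitioning decision never needs projection or clipping, while the frequency variable stays a plain bounded continuous control.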

