Deep Reinforcement Learning for Distributed Computation Offloading in Massive-user Mobile Edge Networks

Author(s): Bo Jiang, Kuikui Li, Bo Zhou, Meixia Tao, Zhiyong Chen
2021, Vol. 103, pp. 107108
Author(s): Chen Chen, Yuru Zhang, Zheng Wang, Shaohua Wan, Qingqi Pei

Author(s): Xiaoyu Zhu, Yueyi Luo, Anfeng Liu, Md Zakirul Alam Bhuiyan, Shaobo Zhang

Author(s): Jun Long, Yueyi Luo, Xiaoyu Zhu, Entao Luo, Mingfeng Huang

Abstract
With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are being deployed throughout smart cities. These devices generate various kinds of tasks that need to be sent to the cloud for processing. The sensing devices are usually not equipped with wireless modules, because doing so is neither economical nor energy efficient, so finding a way to offload their tasks is a challenging problem. However, many vehicles move around the city and can communicate with sensing devices in an effective, low-cost way. In this paper, we propose a computation offloading scheme that uses mobile vehicles in an IoT-edge-cloud network. Sensing devices generate tasks and transmit them to passing vehicles, and each vehicle then decides whether to compute a task locally, on an MEC server, or at the cloud center. The offloading decision is based on a utility function of energy consumption and transmission delay, and a deep reinforcement learning technique is adopted to make the decisions. Our proposed method makes full use of existing infrastructure to offload the tasks of sensing devices, and the experimental results show that the proposed solution achieves the maximum reward while reducing delay.
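To make the decision process concrete, the sketch below shows one way a utility-driven, DRL-based choice among the three offloading targets (local vehicle, MEC server, cloud center) could be implemented. It is only a minimal illustration under stated assumptions: the state features, the crude energy and delay models, the utility weights alpha/beta, and the small Q-network are all placeholders, not the design evaluated in the paper.

```python
# Minimal sketch of a DRL-style offloading decision as described in the
# abstract. All numbers, state features, and models below are illustrative
# assumptions, not the authors' actual scheme.
import random
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["local_vehicle", "mec_server", "cloud_center"]

def utility(energy_j, delay_s, alpha=0.5, beta=0.5):
    """Reward = negative weighted cost of energy consumption and delay."""
    return -(alpha * energy_j + beta * delay_s)

class QNet(nn.Module):
    """Tiny Q-network: task/link state -> Q-value for each offloading action."""
    def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def simulate_step(state, action):
    """Hypothetical environment: crude per-target energy/delay models."""
    task_bits, cpu_cycles, v2i_rate, backhaul_rate = state
    if action == 0:    # compute on the vehicle itself
        delay = cpu_cycles / 1e9
        energy = 1e-9 * cpu_cycles
    elif action == 1:  # offload to the MEC server
        delay = task_bits / v2i_rate + cpu_cycles / 5e9
        energy = 0.3e-9 * task_bits
    else:              # offload to the cloud center
        delay = task_bits / v2i_rate + task_bits / backhaul_rate + cpu_cycles / 2e10
        energy = 0.5e-9 * task_bits
    return utility(energy, delay)

def random_state():
    return np.array([random.uniform(1e5, 1e6),    # task size (bits)
                     random.uniform(1e8, 1e9),    # required CPU cycles
                     random.uniform(1e6, 1e7),    # vehicle-to-MEC rate (bit/s)
                     random.uniform(1e6, 5e6)],   # MEC-to-cloud rate (bit/s)
                    dtype=np.float32)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
eps = 0.2  # epsilon-greedy exploration

for step in range(2000):
    s = random_state()
    s_t = torch.from_numpy(s)
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        a = int(qnet(s_t).argmax())
    r = simulate_step(s, a)
    # One-step target: each task decision is treated as an independent episode.
    q_pred = qnet(s_t)[a]
    loss = (q_pred - torch.tensor(r, dtype=torch.float32)) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

sample = torch.from_numpy(random_state())
print("Learned choice for a sample task:", ACTIONS[int(qnet(sample).argmax())])
```

In this sketch each task decision is handled as an independent one-step problem; the paper's formulation may instead use a longer-horizon state (e.g., queue backlogs or vehicle mobility) and a different reward shaping.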

