QoE-Based Task Offloading With Deep Reinforcement Learning in Edge-Enabled Internet of Vehicles

Author(s):  
Xiaoming He ◽  
Haodong Lu ◽  
Miao Du ◽  
Yingchi Mao ◽  
Kun Wang
Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6058
Author(s):  
Shuo Xiao ◽  
Shengzhi Wang ◽  
Jiayu Zhuang ◽  
Tianyu Wang ◽  
Jiajia Liu

Today, vehicles are increasingly being connected to the Internet of Things, which enables them to obtain high-quality services. However, the numerous vehicular applications and time-varying network status make it challenging for onboard terminals to achieve efficient computing. Therefore, we propose a task offloading algorithm for the Internet of Vehicles (IoV) based on a three-stage local-edge-cloud model and reinforcement learning. First, we establish the communication methods between vehicles and their cost functions. In addition, according to the real-time state of the vehicles, we analyze their computing requirements and the price function. Finally, we propose an experience-driven offloading strategy based on multi-agent reinforcement learning. The simulation results show that the algorithm increases the task success probability and balances task-vehicle delay, expenditure, task-vehicle utility, and service-vehicle utility under various constraints.
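The experience-driven strategy described in this abstract can be illustrated with a minimal sketch: a single task vehicle learns, by trial and error, whether to run a task locally, on an edge server, or in the cloud, trading delay against expenditure. All delay, price, and deadline values below are illustrative assumptions, not figures from the paper, and the tabular update is a simplification of the multi-agent method the authors propose.

```python
import random

# Hypothetical three-stage offloading choices; numbers are assumed.
ACTIONS = ["local", "edge", "cloud"]
DELAY = {"local": 8.0, "edge": 3.0, "cloud": 5.0}   # seconds (assumed)
PRICE = {"local": 0.0, "edge": 2.0, "cloud": 1.0}   # cost units (assumed)
DEADLINE = 6.0                                       # seconds (assumed)

def task_vehicle_utility(action, w_delay=0.5, w_price=0.5):
    """Negative weighted cost; a task that misses the deadline fails."""
    if DELAY[action] > DEADLINE:
        return -10.0  # large penalty for a failed task
    return -(w_delay * DELAY[action] + w_price * PRICE[action])

def learn_offloading_policy(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy value learning over the three offloading actions."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state value table
    for _ in range(episodes):
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        r = task_vehicle_utility(a)
        q[a] += alpha * (r - q[a])  # one-step update toward observed utility
    return q

q = learn_offloading_policy()
best = max(q, key=q.get)  # the learned offloading decision
```

Under these assumed numbers the local option misses the deadline, so the learned policy settles on edge execution, the lowest weighted cost.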


2022 ◽  
Author(s):  
Degan Zhang ◽  
Lixiang Cao ◽  
Haoli Zhu ◽  
Ting Zhang ◽  
Jinyu Du ◽  
...  

Author(s):  
Jun Long ◽  
Yueyi Luo ◽  
Xiaoyu Zhu ◽  
Entao Luo ◽  
Mingfeng Huang

With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are deployed across smart cities. These devices generate various kinds of tasks that need to be sent to the cloud for processing. Usually, the sensing devices are not equipped with wireless modules, because this is neither economical nor energy efficient, so finding a way to offload tasks for sensing devices is a challenging problem. However, many vehicles moving around the city can communicate with sensing devices in an effective and low-cost way. In this paper, we propose a computation offloading scheme that uses mobile vehicles in an IoT-edge-cloud network. Sensing devices generate tasks and transmit them to vehicles; each vehicle then decides whether to compute the tasks locally, on a MEC server, or at the cloud center. The offloading decision is based on a utility function of energy consumption and transmission delay, and the deep reinforcement learning technique is adopted to make the decisions. Our proposed method makes full use of existing infrastructure to implement task offloading for sensing devices, and the experimental results show that our solution achieves the maximum reward and decreases delay.
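The utility function this abstract describes, a weighted combination of energy consumption and transmission delay, can be sketched as follows. The data rates, power draws, compute times, and weights are illustrative assumptions used only to show how the vehicle/MEC/cloud decision falls out of the utility.

```python
# Sketch of an offloading utility weighing energy against delay.
# All rates, powers, compute times, and weights below are assumed.

def transmission_delay(task_bits, rate_bps):
    """Time to upload the task to the chosen execution site."""
    return task_bits / rate_bps

def energy_cost(power_w, seconds):
    """Energy spent while the task is in flight and being computed."""
    return power_w * seconds

def utility(task_bits, rate_bps, compute_s, power_w, w_e=0.4, w_d=0.6):
    """Higher is better: negated weighted sum of energy and total delay."""
    d = transmission_delay(task_bits, rate_bps) + compute_s
    e = energy_cost(power_w, d)
    return -(w_e * e + w_d * d)

# Candidate execution sites (assumed parameters): local vehicle has no
# upload cost but slow compute; MEC is close; the cloud is far but fast.
sites = {
    "vehicle": dict(rate_bps=float("inf"), compute_s=4.0, power_w=2.0),
    "mec":     dict(rate_bps=2e6, compute_s=1.0, power_w=1.0),
    "cloud":   dict(rate_bps=1e6, compute_s=0.5, power_w=1.0),
}
task_bits = 4e6
scores = {name: utility(task_bits, **p) for name, p in sites.items()}
best_site = max(scores, key=scores.get)
```

With these particular numbers the MEC server wins: its upload delay is half the cloud's, and its compute time beats the local vehicle's.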


Author(s):  
Alessio Sacco ◽  
Flavio Esposito ◽  
Guido Marchetto ◽  
Paolo Montuschi

2021 ◽  
Vol 18 (7) ◽  
pp. 58-68
Author(s):  
Xin Liu ◽  
Can Sun ◽  
Mu Zhou ◽  
Bin Lin ◽  
Yuto Lim

Mobile edge computing (MEC) can provide computing services for mobile users (MUs) by offloading computing tasks to edge clouds through wireless access networks. Unmanned aerial vehicles (UAVs) are deployed as supplementary edge clouds to provide effective MEC services for MUs with poor wireless communication conditions. In this paper, a joint task offloading and power allocation (TOPA) optimization problem is investigated in a UAV-assisted MEC system. Since the joint TOPA problem is strongly non-convex, a method based on deep reinforcement learning is proposed. Specifically, the joint TOPA problem is modeled as a Markov decision process. Then, considering the large state space and continuous action space, a twin delayed deep deterministic policy gradient (TD3) algorithm is proposed. Simulation results show that the proposed scheme has a lower smoothed training cost than other optimization methods.
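The twin delayed deep deterministic policy gradient (TD3) algorithm named in the abstract differs from plain DDPG mainly in how it builds the critic's learning target: clipped double-Q estimation plus target-policy smoothing. A minimal sketch of that target computation is below; the networks are replaced by plain callables and all hyperparameter values are the commonly used defaults, not values from the paper.

```python
import random

# Sketch of TD3's critic target: target-policy smoothing plus
# clipped double-Q. Networks are stand-in callables; values assumed.
def td3_target(reward, next_state, q1_target, q2_target, pi_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5,
               a_low=-1.0, a_high=1.0, rng=random):
    # Target-policy smoothing: perturb the target action with clipped
    # Gaussian noise so the critic cannot exploit sharp Q-value peaks.
    noise = max(-noise_clip, min(noise_clip, rng.gauss(0.0, noise_std)))
    a_next = max(a_low, min(a_high, pi_target(next_state) + noise))
    # Clipped double-Q: take the smaller of the twin critics' estimates
    # to curb the overestimation bias of a single critic.
    q_min = min(q1_target(next_state, a_next),
                q2_target(next_state, a_next))
    return reward + gamma * q_min

# Usage with dummy linear "critics" and a constant "policy".
rng = random.Random(0)
y = td3_target(
    reward=1.0, next_state=0.5,
    q1_target=lambda s, a: s + a,
    q2_target=lambda s, a: s - a,
    pi_target=lambda s: 0.3,
    rng=rng,
)
```

In a full TD3 implementation both critics regress toward this shared target `y`, and the actor is updated less frequently than the critics (the "delayed" part), which this sketch omits.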

