Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process

2019, Vol 27 (3), pp. 1272-1288
Author(s): Shiqiang Wang, Rahul Urgaonkar, Murtaza Zafer, Ting He, Kevin Chan, et al.

Electronics, 2021, Vol 10 (2), pp. 190
Author(s): Wu Ouyang, Zhigang Chen, Jia Wu, Genghua Yu, Heng Zhang

As transportation becomes more convenient and efficient, users move faster and faster. When a user leaves the service range of its original edge server, that server must migrate the tasks the user offloaded to other edge servers. An effective task migration strategy needs to fully consider user locations, the load status of edge servers, and energy consumption, which makes designing one a challenge. In this paper, we propose a mobile edge computing (MEC) system architecture consisting of multiple smart mobile devices (SMDs), multiple unmanned aerial vehicles (UAVs), and a base station (BS). Moreover, we establish a Markov decision process with unknown rewards (MDPUR) model based on the traditional Markov decision process (MDP), which jointly considers the migration distance, the residual energy of the UAVs, and the load status of the UAVs. Based on the MDPUR model, we propose an advantage-based value iteration (ABVI) algorithm to obtain an effective task migration strategy, which helps the UAV group achieve load balancing and reduce its total energy consumption while guaranteeing user quality of service. Finally, simulation results show that the ABVI algorithm is effective: it outperforms the traditional value iteration algorithm and remains robust in dynamic environments.
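For context, a minimal value-iteration sketch over a finite MDP follows. The state/action sizes, transition model P, and reward R are hypothetical placeholders, not the paper's MDPUR formulation, and ABVI itself is not reproduced here:

    import numpy as np

    # Classical value iteration on a small, randomly generated MDP.
    n_states, n_actions = 4, 2   # e.g. coarse UAV load/energy states, 2 migration actions
    gamma = 0.9                  # discount factor (assumed)

    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))            # R[s, a]

    V = np.zeros(n_states)
    for _ in range(1000):
        Q = R + gamma * np.einsum('sat,t->sa', P, V)   # Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-6:           # stop at convergence
            break
        V = V_new

    policy = Q.argmax(axis=1)  # greedy migration action per state

ABVI would refine this plain backup with an advantage-based update under unknown rewards; the sketch shows only the classical baseline the paper compares against.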


2020, Vol 2020, pp. 1-6
Author(s): Bingxin Zhang, Guopeng Zhang, Weice Sun, Kun Yang

This paper proposes an efficient computation task offloading mechanism for mobile edge computing (MEC) systems. The studied MEC system consists of multiple user equipment (UEs) and multiple radio interfaces. To maximize the number of UEs benefiting from the MEC, the task offloading and power control strategy for each UE is optimized jointly. However, finding the optimal solution is NP-hard. We therefore reformulate the problem as a Markov decision process (MDP) and develop a reinforcement learning (RL) based algorithm to solve it. Simulation results show that the proposed RL-based algorithm achieves near-optimal performance compared to the exhaustive search algorithm, and it also outperforms the received signal strength (RSS) based method from the standpoint of both the system (a larger number of beneficial UEs) and the individual user (a lower computation overhead per UE).
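The abstract does not spell out which RL variant is used; a tabular Q-learning sketch illustrates how such an offloading MDP could be solved, with the state encoding, action set, and environment step all assumed for illustration:

    import random

    # Tabular Q-learning on a toy offloading MDP (all quantities assumed).
    states  = list(range(10))   # e.g. discretized channel/queue conditions
    actions = list(range(3))    # e.g. {local execution, offload via radio 1, radio 2}
    alpha, gamma, eps = 0.1, 0.9, 0.1

    Q = {(s, a): 0.0 for s in states for a in actions}

    def step(s, a):
        # Hypothetical environment: returns (reward, next_state).
        return random.uniform(-1.0, 1.0), random.choice(states)

    s = random.choice(states)
    for _ in range(5000):
        if random.random() < eps:                      # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        r, s_next = step(s, a)
        best_next = max(Q[(s_next, x)] for x in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # TD update
        s = s_next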


Author(s): Guisong Yang, Ling Hou, Xingyu He, Daojing He, Sammy Chan, et al.

Author(s): Bingxin Yao, Bin Wu, Siyun Wu, Yin Ji, Danggui Chen, et al.

In this paper, an offloading algorithm based on the Markov Decision Process (MDP) is proposed to solve the multi-objective offloading decision problem in a Mobile Edge Computing (MEC) system. The distinguishing feature of the algorithm is that an MDP is used to make offloading decisions. The state space of the MDP model takes into account the number of tasks in the task queue, the number of accessible edge clouds, and the Signal-to-Noise Ratio (SNR) of the wireless channel. The offloading delay and energy consumption define the value function of the MDP model, i.e., the objective function. To maximize the value function, the value iteration algorithm is used to obtain the optimal offloading policy. According to this policy, tasks of mobile terminals (MTs) are offloaded to the edge cloud or the central cloud, or executed locally. Simulation results show that the proposed algorithm effectively reduces offloading delay and energy consumption.
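A short sketch of how the composite state space and the delay/energy objective described above might be encoded; the discretization levels and the weight w are assumptions, not the paper's parameters:

    from itertools import product

    # Composite MDP state: (queue length, accessible edge clouds, SNR level).
    queue_levels = range(5)    # tasks waiting in the MT's queue
    cloud_counts = range(3)    # number of accessible edge clouds
    snr_levels   = range(4)    # quantized wireless-channel SNR

    states = list(product(queue_levels, cloud_counts, snr_levels))

    w = 0.5  # assumed delay/energy trade-off weight

    def value(delay, energy):
        # Maximizing this value function minimizes the weighted cost of
        # offloading delay and energy consumption.
        return -(w * delay + (1.0 - w) * energy)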


2021
Author(s): Anhua Ma, Su Pan, Shuai Tao, Weiwei Zhou

With the rapid development of the mobile internet and cloud computing, the traditional network structure is no longer suitable for advanced network traffic requirements. A service migration decision algorithm is proposed for Software Defined Networking (SDN) to satisfy differentiated Quality of Service (QoS) requirements. We divide services into real-time and non-real-time ones according to their different requirements on time delay and transmission rate, and construct a revenue function over two QoS attributes, i.e., time delay and available transmission rate. We use a Markov decision process to maximize the overall benefit of users and the network system so as to achieve the best user experience. Simulation results show that the proposed algorithm achieves better overall benefit than existing algorithms that consider only a single service type and a single QoS attribute.
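As an illustration, a hypothetical revenue function over the two QoS attributes named above, with the weights and the real-time/non-real-time split assumed rather than taken from the paper:

    # Revenue rises as delay falls and as available transmission rate rises.
    def revenue(delay_ms, rate_mbps, real_time):
        if real_time:
            return 0.8 / (1.0 + delay_ms) + 0.2 * rate_mbps  # delay-dominated
        return 0.2 / (1.0 + delay_ms) + 0.8 * rate_mbps      # rate-dominated

    # Pick the migration target with the highest expected revenue:
    targets = [(20.0, 50.0), (5.0, 10.0)]                    # (delay, rate) pairs
    best = max(targets, key=lambda t: revenue(*t, real_time=True))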

