A Reinforcement Learning Algorithm for Resource Provisioning in Mobile Edge Computing Network

Author(s):  
Huynh Thi Thanh Binh ◽  
Nguyen Phi Le ◽  
Nguyen Binh Minh ◽  
Trinh Thu Hai ◽  
Ngo Quang Minh ◽  
...  
2020 ◽  
Vol 309 ◽  
pp. 03026


Author(s):  
Xia Gao ◽  
Fangqin Xu

With the rapid development of Internet technology and mobile terminals, users' demand for high-speed networks is increasing. Mobile edge computing offers a distributed caching approach to cope with the impact of massive data traffic on communication networks, in order to reduce network latency and improve user quality of service. In this paper, a deep reinforcement learning algorithm is proposed to solve the task offloading problem of multi-service nodes. Experiments are carried out on the iFogSim simulation platform with the Google Cluster Trace dataset. The results show that the task offloading strategy based on the DDQN algorithm performs well in terms of energy consumption and cost, verifying the application prospects of deep reinforcement learning algorithms in mobile edge computing.
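
To make the DDQN-based offloading idea concrete, the following is a minimal sketch of the double-DQN target computation for a binary local-vs-offload decision. The linear Q-function, state features, and reward weights are illustrative assumptions, not the authors' iFogSim implementation.

```python
# Hypothetical double-DQN target update for a binary offload/local decision.
# State features, reward weights, and the linear Q-function are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 2          # 0 = execute locally, 1 = offload to the MEC node
STATE_DIM = 4          # e.g. task size, CPU load, channel gain, queue length
GAMMA = 0.95

def q_values(weights, state):
    """Linear Q-function: Q(s, a) = W[a] . s (toy stand-in for a deep network)."""
    return weights @ state

# Online and target parameter sets, as in double DQN
w_online = rng.normal(size=(N_ACTIONS, STATE_DIM))
w_target = w_online.copy()

def ddqn_target(reward, next_state, done):
    """Double-DQN target: action chosen by the online net, evaluated by the target net."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(w_online, next_state)))
    return reward + GAMMA * q_values(w_target, next_state)[a_star]

# One illustrative transition: reward trades off energy and cost (assumed weights)
state = rng.random(STATE_DIM)
next_state = rng.random(STATE_DIM)
energy, cost = 0.3, 0.1
reward = -(0.5 * energy + 0.5 * cost)

td_target = ddqn_target(reward, next_state, done=False)
action = int(np.argmax(q_values(w_online, state)))
td_error = td_target - q_values(w_online, state)[action]

# Semi-gradient step on the linear approximator
LR = 0.01
w_online[action] += LR * td_error * state
print(f"action={action}, td_target={td_target:.3f}, td_error={td_error:.3f}")
```

In a full DDQN agent the linear approximator would be replaced by a neural network trained from a replay buffer, with the target parameters refreshed periodically.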


2021 ◽  
Vol 7 ◽  
pp. e755
Author(s):  
Abdullah Alharbi ◽  
Hashem Alyami ◽  
Poongodi M ◽  
Hafiz Tayyab Rauf ◽  
Seifedine Kadry

The proposed research motivates 6G cellular networking to empower Internet of Everything (IoE) usage that is not yet supported by 5G. For 6G, more innovative technological resources will need to be handled by Mobile Edge Computing (MEC). Given the demand for new services from different sectors, the growth of the IoE, and the limited computing resources of MEC, intelligent resource-management solutions are becoming increasingly significant. This research uses IScaler, an effective Deep Reinforcement Learning (DRL) based model for intelligent service placement and resource scaling in MEC. The paper considers several requirements for making service placement decisions and also highlights several architectural challenges involved in integrating an Intelligent Scaling and Placement module.
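
As a rough illustration of how a joint placement-and-scaling decision could be cast as a discrete DRL action space, the sketch below flattens (edge node, scale level) pairs into a single action index and selects among them epsilon-greedily. The node count, scaling levels, and policy are assumptions for illustration, not the IScaler design.

```python
# Hypothetical joint placement-and-scaling action encoding for a DRL agent.
import random

N_EDGE_NODES = 4           # assumed number of candidate MEC hosts for a service replica
SCALE_LEVELS = [1, 2, 4]   # assumed discrete numbers of service instances

# Flatten (node, scale) pairs into one discrete action index
ACTIONS = [(node, scale) for node in range(N_EDGE_NODES) for scale in SCALE_LEVELS]

def select_action(q_row, epsilon=0.1):
    """Epsilon-greedy choice over the joint placement/scaling actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q_row[a])

# Toy Q-values for one observed state (would come from the learned network)
q_row = [random.random() for _ in ACTIONS]
a = select_action(q_row)
node, scale = ACTIONS[a]
print(f"place service on edge node {node} with {scale} instance(s)")
```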


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 372
Author(s):  
Dongji Li ◽  
Shaoyi Xu ◽  
Pengyu Li

With the rapid development of vehicular networks, vehicle-to-everything (V2X) communications involve a huge number of tasks to be computed, which strains scarce network resources. Cloud servers can alleviate the lack of computing capability of vehicular user equipment (VUE), but their limited resources, the dynamic vehicular environment, and the long distances between cloud servers and VUE introduce potential issues such as extra communication delay and energy consumption. Fortunately, mobile edge computing (MEC), a promising computing paradigm, can mitigate these problems by enhancing the computing capability of VUE through the allocation of computational resources. In this paper, we propose a joint optimization algorithm based on the double deep Q network (double DQN), a deep reinforcement learning algorithm, to find a policy that minimizes a cost composed of energy consumption, computation latency, and communication latency. The proposed algorithm is well suited to the dynamic, low-latency vehicular scenarios found in the real world. Compared with other reinforcement learning algorithms, the proposed algorithm improves performance in terms of convergence, defined cost, and speed by around 30%, 15%, and 17%, respectively.
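
To illustrate the kind of weighted cost the double-DQN agent is described as minimizing, the sketch below compares a local-execution cost with an offloading cost that combines energy consumption, computation latency, and communication latency. All task, channel, CPU, and weight parameters are illustrative assumptions rather than values from the paper.

```python
# Hypothetical per-task cost model: weighted sum of energy and latency.
# Parameter values and the kappa * f^2 * cycles energy model are assumptions.

def local_cost(cycles, f_local_hz, kappa=1e-27, w_energy=0.5, w_delay=0.5):
    """Cost of executing a task on the vehicle itself."""
    delay = cycles / f_local_hz                    # computation latency (s)
    energy = kappa * (f_local_hz ** 2) * cycles    # common CPU energy model
    return w_energy * energy + w_delay * delay

def offload_cost(bits, cycles, rate_bps, f_mec_hz, p_tx_w=0.2,
                 w_energy=0.5, w_delay=0.5):
    """Cost of offloading the task to an MEC server over a V2X link."""
    t_up = bits / rate_bps                         # communication latency (s)
    t_exec = cycles / f_mec_hz                     # edge computation latency (s)
    energy = p_tx_w * t_up                         # transmission energy of the VUE
    return w_energy * energy + w_delay * (t_up + t_exec)

# Example task: 1 Mbit input, 1e9 CPU cycles (illustrative numbers)
c_local = local_cost(cycles=1e9, f_local_hz=1e9)
c_off = offload_cost(bits=1e6, cycles=1e9, rate_bps=10e6, f_mec_hz=10e9)
print(f"local cost = {c_local:.4f}, offload cost = {c_off:.4f}")
# A DRL agent would learn a policy that picks the lower-cost option per state.
```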

