Deep Reinforcement Learning-Empowered Resource Allocation for Mobile Edge Computing in Cellular V2X Networks

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 372
Author(s):  
Dongji Li ◽  
Shaoyi Xu ◽  
Pengyu Li

With the rapid development of vehicular networks, vehicle-to-everything (V2X) communication generates a huge number of computation tasks, which strains scarce network resources. Cloud servers can compensate for the limited computing capabilities of vehicular user equipment (VUE), but their limited resources, the dynamic vehicular environment, and the long distance between cloud servers and VUE introduce issues such as extra communication delay and energy consumption. Fortunately, mobile edge computing (MEC), a promising computing paradigm, can alleviate these problems by enhancing the computing capabilities of VUE through the allocation of computational resources at the network edge. In this paper, we propose a joint optimization algorithm based on the double deep Q network (double DQN), a deep reinforcement learning algorithm, to find a policy that minimizes a cost composed of energy consumption and the latency of computation and communication. The proposed algorithm is well suited to the dynamic, low-latency vehicular scenarios encountered in the real world. Compared with other reinforcement learning algorithms, it improves convergence, the defined cost, and speed by around 30%, 15%, and 17%, respectively.
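
For readers unfamiliar with the double DQN update the abstract refers to, the sketch below shows its core idea: the online network selects the next action while the target network evaluates it, which curbs the overestimation bias of vanilla DQN. This is a minimal PyTorch illustration under assumed state and action shapes, not the paper's implementation.

    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        # Maps a state vector (e.g. channel quality, queue backlog) to Q-values
        # over the discrete offloading/resource-allocation actions.
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_actions))

        def forward(self, s):
            return self.net(s)

    def double_dqn_target(online, target, reward, next_state, done, gamma=0.99):
        # Double DQN: the online net *selects* the next action, the slowly
        # updated target net *evaluates* it, reducing Q overestimation.
        with torch.no_grad():
            a_star = online(next_state).argmax(dim=1, keepdim=True)
            q_next = target(next_state).gather(1, a_star).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next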

Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3231
Author(s):  
Jiuyun Xu ◽  
Zhuangyuan Hao ◽  
Xiaoting Sun

Mobile edge computing (MEC) has become popular in both academia and industry. With the help of edge servers and cloud servers, it is one of the key technologies for overcoming the latency between cloud servers and wireless devices and the limited computation capability and storage of wireless devices. In mobile edge computing, wireless devices are responsible for input data, while edge servers and cloud servers take charge of computation and storage. However, how to balance the power consumption of edge devices against time delay has not yet been well addressed. In this paper, we focus on strategies for the task offloading decision and analyze how offloading decisions behave in different environments. First, we propose a system model that considers both energy consumption and time delay and formulate it as an optimization problem. Then, we employ two algorithms, enumeration and branch-and-bound, to obtain the optimal or near-optimal decision minimizing the system cost, which comprises time delay and energy consumption. Furthermore, we compare the performance of the two algorithms and conclude that the overall performance of branch-and-bound is the better of the two. Finally, we analyze in detail how key parameters influence the optimal offloading decision and the minimum cost.
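
As a rough illustration of the two search strategies this abstract compares, the following sketch minimizes a weighted delay-plus-energy cost over binary offloading decisions. It assumes tasks with independent costs (real models typically couple tasks through shared bandwidth or server capacity, which is where branch-and-bound pruning pays off); names, weights, and the sample data are illustrative.

    from itertools import product

    def task_cost(offload, t_loc, e_loc, t_edge, e_edge, w=0.5):
        # Weighted sum of delay and energy for one task in one execution mode.
        t, e = (t_edge, e_edge) if offload else (t_loc, e_loc)
        return w * t + (1 - w) * e

    def enumerate_best(tasks, w=0.5):
        # Baseline: score all 2^n offloading vectors exhaustively.
        best_c, best_d = float('inf'), None
        for decision in product((0, 1), repeat=len(tasks)):
            c = sum(task_cost(d, *task, w=w) for d, task in zip(decision, tasks))
            if c < best_c:
                best_c, best_d = c, decision
        return best_c, best_d

    def branch_and_bound(tasks, w=0.5):
        # Depth-first branching; a partial assignment is pruned when even an
        # optimistic completion (cheapest mode for every remaining task)
        # cannot beat the incumbent solution.
        n = len(tasks)
        lb_rest = [0.0] * (n + 1)
        for i in range(n - 1, -1, -1):
            lb_rest[i] = lb_rest[i + 1] + min(task_cost(0, *tasks[i], w=w),
                                              task_cost(1, *tasks[i], w=w))
        best = [float('inf'), None]

        def dfs(i, cost, partial):
            if cost + lb_rest[i] >= best[0]:
                return  # bound: this branch cannot improve on the incumbent
            if i == n:
                best[0], best[1] = cost, tuple(partial)
                return
            for d in (0, 1):
                partial.append(d)
                dfs(i + 1, cost + task_cost(d, *tasks[i], w=w), partial)
                partial.pop()

        dfs(0, 0.0, [])
        return best[0], best[1]

    # Each task: (t_local, e_local, t_edge, e_edge), illustrative units.
    tasks = [(4.0, 1.0, 1.5, 2.0), (2.0, 0.5, 1.8, 1.2), (6.0, 2.0, 2.2, 2.5)]
    assert enumerate_best(tasks) == branch_and_bound(tasks)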


2021 ◽  
Author(s):  
Laha Ale ◽  
Scott King ◽  
Ning Zhang ◽  
Abdul Sattar ◽  
Janahan Skandaraniyam

Mobile Edge Computing (MEC) has been regarded as a promising paradigm to reduce service latency for data processing in the Internet of Things by provisioning computing resources at the network edge. In this work, we jointly optimize task partitioning and computational power allocation for computation offloading in a dynamic environment with multiple IoT devices and multiple edge servers. We formulate the problem as a Markov decision process with a constrained hybrid action space, which cannot be well handled by existing deep reinforcement learning (DRL) algorithms. Therefore, we develop a novel DRL algorithm called Dirichlet Deep Deterministic Policy Gradient (D3PG), built on Deep Deterministic Policy Gradient (DDPG), to solve the problem. The developed model can learn to solve multi-objective optimization, including maximizing the number of tasks processed before expiration and minimizing the energy cost and service latency. More importantly, D3PG can effectively deal with a constrained distribution-continuous hybrid action space, where the distribution variables govern task partitioning and offloading while the continuous variables control computational frequency. Moreover, D3PG can address many similar issues in MEC and general reinforcement learning problems. Extensive simulation results show that the proposed D3PG outperforms state-of-the-art methods.
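
A rough sketch of how such a constrained distribution-continuous hybrid action can be parameterized: a Dirichlet distribution keeps the task-partition action on the probability simplex, while a bounded head emits the continuous frequency action. This is an assumed reconstruction for illustration, not the authors' code; layer sizes and names are made up.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class D3PGActor(nn.Module):
        # Hypothetical actor head. The simplex-valued part of the hybrid
        # action (how a task is split across local execution and n_servers
        # edge servers) is drawn from a Dirichlet distribution, while a
        # bounded head emits the continuous CPU-frequency action.
        def __init__(self, state_dim, n_servers, f_max_hz=2.0e9):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
            self.conc = nn.Linear(128, n_servers + 1)  # +1 for the local share
            self.freq = nn.Linear(128, 1)
            self.f_max_hz = f_max_hz

        def forward(self, state):
            h = self.body(state)
            # Softplus keeps the Dirichlet concentrations strictly positive;
            # rsample() draws a reparameterized point on the simplex, so the
            # partition always sums to one by construction.
            alpha = F.softplus(self.conc(h)) + 1.0
            shares = torch.distributions.Dirichlet(alpha).rsample()
            # Sigmoid squashes the continuous action into (0, f_max_hz).
            f = torch.sigmoid(self.freq(h)) * self.f_max_hz
            return shares, f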


2020 ◽  
Vol 309 ◽  
pp. 03026
Author(s):  
Xia Gao ◽  
Fangqin Xu

With the rapid development of Internet technology and mobile terminals, users' demand for high-speed networks is increasing. Mobile edge computing proposes a distributed caching approach to cope with the impact of massive data traffic on communication networks, in order to reduce network latency and improve the quality of user service. In this paper, a deep reinforcement learning algorithm is proposed to solve the task offloading problem of multi-service nodes. Experiments are carried out on the simulation platform iFogSim with the Google Cluster Trace data set. The final results show that the task offloading strategy based on the DDQN algorithm performs well in terms of energy consumption and cost, verifying the application prospects of deep reinforcement learning algorithms in mobile edge computing.
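
Two building blocks commonly paired with a DDQN offloading agent like the one described here, shown as an assumed sketch (the double-DQN target itself is sketched after the first abstract above): a uniform replay buffer and annealed epsilon-greedy selection over the discrete service-node actions.

    import random
    from collections import deque

    class ReplayBuffer:
        # Uniform experience replay, the standard companion of (D)DQN.
        def __init__(self, capacity=100_000):
            self.buf = deque(maxlen=capacity)

        def push(self, s, a, r, s_next, done):
            self.buf.append((s, a, r, s_next, done))

        def sample(self, batch_size):
            return random.sample(self.buf, batch_size)

    def epsilon_greedy(q_values, step, eps_hi=1.0, eps_lo=0.05, decay_steps=10_000):
        # Linearly annealed exploration over the discrete node-selection actions.
        eps = max(eps_lo, eps_hi - (eps_hi - eps_lo) * step / decay_steps)
        if random.random() < eps:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])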


Webology ◽  
2021 ◽  
Vol 18 (2) ◽  
pp. 856-874
Author(s):  
S. Anoop ◽  
Dr.J. Amar Pratap Singh

Mobile technology is evolving rapidly in every aspect, utilizing every available resource in the form of applications that advance day-to-day life. These technological advancements overcome traditional computing methods, which suffer from increased communication delay and energy consumption on mobile devices. Mobile edge computing is emerging as a way to address these limitations and deliver better service to end users. This paper proposes a secure and energy-efficient computational offloading scheme using an LSTM. Computational tasks are predicted with the LSTM algorithm, and a computation offloading strategy based on these task predictions, together with task migration under an edge-cloud scheduling scheme driven by a reinforcement learning routing algorithm, optimizes the edge computing offloading model. Experimental results show that our proposed Intelligent Energy Efficient Offloading Algorithm (IEEOA) can efficiently decrease total task delay and energy consumption, and adds security for the devices owing to the firewall-like nature of the LSTM.
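
A minimal sketch of the task-prediction component, assuming an LSTM that maps a window of recent task features to the next task's expected load; the feature choices and layer sizes are hypothetical, not taken from the paper.

    import torch
    import torch.nn as nn

    class TaskPredictor(nn.Module):
        # Maps a window of past task features (e.g. data size, CPU cycles,
        # deadline) to the predicted features of the next task, which the
        # offloading strategy can then act on ahead of time.
        def __init__(self, n_features=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)

        def forward(self, window):  # window: (batch, steps, n_features)
            out, _ = self.lstm(window)
            return self.head(out[:, -1])  # prediction for the next step

    # Usage: TaskPredictor()(torch.randn(8, 10, 3)) -> (8, 3) predicted tasks.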


2022 ◽  
Vol 2022 ◽  
pp. 1-13
Author(s):  
Ping Qi

Traditional intent recognition algorithms for intelligent prostheses often use deep learning technology. However, deep learning's high accuracy comes at the cost of high computation and energy requirements. Mobile edge computing is a viable way to meet the high computation and real-time execution requirements of deep learning algorithms on mobile devices. In this paper, we consider the computation offloading problem of multiple heterogeneous edge servers in an intelligent prosthesis scenario. First, we present the problem definition and the detailed design of an MEC-based task offloading model for deep neural networks. Then, considering the mobility of amputees, we propose mobility-aware energy consumption and latency models. By deploying the deep learning-based motion intent recognition algorithm on an intelligent prosthesis in a real-world MEC environment, the effectiveness of the task offloading and scheduling strategy is demonstrated. The experimental results show that the proposed algorithms can always find the optimal task offloading and scheduling decision.
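
Latency and energy models in this line of work usually follow a standard form: upload time derived from the Shannon rate plus edge execution time, and a CMOS power model for local execution. The sketch below uses that common formulation, not necessarily the paper's exact mobility-aware model (under mobility the SNR, and hence the rate, becomes time-varying); the constants are illustrative.

    import math

    def uplink_rate(bandwidth_hz, snr):
        # Shannon capacity of the device-to-edge link, in bit/s.
        return bandwidth_hz * math.log2(1 + snr)

    def offload_cost(data_bits, cycles, rate, f_edge_hz, tx_power_w):
        # Latency = upload time + edge execution time; energy = radio
        # transmit energy. Result-return time is ignored, a common
        # simplification when inference outputs are small.
        latency = data_bits / rate + cycles / f_edge_hz
        energy = tx_power_w * data_bits / rate
        return latency, energy

    def local_cost(cycles, f_local_hz, kappa=1e-27):
        # kappa is the effective switched-capacitance constant of the chip;
        # energy per cycle grows with the square of the clock frequency.
        return cycles / f_local_hz, kappa * cycles * f_local_hz ** 2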


2020 ◽  
Vol 12 (18) ◽  
pp. 7661
Author(s):  
Beibei Pang ◽  
Fei Hao ◽  
Doo-Soon Park ◽  
Carmen De Maio

The development of mobile edge computing (MEC) is accelerating the popularity of 5G applications. In the 5G era, aiming to reduce energy consumption and latency, most applications or services run on both edge cloud servers and cloud servers. However, the existing multi-cloud composition recommendation approaches were studied in the context of resources provided by a single cloud or multiple clouds; hence, they cannot cope with services that are jointly composed from multiple clouds and edge clouds in MEC. To this end, this paper first extends the structure of the multi-cloud service system and constructs a multi-cloud multi-edge cloud (MCMEC) environment. Technically, we model this problem with formal concept analysis (FCA) by building the service–provider lattice and the provider–cloud lattice, and select the candidate cloud compositions that satisfy the user's requirements. To obtain an optimized cloud combination that efficiently reduces energy consumption, monetary cost, and network latency, a skyline query mechanism is utilized to extract the optimized cloud composition. We evaluate our approach by comparing the proposed algorithm to a random-based service composition approach. A case study is also conducted to demonstrate the effectiveness and superiority of our proposed approach.
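
The skyline step can be read as a Pareto filter over candidate compositions scored on (energy, monetary cost, latency). A minimal sketch, with made-up candidate data:

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly
        # better in at least one (all objectives are minimized).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def skyline(candidates):
        # candidates: {composition: (energy, monetary_cost, latency)}.
        # Keeps exactly the Pareto-optimal cloud/edge-cloud compositions.
        return {name: cost for name, cost in candidates.items()
                if not any(dominates(other, cost)
                           for o, other in candidates.items() if o != name)}

    # 'c1' is dominated by 'c2' and drops out; 'c2' and 'c3' survive.
    print(skyline({'c1': (3, 2, 5), 'c2': (2, 2, 4), 'c3': (4, 1, 6)}))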


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6499
Author(s):  
Shuyang Li ◽  
Xiaohui Hu ◽  
Yongwen Du

Computation offloading technology extends cloud computing to the edge of the access network close to users, bringing many benefits to terminal devices with limited battery and computational resources. Nevertheless, the existing computation offloading approaches are challenging to apply to specific scenarios, such as the dense distribution of end-users and the sparse distribution of network infrastructure. The technological revolution in the unmanned aerial vehicle (UAV) and chip industry has granted UAVs more computing resources and promoted the emergence of UAV-assisted mobile edge computing (MEC) technology, which could be applied to those scenarios. However, in the MEC system with multiple users and multiple servers, making reasonable offloading decisions and allocating system resources is still a severe challenge. This paper studies the offloading decision and resource allocation problem in the UAV-assisted MEC environment with multiple users and servers. To ensure the quality of service for end-users, we set the weighted total cost of delay, energy consumption, and the size of discarded tasks as our optimization objective. We further formulate the joint optimization problem as a Markov decision process and apply the soft actor–critic (SAC) deep reinforcement learning algorithm to optimize the offloading policy. Numerical simulation results show that the offloading policy optimized by our proposed SAC-based dynamic computing offloading (SACDCO) algorithm effectively reduces the delay, energy consumption, and size of discarded tasks for the UAV-assisted MEC system. Compared with the fixed local-UAV scheme in the specific simulation setting, our proposed approach reduces system delay and energy consumption by approximately 50% and 200%, respectively.
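
A minimal sketch of how the stated objective typically plugs into SAC: the reward is the negative weighted total cost of delay, energy, and discarded-task size, and the critic target is SAC's soft Bellman backup. The weights and entropy coefficient below are illustrative assumptions, not the paper's values.

    import torch

    def step_reward(delay_s, energy_j, dropped_bits, w=(0.4, 0.3, 0.3)):
        # Negative weighted total cost, mirroring the optimization objective
        # described above; the weights are placeholders.
        return -(w[0] * delay_s + w[1] * energy_j + w[2] * dropped_bits)

    def soft_q_target(reward, next_q1, next_q2, next_logp, done,
                      alpha=0.2, gamma=0.99):
        # SAC's soft Bellman backup: the minimum of the twin critics plus the
        # entropy bonus -alpha * log pi(a'|s') encourages exploration.
        soft_v = torch.min(next_q1, next_q2) - alpha * next_logp
        return reward + gamma * (1.0 - done) * soft_v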

