A Resource Allocation Scheme for Joint Optimizing Energy-Consumption and Delay in Collaborative Edge Computing-Based Industrial IoT

Author(s):  
Zilong Jin ◽  
Chengbo Zhang ◽  
Yuanfeng Jin ◽  
Lejun Zhang ◽  
Jian Su
Author(s):  
Yiwei Zhang ◽  
Min Zhang ◽  
Caixia Fan ◽  
Fuqiang Li ◽  
Baofang Li

Abstract: With the emergence and development of 5G technology, Mobile Edge Computing (MEC) has become closely integrated with Internet of Vehicles (IoV) technology, effectively supporting and improving network performance in the IoV. However, the high-speed mobility of vehicles and the diversity of communication quality make computing-task offloading strategies more complex. To address this problem, this paper proposes a computing resource allocation scheme based on a deep reinforcement learning network for MEC scenarios in the IoV. First, the task resource allocation model for the IoV in the corresponding edge computing scenario is determined, taking the computing capacity of service nodes and vehicle speed as constraints. A mathematical model for task offloading and resource allocation is then established with the minimum total computing cost as the objective function. Next, a deep Q-learning network based on deep reinforcement learning is proposed to solve this resource allocation model. Moreover, experience replay is used to address the instability of the nonlinear neural-network function approximator, which avoids the curse of dimensionality and satisfies the low-overhead, low-latency operating requirements of resource allocation. Finally, simulation results show that the proposed scheme can effectively allocate the computing resources of the IoV in an edge computing environment: with 10 Kbits of user-uploaded data and 15 terminals, it still delivers low-overhead, low-latency network performance.
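The abstract names the core mechanics of the method (an epsilon-greedy Q-learning agent, a replay buffer sampled in mini-batches, and a temporal-difference update toward r + γ max Q) without implementation details. A minimal sketch of those mechanics is below; the class names, the linear function approximator standing in for the paper's deep network, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Experience replay: store transitions and sample them at random
    to decorrelate consecutive experiences and stabilize training."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = zip(*batch)
        return np.array(s), np.array(a), np.array(r), np.array(s2), np.array(d)

    def __len__(self):
        return len(self.buffer)


class LinearQ:
    """Stand-in for the paper's deep Q-network: a linear approximator
    Q(s, .) = s @ W, trained toward the TD target r + gamma * max_a' Q(s', a')."""

    def __init__(self, n_states, n_actions, lr=0.01, gamma=0.9):
        self.W = np.zeros((n_states, n_actions))
        self.lr, self.gamma = lr, gamma

    def q_values(self, state):
        return state @ self.W

    def act(self, state, eps=0.1):
        # Epsilon-greedy: with probability eps, explore a random offloading action.
        if random.random() < eps:
            return random.randrange(self.W.shape[1])
        return int(np.argmax(self.q_values(state)))

    def update(self, state, action, reward, next_state, done):
        # TD update; bootstrapping stops at terminal transitions.
        target = reward if done else reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.W[:, action] += self.lr * td_error * state
        return td_error
```

In the paper's setting, the state would presumably encode quantities such as vehicle speed and service-node capacity, an action would pick an offloading target and resource share, and the reward would be the negative of the total computing cost being minimized.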


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shanjun Zhan ◽  
Lisu Yu ◽  
Zhen Wang ◽  
Yichen Du ◽  
Yan Yu ◽  
...  

With the explosive growth of ubiquitous mobile services and the advent of the 5G era, ultra-dense wireless network (UDN) architectures have entered daily production and life. However, the massive access capacity provided by 5G networks and the dense deployment of micro base stations also bring challenges such as high energy consumption, high maintenance costs, and inflexibility. Fiber-based visible light communication (FVLC) offers large bandwidth and high speed, providing an efficient connection option for UDNs. Thus, to compensate for the poor flexibility of UDNs, we propose a new FVLC-UDN architecture based on software-defined networking (SDN). Specifically, SDN decouples the data plane and the control plane of each device and centralizes control of the LEDs in a cell through a unified control plane. This not only improves the network's resource allocation capability but also lets each LED act purely as a data-plane element, reducing its manufacturing and implementation costs. To obtain a better resource allocation scheme, this paper proposes a model for predicting cell traffic based on convolutional neural networks. By predicting the traffic of each cell in the control domain, the traffic trend and cell status over a future period can be obtained, so that a more efficient resource allocation scheme can be formulated proactively to reduce energy consumption and balance communication loads. Experimental results on a real cell-traffic dataset show that this method outperforms existing prediction methods when the size of the training dataset is limited.



