A Cluster-Based Optimal Computation Offloading Decision Mechanism Using RL in the IIoT Field

2021 · Vol 12 (1) · pp. 384
Author(s): Seolwon Koo, Yujin Lim

In the Industrial Internet of Things (IIoT), various tasks are created dynamically because of small-quantity batch production. Hence, it is difficult to execute tasks only with devices that have limited battery lives and computation capabilities. To solve this problem, we adopted the mobile edge computing (MEC) paradigm. However, if numerous tasks must be processed on the MEC server (MECS), the server may not be able to handle all of them within the delay constraint owing to its limited computational capability and the high network overhead. Therefore, among cooperative computing techniques, we focus on task offloading to nearby devices using device-to-device (D2D) communication. Consequently, we propose a method that determines the optimal offloading strategy in an MEC environment with D2D communication. We aim to minimize the energy consumption of the devices and the task execution delay under given delay constraints. To solve this problem, we adopt a Q-learning algorithm, a form of reinforcement learning (RL). However, if a single learning agent decides whether to offload the tasks of all devices, its computational complexity grows tremendously. Thus, we cluster the nearby devices that comprise the job shop, and each cluster's head determines the optimal offloading strategy for the tasks that occur within its cluster. Simulation results show that the proposed algorithm outperforms the compared methods in terms of device energy consumption, task completion rate, task blocking rate, and throughput.
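
A minimal sketch of the kind of tabular Q-learning update a cluster head could use for this decision, assuming a discretized cluster state and three candidate execution sites (the state size, reward weights, and hyperparameters below are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

# Candidate actions for a task generated in the cluster (assumed encoding):
# 0 = execute locally, 1 = offload to a D2D neighbor, 2 = offload to the MECS.
N_STATES, N_ACTIONS = 64, 3          # discretized cluster states (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state: int) -> int:
    """Epsilon-greedy action selection by the cluster head."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)      # explore
    return int(np.argmax(Q[state]))              # exploit

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning update; the reward would combine negative device
    energy consumption and execution delay, with a penalty for missing the
    delay constraint."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```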

2021 · Vol 13 (5) · pp. 128
Author(s): Jun Liu, Xiaohui Lian, Chang Liu

In Space–Air–Ground Integrated Networks (SAGIN), computation offloading is a new way to improve the processing efficiency of node tasks and to ease the limitations of computing and storage resources. To address the large delay and energy cost of task offloading, which is caused by the complex and variable offloading environment and the large number of offloading tasks, a computation offloading decision scheme based on Markov processes and Deep Q-Networks (DQN) is proposed. First, we select the optimal offloading network based on the movement characteristics of the task offloading process in the network. Then, the task offloading process is transformed into a Markov state transition process to build a model of the computation offloading decision process. Finally, delay and energy consumption weights are introduced into the DQN algorithm to update the computation offloading decision process, and the optimal low-cost offloading decision is obtained according to the task attributes. The simulation results show that, compared with the traditional Lyapunov-based offloading decision scheme and the classical Q-learning algorithm, the delay and energy consumption are reduced by 68.33% and 11.21%, respectively, under equal weights when the offloading task volume exceeds 500 Mbit. Moreover, compared with offloading to edge nodes or backbone nodes of the network alone, the proposed mixed offloading model can satisfy more than 100 task requests with low energy consumption and low delay. The computation offloading decision proposed in this paper can therefore effectively reduce the delay and energy consumption of task offloading in the Space–Air–Ground Integrated Network environment, and can select the optimal offloading sites according to the characteristics of the task itself.
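
A minimal sketch of the weighted delay/energy cost and a small Q-network of the kind a DQN-based offloading decision could use (the weights, state dimension, and layer sizes are assumptions for illustration):

```python
import torch
import torch.nn as nn

W_DELAY, W_ENERGY = 0.5, 0.5     # equal weights, as in the reported comparison

def offloading_cost(delay: float, energy: float) -> float:
    # Lower weighted cost is better; the negated cost can serve as the reward.
    return W_DELAY * delay + W_ENERGY * energy

class QNetwork(nn.Module):
    """Maps the offloading state (task attributes, link quality, node loads)
    to one Q-value per candidate offloading site."""
    def __init__(self, state_dim: int, n_sites: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_sites),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)
```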


2021 · Vol 13 (23) · pp. 13016
Author(s): Rami Naimi, Maroua Nouiri, Olivier Cardin

The flexible job shop problem (FJSP) has been studied in recent decades due to its dynamic and uncertain nature. Responding to a system perturbation intelligently and with minimal energy-consumption variation is an important matter. Fortunately, thanks to the development of artificial intelligence and machine learning, many researchers are using these techniques to solve the rescheduling problem in a flexible job shop. Reinforcement learning, a popular approach in artificial intelligence, is often used in rescheduling. This article presents a Q-learning rescheduling approach to the flexible job shop problem that combines energy and productivity objectives in a context of machine failure. First, a genetic algorithm was adopted to generate the initial predictive schedule, and then rescheduling strategies were developed to handle machine failures. As the system should be capable of reacting quickly to unexpected events, a multi-objective Q-learning algorithm is proposed and trained to select the optimal rescheduling methods that minimize the makespan and the energy consumption variation at the same time. This approach was evaluated on benchmark instances to assess its performance.
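
A minimal sketch of how a scalarized multi-objective Q-learning agent could pick a rescheduling strategy after a machine failure; the strategy set, weights, and state discretization are illustrative assumptions:

```python
import numpy as np

STRATEGIES = ["right_shift", "partial_reschedule", "total_reschedule"]  # assumed set
W_MAKESPAN, W_ENERGY = 0.5, 0.5
N_STATES = 128                                  # discretized shop states (assumed)
Q = np.zeros((N_STATES, len(STRATEGIES)))

def reward(delta_makespan: float, delta_energy: float) -> float:
    """Smaller increases in makespan and energy-consumption variation are better."""
    return -(W_MAKESPAN * delta_makespan + W_ENERGY * delta_energy)

def select_strategy(state: int, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the rescheduling method after a failure."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(STRATEGIES))
    return int(np.argmax(Q[state]))
```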


2022 · Vol 2022 · pp. 1-13
Author(s): Ping Qi

Traditional intent recognition algorithms for intelligent prostheses often use deep learning. However, deep learning's high accuracy comes at the expense of high computational and energy consumption requirements. Mobile edge computing is a viable solution to meet the high computation and real-time execution requirements of deep learning algorithms on mobile devices. In this paper, we consider the computation offloading problem of multiple heterogeneous edge servers in the intelligent prosthesis scenario. Firstly, we present the problem definition and the detailed design of an MEC-based task offloading model for deep neural networks. Then, considering the mobility of amputees, a mobility-aware energy consumption model and latency model are proposed. By deploying the deep learning-based motion intent recognition algorithm on an intelligent prosthesis in a real-world MEC environment, the effectiveness of the task offloading and scheduling strategy is demonstrated. The experimental results show that the proposed algorithms can always find the optimal task offloading and scheduling decision.
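
A rough sketch of what a mobility-aware latency and energy estimate for offloading one intent recognition inference could look like; the path-loss model, bandwidth, and power constants are assumptions, not the paper's model:

```python
import math

BANDWIDTH_HZ = 10e6      # uplink bandwidth (assumed)
TX_POWER_W = 0.5         # device transmit power (assumed)
NOISE_W = 1e-9           # noise power (assumed)

def uplink_rate(distance_m: float, path_loss_exp: float = 3.0) -> float:
    """Shannon-type rate that shrinks as the amputee moves away from the server."""
    gain = distance_m ** (-path_loss_exp)
    return BANDWIDTH_HZ * math.log2(1 + TX_POWER_W * gain / NOISE_W)

def offload_cost(task_bits: float, cycles: float, server_freq_hz: float,
                 distance_m: float) -> tuple[float, float]:
    """Returns (latency_s, device_energy_J) for offloading one inference task."""
    t_tx = task_bits / uplink_rate(distance_m)   # upload delay
    t_exec = cycles / server_freq_hz             # execution delay on the edge server
    return t_tx + t_exec, TX_POWER_W * t_tx
```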


Author(s): Siqi Mu, Zhangdui Zhong

With the diversity of communication technologies and the heterogeneity of computation resources at the network edge, both the edge cloud and peer devices (collaborators) can be scavenged to provide computation resources for resource-limited Internet-of-Things (IoT) devices. In this paper, a novel cooperative computing paradigm is proposed, in which the computation resources of the IoT device, opportunistically idle collaborators, and the dedicated edge cloud are fully exploited. Collaborators provide computation assistance when idle and offloading assistance when busy. Considering the channel randomness and the opportunistic computation resource share of collaborators, we study the stochastic offloading control for an IoT device, i.e., how much of the computation load is processed locally, offloaded to the edge cloud, and offloaded to a collaborator. The problem is formulated as a finite-horizon Markov decision problem with the objective of minimizing the expected total energy consumption of the IoT device and the collaborator, subject to a hard computation deadline constraint. The optimal offloading policy is derived based on stochastic optimization theory, which demonstrates that the energy consumption can be reduced by a proportional factor through cooperative computing. More energy is saved with better wireless channel conditions or higher computation energy efficiency of collaborators. Simulation results validate the optimality of the proposed policy and the efficiency of cooperative computing between end devices and the edge cloud, compared to several other offloading schemes.
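
A minimal sketch of the per-slot energy terms that such a local/edge/collaborator split trades off (the CPU energy model, rates, and constants are assumptions; the paper's policy is derived from the full finite-horizon MDP, not from this one-step cost):

```python
KAPPA = 1e-27            # effective switched capacitance of the CPU (assumed)
F_LOCAL = 1e9            # local CPU frequency in cycles/s (assumed)
CYCLES_PER_BIT = 1000    # computation intensity (assumed)
RATE_EDGE = 5e6          # bits/s to the edge cloud in this slot (assumed)
RATE_COLL = 2e6          # bits/s to the collaborator in this slot (assumed)
P_TX = 0.5               # transmit power in W (assumed)

def slot_energy(local_bits: float, edge_bits: float, coll_bits: float) -> float:
    """Energy spent in one slot when the remaining load is split three ways."""
    e_local = KAPPA * (F_LOCAL ** 2) * local_bits * CYCLES_PER_BIT
    e_tx = P_TX * (edge_bits / RATE_EDGE + coll_bits / RATE_COLL)
    return e_local + e_tx
```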


Sensors · 2021 · Vol 21 (19) · pp. 6499
Author(s): Shuyang Li, Xiaohui Hu, Yongwen Du

Computation offloading technology extends cloud computing to the edge of the access network close to users, bringing many benefits to terminal devices with limited battery and computational resources. Nevertheless, the existing computation offloading approaches are challenging to apply to specific scenarios, such as the dense distribution of end-users and the sparse distribution of network infrastructure. The technological revolution in the unmanned aerial vehicle (UAV) and chip industry has granted UAVs more computing resources and promoted the emergence of UAV-assisted mobile edge computing (MEC) technology, which could be applied to those scenarios. However, in the MEC system with multiple users and multiple servers, making reasonable offloading decisions and allocating system resources is still a severe challenge. This paper studies the offloading decision and resource allocation problem in the UAV-assisted MEC environment with multiple users and servers. To ensure the quality of service for end-users, we set the weighted total cost of delay, energy consumption, and the size of discarded tasks as our optimization objective. We further formulate the joint optimization problem as a Markov decision process and apply the soft actor–critic (SAC) deep reinforcement learning algorithm to optimize the offloading policy. Numerical simulation results show that the offloading policy optimized by our proposed SAC-based dynamic computing offloading (SACDCO) algorithm effectively reduces the delay, energy consumption, and size of discarded tasks for the UAV-assisted MEC system. Compared with the fixed local-UAV scheme in the specific simulation setting, our proposed approach reduces system delay and energy consumption by approximately 50% and 200%, respectively.
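
A minimal sketch of a weighted total cost of this form, which would be negated and fed to the SAC agent as the reward; the weights and the absence of normalization are assumptions:

```python
W_DELAY, W_ENERGY, W_DROP = 0.4, 0.4, 0.2    # assumed weights

def system_cost(total_delay_s: float, total_energy_j: float,
                dropped_bits: float) -> float:
    """Weighted total cost of delay, energy consumption, and discarded task size."""
    return W_DELAY * total_delay_s + W_ENERGY * total_energy_j + W_DROP * dropped_bits

def reward(total_delay_s: float, total_energy_j: float, dropped_bits: float) -> float:
    # The SAC agent maximizes reward, so the cost is negated.
    return -system_cost(total_delay_s, total_energy_j, dropped_bits)
```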


2009 · Vol 28 (12) · pp. 3268-3270
Author(s): Chao WANG, Jing GUO, Zhen-qiang BAO

Sensors · 2021 · Vol 21 (1) · pp. 229
Author(s): Xianzhong Tian, Juan Zhu, Ting Xu, Yanjun Li

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and huge energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires significant amounts of data to be transferred over the wireless network and also results in long latency. To solve this problem, offloading partial DNN computation to edge clouds has been proposed to realize collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading failures. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) to adapt to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adapts well to different network conditions.
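
For the chain-topology case, the partition search can be illustrated with a short brute-force sketch: the first k layers run on the device, the intermediate tensor is uploaded, and the remaining layers run on the edge cloud. The per-layer timings and bandwidth here are assumed inputs, and this omits the mobility handling that MDPO adds:

```python
def best_partition(local_ms, edge_ms, out_bytes, bandwidth_mbps):
    """local_ms[i] / edge_ms[i]: time of layer i on the device / edge (ms);
    out_bytes[k]: bytes uploaded if the first k layers run locally
    (out_bytes[0] is the raw input). Returns (cut, latency_ms)."""
    n = len(local_ms)
    best_cut, best_latency = 0, float("inf")
    for cut in range(n + 1):                # cut == n means run everything locally
        t_local = sum(local_ms[:cut])
        t_tx = 0.0 if cut == n else out_bytes[cut] * 8 / (bandwidth_mbps * 1e6) * 1e3
        t_edge = sum(edge_ms[cut:])
        latency = t_local + t_tx + t_edge
        if latency < best_latency:
            best_cut, best_latency = cut, latency
    return best_cut, best_latency
```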


Author(s): Jun Long, Yueyi Luo, Xiaoyu Zhu, Entao Luo, Mingfeng Huang

With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are being deployed in smart cities. These sensing devices generate various kinds of tasks, which need to be sent to the cloud for processing. Usually, the sensing devices are not equipped with wireless modules, because that is neither economical nor energy saving. Thus, finding a way to offload tasks for sensing devices is a challenging problem. However, many vehicles move around the city and can communicate with sensing devices in an effective and low-cost way. In this paper, we propose a computation offloading scheme through mobile vehicles in an IoT-edge-cloud network. The sensing devices generate tasks and transmit them to vehicles; the vehicles then decide whether to compute each task in the local vehicle, on the MEC server, or in the cloud center. The computation offloading decision is made based on a utility function of the energy consumption and transmission delay, and a deep reinforcement learning technique is adopted to make the decisions. The proposed method makes full use of the existing infrastructure to implement task offloading for sensing devices; the experimental results show that the proposed solution achieves the maximum reward and decreases delay.
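
A minimal sketch of a utility function over energy consumption and transmission delay, together with a greedy choice among the three execution sites; the weights and estimators are assumptions, and in the paper the mapping from state to decision is learned with deep reinforcement learning rather than chosen greedily:

```python
SITES = ("vehicle", "mec_server", "cloud_center")
W_ENERGY, W_DELAY = 0.5, 0.5                 # assumed utility weights

def utility(energy_j: float, delay_s: float) -> float:
    """Higher utility for lower energy consumption and transmission delay."""
    return -(W_ENERGY * energy_j + W_DELAY * delay_s)

def greedy_decision(estimates: dict[str, tuple[float, float]]) -> str:
    """estimates maps each site to its predicted (energy_j, delay_s) for the task."""
    return max(SITES, key=lambda site: utility(*estimates[site]))
```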


Author(s): Qingzhu Wang, Xiaoyun Cui

As mobile devices become more and more powerful, applications generate a large number of computing tasks, and mobile devices themselves cannot meet the needs of users. This article proposes a computation offloading model in which the execution units include mobile devices, an edge server, and a cloud server. Previous studies on joint optimization only considered task execution time and the energy consumption of mobile devices, ignoring the energy consumption of the edge and cloud servers. However, edge server and cloud server energy consumption have a significant impact on the final offloading decision. This paper comprehensively considers the execution time and energy consumption of all three execution units, and formulates the task offloading decision as a single-objective optimization problem. A genetic algorithm with elitism preservation and a random strategy is adopted to obtain the optimal solution of the problem. Finally, simulation experiments show that the proposed computation offloading model achieves a lower fitness value than other computation offloading models.
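
A minimal sketch of a genetic algorithm with elitism preservation and random mutation over an offloading decision vector (one gene per task: 0 = mobile device, 1 = edge server, 2 = cloud server); the population size, rates, and the placeholder fitness are assumptions:

```python
import random

N_TASKS, POP, GENS, ELITE = 20, 40, 100, 2   # assumed GA parameters

def fitness(chromosome):
    # Placeholder: here the weighted execution time and energy consumption of
    # all three execution units would be evaluated (lower is better).
    return sum(chromosome)

def evolve():
    pop = [[random.randint(0, 2) for _ in range(N_TASKS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        elite = pop[:ELITE]                          # elitism preservation
        children = []
        while len(children) < POP - ELITE:
            p1, p2 = random.sample(pop[:POP // 2], 2)
            cut = random.randint(1, N_TASKS - 1)
            child = p1[:cut] + p2[cut:]              # one-point crossover
            if random.random() < 0.1:                # random mutation strategy
                child[random.randrange(N_TASKS)] = random.randint(0, 2)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```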


2018 · Vol 32 (34n36) · pp. 1840112
Author(s): Xiaoxing Zhang, Zhicheng Ji, Yan Wang

In this paper, a multi-objective flexible job shop scheduling problem (MOFJSP) was studied systematically. A novel energy-saving scheduling model was established that considers makespan and total energy consumption simultaneously. Different from previous studies, four types of energy consumption were considered in this model: processing energy, idle energy, transport energy, and turn-on/off energy. In addition, a turn-off strategy was adopted for energy saving. A modified shuffled frog-leaping algorithm (SFLA) was applied to solve this model. Moreover, multi-point crossover and neighborhood search operators were both employed to obtain optimal solutions. Experiments were conducted to verify the performance of the SFLA against a non-dominated sorting genetic algorithm with blood variation (BVNSGA-II). The results show that the proposed algorithm and turn-off strategy are effective.
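
A minimal sketch of the four-part energy accounting described above, together with the idle-versus-turn-off comparison behind a turn-off strategy; the power ratings and the decision rule are illustrative assumptions:

```python
def total_energy(processing_j: float, idle_j: float,
                 transport_j: float, on_off_j: float) -> float:
    """Sum of the four energy types: processing, idle, transport, turn-on/off."""
    return processing_j + idle_j + transport_j + on_off_j

def idle_interval_energy(idle_time_s: float, p_idle_w: float, e_on_off_j: float) -> float:
    """Turn-off strategy: power a machine down during an idle interval only if
    restarting it costs less energy than idling through the interval."""
    return min(p_idle_w * idle_time_s, e_on_off_j)
```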

