Energy-Optimal Latency-Constrained Application Offloading in Mobile-Edge Computing

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3064 ◽  
Author(s):  
Xiaohui Gu ◽  
Chen Ji ◽  
Guoan Zhang

Mobile-edge computation offloading (MECO) is a promising emerging technology for saving battery in mobile devices (MDs) and/or reducing application execution latency by offloading (either totally or partially) highly demanding applications from MDs to nearby servers such as base stations. In this paper, we provide an offloading strategy for the joint optimization of communication and computational resources that considers the trade-off between energy consumption and latency. The strategy is formulated as the solution to an optimization problem that minimizes the total energy consumption while satisfying the execution delay limit (or deadline). In the solution, the optimal transmission power and rate and the optimal fraction of the task to be offloaded are analytically derived to meet the optimization objective. We further establish the conditions under which the binary decisions (full offloading and no offloading) are optimal. We also explore how system parameters such as the latency constraint, task complexity, and local computing power affect the offloading strategy. Finally, simulation results demonstrate the behavior of the proposed strategy and verify its energy efficiency.
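The paper derives the optimal offloaded fraction analytically; as a rough illustration of the energy/deadline trade-off it optimizes, the toy model below scans the fraction numerically. All parameters (CPU energy coefficient, rate, transmit power, deadline) are invented stand-ins, not values from the paper.

```python
# Toy partial-offloading energy model; parameters are illustrative only.

def total_energy(frac, bits=1e6, cycles_per_bit=1e3,
                 f_local=1e9, kappa=1e-27,     # kappa: CPU energy coefficient
                 rate=5e6, p_tx=0.5, deadline=1.0):
    """MD energy when a fraction `frac` of the task is offloaded;
    returns None if the deadline cannot be met."""
    local_cycles = (1 - frac) * bits * cycles_per_bit
    t_local = local_cycles / f_local
    e_local = kappa * f_local ** 2 * local_cycles   # dynamic CPU energy
    t_tx = frac * bits / rate
    e_tx = p_tx * t_tx                              # radio transmit energy
    if max(t_local, t_tx) > deadline:               # paths run in parallel
        return None
    return e_local + e_tx

# Brute-force scan of the offloading fraction (the paper instead solves this
# in closed form); for these numbers full offloading wins.
feasible = [f / 100 for f in range(101) if total_energy(f / 100) is not None]
best = min(feasible, key=total_energy)
```

For the chosen constants the local-computing term dominates, so the scan lands on full offloading; tightening `deadline` or lowering `rate` pushes the optimum back toward partial or no offloading, mirroring the binary-decision conditions the paper establishes.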

Author(s):  
Zhuofan Liao ◽  
Jingsheng Peng ◽  
Bing Xiong ◽  
Jiawei Huang

Abstract With the combination of Mobile Edge Computing (MEC) and next-generation cellular networks, computation requests from end devices can be offloaded promptly and accurately to edge servers equipped on Base Stations (BSs). However, due to the densified heterogeneous deployment of BSs, an end device may be covered by more than one BS, which brings new challenges for the offloading decision, namely whether and where to offload computing tasks for low latency and energy cost. This paper formulates a multi-user-to-multi-servers (MUMS) edge computing problem in ultra-dense cellular networks. The MUMS problem is divided and conquered in two phases: server selection and offloading decision. In the server selection phase, mobile users are grouped to BSs considering both physical distance and workload. After the grouping, the original problem is divided into parallel multi-user-to-one-server offloading decision subproblems. To get fast and near-optimal solutions for these subproblems, a distributed offloading strategy based on a binary-coded genetic algorithm is designed to obtain an adaptive offloading decision. A convergence analysis of the genetic algorithm is given, and extensive simulations show that the proposed strategy significantly reduces the average latency and energy consumption of mobile devices. Compared with state-of-the-art offloading research, our strategy reduces the average delay by 56% and the total energy consumption by 14% in ultra-dense cellular networks.
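As a minimal sketch of the second phase, the snippet below runs a binary-coded genetic algorithm on one multi-user-to-one-server subproblem. The cost model (fixed per-user local costs plus a congestion term on the shared server) and every parameter are invented for illustration; the paper's fitness function is more elaborate.

```python
import random

# Toy binary-coded GA for one multi-user-to-one-server offloading subproblem.

def cost(bits, local_cost=(4, 3, 5, 2, 6), offload_cost=1.0, congestion=0.5):
    """Cost of a 0/1 offloading vector: each offloaded task pays a base
    transmission cost plus a congestion term growing with the number of
    offloaders; each local task pays its fixed local cost."""
    k = sum(bits)
    return sum(offload_cost + congestion * k if b else local_cost[i]
               for i, b in enumerate(bits))

def ga(n_users=5, pop=20, gens=50, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_users)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 2]                      # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_users)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            children.append(child)
        popn = elite + children
    return min(popn, key=cost)

best = ga()
```

For these numbers the optimum is to offload the three users with the highest local costs (vector `[1, 0, 1, 0, 1]`, cost 12.5); elitism guarantees the GA's best candidate never gets worse across generations.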


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yawen Zhang ◽  
Yifeng Miao ◽  
Shujia Pan ◽  
Siguang Chen

In order to effectively extend the lifetime of Internet of Things (IoT) devices, improve the energy efficiency of task processing, and build a self-sustaining and green edge computing system, this paper proposes an efficient and energy-saving computation offloading mechanism with energy harvesting for IoT. Specifically, based on comprehensive consideration of the local computing resource, the time allocation ratio of energy harvesting, and the offloading decision, an optimization problem that minimizes the total energy consumption of all user devices is formulated. To solve this optimization problem, a deep learning-based efficient and energy-saving offloading decision and resource allocation algorithm is proposed. The design of a deep neural network architecture incorporating a regularization method and the employment of stochastic gradient descent accelerate the convergence rate of the developed algorithm and improve its generalization performance. Furthermore, it minimizes the total energy consumption of task processing by integrating momentum gradient descent to solve the resource allocation problem. Finally, simulation results show that the proposed mechanism has a significant advantage in convergence rate and can achieve an offloading and resource allocation strategy close to the solution of the greedy algorithm.
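The momentum update the mechanism relies on can be shown on a one-variable surrogate. The objective below (a latency-like term plus a CPU-power-like term) and all constants are our stand-ins, not the paper's resource-allocation problem:

```python
# Gradient descent with momentum on a stand-in energy curve
# E(f) = a/f + b*f**2  (latency-like term + CPU-power-like term).

def momentum_gd(grad, x0, lr=0.05, beta=0.8, steps=1000):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)   # velocity accumulates past gradients
        x = x - lr * v
    return x

a, b = 1.0, 0.5
grad = lambda f: -a / f ** 2 + 2 * b * f   # dE/df
f_opt = momentum_gd(grad, x0=2.0)
# stationary point: f* = (a / (2*b)) ** (1/3) = 1.0
```

The velocity term damps the oscillation around the minimizer and speeds up convergence along shallow directions, which is the effect the paper exploits at much higher dimension.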


Author(s):  
Siqi Mu ◽  
Zhangdui Zhong

Abstract With the diversity of communication technology and the heterogeneity of computation resources at the network edge, both the edge cloud and peer devices (collaborators) can be scavenged to provide computation resources for resource-limited Internet-of-Things (IoT) devices. In this paper, a novel cooperative computing paradigm is proposed, in which the computation resources of the IoT device, opportunistically idle collaborators, and the dedicated edge cloud are fully exploited. Computation and offloading assistance is provided by collaborators in the idle and busy states, respectively. Considering the channel randomness and the opportunistic computation resource share of collaborators, we study stochastic offloading control for an IoT device, i.e., how much computation load is processed locally, offloaded to the edge cloud, and offloaded to a collaborator. The problem is formulated as a finite-horizon Markov decision problem with the objective of minimizing the expected total energy consumption of the IoT device and the collaborator, subject to a hard computation deadline constraint. The optimal offloading policy is derived based on stochastic optimization theory, which demonstrates that the energy consumption can be reduced by a proportional factor through cooperative computing. More energy saving is achieved with better wireless channel conditions or higher computation energy efficiency of collaborators. Simulation results validate the optimality of the proposed policy and the efficiency of the cooperative computing between end devices and the edge cloud, compared to several other offloading schemes.
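A finite-horizon Markov decision problem of this flavor can be solved by backward induction over (slots left, workload left). The miniature below observes a two-state channel each slot and splits work between local computing and offloading; all numbers (costs, capacities, channel probabilities) are illustrative, not the paper's model.

```python
from functools import lru_cache

# Tiny finite-horizon DP: each slot the device sees the channel state, then
# splits work between the local CPU and offloading. Illustrative numbers only.

E_LOCAL = 1.0                       # energy per unit computed locally
E_TX = {"good": 0.4, "bad": 1.6}    # offload energy per unit, by channel
P_GOOD = 0.6                        # i.i.d. two-state channel
CAP = 2                             # units per slot on each path

@lru_cache(maxsize=None)
def V(t, w):
    """Expected minimum energy-to-go: t slots left, w workload units left."""
    if w == 0:
        return 0.0
    if t == 0:
        return float("inf")         # hard deadline missed
    exp_cost = 0.0
    for state, prob in (("good", P_GOOD), ("bad", 1 - P_GOOD)):
        best = float("inf")
        for local in range(min(w, CAP) + 1):
            for off in range(min(w - local, CAP) + 1):
                c = (local * E_LOCAL + off * E_TX[state]
                     + V(t - 1, w - local - off))
                best = min(best, c)
        exp_cost += prob * best
    return exp_cost
```

The resulting policy offloads aggressively only when the channel is good and falls back to the local CPU otherwise, which is the qualitative behavior the paper's optimal policy exhibits.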


Author(s):  
Bing Lin ◽  
Kai Lin ◽  
Changhang Lin ◽  
Yu Lu ◽  
Ziqing Huang ◽  
...  

Abstract The Connected and Automated Vehicle (CAV) is a transformative technology with great potential to improve urban traffic and driving safety. The Electric Vehicle (EV) is becoming the key subject of next-generation CAVs by virtue of its advantages in energy saving. Due to the limited endurance and computing capacity of EVs, it is challenging to meet the surging demand for computing-intensive and delay-sensitive in-vehicle intelligent applications. Therefore, computation offloading has been employed to extend a single vehicle's computing capacity. Although various offloading strategies have been proposed to achieve good computing performance in the Vehicular Edge Computing (VEC) environment, it remains challenging to jointly optimize the offloading failure rate and the total energy consumption of the offloading process. To address this challenge, in this paper we establish a computation offloading model based on a Markov Decision Process (MDP), taking into consideration task dependencies, vehicle mobility, and the different computing resources available for task offloading. We then design a computation offloading strategy based on deep reinforcement learning and leverage a Deep Q-Network based on Simulated Annealing (SA-DQN) algorithm to optimize the joint objectives. Experimental results show that the proposed strategy effectively reduces the offloading failure rate and the total energy consumption of application offloading.
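The simulated-annealing ingredient can be sketched in isolation: a candidate action whose estimated cost is worse than the current best is still explored with Metropolis probability exp(-Δ/T), and the temperature decays over training. The DQN itself is omitted and this interface is our construction, not the paper's.

```python
import math
import random

def sa_accept(delta, temperature, rng=random):
    """Accept an action whose estimated cost is `delta` higher than the
    current best with probability exp(-delta / T); always accept improvements."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def cooled(t0=1.0, alpha=0.99, step=0):
    """Geometric cooling schedule: T_k = t0 * alpha**k."""
    return t0 * alpha ** step
```

Early in training (high T) even poor actions are explored often; as T decays the selection becomes effectively greedy, trading exploration for exploitation.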


2022 ◽  
Author(s):  
Liping Qian

The integration of Maritime Internet of Things (M-IoT) technology and unmanned aerial/surface vehicles (UAVs/USVs) has been emerging as a promising navigational information technique in intelligent ocean systems. With the unprecedented increase of computation-intensive yet latency-sensitive marine mobile Internet services, mobile edge computing (MEC) and non-orthogonal multiple access (NOMA) have been envisioned as promising approaches to providing low-latency, reliable computing services and ultra-dense connectivity. In this paper, we investigate energy-efficient MEC via cooperative NOMA for UAV-assisted M-IoT networks. We consider that USVs offload their computation workload to a UAV equipped with an edge-computing server, subject to the UAV's mobility. To improve the energy efficiency of offloading transmission and workload computation, we focus on minimizing the total energy consumption by jointly optimizing the USVs' offloaded workload, transmit power, and computation resource allocation as well as the UAV trajectory, subject to the USVs' latency requirements. Although the formulated problem is a mixed-discrete and non-convex program, we exploit vertical decomposition and propose a two-layered algorithm for solving it efficiently. Specifically, the top-layer algorithm solves the UAV trajectory optimization problem based on Deep Reinforcement Learning (DRL), and the underlying algorithm optimizes the multi-domain resource allocation problem based on the Lagrangian multiplier method. Numerical results validate the effectiveness of our proposed algorithms as well as the performance advantage of NOMA-enabled computation offloading in terms of overall energy consumption.
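The lower layer's Lagrangian-multiplier idea can be sketched on a single-constraint version: split a CPU budget across workloads to minimize total computation latency. Stationarity of the Lagrangian gives x_i = sqrt(c_i/lambda), and the multiplier is found by bisection until the budget binds. The model and all numbers are ours, not the paper's multi-domain formulation.

```python
import math

# One-constraint Lagrangian allocation: minimize sum(c_i / x_i)
# (latency proportional to cycles / allocated capacity) s.t. sum(x_i) <= B.

def allocate(cycles, budget, lo=1e-9, hi=1e9, iters=200):
    """Bisect the multiplier lambda: stationarity gives x_i = sqrt(c_i/lam);
    pick lam so the budget constraint binds."""
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        used = sum(math.sqrt(c / lam) for c in cycles)
        if used > budget:
            lo = lam          # allocations too big -> raise the price lambda
        else:
            hi = lam
    return [math.sqrt(c / lam) for c in cycles]

x = allocate([1.0, 4.0, 9.0], budget=6.0)
# closed form here: x_i proportional to sqrt(c_i) -> [1, 2, 3]
```

This is the classic "price" interpretation: the multiplier acts as a per-unit price on the shared resource, and bisection adjusts the price until demand equals the budget.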


2021 ◽  
Author(s):  
Xue Chen ◽  
Hongbo Xu ◽  
Guoping Zhang ◽  
Yun Chen ◽  
Ruijie Li

Abstract Mobile edge computing (MEC) is a potential technology to reduce the energy consumption and task execution delay of computation-intensive tasks on a mobile device (MD). The resource allocation of MEC is an optimization problem; however, the large amount of computation required by existing approaches may hinder its practical application. In this work, we propose a multiuser MEC framework based on unsupervised deep learning (DL) to reduce energy consumption and computation by offloading tasks to edge servers. The binary offloading decision and resource allocation are jointly optimized to minimize the energy consumption of MDs under latency and transmit power constraints. This joint optimization problem is a mixed-integer non-convex problem, which results in the gradient vanishing problem in backpropagation. To address this, we propose a novel binary computation offloading scheme (BCOS), in which a deep neural network (DNN) with an auxiliary network is designed. By using the auxiliary network as a teacher network, the student network can obtain the lossless gradient information in the joint training phase. As a result, a sub-optimal solution of the optimization problem can be acquired by the learning-based BCOS. Simulation results demonstrate that BCOS effectively solves the binary offloading problem with a trained network of low complexity.
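The gradient-vanishing issue BCOS targets can be shown in miniature: a hard 0/1 decision has zero gradient almost everywhere, so training backpropagates through a differentiable relaxation (the "teacher") and binarizes only for the final decision. This straight-through-style sketch is our construction, not the paper's exact network.

```python
import math

def forward(z):
    """One binary offloading decision from a scalar logit z."""
    soft = 1.0 / (1.0 + math.exp(-z))   # teacher: differentiable relaxation
    hard = 1.0 if soft >= 0.5 else 0.0  # student: hard 0/1 offloading decision
    return hard, soft

def grad(z):
    """Backward pass uses the relaxation's gradient d soft / d z
    (the hard threshold contributes zero gradient almost everywhere)."""
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)
```

The hard output is what the MD executes; the soft output is what the optimizer differentiates, so the logit still receives useful gradient signal during joint training.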


2020 ◽  
Vol 8 (1) ◽  
pp. 221-239
Author(s):  
Haniah Mahmudah ◽  
Okkie Puspitorini ◽  
Ari Wijayanti ◽  
Nur Adi Siswandari ◽  
Yetik Dwi Kusumaningrum

Over time, cellular communication technology has developed significantly from year to year, due to the increasing number of users and their growing demands. To meet these demands, many providers increase the number of new base station installations. This increase typically does not take power consumption into account, even though Base Stations (BSs) are the most dominant energy-consuming equipment in the cellular network, estimated at 60%-80% of the total energy consumption in the cellular industry. In addition, energy waste often occurs in the BS, where the emission power remains constant even if the number of users is small. Power consumption and energy savings are important issues because they affect CO2 emissions in the air. This paper proposes to save energy by turning off a BS (sleep mode) when its number of users is small and distributing its users to neighboring BSs, a technique called cell zooming. The cell size can zoom out when the traffic load is high and zoom in when the traffic load is low. To determine the central BS and neighboring BSs, a sequential-to-better-signal (SBS) scheme is used, which sorts neighboring BSs based on the SINR value received by the user. The results of this research show that base stations can save 29.12% of energy and reduce CO2 emissions by around 3580 kg/year. Saving energy consumption in this way also reduces air pollution, which is why this approach is called a green cellular network.
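The re-association step can be sketched as follows: a lightly loaded BS goes to sleep, and each of its users is handed to the awake neighbor with the best received SINR (the sequential-to-better-signal idea). The data layout, the load threshold, and all numbers are illustrative.

```python
# Cell-zooming sketch: sleep underused BSs, re-associate users best-SINR-first.

def rezone(users, bs_load, sleep_threshold=2):
    """users: {user: {bs: sinr_db}}; bs_load: {bs: number of users}.
    Returns (set of sleeping BSs, new user -> BS association)."""
    sleeping = {bs for bs, n in bs_load.items() if n <= sleep_threshold}
    assoc = {}
    for u, sinrs in users.items():
        # candidate BSs sorted best-SINR-first, skipping sleeping cells
        awake = sorted((bs for bs in sinrs if bs not in sleeping),
                       key=lambda bs: sinrs[bs], reverse=True)
        assoc[u] = awake[0] if awake else None   # None: coverage hole
    return sleeping, assoc

sleeping, assoc = rezone(
    {"u1": {"A": 12.0, "B": 9.5}, "u2": {"A": 3.0, "B": 7.0, "C": 6.0}},
    bs_load={"A": 5, "B": 4, "C": 1})
# C serves only 1 user, so it sleeps; u1 stays on A, u2 moves to B
```

A production scheme would also check that the zoomed-out neighbors have the capacity to absorb the handed-over users before actually sleeping the cell.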



