Computation offloading to edge cloud and dynamically resource-sharing collaborators in Internet of Things

Author(s):  
Siqi Mu ◽  
Zhangdui Zhong

Abstract: With the diversity of communication technologies and the heterogeneity of computation resources at the network edge, both the edge cloud and peer devices (collaborators) can be scavenged to provide computation resources for resource-limited Internet-of-Things (IoT) devices. In this paper, a novel cooperative computing paradigm is proposed, in which the computation resources of the IoT device, opportunistically idle collaborators, and the dedicated edge cloud are fully exploited. Collaborators provide computation assistance when idle and offloading assistance when busy. Considering the channel randomness and the opportunistic computation resource sharing of collaborators, we study stochastic offloading control for an IoT device, i.e., how much of the computation load is processed locally, offloaded to the edge cloud, and offloaded to a collaborator. The problem is formulated as a finite-horizon Markov decision problem with the objective of minimizing the expected total energy consumption of the IoT device and the collaborator while satisfying a hard computation deadline constraint. The optimal offloading policy is derived based on stochastic optimization theory, which demonstrates that energy consumption can be reduced by a proportional factor through cooperative computing. More energy is saved with better wireless channel conditions or higher computation energy efficiency of collaborators. Simulation results validate the optimality of the proposed policy and the efficiency of cooperative computing between end devices and the edge cloud, compared with several other offloading schemes.
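The finite-horizon structure described above lends itself to backward induction. The sketch below is a toy illustration of that idea, not the paper's model: the workload size, energy coefficients, and collaborator idle probability are all assumed values.

```python
# Illustrative finite-horizon MDP solved by backward induction (assumed constants).
# State = (remaining workload, collaborator idle?); action = units processed locally,
# offloaded to the edge, and offloaded to the collaborator in one slot.
import itertools

T = 4                      # slots before the hard deadline
W = 8                      # total workload, in discrete units
P_IDLE = 0.6               # probability the collaborator is idle in a slot (assumed)
E_LOCAL, E_EDGE, E_COLLAB = 1.0, 0.6, 0.4   # energy per unit (assumed)
CAP = 3                    # max units each resource can absorb per slot

def actions(idle):
    # enumerate feasible (local, edge, collaborator) splits for one slot
    for l, e, c in itertools.product(range(CAP + 1), repeat=3):
        if c == 0 or idle:          # collaborator usable only when idle
            yield l, e, c

# terminal values: deadline met only if no workload remains
V = {(w, i): (0.0 if w == 0 else float("inf")) for w in range(W + 1) for i in (0, 1)}
policy = {}
for t in reversed(range(T)):
    V_next, V = V, {}
    for w in range(W + 1):
        for idle in (0, 1):
            best, best_a = (0.0, None) if w == 0 else (float("inf"), None)
            for l, e, c in actions(idle):
                done = min(w, l + e + c)
                cost = l * E_LOCAL + e * E_EDGE + c * E_COLLAB
                w2 = w - done
                exp_future = P_IDLE * V_next[(w2, 1)] + (1 - P_IDLE) * V_next[(w2, 0)]
                if cost + exp_future < best:
                    best, best_a = cost + exp_future, (l, e, c)
            V[(w, idle)] = best
            policy[(t, w, idle)] = best_a

print("expected energy with full workload, collaborator idle:", V[(W, 1)])
print("first-slot action (local, edge, collaborator):", policy[(0, W, 1)])
```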

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yawen Zhang ◽  
Yifeng Miao ◽  
Shujia Pan ◽  
Siguang Chen

In order to effectively extend the lifetime of Internet of Things (IoT) devices, improve the energy efficiency of task processing, and build a self-sustaining and green edge computing system, this paper proposes an efficient and energy-saving computation offloading mechanism with energy harvesting for IoT. Specifically, based on comprehensive consideration of the local computing resources, the time allocation ratio for energy harvesting, and the offloading decision, an optimization problem that minimizes the total energy consumption of all user devices is formulated. To solve this optimization problem, a deep learning-based, efficient, and energy-saving offloading decision and resource allocation algorithm is proposed. The deep neural network architecture incorporates a regularization method, and the use of stochastic gradient descent accelerates the convergence rate of the developed algorithm and improves its generalization performance. Furthermore, the algorithm minimizes the total energy consumption of task processing by integrating momentum gradient descent to solve the resource allocation problem. Finally, the simulation results show that the proposed mechanism has a significant advantage in convergence rate and can achieve an offloading and resource allocation strategy that is close to the solution of the greedy algorithm.
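As a rough illustration of the ingredients named above (a regularized network trained with momentum-based gradient descent that outputs offloading decisions), the following sketch trains a tiny two-layer network on synthetic device states; the architecture, data, and hyperparameters are assumptions, not the authors' setup.

```python
# Minimal regularized MLP trained with momentum SGD to emit offloading decisions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 6))            # synthetic device states (channel, queue, ...)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels

W1, b1 = rng.normal(scale=0.1, size=(6, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mom, lam = 0.1, 0.9, 1e-3            # learning rate, momentum, L2 weight

for epoch in range(200):
    h = np.maximum(0, X @ W1 + b1)        # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid: offload probability
    dz2 = (p - y) / len(X)                # gradient of cross-entropy w.r.t. logits
    gW2, gb2 = h.T @ dz2 + lam * W2, dz2.sum(0)
    dh = dz2 @ W2.T * (h > 0)
    gW1, gb1 = X.T @ dh + lam * W1, dh.sum(0)
    for i, (pmat, g) in enumerate(zip([W1, b1, W2, b2], [gW1, gb1, gW2, gb2])):
        vel[i] = mom * vel[i] - lr * g    # momentum update
        pmat += vel[i]

decisions = (p > 0.5).astype(int)         # 1 = offload, 0 = compute locally
print("training accuracy:", (decisions == y).mean())
```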


2020 ◽  
pp. 1-19
Author(s):  
Ping Qi ◽  
Hong Shu ◽  
Qiang Zhu

Computation offloading is a key computing paradigm in mobile edge computing. The principle of computation offloading is to leverage powerful infrastructure to augment the computing capability of less powerful devices. However, most existing computation offloading algorithms assume that the mobile device is stationary and do not take into account the reliability of task execution. In this paper, we first present a formalized description of the workflow, the wireless signal, the smart medical scenario, and the moving path. Then, inspired by the Bayesian cognitive model, a trust evaluation model is presented to reduce the probability of task execution failure based on the reliable behaviors of multiple computation resources. According to the location and velocity of the mobile device, execution time and energy consumption models based on the moving path are constructed, and deferred task execution and task migration are introduced to guarantee service continuity. On this basis, considering the whole scheduling process from a global viewpoint, a genetic algorithm is used to solve the energy consumption optimization problem under the response time constraint. Experimental results show that, compared with traditional optimization algorithms, the proposed algorithm optimizes the workflow in the mobile edge environment, increasing the probability of successful execution by 20.4% and decreasing energy consumption by 21.5%.
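A genetic algorithm of the kind described can be sketched as follows; the chromosome encoding, per-task energy and delay figures, and deadline are hypothetical and stand in for the paper's workflow model.

```python
# Toy GA: each chromosome assigns every workflow task to a resource
# (0 = local, 1 = edge, 2 = cloud); fitness is total energy plus a penalty
# when the response-time constraint is violated.
import random

random.seed(1)
N_TASKS, DEADLINE = 10, 12.0
ENERGY = [0.8, 0.3, 0.5]          # energy per task on local / edge / cloud (assumed)
DELAY  = [2.0, 1.0, 1.5]          # delay per task on local / edge / cloud (assumed)

def fitness(chrom):
    energy = sum(ENERGY[g] for g in chrom)
    delay = sum(DELAY[g] for g in chrom)          # simplistic serial workflow
    penalty = 100.0 * max(0.0, delay - DEADLINE)  # soft deadline constraint
    return energy + penalty

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(3) if random.random() < rate else g for g in chrom]

pop = [[random.randrange(3) for _ in range(N_TASKS)] for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                               # keep the best schedules
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(30)]
    pop = elite + children

best = min(pop, key=fitness)
print("best assignment:", best, "fitness:", round(fitness(best), 2))
```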


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3064 ◽  
Author(s):  
Xiaohui Gu ◽  
Chen Ji ◽  
Guoan Zhang

Mobile-edge computation offloading (MECO) is a promising emerging technology for saving battery power on mobile devices (MDs) and/or reducing the latency of application execution by offloading highly demanding applications (either totally or partially) from MDs to nearby servers such as base stations. In this paper, we provide an offloading strategy for the joint optimization of the communication and computational resources, considering the trade-off between energy consumption and latency. The strategy is formulated as the solution to an optimization problem that minimizes the total energy consumption while satisfying the execution delay limit (or deadline). In the solution, the optimal transmission power and rate and the optimal fraction of the task to be offloaded are analytically derived to meet the optimization objective. We further establish the conditions under which the binary decisions (full offloading and no offloading) are optimal. We also explore how system parameters such as the latency constraint, task complexity, and local computing power affect the offloading strategy. Finally, the simulation results demonstrate the behavior of the proposed strategy and verify its energy efficiency.
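The energy/deadline trade-off behind partial offloading can be illustrated numerically. The sketch below brute-forces the offloading fraction rather than using the paper's analytical solution, and all task and radio constants are assumed.

```python
# Numerical scan over the offloaded fraction: minimize energy subject to the deadline.
import numpy as np

L_BITS   = 1e6          # task size in bits (assumed)
CYCLES_B = 1000         # CPU cycles needed per bit (assumed)
F_LOCAL  = 1e9          # local CPU frequency (Hz)
KAPPA    = 1e-27        # effective switched capacitance of the local CPU
P_TX     = 0.2          # transmit power (W)
RATE     = 5e6          # uplink rate (bit/s)
DEADLINE = 0.8          # execution deadline (s)

frac = np.linspace(0, 1, 1001)                        # fraction offloaded to the edge
t_local = (1 - frac) * L_BITS * CYCLES_B / F_LOCAL
t_off   = frac * L_BITS / RATE                        # edge compute time neglected here
e_local = KAPPA * F_LOCAL ** 2 * (1 - frac) * L_BITS * CYCLES_B
e_off   = P_TX * t_off
energy  = e_local + e_off
feasible = np.maximum(t_local, t_off) <= DEADLINE     # local and uplink run in parallel

best = np.argmin(np.where(feasible, energy, np.inf))
print(f"offload fraction {frac[best]:.2f}, energy {energy[best]*1e3:.2f} mJ")
```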


2021 ◽  
Vol 12 (1) ◽  
pp. 384
Author(s):  
Seolwon Koo ◽  
Yujin Lim

In the Industrial Internet of Things (IIoT), various tasks are created dynamically because of small-batch production. Hence, it is difficult to execute tasks only with devices that have limited battery life and computation capability. To solve this problem, we adopt the mobile edge computing (MEC) paradigm. However, if there are numerous tasks to be processed on the MEC server (MECS), it may not be possible to handle all of them on the server within the delay constraint, owing to its limited computational capability and the high network overhead. Therefore, among cooperative computing techniques, we focus on task offloading to nearby devices using device-to-device (D2D) communication. Consequently, we propose a method that determines the optimal offloading strategy in an MEC environment with D2D communication. We aim to minimize the energy consumption of the devices and the task execution delay under certain delay constraints. To solve this problem, we adopt Q-learning, a reinforcement learning (RL) algorithm. However, if a single learning agent decides whether to offload the tasks of all devices, the computational complexity of that agent increases tremendously. Thus, we cluster the nearby devices that comprise the job shop, and each cluster head determines the optimal offloading strategy for the tasks that occur within its cluster. Simulation results show that the proposed algorithm outperforms the compared methods in terms of device energy consumption, task completion rate, task blocking rate, and throughput.
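A tabular Q-learning loop in the spirit of the scheme above might look like the following; the state space, reward shaping, and environment dynamics are toy assumptions rather than the authors' simulation.

```python
# Toy tabular Q-learning: a cluster head observes a coarse state and chooses where
# to send the next task: 0 = local device, 1 = D2D neighbour, 2 = MEC server.
import random

random.seed(0)
N_STATES, N_ACTIONS = 9, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    # hypothetical environment: reward = -(energy + delay), random next state
    energy = [0.8, 0.4, 0.6][action] * (1 + 0.1 * (state % 3))
    delay = [0.5, 0.7, 0.3][action]
    return random.randrange(N_STATES), -(energy + delay)

state = 0
for _ in range(20000):
    if random.random() < EPS:                      # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

print("greedy action per state:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```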


2021 ◽  
Vol 13 (23) ◽  
pp. 4853
Author(s):  
Dawei Wei ◽  
Ning Xi ◽  
Jianfeng Ma ◽  
Lei He

Unmanned aerial vehicles (UAVs) play an increasingly important role in the Internet of Things (IoT) for remote sensing and device interconnection. Due to limited computing capacity and energy, a UAV cannot handle complex tasks on its own. Recently, computation offloading has provided a promising way for UAVs to handle complex tasks through deep reinforcement learning (DRL)-based methods. However, existing DRL-based computation offloading methods protect only usage pattern privacy and location privacy. In this paper, we consider a new privacy issue in UAV-assisted IoT, namely computation offloading preference leakage, which lacks thorough study. To cope with this issue, we propose a novel privacy-preserving online computation offloading method for UAV-assisted IoT. Our method integrates a differential privacy mechanism into deep reinforcement learning, which protects the UAV's offloading preference. We provide a formal analysis of the security and utility loss of our method. Extensive real-world experiments are conducted. The results demonstrate that, compared with baseline methods, our method can learn a cost-efficient computation offloading policy without preference leakage and without a priori knowledge of the wireless channel model.
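One common way to combine differential privacy with learning-based offloading is to clip and perturb the policy's gradient updates (DP-SGD style). The sketch below shows that mechanism on a stand-in gradient; the clipping bound, noise scale, and policy parameters are illustrative assumptions, not the paper's exact construction.

```python
# Clip-and-noise gradient updates: bounded sensitivity plus calibrated Gaussian noise.
import numpy as np

rng = np.random.default_rng(42)
CLIP_NORM = 1.0         # per-update gradient clipping bound (assumed)
SIGMA = 0.8             # noise multiplier: the privacy/utility trade-off knob (assumed)
LR = 0.05

theta = np.zeros(4)     # parameters of a tiny offloading policy (assumed)
for step in range(1000):
    grad = rng.normal(size=4) + np.array([0.5, -0.2, 0.0, 0.1])  # stand-in gradient
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, CLIP_NORM / (norm + 1e-12))           # clip to bound sensitivity
    grad += rng.normal(scale=SIGMA * CLIP_NORM, size=4)          # add calibrated noise
    theta -= LR * grad                                           # private update

print("parameters after private training:", np.round(theta, 3))
```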


2021 ◽  
Vol 17 (12) ◽  
pp. 155014772110648
Author(s):  
Zengrong Zhang

To meet the demand for efficient computing services in big data scenarios, a cloud-edge collaborative computing allocation strategy based on deep reinforcement learning, which draws on the powerful computing capability of the cloud, is proposed. First, based on comprehensive consideration of computing resources, bandwidth, and migration decisions, an optimization problem is constructed that minimizes the weighted sum of all users' task execution delays and energy consumption. Second, a Q-learning-based dynamic offloading scheduling algorithm is proposed for this optimization problem. The algorithm makes full use of the computing power of both the cloud and the edge, effectively meeting the demand for efficient computing services in Internet of Things scenarios. Finally, when the environment of edge nodes in the edge cloud changes dynamically, the algorithm can adaptively adjust the migration strategy. Experiments show that, with 30 Internet of Things devices, the proposed algorithm reduces the total energy consumption of the devices by 24.67% and 19.44%, respectively, compared with the other algorithms. The experimental results show that the proposed algorithm can effectively improve the success rate of task offloading and execution, thereby reducing local energy consumption.
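The weighted delay-plus-energy objective mentioned above can be made concrete with a small per-task placement comparison; the site parameters, weights, and task sizes below are hypothetical.

```python
# Pick, per task, the placement minimizing w_t * delay + w_e * energy.
CPU_CYCLES = 2e9            # cycles required by the task (assumed)
DATA_BITS = 4e6             # input data to transfer (assumed)
SITES = {                   # (CPU frequency Hz, energy per cycle J, uplink bit/s)
    "local": (1e9, 1e-9, None),
    "edge":  (4e9, 0.0,  10e6),
    "cloud": (20e9, 0.0, 10e6),
}
P_TX, BACKHAUL_DELAY = 0.2, 0.05          # device transmit power, extra cloud delay (s)
W_TIME, W_ENERGY = 0.5, 0.5               # weights of the two objectives

def cost(site):
    freq, e_cycle, rate = SITES[site]
    t_tx = 0.0 if rate is None else DATA_BITS / rate
    delay = t_tx + CPU_CYCLES / freq + (BACKHAUL_DELAY if site == "cloud" else 0.0)
    energy = CPU_CYCLES * e_cycle if site == "local" else P_TX * t_tx
    return W_TIME * delay + W_ENERGY * energy, delay, energy

for site in SITES:
    c, d, e = cost(site)
    print(f"{site:5s}  delay {d:.2f}s  energy {e:.3f}J  weighted cost {c:.3f}")
print("chosen placement:", min(SITES, key=lambda s: cost(s)[0]))
```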


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Anwen Wang ◽  
Xianjia Meng ◽  
Lvju Wang ◽  
Xiang Ji ◽  
Hao Chen ◽  
...  

Wireless sensor networks, as the basic support for the Internet of Things, have gained widespread popularity and application. In intelligent agriculture, for example, sensor networks are used to obtain data on the growing environment of crops. However, the difficulty of supplying power to wireless nodes has seriously hindered the application and development of the Internet of Things. To address this problem, energy-saving methods such as low-power sleep scheduling are used on the nodes. Although these methods can prolong the working time of nodes, the nodes eventually fail because their energy is exhausted. Harvesting energy from the environment, such as solar energy, wind energy, and wireless signals, is another way to solve the energy problem of nodes. However, these methods are affected by weather, environment, and other factors and are therefore unstable, causing nodes to work discontinuously. In recent years, the development of wireless power transfer (WPT) has brought another solution to this problem. In this paper, a three-layer framework named TLFW is proposed for mobile-station data collection in rechargeable wireless sensor networks to keep nodes running indefinitely; it comprises the sensor layer, the cluster-head layer, and the mobile-station layer. The framework minimizes the total energy consumption of the system. The simulation results show that the scheme can reduce the energy consumption of the entire system compared with a Mobile Station in a Rechargeable Sensor Network (MSiRSN).
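To give a feel for the energy bookkeeping such a three-layer framework has to minimize, the sketch below tallies one collection round with the standard first-order radio model; the node layout, cluster-head and station positions, and radio constants are made up, and this is not the TLFW algorithm itself.

```python
# One collection round: sensors -> nearest cluster head -> mobile station.
import math, random

random.seed(3)
E_ELEC = 50e-9          # electronics energy per bit (J/bit)
E_AMP = 100e-12         # amplifier energy per bit per m^2
PACKET = 4000           # bits per report

nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
heads = [(25, 25), (75, 25), (25, 75), (75, 75)]     # cluster-head positions (assumed)
station = (50, 50)                                   # mobile-station stop point (assumed)

def tx_energy(dist, bits=PACKET):
    return bits * (E_ELEC + E_AMP * dist ** 2)       # transmitter side

def rx_energy(bits=PACKET):
    return bits * E_ELEC                             # receiver side

total = 0.0
loads = [0] * len(heads)
for x, y in nodes:
    h = min(range(len(heads)), key=lambda i: math.dist((x, y), heads[i]))
    total += tx_energy(math.dist((x, y), heads[h])) + rx_energy()
    loads[h] += 1
for i, (hx, hy) in enumerate(heads):                 # heads forward aggregates upward
    total += tx_energy(math.dist((hx, hy), station), bits=loads[i] * PACKET)

print(f"energy for one collection round: {total * 1e3:.3f} mJ")
```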


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Kottarathil Eashy Mary Reena ◽  
Abraham Theckethil Mathew ◽  
Lillykutty Jacob

Cyber-physical systems (CPS) include the class of Intelligent Building Automation Systems (IBAS), which increasingly utilize advanced technologies for long-term stability, economy, longevity, and user comfort. However, there are diverse issues associated with the wireless interconnection of sensors, controllers, and power-consuming physical end devices. In this paper, a novel CPS architecture for a wireless networked IBAS with a priority-based access mechanism is proposed for zones in a large building with dynamically varying occupancy. The priority status of zones, based on occupancy, is determined using a fuzzy inference engine. The Nondominated Sorting Genetic Algorithm-II (NSGA-II) is used to solve the optimization problem involving the conflicting demands of minimizing total energy consumption and maximizing occupant comfort levels in the building. An algorithm is proposed for power scheduling in sensor nodes to reduce their energy consumption. Wi-Fi with the Elimination-Yield Non-Preemptive Multiple Access (EY-NPMA) scheme is used to assign priority among nodes for wireless channel access. Controller design techniques are also proposed for ensuring the stability of the closed-loop control of the IBAS in the presence of packet dropouts due to unreliable network links.
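At the core of NSGA-II is the Pareto-dominance test over the two conflicting objectives named above (minimize energy, maximize comfort). The snippet below extracts a Pareto front from synthetic candidate schedules; a full NSGA-II adds ranking, crowding distance, and genetic operators on top of this.

```python
# Pareto-front extraction for (energy to minimize, comfort to maximize).
import random

random.seed(7)
# each candidate: (total energy in kWh, occupant comfort score)
candidates = [(random.uniform(10, 30), random.uniform(0.5, 1.0)) for _ in range(30)]

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and better in at least one
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]
for energy, comfort in sorted(pareto_front):
    print(f"energy {energy:5.1f} kWh   comfort {comfort:.2f}")
```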


Author(s):  
Bing Lin ◽  
Kai Lin ◽  
Changhang Lin ◽  
Yu Lu ◽  
Ziqing Huang ◽  
...  

Abstract: Connected and Automated Vehicles (CAVs) are a transformative technology with great potential to improve urban traffic and driving safety. Electric Vehicles (EVs) are becoming the key subject of next-generation CAVs by virtue of their advantages in energy saving. Due to the limited endurance and computing capacity of EVs, it is challenging to meet the surging demand for computing-intensive and delay-sensitive in-vehicle intelligent applications. Therefore, computation offloading has been employed to extend a single vehicle's computing capacity. Although various offloading strategies have been proposed to achieve good computing performance in the Vehicular Edge Computing (VEC) environment, it remains challenging to jointly optimize the offloading failure rate and the total energy consumption of the offloading process. To address this challenge, in this paper, we establish a computation offloading model based on a Markov Decision Process (MDP), taking into consideration task dependencies, vehicle mobility, and the different computing resources available for task offloading. We then design a computation offloading strategy based on deep reinforcement learning and leverage the Deep Q-Network based on Simulated Annealing (SA-DQN) algorithm to optimize the joint objectives. Experimental results show that the proposed strategy effectively reduces the offloading failure rate and the total energy consumption of application offloading.
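The SA-DQN name suggests simulated-annealing-style exploration over Q-values, for example Boltzmann action selection with an annealed temperature. The sketch below illustrates only that exploration schedule with placeholder Q-values; it is an interpretation, not the paper's trained network.

```python
# Annealed Boltzmann (softmax) action selection: exploration fades as temperature drops.
import math, random

random.seed(11)
T0, T_MIN, DECAY = 2.0, 0.05, 0.999

def sa_select(q_values, temperature):
    # softmax over Q-values at the current temperature (higher T = more exploration)
    m = max(q_values)
    weights = [math.exp((q - m) / temperature) for q in q_values]
    return random.choices(range(len(q_values)), weights=weights)[0]

temperature = T0
q_values = [1.0, 1.2, 0.4]            # e.g. local / edge server / neighbouring vehicle
counts = [0, 0, 0]
for step in range(5000):
    counts[sa_select(q_values, temperature)] += 1
    temperature = max(T_MIN, temperature * DECAY)

print("action counts under annealed exploration:", counts)
```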


2022 ◽  
Author(s):  
Liping Qian

The integration of Maritime Internet of Things (M-IoT) technology and unmanned aerial/surface vehicles (UAVs/USVs) has been emerging as a promising navigational information technique in intelligent ocean systems. With the unprecedented increase of computation-intensive yet latency-sensitive marine mobile Internet services, mobile edge computing (MEC) and non-orthogonal multiple access (NOMA) have been envisioned as promising approaches for providing low-latency, reliable computing services and ultra-dense connectivity. In this paper, we investigate energy-efficient MEC via cooperative NOMA for UAV-assisted M-IoT networks, with the goal of minimizing energy consumption. We consider USVs that offload their computation workload to a UAV equipped with an edge-computing server, subject to the UAV's mobility. To improve the energy efficiency of offloading transmission and workload computation, we focus on minimizing the total energy consumption by jointly optimizing the USVs' offloaded workload, transmit power, and computation resource allocation, as well as the UAV trajectory, subject to the USVs' latency requirements. Although the formulated problem is a mixed discrete and non-convex program, we exploit vertical decomposition and propose a two-layered algorithm to solve it efficiently. Specifically, the top-layer algorithm optimizes the UAV trajectory based on Deep Reinforcement Learning (DRL), and the lower-layer algorithm optimizes the underlying multi-domain resource allocation problem based on the Lagrangian multiplier method. Numerical results are provided to validate the effectiveness of our proposed algorithms as well as the performance advantage of NOMA-enabled computation offloading in terms of overall energy consumption.
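The lower layer described above relies on the Lagrangian multiplier method. The sketch below applies that idea to a simplified variant: per-USV CPU frequencies are chosen to balance compute energy against latency under the UAV server's shared capacity, with the multiplier found by bisection. The objective weights and constants are assumptions, and the DRL trajectory layer is not shown.

```python
# Lagrangian pricing of a shared CPU: each USV minimizes its own term for a fixed
# multiplier; bisection on the multiplier enforces the capacity constraint.
import numpy as np

KAPPA, W_E, W_T = 1e-27, 1.0, 0.2       # capacitance constant and objective weights (assumed)
cycles = np.array([2e9, 3e9, 1.5e9])    # offloaded workload of each USV, in CPU cycles
F_MAX = 1e9                             # shared CPU capacity of the UAV edge server (Hz)
grid = np.linspace(0.2e9, 4e9, 400)     # candidate per-USV CPU frequencies

def best_f(lam):
    # weighted compute energy + weighted compute latency + lam * (capacity claimed)
    costs = (W_E * KAPPA * grid[None, :] ** 2 * cycles[:, None]
             + W_T * cycles[:, None] / grid[None, :]
             + lam * grid[None, :])
    return grid[np.argmin(costs, axis=1)]

# bisect on the multiplier until the shared-capacity constraint is satisfied
lo, hi = 0.0, 1e-6
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if best_f(mid).sum() > F_MAX else (lo, mid)

f = best_f(hi)                          # hi is always feasible by construction
print("per-USV frequencies (GHz):", np.round(f / 1e9, 2))
print("capacity used: %.0f%%" % (100 * f.sum() / F_MAX))
```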

