Linked-Object Dynamic Offloading (LODO) for the Cooperation of Data and Tasks on Edge Computing Environment

Electronics
2021
Vol 10 (17)
pp. 2156
Author(s):  
Svetlana Kim ◽  
Jieun Kang ◽  
YongIk Yoon

With the evolution of the Internet of Things (IoT), edge computing technology is used to efficiently process the rapidly increasing volume of data from various IoT devices. Edge computing offloading reduces data processing time and bandwidth usage by processing data in real time on the device where the data is generated or on a nearby server. Previous studies have proposed offloading between IoT devices through local-edge collaboration to relieve resource-constrained edge servers. However, they did not consider nearby edge servers in the same layer that hold spare computing resources. Consequently, quality of service (QoS) degrades due to the restricted resources of edge computing, and execution latency rises due to congestion. Finding an optimal target server for offloaded tasks in a rapidly changing dynamic environment remains challenging. Therefore, a new cooperative offloading method is needed to allocate limited edge computing resources efficiently among distributed edges. This paper proposes the LODO (linked-object dynamic offloading) algorithm, which balances load between edges by considering their ready or running states. The LODO algorithm processes the tasks in its list in descending order of the correlation between data and tasks, established through linked objects. Furthermore, dynamic offloading considers the running status of all cooperating terminals when scheduling task distribution. This decreases the average delay and average power consumption of terminals. In addition, the resource shortage problem can be alleviated by distributing task processing.
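
To make the linked-object idea concrete, here is a minimal Python sketch of ordering tasks by data-task correlation and preferring ready edges. The Task/Edge structures and the scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the linked-object ordering idea behind LODO.
# Task/Edge fields and the selection rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    correlation: float  # assumed data-task correlation score in [0, 1]

@dataclass
class Edge:
    name: str
    state: str          # "ready" or "running"
    load: float         # current utilization in [0, 1]

def order_by_linkage(tasks):
    """Process tasks in descending data-task correlation (linked objects)."""
    return sorted(tasks, key=lambda t: t.correlation, reverse=True)

def pick_target(edges):
    """Prefer a 'ready' edge; fall back to the least-loaded 'running' one."""
    ready = [e for e in edges if e.state == "ready"]
    pool = ready if ready else edges
    return min(pool, key=lambda e: e.load)

if __name__ == "__main__":
    tasks = [Task("t1", 0.4), Task("t2", 0.9), Task("t3", 0.7)]
    edges = [Edge("e1", "running", 0.8), Edge("e2", "ready", 0.5)]
    for t in order_by_linkage(tasks):
        print(t.name, "->", pick_target(edges).name)
```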

Author(s):  
Pallepati Vasavi ◽  
G Raja Ramesh

As the needs of recent applications have evolved, new research questions related to scalability, heterogeneity, and power consumption have arisen. These problems must be addressed for better utilization of MANETs. MANET nodes interact through multi-hop routing. AODV is a commonly used on-demand routing protocol for MANETs. In the existing literature, AODV has been analyzed many times, but the heterogeneity of the nodes has not been addressed. Heterogeneity may be defined as diversity among the nodes in resources or capability. The constrained, fluid, dynamic environment of a MANET is usually heterogeneous. In this paper, we analyze the routing performance as well as the energy-efficient behavior of the AODV routing protocol in both homogeneous and heterogeneous MANETs (H-MANETs), using performance parameters such as packet delivery ratio, throughput, average delay, average power consumption, and energy of alive nodes. Heterogeneity is introduced by giving the nodes different initial energies, unlike the homogeneous scenario. The simulation work was done using the network simulator NS-2. This work will be helpful for gaining insight into the effects of heterogeneity on the energy efficiency and other performance metrics of AODV.
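
The heterogeneous setup can be pictured with a toy model (not NS-2): heterogeneity simply means the nodes start with different initial energies. The drain model and all numbers below are invented for illustration.

```python
# Toy illustration of homogeneous vs. heterogeneous node energies.
# The uniform drain model and constants are assumptions, not NS-2 output.
import random

random.seed(1)

def simulate(initial_energies, drain_per_round=0.5, rounds=100):
    """Return alive-node counts per round under a random uniform drain."""
    energy = list(initial_energies)
    alive_history = []
    for _ in range(rounds):
        energy = [max(0.0, e - drain_per_round * random.uniform(0.5, 1.5))
                  for e in energy]
        alive_history.append(sum(e > 0 for e in energy))
    return alive_history

homogeneous = [25.0] * 20                        # all nodes start equal
heterogeneous = [random.uniform(10, 40) for _ in range(20)]

print("alive after 100 rounds (homogeneous): ", simulate(homogeneous)[-1])
print("alive after 100 rounds (heterogeneous):", simulate(heterogeneous)[-1])
```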


2020
Vol 2020
pp. 1-11
Author(s):  
Zhenxing Wang ◽  
Wanbo Zheng ◽  
Peng Chen ◽  
Yong Ma ◽  
Yunni Xia ◽  
...  

Recently, mobile edge computing (MEC) is widely believed to be a promising and powerful paradigm for bringing enterprise applications closer to data sources such as IoT devices or local edge servers. It is capable of energizing novel mobile applications, especially ultra-latency-sensitive ones, by providing powerful local computing capabilities and lower end-to-end delays. Nevertheless, various challenges, especially the reliability-guaranteed scheduling of multitask business processes (e.g., workflows) upon distributed edge resources and servers, are yet to be carefully addressed. In this paper, we propose a novel edge-environment-based multi-workflow scheduling method, which incorporates a reliability estimation model for edge workflows and a coevolutionary algorithm for yielding scheduling decisions. The proposed approach aims at maximizing the reliability, in terms of success rate, of services deployed upon edge infrastructures while minimizing the service invocation cost for users. We conduct simulation-based case studies using multiple well-known scientific workflow templates and a well-known dataset of edge resource locations. The results clearly suggest that our proposed approach outperforms traditional ones in terms of workflow success rate and monetary cost.
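
The two competing objectives can be sketched directly: workflow reliability as a product of per-task success probabilities, and invocation cost as a sum. The server table and numbers below are placeholders; the paper's coevolutionary search itself is not shown.

```python
# Sketch of the two objectives: workflow success rate (reliability)
# and invocation cost. Per-server values are made-up placeholders.
from math import prod

SERVERS = {                      # assumed per-server (success_prob, cost)
    "edge1": (0.99, 3.0),
    "edge2": (0.95, 1.0),
    "cloud": (0.999, 5.0),
}

def reliability(assignment):
    """Success rate of the whole workflow = product over its tasks."""
    return prod(SERVERS[s][0] for s in assignment)

def cost(assignment):
    return sum(SERVERS[s][1] for s in assignment)

# One candidate schedule for a 4-task workflow:
candidate = ["edge1", "edge2", "edge2", "cloud"]
print(f"reliability={reliability(candidate):.4f} cost={cost(candidate):.1f}")
```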


Sensors
2021
Vol 21 (4)
pp. 1484
Author(s):  
Md Delowar Hossain ◽  
Tangina Sultana ◽  
Md Alamgir Hossain ◽  
Md Imtiaz Hossain ◽  
Luan N. T. Huynh ◽  
...  

Multi-access edge computing (MEC) is a leading technology for meeting the demands of key performance indicators (KPIs) in 5G networks. However, in a rapidly changing dynamic environment, it is hard to find the optimal target server for processing offloaded tasks because end users' demands are not known in advance. Therefore, quality of service (QoS) deteriorates because of increasing task failures and long execution latency from congestion. To reduce latency and avoid task failures at resource-constrained edge servers, previous studies have proposed vertical offloading between mobile devices and the local edge, or between the local edge and a remote cloud. However, they ignored nearby edge servers in the same tier that have excess computing resources. Therefore, this paper introduces a fuzzy decision-based cloud-MEC collaborative task offloading management system called FTOM, which takes advantage of powerful remote cloud-computing capabilities and utilizes neighboring edge servers. The main objective of the FTOM scheme is to select the optimal target node for task offloading based on server capacity, latency sensitivity, and network conditions. Our proposed scheme makes dynamic decisions in which local or nearby MEC servers are preferred for offloading delay-sensitive tasks, while delay-tolerant, high-resource-demand tasks are offloaded to a remote cloud server. Simulation results affirm that our proposed FTOM scheme improves the rate of successfully executed offloaded tasks by approximately 68.5% and reduces task completion time by 66.6% when compared with a local edge offloading (LEO) scheme. The improvement and reduction rates are 32.4% and 61.5%, respectively, compared with a two-tier edge orchestration-based offloading (TTEO) scheme; 8.9% and 47.9%, respectively, compared with a fuzzy orchestration-based load balancing (FOLB) scheme; approximately 3.2% and 49.8%, respectively, compared with a fuzzy workload orchestration-based task offloading (WOTO) scheme; and approximately 38.6% and 55%, respectively, compared with a fuzzy edge-orchestration-based collaborative task offloading (FCTO) scheme.
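
The routing intuition (delay-sensitive to local/nearby MEC, delay-tolerant and heavy to cloud) can be caricatured with crisp thresholds. The actual FTOM scheme uses fuzzy decision logic; the cut-offs below are invented for illustration only.

```python
# Rule-of-thumb sketch of the FTOM target-selection intuition.
# Thresholds are invented; FTOM itself uses fuzzy membership functions.

def choose_target(latency_sensitivity, cpu_demand, local_load, neighbor_load):
    """All inputs normalized to [0, 1]; returns the chosen target tier."""
    if latency_sensitivity > 0.7:            # delay-sensitive task
        if local_load < 0.8:
            return "local MEC"
        if neighbor_load < 0.8:
            return "neighbor MEC"            # same-tier collaboration
        return "remote cloud"                # last resort
    if cpu_demand > 0.6:                     # delay-tolerant, heavy task
        return "remote cloud"
    return "local MEC" if local_load <= neighbor_load else "neighbor MEC"

print(choose_target(0.9, 0.5, 0.85, 0.4))   # -> neighbor MEC
```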


2021
Vol 21 (3)
pp. 1-21
Author(s):  
Laha Ale ◽  
Ning Zhang ◽  
Scott A. King ◽  
Jose Guardiola

A smart city improves operational efficiency and comfort of living by harnessing techniques such as the Internet of Things (IoT) to collect and process data for decision-making. To better support smart cities, data collected by IoT should be stored and processed appropriately. However, IoT devices are often task-specialized and resource-constrained, and thus they heavily rely on online computing and storage resources to accomplish various tasks. Moreover, cloud-based solutions centralize resources far away from the end IoT devices and cannot respond to users in time when massive numbers of tasks are offloaded through the congested core network. Therefore, by decentralizing resources spatially close to IoT devices, mobile edge computing (MEC) can reduce latency and improve service quality for a smart city, where service requests can be fulfilled in proximity. As the service demands exhibit spatial-temporal features, deploying MEC servers at optimal locations and allocating MEC resources play an essential role in efficiently meeting service requirements in a smart city. In this regard, it is essential to learn the distribution of resource demands in time and space. In this work, we first propose a spatio-temporal Bayesian hierarchical learning approach to learn and predict the distribution of MEC resource demand over space and time to facilitate MEC deployment and resource management. Second, the proposed model is trained and tested on real-world data, and the results demonstrate that the proposed method can achieve very high accuracy. Third, we demonstrate an application of the proposed method by simulating task offloading. Finally, the simulated results show that resources allocated based upon our model's predictions are exploited more efficiently than resources divided equally among all servers in unobserved areas.
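
The hierarchical flavor of such a model can be illustrated with a toy partial-pooling estimator: per-region demand means are shrunk toward a global mean. The variances and fake demand data below are assumptions, not the paper's model or dataset.

```python
# Toy partial-pooling estimator in the spirit of hierarchical Bayesian
# demand modeling. Priors, variances, and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
# Fake hourly MEC demand samples for 3 regions:
regions = [rng.normal(loc=m, scale=5.0, size=24) for m in (20.0, 35.0, 50.0)]

global_mean = np.mean([r.mean() for r in regions])
tau2, sigma2 = 40.0, 25.0     # assumed between-/within-region variances

for i, r in enumerate(regions):
    n = len(r)
    shrink = tau2 / (tau2 + sigma2 / n)          # pooling weight
    posterior_mean = shrink * r.mean() + (1 - shrink) * global_mean
    print(f"region {i}: sample mean {r.mean():5.1f} "
          f"-> shrunk estimate {posterior_mean:5.1f}")
```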


Sensors
2019
Vol 19 (22)
pp. 4944
Author(s):  
Mamta Agiwal ◽  
Mukesh Kumar Maheshwari ◽  
Hu Jin

Sensor-enabled Internet of Things (IoT) has become an integral part of the modern, digital, connected ecosystem. Narrowband IoT (NB-IoT) technology is one of its economical versions, preferable for applications built on low-power, resource-limited sensors. One of the major characteristics of NB-IoT technology is its reliable coverage enhancement (CE), which is achieved by repeating signal transmissions. This repeated transmission of the same signal challenges power saving in low-complexity NB-IoT devices. Additionally, NB-IoT devices are expected to suffer from congestion due to simultaneous random access procedures (RAPs) from an enormous number of devices. Multiple RAP reattempts would further reduce the power saving in NB-IoT devices. We propose a novel power-efficient RAP (PE-RAP) for reducing the power consumption of NB-IoT devices in a highly congested environment. Existing RAPs do not differentiate failures due to poor channel conditions from failures due to collision. After an RAP failure, whether due to collision or a poor channel, devices can apply power ramping or transit to a higher CE level with a higher repetition configuration. In the proposed PE-RAP, NB-IoT devices re-ascertain the channel conditions after an RAP attempt failure, so that impediments due to a poor channel are reduced. Power increments and repetition enhancements are applied only when necessary. We probabilistically obtain the chances of RAP reattempts. Subsequently, we evaluate the average power consumption of devices in different CE levels for different repetition configurations. We validate our analysis with simulation studies.
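
The retry rule can be sketched as follows: after a failed attempt, re-check the channel and only then choose between power ramping (collision suspected) and a higher CE level (poor channel). The thresholds, step sizes, and repetition counts below are assumptions, not 3GPP-specified values.

```python
# Sketch of the PE-RAP retry rule. All constants are assumptions.
CE_REPETITIONS = {0: 1, 1: 8, 2: 32}     # repetitions per CE level

def retry_policy(channel_quality, ce_level, tx_power_dbm):
    """channel_quality in [0, 1]; returns updated (ce_level, power)."""
    if channel_quality < 0.3:            # poor channel: repeat more
        ce_level = min(ce_level + 1, 2)
    else:                                # likely a collision: ramp power
        tx_power_dbm = min(tx_power_dbm + 2, 23)
    return ce_level, tx_power_dbm

ce, p = 0, 10
for q in (0.8, 0.2, 0.9):                # three failed attempts
    ce, p = retry_policy(q, ce, p)
    print(f"CE level {ce} ({CE_REPETITIONS[ce]} reps), power {p} dBm")
```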


Author(s):  
Jaber Almutairi ◽  
Mohammad Aldossary

Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to resource-rich nodes, such as edge computing and cloud computing, to relieve heavy computation and storage. Although edge computing is a promising enabler for latency-sensitive applications, its deployment produces new challenges. Besides, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system that minimizes the overall service time for latency-sensitive applications. This approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand, and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with related approaches; it was found to improve the overall service time for latency-sensitive applications and to utilize the edge-cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to varying service times due to the computational resources and communication types involved.
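
A small scoring sketch shows how the listed fuzzy inputs (CPU demand, network demand, delay sensitivity, utilization) might be combined. The triangular membership functions and weights are invented for illustration, not the paper's rule base.

```python
# Fuzzy-style scoring sketch; memberships and weights are assumptions.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def edge_suitability(cpu_demand, net_demand, delay_sens, edge_util):
    """Higher score -> offload to edge; lower -> send to cloud."""
    light_cpu = tri(cpu_demand, -0.5, 0.0, 0.7)
    urgent = tri(delay_sens, 0.3, 1.0, 1.5)
    chatty = tri(net_demand, 0.3, 1.0, 1.5)   # heavy network use favors edge
    free_edge = tri(edge_util, -0.5, 0.0, 0.9)
    return 0.25 * light_cpu + 0.35 * urgent + 0.2 * chatty + 0.2 * free_edge

score = edge_suitability(cpu_demand=0.4, net_demand=0.8,
                         delay_sens=0.9, edge_util=0.3)
print("edge" if score > 0.5 else "cloud", f"(score={score:.2f})")
```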


2020
Vol 11 (1)
pp. 129
Author(s):  
Po-Yu Kuo ◽  
Ming-Hwa Sheu ◽  
Chang-Ming Tsai ◽  
Ming-Yan Tsai ◽  
Jin-Fa Lin

A conventional shift register consists of master and slave (MS) latches, with each latch receiving data from the previous stage. Therefore, the same data are stored in two latches separately, which consumes more power and occupies more layout area, neither of which satisfies circuit designers. To solve this issue, a novel cross-latch shift register (CLSR) scheme is proposed. It reduces the number of transistors needed for a 256-bit shift register by 48.33% compared with the conventional MS latch design. To further verify its function, the CLSR was implemented using a standard TSMC 40 nm CMOS process. The simulation results reveal that, at a supply voltage of 0.9 V and an operating frequency of 250 MHz, the proposed CLSR reduces average power consumption by 36%, cuts leakage power by 60.53%, and shrinks layout area by 34.76% compared with the MS latch design.
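
A behavioral model plus back-of-the-envelope arithmetic illustrates why duplicating each bit across master and slave latches costs area. The per-latch transistor count below is an assumption chosen only to mirror the ~48% reduction reported above, not the paper's circuit.

```python
# Behavioral shift-register model and a toy transistor tally.
# The 12-transistor-per-latch figure is an assumption for illustration.
def shift(register, bit_in):
    """One clock tick of an n-bit shift register (behavioral model)."""
    return [bit_in] + register[:-1]

reg = [0, 0, 0, 0]
for b in (1, 0, 1, 1):
    reg = shift(reg, b)
print(reg)                       # -> [1, 1, 0, 1]

BITS = 256
ms_transistors = BITS * 2 * 12   # two latches per bit in the MS design
clsr_transistors = ms_transistors * (1 - 0.4833)   # reduction from paper
print(f"MS: {ms_transistors}, CLSR: ~{clsr_transistors:.0f}")
```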


Sensors
2021
Vol 21 (4)
pp. 1021
Author(s):  
Zhanserik Nurlan ◽  
Tamara Zhukabayeva ◽  
Mohamed Othman

Wireless sensor networks (WSNs) are networks of thousands of nodes installed in a defined physical environment to sense and monitor its state. The viability of such a network is directly limited by the battery power supplying its nodes, which is a key disadvantage. To improve and extend the life of WSNs, researchers regularly develop routing protocols that minimize and optimize the energy consumption of sensor network nodes. This article introduces a new heterogeneity-aware routing protocol, the Extended Z-SEP Routing Protocol with Hierarchical Clustering Approach for Wireless Heterogeneous Sensor Networks (EZ-SEP), in which nodes connect to the base station (BS) via a hybrid method: a certain number of nodes communicate with the BS directly, while the remaining ones form clusters to transfer data. Field parameters are unknown, and the field is partitioned into zones depending on node energy. We reviewed the Z-SEP protocol with respect to the election of the cluster head (CH) and its communication with the BS, and we present a novel extended mechanism that selects the CH based on remaining residual energy. In addition, EZ-SEP is evaluated under various schemes, such as repositioning the base station, altering the field density, and varying node energy, for comparison with its parent algorithm. EZ-SEP was executed and compared with the Z-SEP, SEP, and LEACH routing protocols. The proposed algorithm was implemented in the MATLAB R2016b simulator. Simulation results show that our extended version outperforms Z-SEP in the stability period, increasing the number of active nodes by 48%, improving network efficiency via a 16% higher packet delivery ratio, and reducing average power consumption by 34%.
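
The hybrid rule and residual-energy CH election can be sketched in a few lines. The direct-transmission radius and energy values are assumptions; the paper's zone partitioning is not modeled here.

```python
# Sketch of EZ-SEP's hybrid routing and residual-energy CH election.
# The direct radius and energies are illustrative assumptions.
def pick_cluster_head(nodes):
    """nodes: {name: residual_energy}; CH = node with most energy left."""
    return max(nodes, key=nodes.get)

def route(dist_to_bs, direct_radius=30.0):
    """Hybrid rule: nodes near the BS transmit directly, others via CH."""
    return "direct" if dist_to_bs <= direct_radius else "via CH"

cluster = {"n1": 1.8, "n2": 2.4, "n3": 0.9}
print("CH:", pick_cluster_head(cluster))     # -> n2
print("n1:", route(12.0))                    # -> direct
print("n3:", route(75.0))                    # -> via CH
```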


Sensors
2021
Vol 21 (14)
pp. 4798
Author(s):  
Fangni Chen ◽  
Anding Wang ◽  
Yu Zhang ◽  
Zhengwei Ni ◽  
Jingyu Hua

With the increasing deployment of IoT devices and applications, a large number of devices that can sense and monitor the environment are needed in IoT networks. This trend also brings great challenges, such as data explosion and energy insufficiency. This paper proposes a system that integrates mobile edge computing (MEC) technology and simultaneous wireless information and power transfer (SWIPT) technology to improve the service supply capability of WSN-assisted IoT applications. A novel optimization problem is formulated to minimize total system energy consumption under data transmission rate and transmit power constraints by jointly considering power allocation, CPU frequency, the offloading weight factor, and the energy harvesting weight factor. Since the problem is non-convex, we propose a novel alternate group iteration optimization (AGIO) algorithm, which decomposes the original problem into three subproblems and alternately optimizes each subproblem using a group interior-point iterative algorithm. Numerical simulations validate that the energy consumption of our proposed design is much lower than that of two benchmark algorithms. The relationship between system variables and the energy consumption of the system is also discussed.
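
The alternate-group structure can be shown as a skeleton: fix two variable groups, optimize the third, and cycle until the objective stops improving. The quadratic objective and grid search below are stand-ins for the paper's energy model and interior-point solver.

```python
# Skeleton of alternate-group iteration. The objective is a placeholder.
def energy(p, f, w):                 # stand-in convex objective
    return (p - 1.0) ** 2 + (f - 2.0) ** 2 + (w - 0.5) ** 2 + p * w * 0.1

def minimize_1d(obj, lo=0.0, hi=3.0, steps=300):
    """Crude grid search standing in for the interior-point subsolver."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=obj)

p, f, w = 3.0, 3.0, 1.0              # power, CPU frequency, weight factor
for it in range(10):
    p = minimize_1d(lambda x: energy(x, f, w))          # subproblem 1
    f = minimize_1d(lambda x: energy(p, x, w))          # subproblem 2
    w = minimize_1d(lambda x: energy(p, f, x), hi=1.0)  # subproblem 3
print(f"p={p:.2f} f={f:.2f} w={w:.2f} E={energy(p, f, w):.4f}")
```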


2020
Vol 2 (1)
pp. 92
Author(s):  
Rahim Rahmani ◽  
Ramin Firouzi ◽  
Sachiko Lim ◽  
Mahbub Alam

The major challenges in operating data-intensive Distributed Ledger Technology (DLT) are (1) reaching consensus on the main chain as a set of validators cast public votes to decide which blocks to finalize and (2) scalability, i.e., how to increase the number of chains running in parallel. In this paper, we introduce a new proximal algorithm that scales DLT to a large-scale network of Internet of Things (IoT) devices. We discuss how the algorithm benefits the integration of DLT into IoT by using edge computing technology, taking the scalability and heterogeneous capabilities of IoT devices into consideration. IoT devices are clustered dynamically into groups based on proximity context information. A cluster head is used to bridge the IoT devices with the DLT network, where a smart contract is deployed. In this way, the security of the IoT is improved, and scalability and latency issues are addressed. We elaborate on our mechanism, discuss issues that should be considered when implementing the proposed algorithm, and show how it behaves under varying parameters such as latency or clustering.
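
The clustering step can be sketched as grouping devices by location and letting each group's head bridge to the DLT network. The grid-cell proximity rule and head election below are illustrative assumptions, not the paper's proximal algorithm.

```python
# Sketch of proximity-based clustering with a bridging cluster head.
# The grid-cell rule and head election are assumptions for illustration.
from collections import defaultdict

def cluster_by_proximity(devices, cell=10.0):
    """devices: {name: (x, y)}; groups devices into square grid cells."""
    clusters = defaultdict(list)
    for name, (x, y) in devices.items():
        clusters[(int(x // cell), int(y // cell))].append(name)
    return clusters

devices = {"d1": (1, 2), "d2": (3, 4), "d3": (25, 26), "d4": (27, 24)}
for cell_id, members in cluster_by_proximity(devices).items():
    head = members[0]        # e.g., first member bridges to the DLT network
    print(f"cell {cell_id}: head={head}, members={members}")
```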

