Flexible computation offloading in a fuzzy-based mobile edge orchestrator for IoT applications

Author(s):  
VanDung Nguyen ◽  
Tran Trong Khanh ◽  
Tri D. T. Nguyen ◽  
Choong Seon Hong ◽  
Eui-Nam Huh

In the Internet of Things (IoT) era, the capacity-limited Internet and uncontrollable service delays of new applications, such as video streaming analysis and augmented reality, pose major challenges. Cloud computing systems, which offload the energy-consuming computation of IoT applications to a cloud server, cannot meet delay-sensitive and context-aware service requirements. To address this issue, an edge computing system provides timely and context-aware services by bringing computation and storage closer to the user. Efficiently processing a dynamic flow of requests is a significant challenge for edge and cloud computing systems. To improve the performance of IoT systems, the mobile edge orchestrator (MEO), an application placement controller, was designed by integrating mobile end devices with edge and cloud computing systems. In this paper, we propose a flexible computation offloading method in a fuzzy-based MEO for IoT applications to improve the efficiency of computational resource management. Considering the network, computation resources, and task requirements, the fuzzy-based MEO performs edge workload orchestration, deciding whether to offload a mobile user's task to the local edge, a neighboring edge, or the cloud server. Additionally, increasing packet sizes raise the failed-task ratio as the number of mobile devices grows. To reduce tasks that fail because of transmission collisions and to improve service times for time-critical tasks, we define a new crisp input value and a new output decision for the fuzzy-based MEO. Using the EdgeCloudSim simulator, we evaluate our proposal against four benchmark algorithms on augmented reality, healthcare, compute-intensive, and infotainment applications. Simulation results show that our proposal provides better results in terms of WLAN delay, service times, the number of failed tasks, and VM utilization.
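As a rough illustration of how such a fuzzy offloading decision could work, the sketch below fuzzifies three assumed crisp inputs (WAN bandwidth, local edge VM utilization, task length) with triangular membership functions and applies a few Mamdani-style rules to pick an offload target. The input ranges, rules, and thresholds are invented for this sketch and are not the paper's exact design.

```python
# Hypothetical sketch of a fuzzy offloading decision in the spirit of a
# fuzzy-based MEO. Membership functions, rule set, and ranges are
# illustrative assumptions, not the authors' exact design.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_target(wan_bw_mbps, edge_util, task_len_gi):
    # Fuzzify crisp inputs (ranges are assumed for illustration).
    wan_good = tri(wan_bw_mbps, 2, 10, 20)
    edge_busy = tri(edge_util, 50, 80, 100)
    task_heavy = tri(task_len_gi, 5, 20, 40)

    # Rule strengths (min for AND, as in Mamdani-style inference).
    to_cloud = min(task_heavy, wan_good)            # heavy task, good WAN -> cloud
    to_neighbor = min(edge_busy, 1.0 - task_heavy)  # local edge busy, light task -> neighbor edge
    to_local = 1.0 - max(to_cloud, to_neighbor)     # otherwise keep on local edge

    scores = {"local_edge": to_local, "neighbor_edge": to_neighbor, "cloud": to_cloud}
    return max(scores, key=scores.get)

print(offload_target(wan_bw_mbps=10, edge_util=60, task_len_gi=20))  # -> "cloud"
```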

Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 686
Author(s):  
JongBeom Lim ◽  
DaeWon Lee

As data centers and servers grow in size by orders of magnitude, load balancing is a major concern in scalable computing systems, including mobile edge cloud computing environments. In mobile edge cloud computing systems, a mobile user can offload its tasks to nearby edge servers to support real-time applications. However, when users are concentrated in a hot spot, several edge servers can become overloaded by suddenly offloaded tasks from mobile users. In this paper, we present a load balancing algorithm for mobile devices in edge cloud computing environments. The proposed load balancing technique achieves efficient complexity through a graph-coloring implementation based on a genetic algorithm. The aim of the proposed load balancing algorithm is to distribute offloaded tasks to nearby edge servers in an efficient way. Performance results show that the proposed load balancing algorithm outperforms previous techniques and increases the average CPU usage of virtual machines, which indicates high utilization of edge servers.
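As a rough sketch of the idea, the snippet below treats offloaded tasks as vertices of an assumed conflict graph and edge servers as colors, and runs a small genetic algorithm to find an assignment that avoids conflicts and balances load. The graph, fitness weights, and GA parameters are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of a graph-coloring-style assignment of offloaded tasks to nearby
# edge servers using a genetic algorithm (all parameters assumed).
import random

TASKS = 12            # offloaded tasks (graph vertices)
SERVERS = 3           # nearby edge servers ("colors")
EDGES = [(i, (i + 1) % TASKS) for i in range(TASKS)]  # assumed conflict graph

def fitness(assign):
    # Penalize conflicting tasks on the same server and load imbalance.
    conflicts = sum(assign[u] == assign[v] for u, v in EDGES)
    loads = [assign.count(s) for s in range(SERVERS)]
    return -(10 * conflicts + (max(loads) - min(loads)))

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(SERVERS) for _ in range(TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, TASKS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                child[random.randrange(TASKS)] = random.randrange(SERVERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())  # server index chosen for each task
```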


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 229
Author(s):  
Xianzhong Tian ◽  
Juan Zhu ◽  
Ting Xu ◽  
Yanjun Li

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and huge energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires transferring significant amounts of data over the wireless network and also results in long latency. To solve this problem, offloading partial DNN computation to edge clouds has been proposed, realizing collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) to adapt to the user's mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adapts well to different network conditions.
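To illustrate the kind of decision a partition-offloading scheme makes for a chain-topology DNN, the sketch below enumerates cut points and picks the one minimizing local compute time plus intermediate-data upload time plus edge compute time; as the uplink degrades with mobility, the optimal cut shifts toward the device. All layer timings, data sizes, and bandwidths are made-up numbers, not MDPO itself.

```python
# Illustrative partition-point search for a chain-topology DNN: layers
# [0, cut) run on the mobile device, the rest run on the edge server.
# Per-layer timings and output sizes below are invented for this sketch.

local_ms = [40, 90, 120, 25, 15]        # per-layer latency on the mobile device (assumed)
edge_ms = [4, 9, 12, 3, 2]              # per-layer latency on the edge server (assumed)
out_mb = [1.5, 0.8, 0.1, 0.02, 0.01]    # output size of each layer in MB (assumed)
INPUT_MB = 3.0                          # raw input size if everything is offloaded (assumed)

def best_partition(uplink_mbps):
    """Return (cut_index, total_latency_ms) for the best cut at this uplink rate."""
    best = None
    for cut in range(len(local_ms) + 1):
        if cut == 0:
            tx_mb = INPUT_MB                 # upload raw input, run all layers on edge
        elif cut == len(local_ms):
            tx_mb = 0.0                      # run everything locally, upload nothing
        else:
            tx_mb = out_mb[cut - 1]          # upload the intermediate feature map
        tx_ms = tx_mb * 8 / uplink_mbps * 1000
        total = sum(local_ms[:cut]) + tx_ms + sum(edge_ms[cut:])
        if best is None or total < best[1]:
            best = (cut, round(total, 1))
    return best

# As the uplink degrades with mobility, more layers stay on the device.
for bw in (100, 20, 5):
    print(bw, "Mbps ->", best_partition(bw))
```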


Life ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 310
Author(s):  
Shih-Chia Chang ◽  
Ming-Tsang Lu ◽  
Tzu-Hui Pan ◽  
Chiao-Shan Chen

Although the electronic health (e-health) cloud computing system is a promising innovation, its adoption in the healthcare industry has been slow. This study investigated the adoption of e-health cloud computing systems in the healthcare industry and considered security functions, management, cloud service delivery, and cloud software for e-health cloud computing systems. Although numerous studies have determined factors affecting e-health cloud computing systems, few comprehensive reviews of factors and their relations have been conducted. Therefore, this study investigated the relations between the factors affecting e-health cloud computing systems by using a multiple criteria decision-making technique, in which decision-making trial and evaluation laboratory (DEMATEL), DANP (DEMATEL-based Analytic Network Process), and modified VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) approaches were combined. The intended level of adoption of an e-health cloud computing system could be determined by using the proposed approach. The results of a case study performed on the Taiwanese healthcare industry indicated that the cloud management function must be primarily enhanced and that cost effectiveness is the most significant factor in the adoption of e-health cloud computing. This result is valuable for allocating resources to decrease performance gaps in the Taiwanese healthcare industry.
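The DEMATEL step of such an MCDM pipeline can be summarized numerically: normalize the expert direct-relation matrix and compute the total-relation matrix T = D(I - D)^(-1), from which each factor's prominence (r + c) and relation (r - c) are read. The 3x3 score matrix and factor names below are invented for illustration and do not reproduce the study's data.

```python
# Sketch of the DEMATEL computation used in DEMATEL/DANP-based studies.
# The expert influence scores below are illustrative assumptions.
import numpy as np

A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)   # assumed pairwise influence scores (0-4 scale)

D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(3) - D)                     # total-relation matrix T = D(I - D)^-1

r = T.sum(axis=1)   # influence given by each factor
c = T.sum(axis=0)   # influence received by each factor
for i, name in enumerate(["security", "management", "cost"]):
    print(f"{name}: prominence={r[i] + c[i]:.2f}, relation={r[i] - c[i]:.2f}")
```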


Cloud computing is heavily used to implement many kinds of applications, and many client applications are being migrated to the cloud for reasons of cost and elasticity. Cloud computing is generally built on distributed computing, in which physical servers are widely distributed in terms of both hardware and software and connected through the Internet. Such cloud computing systems comprise many physical servers, each holding many resources. These resources can be shared among many users, the tenants of the cloud computing system, and can be virtualized to provide shared resources to clients. Scheduling is one of the most important tasks of a cloud computing system; it is concerned with task scheduling, resource scheduling, and virtual machine migration scheduling. It is important to understand scheduling within a cloud computing system in more depth so that improvements to scheduling can be investigated and implemented. Carrying out such in-depth research requires an open-source cloud computing system. OpenStack is one such open-source cloud computing system that can be used to experiment with research findings related to cloud computing. This paper gives an overview of the way scheduling per se has been implemented within the OpenStack cloud computing system.
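As a hedged illustration of the pattern the OpenStack Nova scheduler follows (filter candidate hosts on hard constraints, then weigh the survivors), the toy sketch below reimplements the two phases in plain Python. Host data, filter checks, and weight multipliers are invented and should not be read as OpenStack's actual code or configuration.

```python
# Toy filter-then-weigh scheduling sketch, loosely modeled on the two-phase
# approach of the Nova filter scheduler; all values here are assumptions.

hosts = [
    {"name": "node1", "free_ram_mb": 8192,  "free_vcpus": 4,  "enabled": True},
    {"name": "node2", "free_ram_mb": 2048,  "free_vcpus": 16, "enabled": True},
    {"name": "node3", "free_ram_mb": 16384, "free_vcpus": 8,  "enabled": False},
]
request = {"ram_mb": 4096, "vcpus": 2}   # resources requested for the new VM

def passes_filters(host, req):
    # Hard constraints: host must be enabled and have enough RAM and vCPUs.
    return host["enabled"] and host["free_ram_mb"] >= req["ram_mb"] \
        and host["free_vcpus"] >= req["vcpus"]

def weigh(host):
    # Spread load by preferring hosts with more free RAM (weight 1.0) and
    # free vCPUs (weight 0.5); real deployments tune such multipliers.
    return 1.0 * host["free_ram_mb"] + 0.5 * host["free_vcpus"] * 1024

candidates = [h for h in hosts if passes_filters(h, request)]
chosen = max(candidates, key=weigh)
print("schedule VM on:", chosen["name"])  # node1 here (node2 lacks RAM, node3 is disabled)
```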


Author(s):  
Yong Xiao ◽  
Ling Wei ◽  
Junhao Feng ◽  
Wang En

Edge computing has emerged to meet the ever-increasing computation demands of delay-sensitive Internet of Things (IoT) applications. However, the computing capability of an edge device, including a computing-enabled end user and an edge server, is insufficient to support the massive number of tasks generated by IoT applications. In this paper, we propose a two-tier end-edge collaborative computation offloading policy that supports as many computation-intensive tasks as possible while keeping the edge computing system strongly stable. We formulate the two-tier end-edge collaborative offloading problem with the objective of minimizing the task processing and offloading cost, subject to the stability of the queue lengths of end users and edge servers. We analyze the Lyapunov drift-plus-penalty properties of the problem. Then, a cost-aware computation offloading (CACO) algorithm is proposed to find optimal two-tier offloading decisions that minimize the cost while keeping the edge computing system stable. Our simulation results show that the proposed CACO outperforms the benchmark algorithms, especially under varying numbers of end users and edge servers.
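A minimal sketch of a drift-plus-penalty style decision of the kind described: in each slot, every arriving task is assigned the action minimizing V*cost plus the backlog-weighted load it adds to the end-user and edge queues, after which the queues are served. Costs, service rates, and the trade-off parameter V are assumptions of this sketch, not the paper's CACO algorithm.

```python
# Illustrative drift-plus-penalty offloading loop (all parameters assumed).
import random

V = 10.0                      # cost/stability trade-off parameter
q_user, q_edge = 0.0, 0.0     # queue backlogs at the end user and the edge server
MU_USER, MU_EDGE = 2.0, 6.0   # per-slot service rates (assumed)

ACTIONS = {  # action: (cost, load added to user queue, load added to edge queue)
    "local": (1.0, 1.0, 0.0),
    "edge":  (3.0, 0.0, 1.0),
    "cloud": (6.0, 0.0, 0.0),
}

for slot in range(50):
    for _ in range(random.randint(0, 5)):          # tasks generated this slot
        # Pick the action minimizing V*cost + Q_user*du + Q_edge*de.
        action = min(ACTIONS, key=lambda a: V * ACTIONS[a][0]
                     + q_user * ACTIONS[a][1] + q_edge * ACTIONS[a][2])
        _, du, de = ACTIONS[action]
        q_user += du
        q_edge += de
    # Serve queued work; bounded backlogs indicate queue stability.
    q_user = max(q_user - MU_USER, 0.0)
    q_edge = max(q_edge - MU_EDGE, 0.0)

print(f"final backlogs: user={q_user:.1f}, edge={q_edge:.1f}")
```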


Author(s):  
Atta ur Rehman Khan ◽  
Abdul Nasir Khan

Mobile devices are gaining popularity due to their support for a wide range of applications. However, mobile devices are resource constrained, and many applications require substantial resources. To address this issue, researchers envision the use of mobile cloud computing technology, which offers high-performance computing, execution of resource-intensive applications, and energy efficiency. This chapter highlights the importance of mobile devices, high-performance applications, and the computing challenges of mobile devices. It also provides a brief introduction to mobile cloud computing technology, its architecture, types of mobile applications, the computation offloading process, effective offloading challenges, and high-performance computing applications on mobile devices enabled by mobile cloud computing technology.


Author(s):  
Pierre Kirisci ◽  
Ernesto Morales Kluge ◽  
Emanuel Angelescu ◽  
Klaus-Dieter Thoben

Over the last two decades, a great deal of methodological research has been conducted on the design of software user interfaces (Kirisci, Thoben 2009). Despite the numerous contributions in this area, comparatively few efforts have been dedicated to advancing methods for the design of context-aware mobile platforms, such as wearable computing systems. This chapter investigates the role of context, particularly in future industrial environments, and elaborates how context can be incorporated in a design method in order to support the design process of wearable computing systems. The chapter begins with an overview of basic research in the area of context-aware mobile computing. The aim is to identify the main context elements that have an impact on the technical properties of a wearable computing system. We then describe a systematic and quantitative study of the advantages of context recognition, specifically task tracking, for a wearable maintenance assistance system. Based on the experiences from this study, a context reference model is proposed that supports the design of wearable computing systems in industrial settings and thus goes beyond existing context models, e.g., those for context-aware mobile computing. The final part of this chapter discusses the benefits of applying model-based approaches during the early design stages of wearable computing systems. Existing design methods in the area of wearable computing are critically examined and their shortcomings highlighted. Based on the context reference model, a design approach is proposed through the realization of a model-driven software tool that supports the design process of a wearable computing system while taking advantage of concise experience captured in a well-defined context model.


Author(s):  
Mamata Rath ◽  
Bibudhendu Pati

Adoption of the Internet of Things (IoT) and the Cloud of Things (CoT) is expected to become increasingly pervasive, making them important mechanisms of future Internet-based communication systems. The Cloud of Things and the Internet of Things are two emerging and diverse advanced domains in the current technological landscape. The paradigm in which Cloud and IoT are merged is foreseen as disruptive and as an enabler of a large number of application scenarios. Owing to the adoption of the Cloud-IoT paradigm, a number of applications are gaining significant technical attention. In the future, handling security will become more complicated: information will change drastically, it will be difficult to keep up with evolving technology, and organisations will have to repeatedly switch to new skill-based technologies at higher expense. Up-to-date tools, methods, and sufficient expertise are essential to control threats and vulnerabilities in computing systems. Considering the integration of cloud computing and IoT in the new domain of the Cloud of Things, this article provides an up-to-date overview of Cloud-based IoT applications and the Cloud of Things, with a focus on their security and application-oriented challenges. These challenges are then synthesized in detail to present a technical survey of issues related to IoT security, concerns, adopted mechanisms, and positive security assurance using the Cloud of Things.


Author(s):  
Javier Gonzalez-Sanchez ◽  
Quincy Conley ◽  
Maria-Elena Chavez-Echeagaray ◽  
Robert K. Atkinson

The assembly process is often complex and involved, requiring a significant number of parts to be collected and managed in an intricate manner. Because the quality of a product is in large part determined by the assembly process, intuitive and carefully scaffolded guidelines can make a difference in how quickly and accurately an assembler can complete the process. To this end, the authors propose an innovative system that leverages three current and emerging technologies: augmented reality (AR), cloud computing, and mobile devices, to create an Augmented Reality Product Assembly (ARPA) system. This paper describes the overall framework for creating the ARPA system. The authors also discuss how the system leverages augmented reality visualizations to repurpose user-generated assembly guidelines by incorporating cloud-based computing. Although the authors situate ARPA's use in an industrial setting, it is domain-independent and able to support a wide range of practical and educational applications.

