Optimized Task Group Aggregation-Based Overflow Handling on Fog Computing Environment Using Neural Computing

Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2522
Author(s):  
Harwant Singh Arri ◽  
Ramandeep Singh Khosa ◽  
Sudan Jha ◽  
Deepak Prashar ◽  
Gyanendra Prasad Joshi ◽  
...  

Scheduling resources and jobs on a fog computing network is a non-deterministic problem: the scheduler must increase device efficacy and throughput, reduce response time, and keep the system balanced. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA) overflow handling system for fog computing environments. By using TGA in conjunction with an Artificial Neural Network (ANN), we assess the model's QoS characteristics to detect an overloaded server and then migrate its data to virtual machines (VMs). Overloaded and underloaded virtual machines are balanced according to parameters such as CPU, memory, and bandwidth to control fog computing overflow with the help of the ANN and the machine learning concept. Additionally, the Artificial Bee Colony (ABC) algorithm, a neural computing technique, is employed as an optimization method to separate services and users according to their individual qualities. The response time and success rate were both improved by the newly proposed optimized ANN-based TGA algorithm. Compared to the minimal reaction time of the present work, the total improvement in average success rate is about 3.6189 percent, and resource scheduling efficiency has improved by 3.9832 percent. In terms of virtual machine efficiency for resource scheduling, the average success rate, average task completion success rate, and virtual machine response time are all improved. The proposed TGA-based overflow handling in a fog computing domain improves response time compared to current approaches and illustrates how artificial intelligence can make fog computing systems more efficient.
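The abstract gives no implementation details; purely as an illustration, the sketch below shows one way overload detection on CPU, memory and bandwidth could be combined with a naive task migration between VMs. All names (VMStats, is_overloaded, rebalance), the 0.8 threshold and the load-transfer model are assumptions, not the authors' TGA/ANN method.

```python
# Hypothetical sketch: threshold-based overload detection on CPU, memory and
# bandwidth, followed by a naive migration of tasks from overloaded to
# underloaded VMs. Not the authors' TGA/ANN implementation.
from dataclasses import dataclass, field

@dataclass
class VMStats:
    name: str
    cpu: float        # utilisation in [0, 1]
    memory: float     # utilisation in [0, 1]
    bandwidth: float  # utilisation in [0, 1]
    tasks: list = field(default_factory=list)

def is_overloaded(vm, threshold=0.8):
    """A VM counts as overloaded if any tracked resource exceeds the threshold."""
    return max(vm.cpu, vm.memory, vm.bandwidth) > threshold

def rebalance(vms, threshold=0.8):
    """Move one task at a time from overloaded VMs to the least-loaded VM."""
    for vm in vms:
        while is_overloaded(vm, threshold) and vm.tasks:
            target = min(vms, key=lambda v: v.cpu + v.memory + v.bandwidth)
            if target is vm:
                break
            target.tasks.append(vm.tasks.pop())
            # Crude load-transfer model: each task carries an equal share of CPU.
            shift = vm.cpu / max(len(vm.tasks) + 1, 1)
            vm.cpu -= shift
            target.cpu += shift

vms = [VMStats("vm1", 0.95, 0.70, 0.60, tasks=["t1", "t2", "t3"]),
       VMStats("vm2", 0.30, 0.40, 0.20, tasks=["t4"])]
rebalance(vms)
print([(v.name, round(v.cpu, 2), v.tasks) for v in vms])
```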

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahfooz Alam ◽  
Mahak ◽  
Raza Abbas Haidri ◽  
Dileep Kumar Yadav

Purpose: Cloud users can access services at any time from anywhere in the world. On average, Google now processes more than 40,000 searches every second, which is approximately 3.5 billion searches per day. Diverse and vast amounts of data are generated by next-generation information technologies such as cryptocurrency, the internet of things and big data. Executing such applications requires an efficient scheduling algorithm that considers quality of service parameters such as utilization, makespan and response time. Therefore, this paper aims to propose a novel Efficient Static Task Allocation (ESTA) algorithm, which optimizes average utilization. Design/methodology/approach: Cloud computing provides resources such as virtual machines, network and storage over the internet and follows a pay-per-use billing model. To achieve efficient task allocation, the scheduling problem is tackled through efficient distribution of tasks on the resources. The ESTA algorithm is based on the minimum completion time approach: it intelligently maps a batch of independent tasks (cloudlets) onto heterogeneous virtual machines and optimizes their utilization in infrastructure-as-a-service cloud computing. Findings: To evaluate the performance of ESTA, a simulation study compares it with Min-Min, load balancing strategy with migration cost, longest job in the fastest resource-shortest job in the fastest resource, sufferage, minimum completion time (MCT), minimum execution time and opportunistic load balancing in terms of makespan, utilization and response time. Originality/value: The simulation results reveal that the ESTA algorithm consistently outperforms the others under varying batches of independent cloudlets and numbers of virtual machines.
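As a rough illustration of the minimum completion time approach that ESTA builds on, the sketch below assigns each cloudlet to the VM on which it would finish earliest. The task lengths, VM speeds and function names are invented; this is not the ESTA algorithm itself.

```python
# Hypothetical sketch of a minimum-completion-time style mapping: each cloudlet
# is assigned to the VM on which it would finish earliest, given the work
# already queued there. Illustrative only, not the authors' ESTA algorithm.

def mct_schedule(cloudlets, vm_speeds):
    """cloudlets: list of task lengths (instructions); vm_speeds: MIPS per VM.
    Returns (assignment, ready_times) where assignment[i] is the VM index."""
    ready = [0.0] * len(vm_speeds)       # time at which each VM becomes free
    assignment = []
    for length in cloudlets:
        # completion time on each VM = its current ready time + execution time
        completions = [ready[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=lambda v: completions[v])
        ready[best] = completions[best]
        assignment.append(best)
    return assignment, ready

cloudlets = [400, 1200, 800, 600, 1000]   # task lengths (made up)
vm_speeds = [100, 250]                    # heterogeneous VM speeds (made up)
mapping, ready = mct_schedule(cloudlets, vm_speeds)
print("assignment:", mapping)
print("makespan:", max(ready))
```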


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Jun Guo ◽  
Shu Liu ◽  
Bin Zhang ◽  
Yongming Yan

Cloud applications provide access to a large pool of virtual machines for building high-quality applications that satisfy customers' requirements. A difficult issue is how to predict virtual machine response time, because it determines when dynamically scalable virtual machines should be adjusted. To address this critical issue, this paper proposes a virtual machine response time prediction method based on a genetic algorithm-back propagation (GA-BP) neural network. First, we predict component response time from past virtual machine component usage data: the number of concurrent requests and the corresponding response time. From this, we then predict the virtual machine service response time. The results of large-scale experiments show the effectiveness and feasibility of our method.
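The GA-BP details are not in the abstract; the sketch below is only a stand-in that regresses response time on the number of concurrent requests with a plain back-propagation network from scikit-learn and synthetic data, to show the shape of such a predictor.

```python
# Illustrative sketch only: regress VM response time on the number of concurrent
# requests with a small feed-forward network. The paper trains a GA-optimised
# back-propagation (GA-BP) network; here plain back-propagation via scikit-learn
# stands in for it, and the training data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
concurrent_requests = rng.integers(1, 200, size=500).reshape(-1, 1)
# Synthetic ground truth: response time grows roughly linearly with load, plus noise.
response_time_ms = 20 + 0.8 * concurrent_requests.ravel() + rng.normal(0, 5, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(concurrent_requests, response_time_ms)

print("predicted response time at 150 concurrent requests:",
      float(model.predict([[150]])[0]), "ms")
```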


2019 ◽  
Vol 16 (4) ◽  
pp. 627-637
Author(s):  
Sanaz Hosseinzadeh Sabeti ◽  
Maryam Mollabgher

Goal: Load balancing policies map workloads onto virtual machines and seek to achieve their goals by creating an almost equal level of workload on every virtual machine. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time. Design / Methodology / Approach: The proposed algorithm performs load balancing using a table that holds the status indicators of the virtual machines and the list of tasks allocated to each virtual machine. Response time and processing time in the data center are evaluated for four algorithms: ESCE, Throttled, Round Robin and the proposed algorithm. Results: The overall response time and data processing time of the proposed algorithm's data center are shorter than those of the other algorithms. The overall response time of the proposed algorithm is 12.28% shorter than that of the Round Robin algorithm, 9.1% shorter than that of the Throttled algorithm, and 4.86% shorter than that of the ESCE algorithm. Limitations of the investigation: Due to time and technical limitations, load balancing was not pursued for further goals such as lowering costs and increasing productivity. Practical implications: Implementing a hybrid load balancing policy can improve response time and processing time. Load balancing distributes the traffic load properly between virtual machines and prevents bottlenecks, which is effective in increasing customer responsiveness. Finally, improving response time increases the satisfaction of cloud users and the productivity of computing resources. Originality/Value: This research can be effective in optimizing existing algorithms and takes a step towards further research in this area.
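A minimal, hypothetical version of table-driven balancing of this kind is sketched below: a status table records availability and the task list per VM, and each new task goes to the available VM with the fewest tasks. The class name, threshold and policy details are assumptions, not the paper's algorithm.

```python
# Minimal hypothetical sketch of table-driven load balancing: a status table
# records whether each VM is available and which tasks it holds, and new tasks
# go to the available VM with the fewest allocated tasks. Thresholds and names
# are invented; this is not the authors' algorithm.

class VMTable:
    def __init__(self, vm_ids, max_tasks_per_vm=5):
        self.max_tasks = max_tasks_per_vm
        # status table: vm id -> {"available": bool, "tasks": [...]}
        self.table = {vm: {"available": True, "tasks": []} for vm in vm_ids}

    def allocate(self, task):
        candidates = [vm for vm, row in self.table.items() if row["available"]]
        if not candidates:
            raise RuntimeError("no available VM for task %r" % task)
        vm = min(candidates, key=lambda v: len(self.table[v]["tasks"]))
        row = self.table[vm]
        row["tasks"].append(task)
        row["available"] = len(row["tasks"]) < self.max_tasks
        return vm

    def release(self, vm, task):
        row = self.table[vm]
        row["tasks"].remove(task)
        row["available"] = True

balancer = VMTable(["vm-a", "vm-b", "vm-c"], max_tasks_per_vm=2)
for t in ["t1", "t2", "t3", "t4"]:
    print(t, "->", balancer.allocate(t))
```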


Cloud computing is becoming one of the most advanced and promising technologies of the information technology era. It has also helped to reduce costs for small and medium enterprises through cloud provider services. Resource scheduling with load balancing is one of the primary and most important goals of the cloud computing scheduling process. Resource scheduling in the cloud is a non-deterministic problem and is responsible for assigning tasks to virtual machines (VMs) by servers or service providers in a way that increases resource utilization and performance, reduces response time, and keeps the whole system balanced. In this paper, we present a deep learning based resource scheduling and load balancing model for the cloud environment using the multidimensional queuing load optimization (MQLO) algorithm. A Multidimensional Resource Scheduling and Queuing Network (MRSQN) is used to detect overloaded servers and migrate their load to VMs. Here, an ANN serves as a deep learning classifier that identifies overloaded or underloaded servers or VMs and balances them based on parameters such as CPU, memory and bandwidth. In particular, the proposed ANN-based MQLO algorithm improves both response time and success rate. The simulation results show that the proposed ANN-based MQLO algorithm improves on the existing algorithms in terms of average success rate, resource scheduling efficiency, energy consumption and response time.
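As an illustration of the classifier role the ANN plays here, the sketch below trains a small scikit-learn network on synthetic CPU/memory/bandwidth samples labelled by a simple threshold rule and flags VMs as overloaded. It is not the authors' MQLO/MRSQN pipeline.

```python
# Illustrative sketch only: classify VMs as overloaded or not from CPU, memory
# and bandwidth utilisation with a small neural network. The training data is
# synthetic (labelled by a simple threshold rule), standing in for the kind of
# ANN classifier the abstract describes; it is not the authors' MQLO/MRSQN code.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(1000, 3))           # columns: cpu, memory, bandwidth
y = (X.max(axis=1) > 0.8).astype(int)           # synthetic label: any metric > 80%

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X, y)

sample_vms = np.array([[0.95, 0.40, 0.30],      # likely overloaded
                       [0.35, 0.50, 0.20]])     # likely fine
print("overloaded flags:", clf.predict(sample_vms))
```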


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4071-4075

Cloud computing is defined as resources that can be delivered to or accessed by a local host from a remote server via the internet. Cloud providers typically use a "pay-as-you-go" model. The evolution of cloud computing has shaped the modern environment, thanks to the abundance and advancement of computing and communication infrastructure. During user request handling and system response generation, load is assigned within the cloud, and servers may become over- or under-loaded. Heavy load creates power consumption and energy management problems and can cause system failures and data loss. Therefore, an efficient load balancing method is required to overcome these problems. The objective of this work is to develop a metaheuristic load balancing algorithm that migrates load across multiple servers, with machine learning techniques used to increase cloud resource utilization and minimize task makespan. Using an unsupervised machine learning technique, the response time and waiting time of the servers can be predicted from prior knowledge of the virtual machines and their clusters. This work also calculates the accuracy rate of the Round-Robin load balancing algorithm and compares it with the proposed algorithm. In this way, response time and waiting time are minimized, resource utilization is increased, and the makespan of the tasks is reduced.
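The abstract does not name the unsupervised technique; the sketch below assumes k-means clustering of VMs by load metrics and predicts a new VM's response time from its cluster's mean, with all data synthetic.

```python
# Hypothetical sketch: cluster VMs by load metrics with k-means and use each
# cluster's mean observed response time as the prediction for its members. The
# abstract only says an unsupervised technique is used; the choice of k-means,
# the feature set and the data below are all assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# columns: cpu utilisation, queue length (both made up)
vm_metrics = np.column_stack([rng.uniform(0, 1, 60), rng.integers(0, 20, 60)])
observed_response_ms = 30 + 120 * vm_metrics[:, 0] + 5 * vm_metrics[:, 1]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vm_metrics)
cluster_mean_response = {c: observed_response_ms[kmeans.labels_ == c].mean()
                         for c in range(3)}

new_vm = np.array([[0.9, 15]])               # a heavily loaded VM
cluster = int(kmeans.predict(new_vm)[0])
print("predicted response time:", round(cluster_mean_response[cluster], 1), "ms")
```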


2014 ◽  
Vol 5 (1) ◽  
pp. 24-43 ◽  
Author(s):  
T.R.V. Anandharajan ◽  
M.A. Bhagyaveni

Infrastructure as a Service (IaaS) is an important component of the cloud building block. The authors present a cloud simulation experience with the objective of handling the performance-energy tradeoff in an IaaS environment. They orchestrate a statistical, machine learning and energy model based Minimum Power Performance (MPP) algorithm and validate the simulation using real-world PlanetLab VM traces from real systems. The proposed algorithm consolidates virtual machines (VMs) onto Processing Elements (PEs, i.e., hosts or servers) and performs 39% better than the legacy algorithms.
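The MPP algorithm itself is not described in the abstract; the sketch below only illustrates the general idea of VM consolidation, moving VMs off lightly used hosts so those hosts can be powered down. The thresholds, names and utilisation model are assumptions.

```python
# Hypothetical sketch of the general idea of VM consolidation: migrate VMs off
# lightly-used hosts onto busier hosts that still have headroom, so idle hosts
# can be powered down. Thresholds, names and the utilisation model are invented;
# this is not the authors' MPP algorithm.

def consolidate(hosts, low=0.3, high=0.8):
    """hosts: dict host -> list of VM cpu demands (fractions of one host)."""
    for src in list(hosts):
        if 0 < sum(hosts[src]) < low:                  # under-utilised host
            for vm in list(hosts[src]):
                # find another host that can absorb this VM without exceeding `high`
                target = next((h for h in hosts
                               if h != src and sum(hosts[h]) + vm <= high), None)
                if target is not None:
                    hosts[src].remove(vm)
                    hosts[target].append(vm)
    powered_off = [h for h, vms in hosts.items() if not vms]
    return hosts, powered_off

hosts = {"host1": [0.1, 0.05], "host2": [0.5, 0.2], "host3": [0.6]}
placement, off = consolidate(hosts)
print(placement)
print("hosts that can be powered down:", off)
```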


2021 ◽  
Author(s):  
Praneeth Sakhamuri

Deploying and managing a high-availability tiered application in the cloud is challenging, as it requires provisioning the virtual machines necessary to keep the application available. An application is available if it works and responds in a timely manner under varying workloads. Application service providers need to allocate a specified number of working virtual machine copies for each server, each with at least a given minimum computing power, to meet the response time requirement; otherwise, response time failures may occur. This thesis formulates an optimization problem that determines the number and type of virtual machines needed for each server to minimize cost while guaranteeing the availability SLA (Service-Level Agreement) for different workloads. The results demonstrate that a diverse approach is more cost-effective than running on a single type of virtual machine, and that buying only the cheapest virtual machines for an application is not always economical.
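As a toy version of this kind of optimization, the sketch below brute-forces counts of a few hypothetical VM types to cover a workload at minimum cost while surviving the loss of any single VM, a crude stand-in for an availability SLA. The VM types, prices and redundancy criterion are invented and do not reflect the thesis model.

```python
# Hypothetical sketch of the flavour of such an optimisation: brute-force the
# counts of a few VM types so the workload is still covered even if the single
# largest VM fails (a crude stand-in for an availability SLA), at minimum cost.
# The VM types, prices and the redundancy criterion are invented assumptions.
from itertools import product

VM_TYPES = {                      # name: (hourly cost, compute capacity)
    "small":  (0.05, 1.0),
    "medium": (0.12, 2.2),
    "large":  (0.20, 5.0),
}

def cheapest_mix(demand, max_per_type=6):
    names = list(VM_TYPES)
    costs = [VM_TYPES[n][0] for n in names]
    caps = [VM_TYPES[n][1] for n in names]
    best = None
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        total_cap = sum(n * c for n, c in zip(counts, caps))
        total_cost = sum(n * p for n, p in zip(counts, costs))
        largest = max((c for n, c in zip(counts, caps) if n > 0), default=0)
        # survive the loss of the largest running VM and still cover the demand
        if total_cap - largest >= demand and (best is None or total_cost < best[0]):
            best = (total_cost, dict(zip(names, counts)))
    return best

print(cheapest_mix(demand=6.0))   # a mixed fleet beats any single VM type here
```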


2020 ◽  
Vol 17 (9) ◽  
pp. 4055-4060
Author(s):  
L. Girish ◽  
Sridhar K. N. Rao

Virtualized data centers bring many benefits by reducing the heavy use of physical hardware, and the usage of cloud infrastructures is rapidly increasing in all fields to provide proper services on demand. In a cloud data center, achieving efficient resource sharing between virtual machines and physical machines is very important. To achieve this, the performance degradation of a virtual machine and its sensitivity must be modeled and predicted correctly. In this work we use machine learning techniques, namely decision tree, K nearest neighbor and logistic regression, to calculate the sensitivity of a virtual machine. The dataset used for the experiment was collected from an OpenStack cloud environment. We execute two scenarios in this experiment to evaluate the performance of the three classifiers based on precision, recall, sensitivity and specificity. We achieved good results using the decision tree classifier, with a precision of 88.8%, recall of 80% and accuracy of 97.30%.
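A minimal sketch of this kind of comparison follows, using the three named classifier families from scikit-learn on a synthetic stand-in dataset and reporting precision and recall; the features, labels and scores have no relation to the authors' OpenStack data.

```python
# Illustrative sketch only: compare the three classifier families named in the
# abstract (decision tree, K nearest neighbour, logistic regression) on a
# synthetic stand-in for the OpenStack-collected dataset, reporting precision
# and recall. Features and labels here are generated, not the authors' data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k nearest neighbour": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: precision={precision_score(y_te, pred):.2f} "
          f"recall={recall_score(y_te, pred):.2f}")
```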


Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is considered an on-demand service resource in which applications run in the data center on a pay-per-use basis. To allocate resources appropriately and satisfy user needs, an effective and reliable resource allocation method is required. Because of increased user demand, resource allocation is now considered a complex and challenging task; when a physical machine is overloaded, virtual machines share its load by utilizing the physical machine's resources. Previous studies fall short on energy consumption and time management when virtual machines on different servers are kept in a turned-on state. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating virtual machines from an ad hoc sub-server holding virtual machines. EXECUTION MODEL: The research is carried out in two sections: first, the placement of virtual machines and physical machines with the server takes place, and subsequently the cross-validation of the allocation is addressed. The Modified Best Fit Decreasing algorithm is used for sorting virtual machines, and Multi-Machine Job Scheduling is used during the placement of jobs on an appropriate host. An Artificial Neural Network is used as a classifier to allocate jobs to the hosts. Measures, namely Service Level Agreement violation and energy consumption, are considered, and fruitful results have been obtained, with a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
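The sketch below illustrates the Best Fit Decreasing family of placement heuristics that the Modified Best Fit Decreasing algorithm belongs to: VMs are sorted by demand in decreasing order and each is placed on the host left with the least spare capacity. The authors' modifications, and the demands and capacities used, are not from the paper.

```python
# Hypothetical sketch of a Best Fit Decreasing style placement: sort VMs by CPU
# demand in decreasing order, then place each on the host that would be left
# with the least spare capacity. The authors' modifications, the demands and
# the host capacities below are invented.

def bfd_place(vm_demands, host_capacities):
    placement = {}                       # vm index -> host index (or None)
    free = list(host_capacities)         # remaining capacity per host
    order = sorted(range(len(vm_demands)), key=lambda i: vm_demands[i], reverse=True)
    for vm in order:
        demand = vm_demands[vm]
        candidates = [h for h in range(len(free)) if free[h] >= demand]
        if not candidates:
            placement[vm] = None         # no host can take this VM
            continue
        best = min(candidates, key=lambda h: free[h] - demand)   # tightest fit
        free[best] -= demand
        placement[vm] = best
    return placement, free

placement, free = bfd_place([0.6, 0.3, 0.5, 0.2], [1.0, 1.0])
print("placement:", placement)
print("remaining host capacity:", free)
```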

