A Research on the Effectiveness of Different Algorithms and Scheduling in Improving the Performance of Cloud Computing Using the Cloud Analyst Simulator

Cloud computing can be defined as a computing paradigm in which various systems and large pools of resources are connected to one another over private or public networks. The aim is to provide a dynamically scalable infrastructure for applications, data, and file storage. Cloud computing has reduced the cost of computation and application hosting, so that content storage and delivery services are handled faster and more flexibly. Load balancing is one of the challenges that affect the performance of cloud computing, and overcoming it leads to better resource utilization and response time. The service broker policy plays an important role in accelerating the response time of customer requests by locating data centers or optimizing the pattern of access to them. This paper investigates the effectiveness of using different algorithms and approaches to improve the performance of cloud computing, showing that performance can be increased by relying on the criteria described in the paper. The results presented here were obtained using the Cloud Analyst simulator, which allows the time duration, load balancing algorithms, service broker algorithms, and related settings to be configured.
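The abstract does not reproduce the policies themselves; as a hedged illustration, the sketch below shows round-robin VM assignment, one of the load balancing policies commonly evaluated in Cloud Analyst. The class and variable names (`RoundRobinBalancer`, `vm_ids`) are invented for this example and are not taken from the paper.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin load balancer: requests are assigned to VMs in a fixed rotation."""
    def __init__(self, vm_ids):
        self._rotation = cycle(vm_ids)

    def assign(self, request_id):
        vm = next(self._rotation)
        return request_id, vm

# Usage: spread six incoming requests over three VMs.
balancer = RoundRobinBalancer(["vm-0", "vm-1", "vm-2"])
for req in range(6):
    print(balancer.assign(req))   # (0, 'vm-0'), (1, 'vm-1'), (2, 'vm-2'), (3, 'vm-0'), ...
```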

Author(s): Er. Ruchi, Harish Kumar

Cloud computing is regarded as one of the biggest technologies in today's environment, providing access to distributed resources on a pay-per-use basis. Everyone tries to use the cloud to reduce the cost and maintenance of infrastructure, and as a result the load on it increases day by day. There is therefore a need to balance that load, since cloud resources are limited while usage grows at every moment. This paper discusses how resources are allocated and how tasks are scheduled among those resources. Task scheduling mainly focuses on enhancing the utilization of resources and hence reducing response time. There are various static and dynamic load balancing algorithms for balancing the load; this paper presents a comparative study of these algorithms.
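To contrast the static round-robin policy sketched above with a dynamic one, the following sketch consults the current load at dispatch time and sends each task to the VM with the fewest active tasks. The names (`LeastLoadedBalancer`, `assign`, `complete`) are hypothetical and only illustrate the static-versus-dynamic distinction discussed in the paper.

```python
class LeastLoadedBalancer:
    """Dynamic policy: each new task goes to the VM currently holding the fewest active tasks."""
    def __init__(self, vm_ids):
        self.active = {vm: 0 for vm in vm_ids}

    def assign(self, task_id):
        vm = min(self.active, key=self.active.get)
        self.active[vm] += 1
        return task_id, vm

    def complete(self, vm):
        # Called when a task on `vm` finishes, freeing capacity.
        self.active[vm] -= 1

balancer = LeastLoadedBalancer(["vm-0", "vm-1"])
print(balancer.assign("t1"))  # ('t1', 'vm-0')
print(balancer.assign("t2"))  # ('t2', 'vm-1')
balancer.complete("vm-0")
print(balancer.assign("t3"))  # ('t3', 'vm-0') -- vm-0 is the least loaded again
```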


2019, Vol 151, pp. 992-997
Author(s): Zakaria Benlalia, Abderahim Beni-hssane, Karim Abouelmehdi, Abdellah Ezati

Author(s): Mohammed Radi, Ali Alwan, Abedallah Abualkishik, Adam Marks, Yonis Gulzar

Cloud computing has become a practical solution for processing big data. Cloud service providers have heterogeneous resources and offer a wide range of services with various processing capabilities. Typically, cloud users set preferences when working on a cloud platform. Some users tend to prefer the cheapest services for the given tasks, whereas other users prefer solutions that ensure the shortest response time, or seek services that provide an acceptable response time at a reasonable cost. The main responsibility of the cloud service broker is identifying the best data centre for processing user requests. Therefore, to maintain a high level of quality of service, it is necessary to develop a service broker policy that is capable of selecting the best data centre, taking into consideration user preferences (e.g. cost, response time). This paper proposes an efficient and cost-effective service broker policy for the cloud environment based on the concept of VIKOR. The proposed solution relies on a multi-criteria decision-making technique aimed at generating an optimized solution that incorporates user preferences. The simulation results show that the proposed policy outperforms the most recent policies designed for the cloud environment in many aspects, including processing time, response time, and processing cost.
Keywords: cloud computing, data centre selection, service broker, VIKOR, user priorities
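The paper's exact VIKOR formulation is not given in the abstract; the sketch below is a generic VIKOR ranking over two cost-type criteria (cost per request and response time), with invented data-centre values and weights, to illustrate how a broker could rank data centres by a compromise index that reflects user preferences.

```python
def vikor_rank(datacenters, weights, v=0.5):
    """
    Rank data centres with VIKOR over cost-type criteria (lower is better).
    `datacenters`: {name: [criterion values]}, `weights`: criterion weights summing to 1.
    Returns names sorted by the VIKOR index Q (smaller Q = better compromise).
    """
    names = list(datacenters)
    cols = list(zip(*datacenters.values()))          # values grouped per criterion
    best = [min(c) for c in cols]                    # ideal value per criterion
    worst = [max(c) for c in cols]                   # anti-ideal value per criterion

    S, R = {}, {}
    for name, row in datacenters.items():
        dists = [w * (x - b) / (wst - b) if wst != b else 0.0
                 for x, b, wst, w in zip(row, best, worst, weights)]
        S[name] = sum(dists)                         # group utility
        R[name] = max(dists)                         # individual regret

    s_min, s_max = min(S.values()), max(S.values())
    r_min, r_max = min(R.values()), max(R.values())
    Q = {n: v * (S[n] - s_min) / (s_max - s_min or 1)
            + (1 - v) * (R[n] - r_min) / (r_max - r_min or 1)
         for n in names}
    return sorted(names, key=Q.get)

# Criteria: [cost per request, response time in ms]; weights reflect user preference.
ranking = vikor_rank(
    {"DC-1": [0.10, 120], "DC-2": [0.07, 200], "DC-3": [0.12, 90]},
    weights=[0.4, 0.6])
print(ranking)   # best compromise data centre first
```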


Author(s): G. Soniya Priyatharsini, N. Malarvizhi

Cloud computing is an internet-based service model that provides virtualized resources to its clients. This kind of service offers many benefits to cloud users, who pay only for what they use. Despite these benefits, users also face problems such as receiving the computing resources guaranteed to them on time. Such delays can affect the service time and the makespan. To reduce these problems, it is necessary to schedule the resources and then allocate them using an optimized hypervisor. The proposed method addresses this problem as follows. First, the available resources are clustered with respect to their characteristics. Then the resources are scheduled using the proposed method. Finally, the resources are allocated according to the clients' requests, with cost used as the fitness of the allocation.
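As an illustration of the cluster-then-allocate idea with cost as the fitness, here is a minimal sketch. The clustering key (CPU core count), the VM records, and the cost figures are assumptions made for this example, not the authors' actual method.

```python
from collections import defaultdict

def cluster_resources(vms):
    """Group VMs by a simple characteristic (CPU core count) -- a stand-in for the clustering step."""
    clusters = defaultdict(list)
    for vm in vms:
        clusters[vm["cpus"]].append(vm)
    return clusters

def allocate(clusters, request):
    """Among VMs that satisfy the request, pick the one with the lowest cost (cost as fitness)."""
    candidates = [vm for cpus, group in clusters.items()
                  if cpus >= request["cpus"] for vm in group]
    return min(candidates, key=lambda vm: vm["cost"], default=None)

vms = [{"name": "vm-a", "cpus": 2, "cost": 0.05},
       {"name": "vm-b", "cpus": 4, "cost": 0.09},
       {"name": "vm-c", "cpus": 4, "cost": 0.07}]
clusters = cluster_resources(vms)
print(allocate(clusters, {"cpus": 4}))   # vm-c: meets the CPU need at the lowest cost
```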


Author(s): Mais Haj Qasem, Alaa Abu-Srhan, Hutaf Natoureah, Esra Alzaghoul

Fog computing is a new network architecture and computing paradigm that uses user devices, or devices near users (the network edge), to carry out some processing tasks. In doing so, it extends cloud computing with flexibility akin to that found in ubiquitous networks. This paper proposes a smart city based on the concept of fog computing with a flexible hierarchy. The aim of the proposed design is to overcome the limitations of previous approaches, which depend on various network architectures such as cloud computing, autonomic network architecture, and ubiquitous network architecture. The proposed approach reduces the latency of data processing and transmission, enabling real-time applications; distributes processing tasks over edge devices in order to reduce the cost of data processing; and allows collaborative data exchange among the applications of the smart city. The design is made up of five major layers, which can be increased or merged according to the amount of data processing and transmission in each application: the connection layer, real-time processing layer, neighborhood linking layer, main-processing layer, and data server layer. A case study of a novel smart public car parking, travelling and direction advisor is implemented using iFogSim, and the results show that the design significantly reduces the delay of real-time applications and reduces cost and network usage compared with the cloud computing paradigm. Moreover, although the proposed approach increases the scalability and reliability of users' access, it does not sacrifice much time, cost, or network usage compared with a fixed fog computing design.
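The abstract describes the layered placement only at a high level; the toy rule below shows one way such a hierarchy could route tasks among three of the named layers. The thresholds and task attributes are invented for illustration and are not the paper's actual placement logic.

```python
def place_task(task):
    """
    Toy placement rule mirroring the layered idea: latency-critical work stays near the edge,
    heavier analytics climb toward the main-processing layer. Thresholds are invented.
    """
    if task["deadline_ms"] <= 50:
        return "real-time processing layer"      # handled on or near the edge device
    if task["input_mb"] <= 10:
        return "neighborhood linking layer"      # shared among nearby fog nodes
    return "main-processing layer"               # bulk processing farther from the edge

for task in [{"name": "parking-slot-update", "deadline_ms": 20, "input_mb": 0.1},
             {"name": "route-advice", "deadline_ms": 200, "input_mb": 5},
             {"name": "daily-traffic-report", "deadline_ms": 60000, "input_mb": 500}]:
    print(task["name"], "->", place_task(task))
```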


2020, Vol ahead-of-print (ahead-of-print)
Author(s): Mahfooz Alam, Mahak, Raza Abbas Haidri, Dileep Kumar Yadav

Purpose: Cloud users can access services at any time from anywhere in the world. On average, Google now processes more than 40,000 searches every second, which is approximately 3.5 billion searches per day. Diverse and vast amounts of data are generated with the development of next-generation information technologies such as cryptocurrency, the internet of things and big data. To execute such applications, an efficient scheduling algorithm is needed that considers quality-of-service parameters like utilization, makespan and response time. Therefore, this paper aims to propose a novel Efficient Static Task Allocation (ESTA) algorithm, which optimizes average utilization.

Design/methodology/approach: Cloud computing provides resources such as virtual machines, network and storage over the internet and follows the pay-per-use billing model. To achieve efficient task allocation, the scheduling problem should be tackled through efficient task distribution on the resources. The methodology of the ESTA algorithm is based on the minimum completion time approach. ESTA intelligently maps a batch of independent tasks (cloudlets) onto heterogeneous virtual machines and optimizes their utilization in infrastructure-as-a-service cloud computing.

Findings: To evaluate the performance of ESTA, the simulation study compares it with Min-Min, load balancing strategy with migration cost, longest job in the fastest resource-shortest job in the fastest resource, sufferage, minimum completion time (MCT), minimum execution time and opportunistic load balancing, in terms of makespan, utilization and response time.

Originality/value: The simulation results reveal that the ESTA algorithm consistently performs better under varying batches of independent cloudlets and numbers of virtual machines.
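ESTA is described as being based on the minimum completion time approach; the sketch below shows a plain MCT mapping of a batch of cloudlets onto heterogeneous VMs, with makespan and average utilization computed from the resulting schedule. It is a generic MCT illustration under assumed cloudlet lengths and VM speeds, not the authors' ESTA implementation.

```python
def mct_schedule(cloudlets, vm_mips):
    """
    Minimum-completion-time mapping: each cloudlet (a length in million instructions)
    goes to the VM where it would finish earliest, given what is already queued there.
    """
    ready = {vm: 0.0 for vm in vm_mips}          # time at which each VM becomes free
    mapping = []
    for i, length in enumerate(cloudlets):
        # completion time on a VM = current ready time + execution time on that VM
        vm = min(vm_mips, key=lambda v: ready[v] + length / vm_mips[v])
        ready[vm] += length / vm_mips[vm]
        mapping.append((i, vm))
    makespan = max(ready.values())
    utilization = sum(ready.values()) / (len(ready) * makespan)
    return mapping, makespan, utilization

mapping, makespan, util = mct_schedule(
    cloudlets=[4000, 1000, 2500, 3000], vm_mips={"vm-0": 500, "vm-1": 1000})
print(mapping, round(makespan, 2), round(util, 2))
```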


Author(s): Minakshi Sharma, Rajneesh Kumar, Anurag Jain

Cloud load balancing is performed to sustain services in the cloud environment while meeting quality of service (QoS) parameters. An efficient load balancing algorithm should be based on better optimization of these QoS parameters, which results in efficient scheduling. Most existing load balancing algorithms consider either response time or resource utilization constraints, but an efficient algorithm must consider both the user's and the cloud service provider's perspectives. This article presents a load balancing strategy that efficiently allocates tasks to virtualized resources to obtain maximum resource utilization with minimum response time. The proposed approach, join minimum loaded queue (JMLQ), is based on the existing join idle queue (JIQ) model, modified by replacing the idle servers in the I-queues with servers that have one task in their execution list. Simulation results in CloudSim verify that the proposed approach efficiently maximizes resource utilization while reducing response time in comparison to other variants.
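As a rough sketch of the JMLQ idea (not the authors' implementation), the dispatcher below keeps a queue of lightly loaded servers, here taken to mean servers with at most one task in execution, and sends new tasks there first, falling back to the globally least-loaded server when that queue is empty. All names and the load threshold are illustrative assumptions.

```python
from collections import deque

class JoinMinimumLoadedQueue:
    """Dispatcher that prefers servers from a queue of lightly loaded candidates."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}
        self.light_queue = deque(servers)          # all servers start lightly loaded

    def dispatch(self, task_id):
        if self.light_queue:
            server = self.light_queue.popleft()
        else:
            server = min(self.load, key=self.load.get)
        self.load[server] += 1
        return task_id, server

    def finish(self, server):
        self.load[server] -= 1
        if self.load[server] <= 1 and server not in self.light_queue:
            self.light_queue.append(server)        # server is lightly loaded again

jmlq = JoinMinimumLoadedQueue(["s1", "s2"])
print(jmlq.dispatch("t1"), jmlq.dispatch("t2"), jmlq.dispatch("t3"))
jmlq.finish("s1")
print(jmlq.dispatch("t4"))   # s1 re-entered the lightly loaded queue after finishing a task
```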


2017, Vol 8 (3), pp. 53-73
Author(s): Raza Abbas Haidri, Chittaranjan Padmanabh Katti, Prem Chandra Saxena

The emerging cloud computing technology attracts the attention of both commercial and academic spheres. Generally, faster resources cost more than slower ones, so there is a trade-off between deadline and cost. In this paper, the authors propose a receiver-initiated deadline-aware load balancing strategy (RDLBS) that tries to meet the deadlines of requests while optimizing the rate of revenue. RDLBS balances the load among the virtual machines (VMs) by migrating requests from overloaded VMs to underloaded VMs. Turnaround time is also computed for the performance evaluation. The experiments are conducted using the CloudSim simulator, and the results are compared with existing state-of-the-art algorithms with similar objectives.
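Below is a minimal sketch of the receiver-initiated migration step, assuming queue length as the load measure and omitting the deadline check that RDLBS performs before moving a request; the function name, the threshold, and the queues are invented for illustration.

```python
def receiver_initiated_rebalance(vm_queues, threshold=2):
    """
    Receiver-initiated migration sketch: each underloaded VM (queue shorter than `threshold`)
    pulls one waiting request from the most heavily loaded VM. Deadline checks are omitted.
    """
    moves = []
    for receiver, queue in vm_queues.items():
        if len(queue) < threshold:
            donor = max(vm_queues, key=lambda v: len(vm_queues[v]))
            if donor != receiver and len(vm_queues[donor]) > threshold:
                task = vm_queues[donor].pop()       # migrate one queued request
                queue.append(task)
                moves.append((task, donor, receiver))
    return moves

queues = {"vm-0": ["r1", "r2", "r3", "r4"], "vm-1": [], "vm-2": ["r5"]}
print(receiver_initiated_rebalance(queues))   # [('r4', 'vm-0', 'vm-1'), ('r3', 'vm-0', 'vm-2')]
print(queues)                                 # queues after migration
```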

