A new service broker algorithm optimizing the cost and response time for cloud computing

2019
Vol 151
pp. 992-997
Author(s):  
Zakaria Benlalia ◽  
Abderahim Beni-hssane ◽  
Karim Abouelmehdi ◽  
Abdellah Ezati

Cloud computing is a computing paradigm in which various systems and large resource pools are connected to each other over private or public networks. Its aim is to provide a dynamically scalable infrastructure for applications, data, and file storage. Cloud computing reduces the cost of computation and application hosting, so that content storage and delivery services are handled faster and more flexibly. Load balancing is one of the challenges that affect the performance of cloud computing; overcoming it leads to better resource utilization and response time. The service broker policy plays an important role in accelerating the response time of customer requests by locating data centers and optimizing the pattern of access to them. This paper investigates the effectiveness of different algorithms and approaches for improving the performance of cloud computing, showing that performance can be increased by relying on the criteria described in the paper. The results presented here were obtained using the CloudAnalyst simulator, whose configuration includes the simulation duration, load balancing algorithms, and service broker algorithms, among other parameters.
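To make the role of a service broker policy concrete, the following Python sketch routes a request to the data centre with the lowest estimated response time. The data-centre names, latencies, and queue lengths are illustrative assumptions, not values or algorithms taken from the paper.

```python
# Hypothetical sketch of a latency-aware service broker policy.
# Data-centre names, latencies, and load values are illustrative only.

def pick_data_centre(data_centres):
    """Return the data centre with the lowest estimated response time.

    Each entry carries a base network latency (ms) and the number of
    requests currently queued; the crude estimate below penalises
    heavily loaded data centres so that traffic spreads out.
    """
    def estimated_response_time(dc):
        return dc["latency_ms"] + 0.5 * dc["queued_requests"]

    return min(data_centres, key=estimated_response_time)


if __name__ == "__main__":
    candidates = [
        {"name": "DC-Europe", "latency_ms": 40, "queued_requests": 120},
        {"name": "DC-US",     "latency_ms": 95, "queued_requests": 10},
        {"name": "DC-Asia",   "latency_ms": 60, "queued_requests": 30},
    ]
    best = pick_data_centre(candidates)
    print("Route request to:", best["name"])
```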


Author(s):  
Mohammed Radi ◽  
Ali Alwan ◽  
Abedallah Abualkishik ◽  
Adam Marks ◽  
Yonis Gulzar

Cloud computing has become a practical solution for processing big data. Cloud service providers have heterogeneous resources and offer a wide range of services with various processing capabilities. Typically, cloud users set preferences when working on a cloud platform. Some users tend to prefer the cheapest services for the given tasks, whereas other users prefer solutions that ensure the shortest response time, or seek services offering an acceptable response time at a reasonable cost. The main responsibility of the cloud service broker is identifying the best data centre for processing user requests. Therefore, to maintain a high level of quality of service, it is necessary to develop a service broker policy capable of selecting the best data centre while taking user preferences (e.g. cost, response time) into consideration. This paper proposes an efficient and cost-effective plan for a service broker policy in a cloud environment based on the concept of VIKOR. The proposed solution relies on a multi-criteria decision-making technique aimed at generating an optimized solution that incorporates user preferences. The simulation results show that the proposed policy outperforms the most recent policies designed for the cloud environment in many aspects, including processing time, response time, and processing cost.
Keywords: cloud computing, data centre selection, service broker, VIKOR, user priorities
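As a rough illustration of how a VIKOR-based broker could rank data centres on cost and response time, consider the Python sketch below. It is a generic VIKOR implementation rather than the paper's exact policy, and the criterion values, weights, and data-centre names are hypothetical.

```python
# Hypothetical sketch of VIKOR-style ranking for data-centre selection.
# Criteria values, weights, and data-centre names are illustrative only;
# they do not come from the paper's experiments.

def vikor_rank(alternatives, weights, v=0.5):
    """Rank alternatives on criteria where lower is better (e.g. cost,
    response time) using the VIKOR S/R/Q scores; return (name, Q) sorted
    so that the preferred alternative comes first."""
    names = list(alternatives)
    matrix = [alternatives[n] for n in names]
    n_criteria = len(weights)

    # Best (f*) and worst (f-) value per criterion; all criteria are "min".
    f_best = [min(row[j] for row in matrix) for j in range(n_criteria)]
    f_worst = [max(row[j] for row in matrix) for j in range(n_criteria)]

    S, R = [], []
    for row in matrix:
        terms = [
            weights[j] * (row[j] - f_best[j]) / (f_worst[j] - f_best[j])
            if f_worst[j] != f_best[j] else 0.0
            for j in range(n_criteria)
        ]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret

    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    Q = [
        v * (S[i] - s_best) / ((s_worst - s_best) or 1.0)
        + (1 - v) * (R[i] - r_best) / ((r_worst - r_best) or 1.0)
        for i in range(len(names))
    ]
    return sorted(zip(names, Q), key=lambda pair: pair[1])


if __name__ == "__main__":
    # Columns: [cost per hour in USD, mean response time in ms]
    data_centres = {
        "DC-A": [0.12, 80],
        "DC-B": [0.08, 140],
        "DC-C": [0.10, 95],
    }
    ranking = vikor_rank(data_centres, weights=[0.5, 0.5])
    print("Preferred data centre:", ranking[0][0])
```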


Author(s):  
Er. Ruchi ◽  
Harish Kumar

Cloud computing is regarded as one of today's most significant technologies, providing access to distributed resources on a pay-per-use basis. Organizations increasingly turn to the cloud to reduce the cost and maintenance of infrastructure, so the load on cloud resources grows day by day. There is therefore a need to balance that load, since cloud resources are limited while usage keeps increasing. This paper discusses how resources are allocated and how tasks are scheduled among those resources. Task scheduling mainly focuses on enhancing the utilization of resources and thereby reducing response time. Various static and dynamic load balancing algorithms exist to balance the load; this paper presents a comparative study of these algorithms.
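The contrast between static and dynamic load balancing can be illustrated with a small Python sketch: a round-robin balancer that ignores current load versus a least-loaded balancer that reacts to it. The VM names and load figures are made up for illustration and are not from the paper's comparison.

```python
# Illustrative comparison of a static (round-robin) and a dynamic
# (least-loaded) load-balancing policy; VM names and loads are made up.

from itertools import cycle


class RoundRobinBalancer:
    """Static policy: assigns tasks to VMs in a fixed circular order,
    ignoring how busy each VM currently is."""

    def __init__(self, vms):
        self._cycle = cycle(vms)

    def pick(self, loads):
        return next(self._cycle)


class LeastLoadedBalancer:
    """Dynamic policy: assigns each task to the VM with the fewest
    currently running tasks."""

    def pick(self, loads):
        return min(loads, key=loads.get)


if __name__ == "__main__":
    vms = ["vm-1", "vm-2", "vm-3"]

    # vm-1 starts busier; the dynamic policy steers new tasks away from it,
    # while the static policy keeps assigning to it in turn.
    static_loads = {"vm-1": 4, "vm-2": 0, "vm-3": 0}
    dynamic_loads = {"vm-1": 4, "vm-2": 0, "vm-3": 0}

    static = RoundRobinBalancer(vms)
    dynamic = LeastLoadedBalancer()

    for _ in range(6):
        static_loads[static.pick(static_loads)] += 1
        dynamic_loads[dynamic.pick(dynamic_loads)] += 1

    print("Static  (round robin):", static_loads)
    print("Dynamic (least loaded):", dynamic_loads)
```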


Author(s):  
Wan Nurazieelin Wan Abd Manan ◽  
Mohamad Aizi Salamat

Reducing dynamic data redundancy in cloud computing is one of the best ways to keep storage capacity from being fully utilized. Cloud storage is a part of cloud computing technology that is in high demand in any organization for reducing the cost of purchasing and maintaining storage infrastructure. An increase in the number of users requires a larger storage capacity for storing their data. Reducing dynamic data redundancy allows service providers to save energy and minimize maintenance costs. Recent research focuses more on data of a static nature despite its limited applicability compared with the dynamic data characteristic of cloud storage. Therefore, this paper theoretically compares various techniques for reducing redundant dynamic data in cloud computing and suggests the best technique for the task in terms of response time.
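One common way to reduce redundancy in dynamic data is chunk-level deduplication, where an updated file only adds chunks whose content hash has not been seen before. The Python sketch below is a simplified illustration of this idea and is not one of the specific techniques compared in the paper.

```python
# Hypothetical sketch of chunk-level deduplication for dynamic data:
# when a file changes, only chunks whose content hash is new are stored.
# The fixed chunk size and the in-memory "store" are simplifications.

import hashlib

CHUNK_SIZE = 4096  # bytes


class DedupStore:
    def __init__(self):
        self.chunks = {}   # content hash -> chunk bytes
        self.files = {}    # file name -> ordered list of chunk hashes

    def put(self, name, data):
        """Store a (possibly updated) file, keeping only unseen chunks."""
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup: skip known chunks
            hashes.append(digest)
        self.files[name] = hashes

    def get(self, name):
        return b"".join(self.chunks[h] for h in self.files[name])


if __name__ == "__main__":
    store = DedupStore()
    store.put("report.doc", b"A" * 8192)
    # An update touching only the tail reuses the unchanged leading chunk.
    store.put("report.doc", b"A" * 4096 + b"B" * 4096)
    print("Unique chunks stored:", len(store.chunks))  # 2, not 4
```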


2021
Vol 11 (4)
pp. 136-151
Author(s):  
Thura Al-Azzawi ◽  
Tugberk Kaya

The use of cloud computing has remarkable advantages for business performance, particularly in relation to aspects of the organizational environment such as organizational culture and organizational agility. Organizational agility makes it easier to search for and retrieve knowledge and allows businesses to apply this knowledge to deliver high-quality services. Agility and culture factors can help organizations enhance their cloud computing adoption. The success of any organization depends upon its human resources; through them, the organization can develop its employees by sharing the knowledge, skills, and experience of its personnel. The expert cloud has a significant impact on and a direct relation with human resources, as it makes communication among them more efficient and reduces the cost of the service. In this paper, the authors discuss the relationship between the expert cloud and human resources in enhancing organizational performance with the assistance of organizational agility and culture.


1996
Vol 33 (1)
pp. 147-157
Author(s):  
Henrik A. Thomsen ◽  
Kenneth Kisbye

State-of-the-art on-line meters for the determination of ammonium, nitrate and phosphate are presented. The on-line meters employ different measuring principles and are available in many designs that differ with respect to size, calibration and cleaning principle, user-friendliness, response time, and reagent and sample consumption. A study of Danish experiences at several plants has been conducted. The list price of an on-line meter is between USD 8,000 and USD 35,000; to this should be added the cost of sample preparation, design, installation and running-in. The yearly operating costs for one meter are in the range of USD 200-2,500, and the manpower consumption is in the range of 1-5 hours/month. The accuracy obtained is only slightly lower than that of collaborative laboratory analyses, which is sufficient for most control purposes.
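For orientation, a rough total-cost estimate can be built from the figures quoted above. In the Python sketch below, the labour rate and the five-year horizon are assumptions, and the additional costs of sample preparation, design, installation and running-in are left out.

```python
# Rough total-cost illustration using the ranges from the abstract:
# list price USD 8,000-35,000, yearly operating cost USD 200-2,500,
# and 1-5 hours/month of manpower. The labour rate and planning horizon
# are assumptions; sample preparation, design, installation and
# running-in costs are excluded.

HOURLY_LABOUR_RATE_USD = 40   # assumed
YEARS = 5                     # assumed planning horizon


def five_year_cost(list_price, yearly_operating, hours_per_month):
    labour = hours_per_month * 12 * YEARS * HOURLY_LABOUR_RATE_USD
    return list_price + yearly_operating * YEARS + labour


low = five_year_cost(8_000, 200, 1)          # low end of the ranges
high = five_year_cost(35_000, 2_500, 5)      # high end of the ranges
print(f"Five-year cost range: USD {low:,.0f} - USD {high:,.0f}")
```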


2021
Vol 34 (1)
pp. 66-85
Author(s):  
Yiannis Verginadis ◽  
Dimitris Apostolou ◽  
Salman Taherizadeh ◽  
Ioannis Ledakis ◽  
Gregoris Mentzas ◽  
...  

Fog computing extends multi-cloud computing by enabling services or application functions to be hosted close to their data sources. To take advantage of the capabilities of fog computing, the serverless and function-as-a-service (FaaS) software engineering paradigms allow for the flexible deployment of applications on multi-cloud, fog, and edge resources. This article reviews prominent fog computing frameworks and discusses some of the challenges and requirements of FaaS-enabled applications. Moreover, it proposes a novel framework able to dynamically manage multi-cloud, fog, and edge resources and to deploy data-intensive applications developed using the FaaS paradigm. The proposed framework leverages the FaaS paradigm in a way that improves the average service response time of data-intensive applications by a factor of three, regardless of the underlying multi-cloud, fog, and edge resource infrastructure.
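As a simplified illustration of FaaS-style deployment across cloud, fog, and edge tiers, the Python sketch below pairs a small data-processing function with a naive placement rule that picks the cheapest tier whose latency stays within a response-time budget. The tiers, latencies, costs, and helper functions are hypothetical and are not the framework's actual API.

```python
# Illustrative sketch only: a FaaS-style function plus a naive placement
# rule in the spirit of the framework described above. The tiers,
# latencies, costs, and helper functions are hypothetical assumptions.

def detect_anomalies(event):
    """Example data-intensive function: flag readings above a threshold."""
    return [r for r in event["readings"] if r > event.get("threshold", 100)]


def choose_tier(tiers, latency_ms, budget_ms=50):
    """Pick the cheapest tier whose network latency to the data source
    stays within the response-time budget; otherwise fall back to the
    lowest-latency tier."""
    viable = [t for t in tiers if latency_ms[t["name"]] <= budget_ms]
    if viable:
        return min(viable, key=lambda t: t["cost_per_invocation"])
    return min(tiers, key=lambda t: latency_ms[t["name"]])


if __name__ == "__main__":
    tiers = [
        {"name": "edge",  "cost_per_invocation": 0.0005},
        {"name": "fog",   "cost_per_invocation": 0.0003},
        {"name": "cloud", "cost_per_invocation": 0.0001},
    ]
    latency_ms = {"edge": 5, "fog": 20, "cloud": 120}
    tier = choose_tier(tiers, latency_ms)
    print("Deploy detect_anomalies to:", tier["name"])
    print("Anomalies:", detect_anomalies({"readings": [90, 150, 101]}))
```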


Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: the data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones since those decisions have a major impact on the network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize the cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in the industry, explain advantages and disadvantages, and present challenging future research directions.
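As a minimal illustration of the Virtual Machine Placement Problem described above, the Python sketch below applies a first-fit-decreasing heuristic to pack VMs onto as few servers as possible by CPU demand. The capacities and demands are made up, and real placement models also weigh memory, network traffic between VMs, and energy consumption.

```python
# Minimal sketch of a first-fit-decreasing heuristic for VM placement:
# sort VMs by demand and place each on the first server with enough
# spare capacity, powering on a new server only when none fits.
# Capacities and demands are illustrative only.

def first_fit_decreasing(vm_demands, server_capacity):
    """Return a list of servers, each with its remaining capacity and VMs."""
    servers = []  # each entry: {"free": remaining capacity, "vms": [...]}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if server["free"] >= demand:
                server["free"] -= demand
                server["vms"].append(vm)
                break
        else:  # no existing server fits: open (power on) a new one
            servers.append({"free": server_capacity - demand, "vms": [vm]})
    return servers


if __name__ == "__main__":
    demands = {"vm-a": 6, "vm-b": 4, "vm-c": 3, "vm-d": 3, "vm-e": 2}
    placement = first_fit_decreasing(demands, server_capacity=8)
    for i, server in enumerate(placement, 1):
        print(f"server-{i}: {server['vms']} (free CPU: {server['free']})")
```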

