A new popularity-based data replication strategy in cloud systems

2021 ◽  
Vol 17 (2) ◽  
pp. 159-177
Author(s):  
Abdenour Lazeb ◽  
Riad Mokadem ◽  
Ghalem Belalem

Data-intensive cloud computing systems grow year by year due to the increasing volume of data. In this context, data replication is frequently used to ensure quality of service, e.g., performance. However, most existing data replication strategies simply reproduce the same number of replicas on a set of nodes, which is not sufficient for more accurate results. To address this problem, we propose a new data Replication and Placement strategy based on the popularity of User Request Groups (RPURG). It aims to reduce tenant response time and maximize the cloud provider's benefit while satisfying the Service Level Agreement (SLA). We demonstrate the validity of our strategy in a performance evaluation study; the experimental results show the robustness of RPURG.
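
The abstract does not reproduce the RPURG algorithm itself. As a rough illustration of the general idea of a popularity-based policy, the minimal sketch below derives a per-item replica count from how often each data item appears in the request log; the function, its parameters and the replica bounds are illustrative assumptions, not the authors' method.

```python
from collections import Counter

def replica_counts(access_log, min_replicas=1, max_replicas=5):
    """Derive a per-item replica count from request popularity.

    access_log: iterable of data-item identifiers, one entry per user request.
    Popular items receive proportionally more replicas, bounded by
    [min_replicas, max_replicas]. This illustrates a popularity-based
    policy in general, not the RPURG strategy from the paper.
    """
    counts = Counter(access_log)
    peak = max(counts.values(), default=1)
    plan = {}
    for item, hits in counts.items():
        share = hits / peak                       # normalized popularity in (0, 1]
        plan[item] = max(min_replicas, round(share * max_replicas))
    return plan

# Example: item 'a' is far more popular than 'b' or 'c'.
print(replica_counts(['a'] * 50 + ['b'] * 10 + ['c'] * 2))
# {'a': 5, 'b': 1, 'c': 1}
```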

Author(s):  
Linlin Wu ◽  
Rajkumar Buyya

In recent years, extensive research has been conducted in the area of the Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations can be met. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including SLA autonomy management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.


Author(s):  
Abdenour Lazeb ◽  
Riad Mokadem ◽  
Ghalem Belalem

Applications produce huge volumes of data that are distributed across remote and heterogeneous sites, which creates problems related to accessing and sharing data. As a result, managing data in large-scale environments is a real challenge. In this context, large-scale data management systems often use data replication, a well-known technique that addresses these problems by storing multiple copies of data, called replicas, across multiple nodes. Most of the replication strategies designed for such environments are difficult to adapt to the cloud: they aim to achieve the best system performance without meeting the important objectives of the cloud provider. This article proposes a new dynamic replication strategy whose algorithm significantly improves provider gain without neglecting customer satisfaction.


2020 ◽  
Author(s):  
Chunlin Li ◽  
Yihan Zhang ◽  
Xiaomei Qu ◽  
Youlong Luo

Abstract In recent years, with the continuous development of Internet of Things and cloud computing technologies, data-intensive applications have received increasing attention. In the distributed cloud environment, access to massive data is often the performance bottleneck, so a suitable data deployment algorithm is important for improving cloud server utilization and task scheduling efficiency. In order to reduce data access cost and data deployment time, an optimal data deployment algorithm is proposed in this paper: the data deployment problem is modeled and analyzed, and then solved using an improved genetic algorithm. After the data are deployed, a task progress aware scheduling algorithm is proposed to improve scheduling efficiency by making the speculative execution mechanism more accurate. First, thresholds for detecting slow tasks and fast nodes are set. Then, slow tasks and fast nodes are detected by calculating the remaining time of each task and the real-time processing ability of each node, respectively. Finally, backup executions of the slow tasks are launched on the fast nodes. The experimental results show that, while satisfying the system's load balancing, the proposed algorithms can clearly reduce data access cost, service-level agreement (SLA) violation rate and system execution time, and optimize data deployment to improve scheduling efficiency in distributed clouds.
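
The scheduling step above hinges on two estimates: a task's remaining time and a node's real-time processing ability. A minimal sketch of that detection logic, using assumed data structures and illustrative thresholds rather than the paper's actual values, could look as follows.

```python
def detect_candidates(tasks, nodes, slow_factor=1.5, fast_factor=1.2):
    """Pick slow tasks and fast nodes for speculative (backup) execution.

    tasks: list of dicts with 'id', 'progress' (0..1) and 'elapsed' seconds.
    nodes: list of dicts with 'id' and 'throughput' (work units per second).
    A task is 'slow' if its estimated remaining time exceeds slow_factor
    times the average remaining time; a node is 'fast' if its measured
    throughput exceeds fast_factor times the average throughput.
    The thresholds are illustrative assumptions, not the paper's values.
    """
    def remaining(t):
        done = max(t['progress'], 1e-6)
        return t['elapsed'] * (1.0 - done) / done   # time left at current rate

    avg_rem = sum(remaining(t) for t in tasks) / len(tasks)
    avg_thr = sum(n['throughput'] for n in nodes) / len(nodes)

    slow_tasks = [t['id'] for t in tasks if remaining(t) > slow_factor * avg_rem]
    fast_nodes = [n['id'] for n in nodes if n['throughput'] > fast_factor * avg_thr]
    # Backup copies of the slow tasks would then be launched on the fast nodes.
    return slow_tasks, fast_nodes
```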


2017 ◽  
Vol 17 (2) ◽  
pp. 83-96
Author(s):  
G. Arun Kumar ◽  
Aravind Sundaresan ◽  
Snehanshu Saha ◽  
Bidisha Goswami ◽  
Shakti Mishra

Abstract Cloud computing offers scalable services to users, where computing resources are owned by a cloud provider and offered to clients on a pay-per-use basis. However, since multiple clients share the cloud's resources, they can interfere with each other's tasks during peak load. The environment changes at every instant, with new job requests demanding resources while other jobs release the resources they hold. A major challenge for service providers is to maintain this balance without compromising the Service Level Agreement (SLA). At peak load, when each client strives to obtain a particular resource in minimal time, the resource allocation problem becomes even more challenging: the SLA criterion must be fulfilled without delaying the allocation. This paper proposes an n-player, game-based machine learning strategy that forecasts outcomes from the a priori information available and measures/estimates parameters such as utilization and delay in an optimal load-balanced paradigm. The simulation validates the conclusion of the theorem by showing that the average delay is low and stays in that range as the number of job requests increases. In future work, we shall extend this to a multi-resource, multi-user environment.
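
The game-based strategy itself is not detailed in the abstract. Purely as an illustration of the utilization/delay estimates such a scheme relies on, the sketch below scores each resource with a textbook M/M/1 mean response time and routes a request to the least-delay resource; the names and the queueing model are assumptions, not the paper's forecasting method.

```python
def route_request(resources):
    """Route one job to the resource with the lowest estimated delay.

    resources: dict mapping resource name -> (arrival_rate, service_rate),
    both in jobs per second. Uses the M/M/1 mean response time
    1 / (mu - lambda) as a stand-in delay estimate; this is only an
    illustrative baseline, not the paper's game-theoretic forecast.
    """
    def est_delay(lam, mu):
        return float('inf') if lam >= mu else 1.0 / (mu - lam)

    delays = {name: est_delay(lam, mu) for name, (lam, mu) in resources.items()}
    return min(delays, key=delays.get), delays

best, delays = route_request({'vm1': (8.0, 10.0), 'vm2': (3.0, 10.0)})
print(best)   # 'vm2' (lower utilization, hence lower estimated delay)
```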


Author(s):  
Gaurav Sharma ◽  
Urvashi Garg ◽  
Arun Jain ◽  
Loveleena Mukhija

Cloud computing is the combination of distributed computing, grid computing and parallel technologies, which together define the shape of a new era. In this technology, client data are stored and maintained in the data center of a cloud provider such as Google, Amazon or Microsoft. It inherits legacy technology while introducing unique ideas. Industries such as education, banking and healthcare are moving towards the cloud due to the efficiency of its services, measured in transactions carried out, processing power used, bandwidth consumed, data transferred, etc. There are various challenges to adopting cloud computing, such as privacy, interoperability, managing the Service Level Agreement (SLA) and reliability. In this paper we survey the challenges in resource allocation and the security issues of the cloud environment.


2019 ◽  
Vol 11 (7) ◽  
pp. 142 ◽  
Author(s):  
Loretta Mastroeni ◽  
Alessandro Mazzoccoli ◽  
Maurizio Naldi

Service Level Agreements are employed to set availability commitments in cloud services. When a violation occurs, as in an outage, cloud providers may be called on to compensate customers for the losses incurred, and such compensation may be large enough to erode cloud providers' profit margins. Insurance may be used to protect cloud providers against this danger. In this paper, closed formulas are provided through the expected utility paradigm to set the insurance premium under different outage models and QoS metrics (number of outages, number of long outages, and unavailability). When the cloud service is paid through a fixed fee, we also provide the maximum unit compensation that a cloud provider can offer so as to meet constraints on its profit loss. The unit compensation is shown to vary approximately as the inverse square of the service fee.
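
The closed formulas themselves depend on the outage model and are not given in the abstract. As background on the expected utility paradigm they build on, the standard indifference condition for an insurance premium P, given a utility function u, initial wealth w and a random compensation loss L, reads as follows (a textbook formulation, stated here as background rather than as the paper's own derivation).

```latex
% Expected-utility (indifference) premium principle: the insured provider is
% indifferent between paying the premium P and bearing the random loss L.
\[
  u(w - P) \;=\; \mathbb{E}\!\left[\, u(w - L) \,\right].
\]
% For exponential utility u(x) = -e^{-a x} with risk aversion a > 0 this gives
% the classical closed form
\[
  P \;=\; \frac{1}{a}\,\ln \mathbb{E}\!\left[ e^{\,a L} \right],
\]
% i.e., the premium is the scaled cumulant generating function of the loss L.
```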


2018 ◽  
Vol 7 (3.27) ◽  
pp. 223
Author(s):  
A Banushri ◽  
R A. Karthika

To enjoy the full benefits of cloud computing, security, privacy, compliance and legal requirements must be built into the cloud implementation and its use. Each industry has its own risk levels within which it can operate. A company planning to use public cloud services should be conscious of the applicable regulations and industry risks and needs to monitor and abide by them, owing to the multi-tenant, open-to-all nature of the public cloud. A thorough risk analysis must be done initially with a public cloud provider; the main objective is to identify the existing vulnerabilities and to implement measures to counter those threats. There are a variety of risks, such as vendor lock-in, non-compliance, poor provisioning, unauthorized access, loss of control, Service Level Agreement (SLA) violations, Internet attacks, etc., and several measures could be deployed to alleviate them. This paper deals with the mitigation mechanisms, security requirements and various risks associated with the public cloud.


2021 ◽  
Author(s):  
Sebastian Perez-Salazar ◽  
Ishai Menache ◽  
Mohit Singh ◽  
Alejandro Toriello

Motivated by maximizing spot instances in shared cloud systems, in this work we consider the problem of taking advantage of unused resources in highly dynamic cloud environments while preserving users' performance. We introduce an online model for sharing resources that captures basic properties of cloud systems, such as unpredictable user demand patterns, very limited feedback from the system, and the service level agreement (SLA) between the users and the cloud provider. We provide a simple and efficient algorithm for the single-resource case: for any demand pattern, our algorithm guarantees near-optimal resource utilization as well as high user performance compared with the SLA baseline. In addition, we empirically validate the performance of our algorithm using synthetic data and data obtained from Microsoft's systems.
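
The abstract does not spell out the algorithm. As a rough illustration of the setting, the sketch below gives each period's slack (capacity left after serving primary users, never dipping below the SLA baseline) to spot workloads and backs off when primary demand spikes; it is a simplified stand-in for the paper's online algorithm, with all names and the back-off rule being assumptions.

```python
def share_resource(capacity, sla_baseline, primary_demand):
    """Allocate leftover capacity to spot instances, period by period.

    capacity: total units of the single resource.
    sla_baseline: units guaranteed to primary users by the SLA.
    primary_demand: list of observed primary demand per period.
    Primary users are served first and the SLA floor is always protected;
    only the remaining slack goes to spot workloads. This is an
    illustrative policy, not the paper's algorithm, which additionally
    copes with very limited feedback from the system.
    """
    allocations = []
    for demand in primary_demand:
        primary = min(max(demand, 0.0), capacity)   # serve primary users first
        protected = max(primary, sla_baseline)      # never dip below the SLA floor
        spot = max(capacity - protected, 0.0)       # slack goes to spot instances
        allocations.append((primary, spot))
    return allocations

print(share_resource(10.0, 6.0, [4.0, 7.5, 2.0]))
# [(4.0, 4.0), (7.5, 2.5), (2.0, 4.0)]
```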

