Service Level Agreement
Recently Published Documents


TOTAL DOCUMENTS: 639 (FIVE YEARS: 231)

H-INDEX: 21 (FIVE YEARS: 5)

2022 ◽  
Vol 54 (8) ◽  
pp. 1-38
Author(s):  
Alexandre H. T. Dias ◽  
Luiz H. A. Correia ◽  
Neumar Malheiros

Virtual machine consolidation has been a widely explored topic in recent years due to the effect of Cloud Data Centers on global energy consumption. Academia and industry have therefore made efforts toward green computing, reducing energy consumption to minimize environmental impact. By consolidating Virtual Machines onto fewer Physical Machines, resource provisioning mechanisms can shut down idle Physical Machines to reduce energy consumption and improve resource utilization. However, there is a tradeoff between reducing energy consumption and assuring the Quality of Service established in the Service Level Agreement. This work introduces a Systematic Literature Review of one year of advances in virtual machine consolidation. It provides a discussion of the methods used in each step of virtual machine consolidation, a classification of papers according to their contribution, and a quantitative and qualitative analysis of datasets, scenarios, and metrics.
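As a rough illustration of the consolidation step the review surveys, the sketch below packs VMs onto as few Physical Machines as possible with a first-fit-decreasing heuristic and leaves some headroom as a crude SLA safeguard; the capacities, demands, and 0.8 utilization cap are assumptions for the example, not a method taken from any surveyed paper.

```python
# Minimal sketch of VM consolidation: pack VMs onto as few physical machines
# (PMs) as possible with first-fit decreasing, so idle PMs can be shut down.
# The utilization cap is a crude, illustrative SLA safeguard.

def consolidate(vm_demands, pm_capacity, utilization_cap=0.8):
    """Return a list of PMs, each a list of (vm_id, demand) placements."""
    pms = []  # each entry: [used_capacity, [(vm_id, demand), ...]]
    limit = pm_capacity * utilization_cap  # headroom to reduce SLA violations
    for vm_id, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for pm in pms:
            if pm[0] + demand <= limit:    # first PM with enough headroom
                pm[0] += demand
                pm[1].append((vm_id, demand))
                break
        else:                              # no fit: power on a new PM
            pms.append([demand, [(vm_id, demand)]])
    return [placements for _, placements in pms]

# Example: six VM CPU demands packed onto PMs of capacity 1.0.
print(consolidate([0.5, 0.3, 0.2, 0.6, 0.1, 0.4], pm_capacity=1.0))
```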


2022 ◽  
Vol 40 (1) ◽  
pp. 1-32
Author(s):  
Joel Mackenzie ◽  
Matthias Petri ◽  
Alistair Moffat

Inverted indexes continue to be a mainstay of text search engines, allowing efficient querying of large document collections. While there are a number of possible organizations, document-ordered indexes are the most common, since they are amenable to various query types, support index updates, and allow for efficient dynamic pruning operations. One disadvantage of document-ordered indexes is that high-scoring documents can be distributed across the document identifier space, meaning that index traversal algorithms that terminate early might put search effectiveness at risk. The alternative is impact-ordered indexes, which primarily support top-k disjunctions but also allow for anytime query processing, where the search can be terminated at any time, with search quality improving as processing latency increases. Anytime query processing can be used to effectively reduce the high-percentile tail latency that is essential for operational scenarios in which a service level agreement (SLA) imposes response time requirements. In this work, we show how document-ordered indexes can be organized such that they can be queried in an anytime fashion, enabling strict latency control with effective early termination. Our experiments show that processing document-ordered topical segments selected by a simple score estimator outperforms existing anytime algorithms, and allows query runtimes to be accurately limited to comply with SLA requirements.
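The sketch below illustrates the general anytime idea described above: segments are visited in decreasing order of an estimated score, and traversal stops once the SLA latency budget is exhausted. The segment representation, the score estimator, and the per-segment scoring function are placeholders, not the authors' implementation.

```python
# Hedged sketch of anytime processing over score-ranked segments with an SLA
# latency budget; callables for estimation and scoring are assumed inputs.
import heapq
import time

def anytime_query(query, segments, estimate_score, score_segment, budget_ms, k=10):
    """Process the most promising segments first, stop when time runs out."""
    order = sorted(segments, key=lambda seg: -estimate_score(query, seg))
    deadline = time.monotonic() + budget_ms / 1000.0
    top_k = []  # min-heap of (score, doc_id)
    for seg in order:
        if time.monotonic() >= deadline:
            break                                   # anytime early termination
        for doc_id, score in score_segment(query, seg):
            if len(top_k) < k:
                heapq.heappush(top_k, (score, doc_id))
            elif score > top_k[0][0]:
                heapq.heapreplace(top_k, (score, doc_id))
    return sorted(top_k, reverse=True)              # best results found so far
```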


Dynamic resource allocation in cloud data centers is implemented through virtual machine migration. Selected virtual machines (VMs) should be migrated to appropriate destination servers. This is a critical step and should be performed according to several criteria; it is proposed to use the criteria of minimum resource wastage and minimum service level agreement violation. The optimization problem of VM placement according to these two criteria is formulated and is equivalent to the well-known assignment problem in its structure, necessary conditions, and the nature of its variables. It is therefore suggested to apply the Hungarian method or to reduce the problem to a closed (balanced) transportation problem, which allows the exact solution to be obtained in real time. Simulation has shown that the proposed approach outperforms widely used bin-packing heuristics on both criteria.
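A minimal sketch of the assignment formulation, assuming a precomputed combined cost (resource wastage plus an SLA-violation penalty) for placing each VM on each candidate server; SciPy's Hungarian-style solver returns the exact minimum-cost placement. The cost values below are illustrative, not the paper's cost model.

```python
# Sketch of VM placement as an assignment problem solved exactly.
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

# cost[i][j]: combined cost of placing VM i on server j (illustrative values)
cost = np.array([
    [0.30, 0.10, 0.50],
    [0.20, 0.40, 0.10],
    [0.60, 0.20, 0.30],
])

vm_idx, server_idx = linear_sum_assignment(cost)
for vm, server in zip(vm_idx, server_idx):
    print(f"VM {vm} -> server {server} (cost {cost[vm, server]:.2f})")
print("total cost:", cost[vm_idx, server_idx].sum())
```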


2022 ◽  
Vol 12 (1) ◽  
pp. 0-0

Cloud datacenters consume enormous amounts of energy and generate heat, which affects the environment. Hence, datacenter resources must be properly managed for optimum use of energy. Virtualization-enabled computing improves datacenter performance with respect to these parameters. Virtual Machine (VM) management is therefore a required activity in the datacenter: selecting VMs for migration from overloaded hosts, migrating VMs away from underutilized hosts, and placing VMs on suitable hosts. In this paper, a method (SMA-LinR) has been developed that integrates the Simple Moving Average (SMA) with Linear Regression (LinR) to predict CPU utilization and determine whether a host is overloaded. The predicted value is then used to place VMs on an appropriate Physical Machine (PM). The main aim of this research is to reduce energy consumption (EC) and service level agreement violations (SLAV). Extensive simulations have been performed on real workload data, and the results indicate that SMA-LinR reduces EC and improves service quality.
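A rough sketch of the SMA-LinR idea as described above: smooth the CPU utilization history with a simple moving average, fit a linear regression to the smoothed series, and flag the host as overloaded if the one-step-ahead prediction exceeds a threshold. The window size and the 0.8 threshold are assumptions for the example.

```python
# Sketch: moving-average smoothing + linear-regression trend for overload detection.
import numpy as np

def predict_overload(cpu_history, window=5, threshold=0.8):
    cpu = np.asarray(cpu_history, dtype=float)
    # simple moving average over a trailing window
    kernel = np.ones(window) / window
    sma = np.convolve(cpu, kernel, mode="valid")
    # least-squares linear fit over the smoothed series
    t = np.arange(len(sma))
    slope, intercept = np.polyfit(t, sma, deg=1)
    predicted = slope * len(sma) + intercept       # one step ahead
    return predicted, predicted > threshold

history = [0.55, 0.60, 0.62, 0.68, 0.71, 0.75, 0.78, 0.81]
predicted, overloaded = predict_overload(history)
print(f"predicted utilization {predicted:.2f}, overloaded: {overloaded}")
```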


2022 ◽  
pp. 803-824
Author(s):  
Basetty Mallikarjuna

In this article, the proposed feedback-based resource management approach provides data processing, large-scale computation, storage, and networking services between Internet of Things (IoT)-based cloud data centers and end users. Real-time IoT applications, such as smart cities, smart homes, healthcare management systems, traffic management systems, and transportation management systems, require low response time and latency to process huge amounts of data. The proposed plan offers a novel resource management technique consisting of an integrated architecture and maintains the service-level agreement (SLA). It can optimize energy consumption, response time, network bandwidth, and security, and reduce latency. The experiments were carried out with the iFogSim toolkit, and the results show that the proposed approach is effective and suitable for smart communication in IoT-based clouds.
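The snippet below is an illustrative feedback loop in the spirit of the approach, not the paper's iFogSim implementation: the measured response time is compared against the SLA target, and the number of allocated service instances is scaled accordingly. All names and thresholds are assumptions.

```python
# Illustrative SLA-driven feedback loop for scaling service instances.
def feedback_step(instances, measured_latency_ms, sla_latency_ms,
                  min_instances=1, max_instances=20):
    error = measured_latency_ms - sla_latency_ms
    if error > 0:                          # SLA violated: add capacity
        instances += 1
    elif error < -0.3 * sla_latency_ms:    # ample headroom: release capacity
        instances -= 1
    return max(min_instances, min(max_instances, instances))

# Example: latency samples arriving from monitoring, SLA target of 100 ms.
instances = 2
for latency in [120, 130, 95, 60, 55, 110]:
    instances = feedback_step(instances, latency, sla_latency_ms=100)
    print(f"latency {latency} ms -> {instances} instances")
```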


2022 ◽  
Vol 12 (1) ◽  
pp. 0-0

Resource allocation and scheduling algorithms are the two essential factors that determine the satisfaction of cloud users. The major cloud resources involved are servers, storage, networks, databases, software, and so on, provisioned according to customer requirements. In this competitive scenario, each service provider tries to use factors such as optimal configuration of resources, pricing, Quality of Service (QoS) parameters, and the Service Level Agreement (SLA) to benefit both cloud users and service providers. Since many researchers have proposed different scheduling algorithms and resource allocation strategies, it becomes a cumbersome task to conclude which ones really benefit customers and providers. Hence, this paper analyses and presents the most relevant considerations that would help cloud researchers achieve their goals in mapping tasks to cloud resources.


2021 ◽  
Vol 19 (6) ◽  
pp. 676-693
Author(s):  
Behailu Getachew Wolde ◽  
Abiot Sinamo Boltana

The cloud offers many ready-made REST services to end users. These services are composed through implementations hosted somewhere on the Internet under a Service Level Agreement (SLA). To ensure this SLA, software testing is a useful means of attesting the non-functional requirements that guarantee quality assurance from the end user's perspective. However, the test engineer observes only what goes in and out through an interface that exposes high-level behaviors separated from their underlying details, and testing such behaviors becomes an issue for classical testing procedures. Testing the REST API through composition is therefore a promising alternative approach for modeling parameterized behaviors against the cloud. This approach helps to assess test effectiveness in terms of a REST-based, behavior-driven implementation. It aims to understand functional behaviors through API methods based on input domain modeling (IDM) over a standard keyword pattern. With an effective REST design, the test engineer sends complete test inputs directly to the application's API and gets test responses from the infrastructure. We consider the NEMo mobility API specification to design an IDM that represents pattern matching over the scope of mobility-search URL API paths. Within this scope, sample mobility REST API service compositions are used, and test assertions are implemented to validate each path resource, testing both the components and the end-to-end integration of the specified service.
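A hedged sketch of one such test assertion: an IDM-derived input is sent to a composed REST path, and the response is checked for status code, payload shape, and SLA response time. The base URL, path, and parameter names are hypothetical placeholders, not taken from the NEMo specification.

```python
# Sketch of an SLA-aware REST test assertion with placeholder endpoint and params.
import requests

BASE_URL = "https://example.org/api"          # placeholder endpoint
SLA_RESPONSE_SECONDS = 2.0                    # assumed SLA bound for the test

def test_mobility_search():
    params = {"origin": "A", "destination": "B"}   # one IDM-derived test frame
    response = requests.get(f"{BASE_URL}/mobility/search",
                            params=params, timeout=5)
    assert response.status_code == 200
    assert response.elapsed.total_seconds() <= SLA_RESPONSE_SECONDS
    body = response.json()
    assert isinstance(body, list)             # expect a list of mobility offers

if __name__ == "__main__":
    test_mobility_search()
```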


2021 ◽  
Vol 15 (3) ◽  
pp. 216-238
Author(s):  
Rajeshwari B S ◽  
M. Dakshayini ◽  
H.S. Guruprasad

The federated cloud is the future generation of cloud computing, allowing cloud providers to share computing and storage resources and to service user tasks through a centralized control mechanism. However, a great challenge lies in the efficient management of such federated clouds and the fair distribution of load among heterogeneous cloud providers. In the proposed approach, called QPFS_MASG, the incoming task queue is partitioned at the federated cloud level in order to achieve a fair distribution of load among all cloud providers of the federation. Then, at the cloud level, task scheduling using the Modified Activity Selection by Greedy (MASG) technique assigns the tasks to different virtual machines (VMs), considering the task deadline as the key factor in achieving good quality of service (QoS). The proposed approach services tasks within their deadlines, reduces service level agreement (SLA) violations, improves the response time of user tasks, and achieves a fair distribution of load among all participating cloud providers. QPFS_MASG was implemented using CloudSim, and the evaluation results revealed a guaranteed degree of fairness in service distribution among the cloud providers, with reduced response time and fewer SLA violations compared to existing approaches. The evaluation also showed that the proposed approach serviced the user tasks with a minimum number of VMs.
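The sketch below captures the deadline-aware greedy flavor of MASG in simplified form (it is not the paper's exact algorithm): tasks are considered in order of deadline, and each is placed on the VM that can finish it earliest, counting an SLA violation when no VM meets the deadline. Task lengths, deadlines, and VM speeds are illustrative.

```python
# Simplified deadline-aware greedy scheduling sketch.
def schedule(tasks, vm_speeds):
    """tasks: list of (task_id, length, deadline); vm_speeds: units/second."""
    vm_free_at = [0.0] * len(vm_speeds)       # time each VM becomes idle
    plan, violations = [], 0
    for task_id, length, deadline in sorted(tasks, key=lambda t: t[2]):
        finish_times = [vm_free_at[v] + length / vm_speeds[v]
                        for v in range(len(vm_speeds))]
        best_vm = min(range(len(vm_speeds)), key=lambda v: finish_times[v])
        if finish_times[best_vm] > deadline:
            violations += 1                    # SLA violation for this task
        vm_free_at[best_vm] = finish_times[best_vm]
        plan.append((task_id, best_vm, finish_times[best_vm]))
    return plan, violations

plan, violations = schedule(
    tasks=[("t1", 10, 5.0), ("t2", 4, 3.0), ("t3", 8, 6.0)],
    vm_speeds=[2.0, 4.0])
print(plan, "violations:", violations)
```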


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xiangli Chang ◽  
Hailang Cui

With the increasing popularity of Internet-based services, many of them hosted on cloud platforms, more powerful back-end storage systems are needed to support them. At present, it is very difficult or impossible to implement a distributed storage system that meets all of these requirements at once, so research focuses on constraining different characteristics to design different distributed storage solutions for different usage scenarios. Economic big data imposes the basic requirements of high storage efficiency and fast retrieval, while the large number of small files and the diversity of file types pose severe challenges for its storage and retrieval. This paper addresses the application requirements of cross-modal analysis of economic big data. According to the sources and characteristics of economic big data, the data types are analyzed, and the database storage architecture and data storage structure are designed. Taking into account the spatial, temporal, and semantic characteristics of economic big data, the paper proposes a unified encoding method based on a multilevel spatiotemporal division strategy that combines Geohash and Hilbert curves with spatiotemporal semantic constraints. A prototype system was built on MongoDB and, on top of its data storage management functions, used to verify the performance of the proposed multilevel partitioning algorithm. A Wiener distributed storage scheme, based on the principle of the Wiener filter, stores the workload of each storage window in a distributed manner: for a given type of workload, the workload is divided according to its periodicity into storage windows of fixed duration, and at the beginning of each window storage is allocated for the next window. Experiments verify the proposed distributed storage strategy and show that the Wiener distributed storage solution can save platform resources and configuration costs while ensuring the Service Level Agreement (SLA).
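As a minimal sketch of the kind of unified spatiotemporal key described above, the code below interleaves latitude and longitude bits Geohash-style and prefixes a coarse time bucket; the precision, bucket size, and key layout are assumptions, and the paper's actual combination of Geohash, Hilbert curves, and semantic constraints is not reproduced.

```python
# Minimal sketch of a unified spatiotemporal key: time bucket + Geohash-like code.
def interleave_bits(lat, lon, bits=20):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code = 0
    for i in range(bits):
        rng, value = (lon_rng, lon) if i % 2 == 0 else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        bit = 1 if value >= mid else 0
        rng[1 if bit == 0 else 0] = mid        # shrink the half not taken
        code = (code << 1) | bit
    return code

def spatiotemporal_key(timestamp_s, lat, lon, bucket_s=3600, bits=20):
    time_bucket = int(timestamp_s // bucket_s)     # coarse temporal prefix
    width = (bits + 3) // 4                        # hex digits for the spatial code
    return f"{time_bucket:010d}-{interleave_bits(lat, lon, bits):0{width}x}"

# Example: hourly bucket plus a 20-bit spatial code.
print(spatiotemporal_key(1_700_000_000, 39.9, 116.4))
```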

