Optimizing the Performance of Containerized Cloud Software Systems Using Adaptive PID Controllers

2021 ◽  
Vol 15 (3) ◽  
pp. 1-27
Author(s):  
Mikael Sabuhi ◽  
Nima Mahmoudi ◽  
Hamzeh Khazaei

Control theory has proven to be a practical approach for the design and implementation of controllers; thanks to its strong mathematical foundations, it avoids the pitfalls of non-control-theoretic controllers. State-of-the-art auto-scaling controllers suffer from one or more of the following limitations: (1) lack of a reliable performance model, (2) use of a performance model with low scalability, tractability, or fidelity, (3) being application- or architecture-specific, leading to low extendability, and (4) no guarantee on their efficiency. Consequently, in this article, we strive to mitigate these problems by leveraging an adaptive controller composed of a neural network as the performance model and a Proportional-Integral-Derivative (PID) controller as the scaling engine. More specifically, we design, implement, and analyze different flavours of these adaptive and non-adaptive controllers, and we compare and contrast them against each other to find the most suitable one for managing containerized cloud software systems at runtime. The controller’s objective is to maintain the response time of the controlled software system within a pre-defined range, meet the Service-Level Agreements, and provision resources efficiently.
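The core idea of a PID scaling engine can be illustrated with a minimal sketch: the measured response time is compared against a setpoint, and the replica count is adjusted by a PID law. All names, gains, and limits below are illustrative assumptions, not the paper's implementation (which additionally uses a neural-network performance model).

```python
# Minimal sketch of a PID-based auto-scaler (hypothetical names and gains;
# the article pairs the PID engine with a neural-network performance model).

class PIDAutoScaler:
    """Keeps response time near a setpoint by adjusting the replica count."""

    def __init__(self, kp, ki, kd, setpoint, min_replicas=1, max_replicas=50):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint            # target response time (seconds)
        self.min_replicas = min_replicas
        self.max_replicas = max_replicas
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measured_rt, current_replicas, dt=1.0):
        # Positive error => response time too high => scale out.
        error = measured_rt - self.setpoint
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        delta = self.kp * error + self.ki * self.integral + self.kd * derivative
        target = round(current_replicas + delta)
        return max(self.min_replicas, min(self.max_replicas, target))

scaler = PIDAutoScaler(kp=2.0, ki=0.1, kd=0.5, setpoint=0.2)
print(scaler.step(measured_rt=0.5, current_replicas=3))  # scales out to 4
```

In an adaptive variant, the gains themselves would be retuned at runtime from the performance model rather than fixed at construction time.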

Author(s):  
Bahar Asgari ◽  
Mostafa Ghobaei Arani ◽  
Sam Jabbehdari

<p>Cloud services have become increasingly popular among users. Automatic resource provisioning for cloud services is one of the important challenges in cloud environments. In the cloud computing environment, resource providers should offer required resources to users automatically and without limitation: whenever a user needs more resources, the required resources should be dedicated to that user without any problems; conversely, when resources exceed the user’s needs, the extra resources should be turned off temporarily and turned back on whenever they are needed. In this paper, we propose an automatic resource provisioning approach based on reinforcement learning for auto-scaling resources, modeled as a Markov Decision Process (MDP). Simulation results show that, in terms of the rate of Service Level Agreement (SLA) violations and stability, the proposed approach performs better than similar approaches.</p>
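The MDP formulation sketched in the abstract can be made concrete with tabular Q-learning: states pair a load level with a VM count, actions add or remove a VM, and the reward penalizes both SLA risk (under-provisioning) and cost. The state space, reward, and load dynamics below are simplified assumptions for illustration, not the paper's exact model.

```python
# Minimal tabular Q-learning sketch for MDP-based auto-scaling.
# State = (load_level, vm_count); the real MDP in the paper is richer.
import random

ACTIONS = (-1, 0, 1)  # remove a VM, do nothing, add a VM

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (load_level, vms) -> {action: estimated value}
    for _ in range(episodes):
        load, vms = rng.randint(1, 5), 3
        for _ in range(20):
            state = (load, vms)
            q.setdefault(state, {a: 0.0 for a in ACTIONS})
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(q[state], key=q[state].get)
            vms = max(1, min(10, vms + a))
            # Reward: penalize load/capacity mismatch (SLA risk) and cost.
            reward = -abs(load - vms) - 0.1 * vms
            load = max(1, min(5, load + rng.choice((-1, 0, 1))))
            nxt = (load, vms)
            q.setdefault(nxt, {a2: 0.0 for a2 in ACTIONS})
            # Standard Q-learning update.
            q[state][a] += alpha * (
                reward + gamma * max(q[nxt].values()) - q[state][a]
            )
    return q
```

After training, the greedy policy `max(q[state], key=q[state].get)` gives the scaling decision for each observed state.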


2019 ◽  
Vol 19 (3) ◽  
pp. 94-117
Author(s):  
K. Bhargavi ◽  
B. Sathish Babu

Abstract Efficiently provisioning resources in a large computing domain like the cloud is challenging due to uncertainty in resource demands and in the computational ability of the cloud resources. Inefficient provisioning leads to several issues, including a drop in Quality of Service (QoS), violation of Service Level Agreements (SLAs), and over- or under-provisioning of resources. The main objective of the paper is to formulate optimal resource provisioning policies by efficiently handling the uncertainties in jobs and resources through the application of Neutrosophic Soft-Set (NSS) and Fuzzy Neutrosophic Soft-Set (FNSS) theory. Compared to existing fuzzy auto-scaling work, the proposed approach achieves a throughput of 80% with a learning rate of 75% on homogeneous and heterogeneous workloads, evaluated on the RUBiS, RUBBoS, and Olio benchmark applications.
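A neutrosophic value describes an alternative by three independent degrees in [0, 1]: truth, indeterminacy, and falsity. A common way to rank alternatives is a scalar score function; the one below is a standard textbook choice and only a sketch of the idea, and the resource names and assessments are hypothetical, not the paper's NSS/FNSS decision matrices.

```python
# Minimal sketch of ranking cloud resources with neutrosophic values
# (t, i, f) = degrees of truth, indeterminacy, falsity, each in [0, 1].
# The score function is one common choice; the paper's formulation
# (full NSS/FNSS decision matrices) is more elaborate.

def score(t, i, f):
    """Higher is better: rewards truth, penalizes indeterminacy and falsity."""
    return (2 + t - i - f) / 3

resources = {
    "vm-small": (0.6, 0.3, 0.2),   # hypothetical suitability assessments
    "vm-large": (0.9, 0.1, 0.1),
}
best = max(resources, key=lambda r: score(*resources[r]))
print(best)  # "vm-large"
```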



2017 ◽  
Vol 3 ◽  
pp. e141 ◽  
Author(s):  
Christoph Hochreiner ◽  
Michael Vögler ◽  
Stefan Schulte ◽  
Schahram Dustdar

The continuous increase of unbounded streaming data poses several challenges to established data stream processing engines. One of the most important challenges is the cost-efficient enactment of stream processing topologies under changing data volume. These changing data volumes impose different loads on stream processing systems, whose resource provisioning needs to be continuously updated at runtime. First approaches already allow for resource provisioning on the level of virtual machines (VMs), but this only permits coarse-grained provisioning strategies. Based on current advances in and benefits of containerized software systems, we have designed a cost-efficient resource provisioning approach and integrated it into the runtime of the Vienna ecosystem for elastic stream processing. Our resource provisioning approach aims to maximize the resource usage of VMs obtained from cloud providers. It releases processing capabilities only at the end of a VM’s minimal leasing duration, instead of releasing them eagerly as soon as possible, as threshold-based approaches do. This strategy improves service level agreement compliance by up to 25% and reduces operational cost by up to 36%.
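The release-at-lease-boundary policy described above can be sketched as a single predicate: a VM is released only when it is idle and its current (already paid-for) leasing period is about to end, never eagerly. The billing period length and the one-minute boundary window are illustrative assumptions.

```python
# Sketch of the release-at-lease-boundary policy: a VM is released only
# when idle AND at the end of its minimal leasing period (assumed here
# to be 60 minutes), never eagerly as threshold-based approaches do.

BILLING_PERIOD = 60  # minutes in the minimal leasing duration (assumed)

def should_release(idle: bool, minutes_leased: int) -> bool:
    """Release only when idle and the current billing period is about to end."""
    remaining = BILLING_PERIOD - (minutes_leased % BILLING_PERIOD)
    return idle and remaining <= 1  # within the last minute of the period

print(should_release(idle=True, minutes_leased=30))    # False: already paid for
print(should_release(idle=True, minutes_leased=59))    # True: boundary reached
print(should_release(idle=False, minutes_leased=59))   # False: still busy
```

The design intuition: capacity that is paid for until the lease boundary costs nothing extra to keep, so holding it improves SLA compliance at no additional operational cost.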


Author(s):  
Min Mao ◽  
Norman M. Wereley ◽  
Alan L. Browne

The feasibility of a sliding seat that uses adaptive control of a magnetorheological (MR) energy absorber (MREA) to minimize loads imparted to a payload mass in a ground vehicle, for frontal impact speeds as high as 7 m/s (15.7 mph), is investigated. The crash pulse for a given impact speed was assumed to be a rectangular deceleration pulse of prescribed magnitude and duration. The adaptive control objective is to bring the payload (occupant plus seat) mass to a stop using the available stroke, while simultaneously accommodating changes in impact velocity and in occupant mass ranging from a 5th percentile female to a 95th percentile male. The payload is first treated as a single-degree-of-freedom (SDOF) rigid lumped mass, and two adaptive control algorithms are developed: (1) constant Bingham number control, and (2) constant force control. To explore the effects of occupant compliance on adaptive controller performance, a multi-degree-of-freedom (MDOF) lumped-mass biodynamic occupant model was integrated with the seat mass. The same controllers were used for both the SDOF and MDOF cases, based on the SDOF controller analysis, because the biodynamic degrees of freedom are neither controllable nor observable. The designed adaptive controllers successfully shaped the load-stroke profiles to bring the payload mass to rest within the available stroke and reduced payload decelerations. Analysis showed extensive coupling between the seat structure and the occupant biodynamic response, although minor adjustments to the control gains enabled full use of the available stroke.
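For the rigid SDOF case, the constant force control idea reduces to elementary mechanics: stopping a mass m from speed v exactly within stroke S requires a constant force F = m·v²/(2S). The sketch below uses this relation to show the adaptive aspect, namely commanding different force levels for different occupant masses; the masses and stroke are illustrative numbers, not values from the paper.

```python
# Worked sketch of constant force control for a rigid SDOF payload:
# to arrest mass m from impact speed v exactly within stroke S, a
# constant decelerating force F = m * v**2 / (2 * S) is required.
# The masses and stroke below are illustrative, not the paper's data.

def constant_stopping_force(mass_kg, speed_mps, stroke_m):
    """Force needed to bring the payload to rest in exactly the given stroke."""
    return mass_kg * speed_mps**2 / (2 * stroke_m)

# Adaptive aspect: the full stroke is used for light and heavy occupants
# alike by commanding a different force level for each payload mass.
light = constant_stopping_force(mass_kg=60.0, speed_mps=7.0, stroke_m=0.2)
heavy = constant_stopping_force(mass_kg=110.0, speed_mps=7.0, stroke_m=0.2)
print(round(light), round(heavy))  # 7350 13475 (newtons)
```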


Algorithms ◽  
2018 ◽  
Vol 11 (12) ◽  
pp. 190
Author(s):  
Peter Nghiem

Considering the recent exponential growth in the amount of information processed in Big Data, the high energy consumed by data processing engines in datacenters has become a major issue, underlining the need for efficient resource allocation for more energy-efficient computing. We previously proposed the Best Trade-off Point (BToP) method, a general approach with techniques and an algorithm based on mathematical formulas for finding the best trade-off point on an elbow curve of performance versus resources, for efficient resource provisioning in Hadoop MapReduce. The BToP method is expected to work for any application or system that relies on a trade-off elbow curve, non-inverted or inverted, for making good decisions. In this paper, we apply the BToP method to the emerging cluster computing framework Apache Spark and show that its performance and energy consumption are better than those of Spark with its built-in dynamic resource allocation enabled. Our Spark-Bench tests confirm the effectiveness of using the BToP method with Spark to determine the optimal number of executors for any workload in production environments, where job profiling for behavioral replication leads to the most efficient resource provisioning.
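The idea of picking a trade-off point on an elbow curve can be illustrated with a generic knee-detection heuristic: choose the point farthest from the chord joining the curve's endpoints. This is a standard heuristic and not the BToP method's actual formulas; the executor counts and runtimes below are hypothetical.

```python
# Sketch of locating an elbow/knee on a performance-vs-resources curve
# by maximal distance from the line through the endpoints. This is a
# generic heuristic, not the BToP method's formulas; data is hypothetical.
import math

def elbow_index(xs, ys):
    """Index of the point farthest from the line through the endpoints."""
    x1, y1, x2, y2 = xs[0], ys[0], xs[-1], ys[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
             for x, y in zip(xs, ys)]
    return max(range(len(xs)), key=dists.__getitem__)

executors = [1, 2, 4, 8, 16, 32]
runtime_s = [640, 330, 180, 110, 95, 90]   # hypothetical Spark job runtimes
print(executors[elbow_index(executors, runtime_s)])
```

Past the detected point, additional executors buy little runtime improvement while still consuming energy, which is the trade-off the abstract describes.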


1988 ◽  
Vol 110 (1) ◽  
pp. 62-69 ◽  
Author(s):  
M. Tomizuka ◽  
R. Horowitz ◽  
G. Anwar ◽  
Y. L. Jia

This paper is concerned with the digital implementation and experimental evaluation of two adaptive controllers for robotic manipulators. The first is a continuous time model reference adaptive controller, and the second is a discrete time adaptive controller. The primary purpose of these adaptive controllers is to compensate for inertial variations due to changes in configuration and payload, as well as disturbances, such as Coulomb friction and/or gravitational forces. Experimental results are obtained from a laboratory test stand, which emulates a one-axis direct drive robot arm with variable inertia, as well as from a Toshiba TSR-500V industrial robot. Experimental results from the test stand indicate that these adaptive control schemes are promising for the control of direct drive robot arms. Friction forces arising from the harmonic gear of the Toshiba robot were detrimental when not properly compensated. Because of the high gearing ratio, the advantage of adaptive control for the Toshiba arm could be shown only by detuning the controller.
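The model reference adaptive control idea can be sketched for the simplest possible case: a first-order plant with an unknown gain, an explicit reference model, and MIT-rule gain adaptation driven by the tracking error. This is a textbook illustration of the principle, not the paper's manipulator controllers; all parameter values are assumed.

```python
# Minimal sketch of a discrete-time model-reference adaptive controller
# using MIT-rule gain adaptation on a first-order plant with an unknown
# input gain. Illustrative only; the paper's controllers are more involved.

def simulate(steps=5000, gamma=0.5, dt=0.01):
    a, b = 1.0, 2.0          # plant: dy/dt = -a*y + b*u  (b unknown to controller)
    am, bm = 1.0, 1.0        # reference model: dym/dt = -am*ym + bm*r
    y = ym = 0.0
    theta = 0.1              # adaptive feedforward gain; ideal value is bm/b = 0.5
    r = 1.0                  # constant reference command
    for _ in range(steps):
        u = theta * r                       # control law
        y += dt * (-a * y + b * u)          # plant (forward Euler)
        ym += dt * (-am * ym + bm * r)      # reference model
        e = y - ym                          # model-following error
        theta -= dt * gamma * e * ym        # MIT rule: dtheta/dt = -gamma*e*ym
    return theta, abs(y - ym)

theta, err = simulate()
print(round(theta, 3))  # converges near the ideal gain bm/b = 0.5
```

The same mechanism, generalized to multiple adapted parameters, is what lets such controllers absorb inertia and payload variations without an accurate prior model.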

