A Cost-Efficient Container Orchestration Strategy in Kubernetes-Based Cloud Computing Infrastructures with Heterogeneous Resources

2020, Vol 20 (2), pp. 1-24
Author(s): Zhiheng Zhong, Rajkumar Buyya
2020, Vol 11 (1), pp. 149
Author(s): Wu-Chun Chung, Tsung-Lin Wu, Yi-Hsuan Lee, Kuo-Chan Huang, Hung-Chang Hsiao, ...

Resource allocation is vital for improving system performance in big data processing. The resource demands of different applications can be heterogeneous in cloud computing, so a resource gap arises when some resource capacities on a server are exhausted while other capacities on the same server remain available. This phenomenon becomes more pronounced as the computing resources grow more heterogeneous. Previous resource-allocation algorithms paid limited attention to this situation; applied to servers with heterogeneous resources, they may waste a considerable amount of available but unused resources. To reduce this wastage, this study proposes a resource-allocation algorithm for heterogeneous resources called the minimizing resource gap (MRG) algorithm. MRG considers the gap between the resource usages of each server in the cloud and the resource demands of the various applications. When an application is launched, MRG calculates resource usage and allocates the application to the server with the smallest usage gap, thereby reducing the amount of available but unused resources. To demonstrate its performance, MRG was implemented in Apache Spark, using CPU- and memory-intensive applications with different resource demands as benchmarks. Experimental results show that the proposed MRG approach improves system utilization and reduces the overall completion time by up to 24.7% for heterogeneous servers in cloud computing.
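The MRG placement rule described above lends itself to a short illustration. The sketch below is a minimal, hypothetical version of a minimum-resource-gap heuristic, not the authors' Spark implementation: it assumes two resource dimensions (CPU and memory) and measures the gap as the spread between a server's normalized leftover capacities after a tentative allocation; the `Server` class, the gap metric, and the example demands are all assumptions for illustration.

```python
# Minimal sketch of a minimum-resource-gap placement heuristic.
# Assumptions (not from the paper): two resource dimensions (CPU cores,
# memory GB), and the "gap" of a server is the spread between its
# normalized residual capacities after the allocation.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_total: float      # cores
    mem_total: float      # GB
    cpu_used: float = 0.0
    mem_used: float = 0.0

    def gap_after(self, cpu_req, mem_req):
        """Return the resource gap if the demand were placed here, or None if it does not fit."""
        cpu_left = self.cpu_total - self.cpu_used - cpu_req
        mem_left = self.mem_total - self.mem_used - mem_req
        if cpu_left < 0 or mem_left < 0:
            return None
        # Normalize residuals so CPU and memory are comparable, then
        # measure how unevenly the leftovers spread across dimensions.
        cpu_frac = cpu_left / self.cpu_total
        mem_frac = mem_left / self.mem_total
        return abs(cpu_frac - mem_frac)

def place(servers, cpu_req, mem_req):
    """Pick the feasible server whose post-allocation gap is smallest."""
    best, best_gap = None, None
    for s in servers:
        gap = s.gap_after(cpu_req, mem_req)
        if gap is not None and (best_gap is None or gap < best_gap):
            best, best_gap = s, gap
    if best is not None:
        best.cpu_used += cpu_req
        best.mem_used += mem_req
    return best

if __name__ == "__main__":
    cluster = [Server("a", cpu_total=16, mem_total=32),
               Server("b", cpu_total=8, mem_total=64)]
    chosen = place(cluster, cpu_req=4, mem_req=24)   # memory-intensive task
    print(chosen.name if chosen else "no feasible server")
```

In this toy run the memory-intensive demand lands on the memory-rich server "b", because placing it there leaves the most balanced residual capacities, which is the intuition behind reducing available-but-unused resources.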


2021, Vol 12 (11), pp. 1523-1533
Author(s): Bidush Kumar Sahoo, et al.

Cloud computing builds on advances in virtualization and distributed computing to support cost-efficient use of computing resources and to provide on-demand services. A methodical analysis of the factors affecting fault tolerance during load balancing was performed, and it was concluded that the influencing factors include cloud security, adaptability, and related attributes, particularly in software firms. In this paper, we develop a model for IT industries to assess fault tolerance during load balancing. An exploratory study was conducted with well-known IT firms and industries in South India. The work formulates 20 hypotheses on factors that may affect fault tolerance during load balancing in South India and verifies them using a standard statistical analysis tool, the Statistical Package for the Social Sciences (SPSS).
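The abstract does not detail the statistical procedure beyond naming SPSS, so the snippet below is only an illustrative stand-in: it tests one hypothetical factor (a "cloud security" score) against a fault-tolerance rating with a Pearson correlation in Python's scipy; the data and variable names are invented and do not come from the study.

```python
# Illustrative only: a correlation test for one hypothesized factor,
# using scipy as a stand-in for the SPSS analysis described in the paper.
# The survey responses below are invented (Likert-style 1-5 scores).

from scipy import stats

cloud_security = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]   # hypothetical factor scores
fault_tolerance = [4, 5, 2, 4, 2, 4, 5, 3, 5, 3]  # hypothetical outcome scores

# Pearson correlation: does the factor move with perceived fault tolerance?
r, p_value = stats.pearsonr(cloud_security, fault_tolerance)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A small p-value (e.g. < 0.05) would support retaining the hypothesis.
```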


Author(s): Holger Schrödl, Stefan Wind

In industrial practice, cloud computing is becoming increasingly established as an option for designing cost-efficient and needs-oriented information systems. Despite its growing acceptance in industry, many fundamental questions remain unanswered, or are answered only partially. Besides issues relating to suitable architectures, legal questions, and pricing models, suppliers of cloud-based solutions face the issue of appropriate requirements engineering: eliciting an optimal understanding of the customer's requirements and translating it into appropriate requirements for the solution to be realised. This chapter examines selected, established requirements engineering methods to study the extent to which they can be applied to the specific requirements of cloud-based solutions. Furthermore, it develops a comparison framework covering the features of cloud computing and applies it to four established process models for requirements engineering. Recommendations for a requirements engineering process adapted to cloud computing are then derived.


Energies, 2020, Vol 13 (21), pp. 5706
Author(s): Muhammad Shuaib Qureshi, Muhammad Bilal Qureshi, Muhammad Fayaz, Muhammad Zakarya, Sheraz Aslam, ...

Cloud computing is the de facto platform for deploying resource- and data-intensive real-time applications, owing to the collaboration of large-scale resources operating across administrative domains. Real-time workloads are generated by smart devices, for example sensors in smart homes that monitor their surroundings in real time, security cameras that produce video streams, cloud gaming, and social media streams. Such low-end devices form a microgrid with limited computational and storage capacity and hence offload data onto the cloud for processing. Cloud computing still lacks mature time-oriented scheduling and resource allocation strategies that thoroughly account for stringent QoS requirements. Traditional approaches suffice for applications with real-time and data constraints only when cloud storage resources are co-located with computational resources so that the data are locally available for task execution. Such approaches mainly focus on resource provisioning and latency, and are prone to missing deadlines during task execution given the urgency of the tasks and limited user budgets. The timing and data requirements exacerbate the task scheduling and resource allocation problem. To address these gaps, we propose a time- and cost-efficient resource allocation strategy for smart systems that periodically offload computational and data-intensive load to the cloud. The proposed strategy minimizes the overhead of transferring data files to computing resources by selecting appropriate pairs of computing and storage resources. The results show the effectiveness of the proposed technique in terms of resource selection and task processing within time and budget constraints when compared with its counterparts.
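As a rough sketch of the pairing idea (the paper's actual model and parameters are not given in the abstract), the code below scores every candidate pair of a compute VM and a storage node by estimated transfer time plus execution time and keeps the cheapest pair that satisfies the task's deadline and budget; the classes, rates, and cost model are assumptions for illustration.

```python
# Sketch of pairing compute and storage resources for a deadline- and
# budget-constrained task. The cost/latency model is assumed, not taken
# from the paper: transfer time = data size / bandwidth between the pair,
# execution time = task length / VM speed, cost = price per hour * hours.

from dataclasses import dataclass
from itertools import product

@dataclass
class ComputeVM:
    name: str
    speed: float           # relative instructions per second
    price_per_hour: float

@dataclass
class StorageNode:
    name: str
    bandwidth_to: dict     # VM name -> MB/s toward that VM

def best_pair(task_mi, data_mb, deadline_s, budget, vms, stores):
    """Return the cheapest (vm, store, cost, time) meeting deadline and budget, if any."""
    best = None
    for vm, st in product(vms, stores):
        transfer_s = data_mb / st.bandwidth_to[vm.name]
        exec_s = task_mi / vm.speed
        total_s = transfer_s + exec_s
        cost = vm.price_per_hour * (total_s / 3600.0)
        if total_s <= deadline_s and cost <= budget:
            if best is None or cost < best[2]:
                best = (vm, st, cost, total_s)
    return best

if __name__ == "__main__":
    vms = [ComputeVM("vm-small", speed=500, price_per_hour=0.10),
           ComputeVM("vm-large", speed=2000, price_per_hour=0.40)]
    stores = [StorageNode("store-near", {"vm-small": 200, "vm-large": 400}),
              StorageNode("store-far", {"vm-small": 20, "vm-large": 50})]
    choice = best_pair(task_mi=600_000, data_mb=4_000,
                       deadline_s=600, budget=0.05, vms=vms, stores=stores)
    print(choice[:2] if choice else "no feasible pair")
```

In the toy data the faster VM paired with the nearby storage node wins: the slower VM misses the deadline once transfer time is added, which is the kind of data-placement effect the proposed strategy is meant to avoid.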


Cloud computing has become a widely used computing tool. In recent years it has been used to deliver IT services, appliances for higher-activity computing, and outsourcing in a cost-efficient and flexible way. As bandwidth-hungry applications of many kinds grow rapidly, cloud computing is expanding to supply different kinds of cloud services and applications to Internet-based customers. Cloud computing uses Internet applications to execute large-scale jobs. Its most important objective is to allocate and compute different services transparently across a scalable network of machines. Load balancing is one of the significant issues in cloud computing. Load can be characterized as CPU load, memory capacity, and system load, i.e., the measurement of work that a computing system performs. Load balancing is a method in which load is shared among several machines of a distributed system to improve the utilization of various applications and the response time of tasks, while preventing overloading and underloading. In our approach, we developed an algorithm, LBMMS, which combines load balancing with least completion time. This study shows that LBMMS enables the proficient deployment of various resources in cloud computing.
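Since the abstract defines LBMMS only as combining load balancing with least completion time, the snippet below shows a generic least-completion-time dispatcher rather than the authors' algorithm: each task is assigned to the machine that would finish it earliest given the work already queued there; the task lengths and machine speeds are invented.

```python
# Generic least-completion-time dispatcher (illustrative, not LBMMS itself):
# assign each task to the machine that would finish it earliest, given the
# work already queued on that machine. Speeds and task sizes are assumed.

def assign_tasks(task_lengths, machine_speeds):
    """Return a list of (task_index, machine_index) assignments."""
    finish_times = [0.0] * len(machine_speeds)   # when each machine becomes free
    schedule = []
    for i, length in enumerate(task_lengths):
        # Completion time on machine m = current finish time + run time there.
        completions = [finish_times[m] + length / machine_speeds[m]
                       for m in range(len(machine_speeds))]
        m_best = min(range(len(machine_speeds)), key=lambda m: completions[m])
        finish_times[m_best] = completions[m_best]
        schedule.append((i, m_best))
    return schedule

if __name__ == "__main__":
    tasks = [120, 30, 75, 200, 45]      # arbitrary task lengths
    speeds = [1.0, 2.0]                 # machine 1 is twice as fast
    print(assign_tasks(tasks, speeds))
```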

