Research and Application of Artificial Intelligence Based Cloud Computing for Resource Allocation

Author(s):  
Xiaoxia LI

Author(s):
Jing Chen ◽  
Tiantian Du ◽  
Gongyi Xiao

Cloud resource demands, especially unclear and emergent ones, are growing rapidly with the development of cloud computing, big data, and artificial intelligence. Traditional cloud resource allocation methods cannot guarantee both the timeliness and the optimality of allocation under this emergent mode. This paper proposes a resource allocation algorithm for emergent demands in cloud computing. After defining the priority of resource allocation and the matching distances of resource performance and resource proportion used to respond to emergent demands, a multi-objective optimization model of cloud resource allocation is established that minimizes both the number of physical servers used and the matching distances of resource performance and resource proportion. An improved evolutionary algorithm, RAA-PI-NSGAII, is then presented to solve this model; it improves the quality and distribution uniformity of the solution set and accelerates the solving speed. Experimental results show that the algorithm not only allocates resources quickly and optimally for emergent demands but also balances the utilization of all kinds of resources.
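As a rough illustration of the two objective families this abstract describes, the sketch below evaluates a candidate allocation on the number of servers used and a combined matching distance, and includes the Pareto-dominance test that NSGA-II-style algorithms use to rank solutions. The CPU/memory demand model, the distance definitions, and all names here are illustrative assumptions, not the paper's RAA-PI-NSGAII formulation.

```python
from dataclasses import dataclass

@dataclass
class Demand:
    cpu: float  # requested CPU (normalized)
    mem: float  # requested memory (normalized)

@dataclass
class Server:
    cpu: float  # CPU capacity (normalized)
    mem: float  # memory capacity (normalized)

def objectives(allocation, demands, servers):
    """Evaluate a candidate allocation on both objectives (to be minimized).

    allocation[i] is the index of the server hosting demand i.
    Returns (number of servers used, total matching distance).
    """
    total_distance = 0.0
    for i, s in enumerate(allocation):
        d, srv = demands[i], servers[s]
        # Performance distance (assumed form): surplus capacity left over.
        perf = abs(srv.cpu - d.cpu) + abs(srv.mem - d.mem)
        # Proportion distance (assumed form): mismatch of the CPU:memory ratios.
        prop = abs(d.cpu / d.mem - srv.cpu / srv.mem)
        total_distance += perf + prop
    return len(set(allocation)), total_distance

def dominates(a, b):
    """Pareto dominance for minimization, as used to rank solutions in NSGA-II."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

demands = [Demand(2.0, 4.0), Demand(1.0, 1.0)]
servers = [Server(4.0, 8.0), Server(2.0, 2.0)]
print(objectives([0, 1], demands, servers))  # -> (2, 8.0)
```

A real multi-objective solver would search over many such allocations, keeping the non-dominated ones as the Pareto front.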


Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing provides on-demand service resources, with applications hosted in data centers on a pay-per-use basis. Allocating resources appropriately to satisfy user needs requires an effective and reliable resource allocation method. Because of growing user demand, resource allocation has become a complex and challenging task: when a physical machine is overloaded, virtual machines share its load by utilizing the physical machine's resources. Previous studies neglect energy consumption and time management while keeping virtual machines on different servers in a turned-on state.

AIM AND OBJECTIVE: The main aim of this work is to propose an effective resource allocation scheme for allocating virtual machines from an ad hoc sub-server hosting virtual machines.

EXECUTION MODEL: The work is carried out in two stages: first, the virtual machines and physical machines are located on the server, and then the allocation is cross-validated. A Modified Best Fit Decreasing algorithm sorts the virtual machines, and Multi-Machine Job Scheduling is used to place jobs on an appropriate host. An Artificial Neural Network classifier then allocates jobs to the hosts. Measured on Service Level Agreement violation and energy consumption, the scheme achieves a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
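The sketch below shows a best-fit-decreasing style placement in the spirit of the Modified Best Fit Decreasing step named above: VMs are sorted by demand, largest first, and each is placed on the feasible host that leaves the least spare capacity. The single-dimension CPU model and all names are assumptions for illustration; the authors' modification and the ANN classification stage are not reproduced here.

```python
def best_fit_decreasing(vms, hosts):
    """Place VMs (by CPU demand) on the host leaving the least remaining slack.

    vms:   list of CPU demands.
    hosts: list of CPU capacities.
    Returns a list mapping each VM index to a host index (or None if unplaced).
    """
    remaining = list(hosts)
    placement = [None] * len(vms)
    # Sort VM indices by demand, largest first (the "decreasing" step).
    for i in sorted(range(len(vms)), key=lambda i: vms[i], reverse=True):
        best, best_slack = None, None
        for h, cap in enumerate(remaining):
            slack = cap - vms[i]
            # Best fit: among feasible hosts, pick the tightest one.
            if slack >= 0 and (best_slack is None or slack < best_slack):
                best, best_slack = h, slack
        if best is not None:
            remaining[best] -= vms[i]
            placement[i] = best
    return placement

print(best_fit_decreasing([2.0, 1.5, 3.0], [4.0, 4.0]))  # -> [1, 1, 0]
```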


2020 ◽  
Vol 11 (1) ◽  
pp. 149
Author(s):  
Wu-Chun Chung ◽  
Tsung-Lin Wu ◽  
Yi-Hsuan Lee ◽  
Kuo-Chan Huang ◽  
Hung-Chang Hsiao ◽  
...  

Resource allocation is vital for improving system performance in big data processing. The resource demands of different applications can be heterogeneous in cloud computing, so a resource gap arises when some resource capacities on a server are exhausted while others remain available. This phenomenon becomes more apparent as the computing resources grow more heterogeneous. Previous resource-allocation algorithms paid limited attention to this situation; applied to a server with heterogeneous resources, they can leave considerable amounts of available but unused resources wasted. To reduce this wastage, this study proposes a resource-allocation algorithm for heterogeneous resources, called the minimizing resource gap (MRG) algorithm. MRG considers the gap between the resource usage of each server and the resource demands of the various applications. When an application is launched, MRG computes the resulting resource usage and allocates the application to the server with the smallest usage gap, reducing the amount of available but unused resources. To demonstrate its performance, the MRG algorithm was implemented in Apache Spark, with CPU- and memory-intensive applications of differing resource demands used as benchmarks. Experimental results show that MRG improves system utilization, reducing the overall completion time by up to 24.7% for heterogeneous servers in cloud computing.
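A small sketch of the gap idea described in this abstract: pick the server whose per-resource utilizations would be most balanced after hosting the new application. The gap metric below (maximum minus minimum utilization across dimensions) and all names are assumptions for illustration; the paper's exact MRG formulation may differ.

```python
def resource_gap(used, capacity):
    """Gap between the most- and least-utilized resource dimensions."""
    ratios = [u / c for u, c in zip(used, capacity)]
    return max(ratios) - min(ratios)

def choose_server(demand, servers):
    """servers: list of (used, capacity) tuples over (cpu, mem, ...) dimensions."""
    best, best_gap = None, None
    for idx, (used, cap) in enumerate(servers):
        new_used = [u + d for u, d in zip(used, demand)]
        # Skip servers that cannot hold the demand.
        if any(u > c for u, c in zip(new_used, cap)):
            continue
        gap = resource_gap(new_used, cap)
        if best_gap is None or gap < best_gap:
            best, best_gap = idx, gap
    return best

# Example: two servers with (cpu, mem) usage/capacity; place a (2, 1) demand.
servers = [((4, 1), (8, 8)), ((2, 4), (8, 8))]
print(choose_server((2, 1), servers))  # -> 1: resulting usage (4, 5) is more balanced
```

Allocating to the server that minimizes this gap keeps all resource dimensions draining at a similar rate, which is what reduces the stranded, unusable capacity the abstract describes.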

