Cloud Resource Demand Prediction using Machine Learning in the Context of QoS Parameters

2021 ◽  
Vol 19 (2) ◽  
Author(s):  
Piotr Nawrocki ◽  
Patryk Osypanka

Predicting demand for computing resources in any system is a vital task since it allows the optimized management of resources. To some degree, cloud computing reduces the urgency of accurate prediction as resources can be scaled on demand, which may, however, result in excessive costs. Numerous methods of optimizing cloud computing resources have been proposed, but such optimization commonly degrades system responsiveness which results in quality of service deterioration. This paper presents a novel approach, using anomaly detection and machine learning to achieve cost-optimized and QoS-constrained cloud resource configuration. The utilization of these techniques enables our solution to adapt to different system characteristics and different QoS constraints. Our solution was evaluated using a system located in Microsoft’s Azure cloud environment, and its efficiency in other providers’ computing clouds was estimated as well. Experiment results demonstrate a cost reduction ranging from 51% to 85% (for PaaS/IaaS) over the tested period.
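The pipeline this abstract describes (clean the utilization history of anomalies, forecast demand, then pick the cheapest configuration that still satisfies a QoS constraint) can be sketched roughly as follows. The z-score filter, the naive mean forecast, the headroom margin, and the tier table are all illustrative assumptions, not the authors' actual models:

```python
import statistics

def filter_anomalies(samples, z_thresh=3.0):
    """Drop utilization samples whose z-score exceeds the threshold,
    so transient spikes do not inflate the demand forecast."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return list(samples)
    return [s for s in samples if abs(s - mean) / stdev <= z_thresh]

def cheapest_config(predicted_demand, tiers, qos_headroom=1.2):
    """Pick the cheapest tier whose capacity covers the predicted
    demand plus a QoS headroom margin (hypothetical tier table)."""
    feasible = [t for t in tiers if t["capacity"] >= predicted_demand * qos_headroom]
    if not feasible:
        return max(tiers, key=lambda t: t["capacity"])
    return min(feasible, key=lambda t: t["cost"])

tiers = [
    {"name": "S", "capacity": 100, "cost": 1.0},
    {"name": "M", "capacity": 200, "cost": 1.8},
    {"name": "L", "capacity": 400, "cost": 3.2},
]
history = [80, 85, 90, 82, 900, 88]    # 900 is an anomalous spike
clean = filter_anomalies(history, z_thresh=2.0)
demand = statistics.fmean(clean)       # naive forecast: recent mean
print(cheapest_config(demand, tiers)["name"])
```

The point of filtering first is visible in the toy data: without it, the single 900-unit spike would push the forecast into the largest tier and defeat the cost optimization.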

2020 ◽  
Vol 8 (6) ◽  
pp. 3591-3596

One of the biggest challenges cloud computing faces is correctly forecasting resource use for future demands. The consumption of cloud resources changes constantly, making it difficult for algorithms to produce precise predictions. Using machine learning in cloud computing brings many benefits, such as improving service quality by forecasting future workloads and responding automatically with dynamic scaling. This motivates the work presented in this paper, which predicts the CPU usage of host machines for single and multiple time steps. The paper uses three supervised machine-learning algorithms to classify and predict CPU utilization because of their ability to retain data and accurately predict time series. The aim is to forecast CPU usage with better accuracy than traditional methods.
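Single- and multi-step CPU forecasting of the kind described can be sketched with the simplest possible supervised model, a lag-1 least-squares regression, where multi-step predictions feed each forecast back in as the next input. This is a minimal stand-in, not one of the paper's three algorithms:

```python
def fit_lag1(series):
    """Fit y_t = a*y_{t-1} + b by ordinary least squares, a minimal
    stand-in for a supervised time-series learner."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return a, b

def forecast(series, steps, model):
    """Multi-step forecast by feeding each prediction back in."""
    a, b = model
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

cpu = [30, 32, 35, 33, 36, 38, 37, 40]   # % utilization samples
model = fit_lag1(cpu)
print(forecast(cpu, 3, model))
```

Feeding predictions back in is what distinguishes multiple-time-step forecasting from the single-step case; errors compound, which is why the paper's comparison across horizons matters.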


2020 ◽  
Vol 8 (1) ◽  
pp. 65-81 ◽  
Author(s):  
Pradeep Kumar Tiwari ◽  
Sandeep Joshi

It has already been proven that VMs are over-utilized in the initial stages and underutilized in the later stages. Due to random CPU utilization, some resources are heavily loaded while others sit idle. Load imbalance, compounded by imperfect resource management, causes service level agreement (SLA) violations and poor quality of service (QoS). An effective load balancing mechanism helps achieve balanced utilization, which maximizes throughput, availability, and reliability while reducing response and migration times. The proposed algorithm can effectively minimize the response and migration times and maximize reliability and throughput. This research also helps readers understand load balancing policies and provides an analysis of other research works.
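The core mechanism behind such load balancing, migrating VMs off overloaded hosts onto lightly loaded ones to keep migration count and response time low, can be sketched as a greedy rebalancing pass. The threshold, the smallest-VM-first choice, and the host layout are illustrative assumptions, not the authors' algorithm:

```python
def rebalance(hosts, threshold=0.8):
    """One greedy rebalancing pass: while some host exceeds the load
    threshold, migrate its smallest VM to the least-loaded host.
    `hosts` maps host name -> list of VM loads (fractions of capacity)."""
    migrations = []
    while True:
        load = {h: sum(vms) for h, vms in hosts.items()}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        if load[hot] <= threshold or hot == cold:
            break
        vm = min(hosts[hot])              # cheapest migration first
        if load[cold] + vm > threshold:
            break                         # would overload the target; stop
        hosts[hot].remove(vm)
        hosts[cold].append(vm)
        migrations.append((vm, hot, cold))
    return migrations

hosts = {"h1": [0.5, 0.4, 0.2], "h2": [0.1], "h3": [0.3]}
print(rebalance(hosts))
```

Moving the smallest VM first is one way to keep migration time down, since migration cost typically grows with VM memory footprint; other policies (largest-first, best-fit) trade migration count against convergence speed.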


2020 ◽  
Vol 37 ◽  
pp. 59-68
Author(s):  
Maheta Ashish ◽  
Samrat V.O. Khanna

Cloud computing provides resource allocation, making the cloud resource provider responsible to cloud consumers. The main objective of the resource manager is to assign dynamic resources to tasks during execution and to measure response time, execution cost, resource utilization, and system performance. The resource manager optimizes resources and measures the completion time of each assigned resource, aiming to execute tasks in the optimal way with minimum completion time. Virtualization techniques are essential for allocating resources dynamically according to users' needs. Green computing techniques are also involved to optimize the number of active servers. Skewness is used to enhance quality of service across various parameters. The proposed algorithms allocate cloud resources according to users' requirements; their advantages are clearer analysis of CPU utilization and reduced memory usage.
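Skewness in this context usually measures how unevenly a server's different resource types (CPU, memory, network) are utilized, so the scheduler can avoid hosts that are saturated on one dimension while idle on another. A minimal sketch, assuming the common sqrt-of-squared-ratio-deviations form from the load-balancing literature rather than the paper's exact formula:

```python
import math

def skewness(utilizations):
    """Skewness of a server's per-resource utilizations (e.g. CPU,
    memory, network); lower means more even use across resources.
    Assumes the sqrt(sum((r_i / mean - 1)^2)) form common in the
    literature, not necessarily this paper's definition."""
    mean = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((r / mean - 1) ** 2 for r in utilizations))

print(skewness([0.9, 0.2, 0.3]))   # uneven: CPU-bound, memory idle
print(skewness([0.5, 0.5, 0.5]))   # perfectly even
```

A placement algorithm would prefer the candidate host whose skewness increases least after the new task is added, which tends to improve overall utilization without overloading a single dimension.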


2019 ◽  
Vol 7 (2) ◽  
pp. 9-20 ◽  
Author(s):  
Selvakumar A. ◽  
Gunasekaran G.

Cloud computing is a model for delivering information technology services in which resources are retrieved from the internet through web-based tools and applications, instead of a direct connection to a server. Clients can set up and boot the required resources and pay only for those they use. Consequently, providing a mechanism for efficient resource management and assignment will be a vital objective of cloud computing. Load balancing is one of the major concerns in cloud computing; its main purpose is to satisfy users' requirements by distributing the load evenly among all servers in the cloud to maximize resource utilization, increase throughput, provide good response times, and reduce energy consumption. To optimize resource allocation and ensure quality of service, this article proposes a novel load-balancing approach based on enhanced ant colony optimization.
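The ant-colony idea is that many stochastic "ants" build candidate task-to-server assignments guided by pheromone trails, and the trails are reinforced on the best assignment found so far. A deliberately small sketch (toy parameters, a spare-capacity heuristic, and makespan as the cost; not the authors' enhanced ACO):

```python
import random

def aco_assign(tasks, capacities, ants=30, rounds=40, rho=0.1, seed=1):
    """Toy ant colony assignment of task loads to servers, minimizing
    the maximum relative server load (makespan)."""
    random.seed(seed)
    n, m = len(tasks), len(capacities)
    tau = [[1.0] * m for _ in range(n)]            # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(rounds):
        for _ant in range(ants):
            load = [0.0] * m
            assign = []
            for i, t in enumerate(tasks):
                # heuristic: favour servers with spare capacity
                weights = [max(tau[i][j] * (capacities[j] - load[j] + 1e-9), 1e-9)
                           for j in range(m)]
                j = random.choices(range(m), weights=weights)[0]
                load[j] += t
                assign.append(j)
            cost = max(l / c for l, c in zip(load, capacities))
            if cost < best_cost:
                best, best_cost = assign, cost
        # evaporate all trails, then reinforce the best-so-far path
        tau = [[(1 - rho) * v for v in row] for row in tau]
        for i, j in enumerate(best):
            tau[i][j] += 1.0 / (1.0 + best_cost)
    return best, best_cost

tasks = [4, 3, 3, 2, 2, 1, 1]
caps = [8, 6, 4]
assign, cost = aco_assign(tasks, caps)
print(assign, round(cost, 2))
```

Evaporation (the `rho` factor) prevents early assignments from dominating forever, while reinforcement proportional to solution quality steers later ants toward balanced placements; published "enhanced" variants mostly differ in the heuristic term and the update rule.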


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 196
Author(s):  
Nancy A Angel ◽  
Dakshanamoorthy Ravindran ◽  
P M Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Yuh-Chung Hu

Cloud computing has become integral lately due to the ever-expanding Internet-of-things (IoT) network. It still is and continues to be the best practice for implementing complex computational applications, emphasizing the massive processing of data. However, the cloud falls short due to the critical constraints of novel IoT applications generating vast data, which entails a swift response time with improved privacy. The newest drift is moving computational and storage resources to the edge of the network, involving a decentralized distributed architecture. The data processing and analytics perform at proximity to end-users, and overcome the bottleneck of cloud computing. The trend of deploying machine learning (ML) at the network edge to enhance computing applications and services has gained momentum lately, specifically to reduce latency and energy consumed while optimizing the security and management of resources. There is a need for rigorous research efforts oriented towards developing and implementing machine learning algorithms that deliver the best results in terms of speed, accuracy, storage, and security, with low power consumption. This extensive survey presented on the prominent computing paradigms in practice highlights the latest innovations resulting from the fusion between ML and the evolving computing paradigms and discusses the underlying open research challenges and future prospects.


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3400 ◽  
Author(s):  
Tomasz Rymarczyk ◽  
Edward Kozłowski ◽  
Grzegorz Kłosowski ◽  
Konrad Niderla

The main goal of the research presented in this paper was to develop a refined machine learning algorithm for industrial tomography applications. The article presents algorithms based on logistic regression in relation to image reconstruction using electrical impedance tomography (EIT) and ultrasound transmission tomography (UST). The test object was a tank filled with water in which reconstructed objects were placed. For both EIT and UST, a novel approach was used in which each pixel of the output image was reconstructed by a separately trained prediction system. Therefore, it was necessary to use many predictive systems whose number corresponds to the number of pixels of the output image. Thanks to this approach the under-completed problem was changed to an over-completed one. To reduce the number of predictors in logistic regression by removing irrelevant and mutually correlated entries, the elastic net method was used. The developed algorithm that reconstructs images pixel-by-pixel is insensitive to the shape, number and position of the reconstructed objects. In order to assess the quality of mappings obtained thanks to the new algorithm, appropriate metrics were used: compatibility ratio (CR) and relative error (RE). The obtained results enabled the assessment of the usefulness of logistic regression in the reconstruction of EIT and UST images.
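The paper's pixel-by-pixel scheme, one regularized logistic-regression predictor per output pixel, can be illustrated for a single pixel. The gradient-descent trainer, toy two-feature data, and penalty weights below are illustrative; the paper fits its elastic-net models per pixel over the full measurement vector:

```python
import math

def train_pixel(X, y, l1=0.01, l2=0.01, lr=0.5, epochs=400):
    """Logistic regression for ONE output pixel with an elastic-net
    penalty (L1 + L2), trained by plain batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                gw[j] += (p - yi) * xi[j]
            gb += p - yi
        for j in range(d):
            # elastic net: L2 shrinks weights, L1 pushes small ones to zero
            sign = 1 if w[j] > 0 else -1 if w[j] < 0 else 0
            w[j] -= lr * (gw[j] / n + l2 * w[j] + l1 * sign)
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """1 if the pixel is 'on' (object present), else 0."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# toy measurements: pixel is on when the first channel exceeds the second
X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9], [0.6, 0.4], [0.3, 0.8]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_pixel(X, y)
print([predict(w, b, x) for x in X])
```

The L1 term is what lets the elastic net drop irrelevant or mutually correlated measurement channels, which is exactly the predictor-reduction role the abstract describes; the L2 term keeps correlated channels from producing unstable weights.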


Author(s):  
Adrian Xi Lin ◽  
Andrew Fu Wah Ho ◽  
Kang Hao Cheong ◽  
Zengxiang Li ◽  
Wentong Cai ◽  
...  

The accurate prediction of ambulance demand provides great value to emergency service providers and people living within a city. It supports the rational and dynamic allocation of ambulances and hospital staffing, and ensures patients have timely access to such resources. However, this task has been challenging due to complex multi-nature dependencies and nonlinear dynamics within ambulance demand, such as spatial characteristics involving the region of the city at which the demand is estimated, short and long-term historical demands, as well as the demographics of a region. Machine learning techniques are thus useful to quantify these characteristics of ambulance demand. However, there is generally a lack of studies that use machine learning tools for a comprehensive modeling of the important demand dependencies to predict ambulance demands. In this paper, an original and novel approach that leverages machine learning tools and extraction of features based on the multi-nature insights of ambulance demands is proposed. We experimentally evaluate the performance of next-day demand prediction across several state-of-the-art machine learning techniques and ambulance demand prediction methods, using real-world ambulatory and demographical datasets obtained from Singapore. We also provide an analysis of this ambulatory dataset and demonstrate the accuracy in modeling dependencies of different natures using various machine learning techniques.
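The "multi-nature" dependencies the abstract lists (short- and long-term historical demand, spatial region, demographics) translate into a per-region feature vector for next-day prediction. A sketch with hypothetical feature names, not the paper's exact feature set:

```python
def build_features(daily_demand, region_population, t):
    """Feature vector for predicting day-t ambulance demand in one
    region: short- and long-term historical demand plus a demographic
    signal. Feature names are illustrative."""
    return {
        "lag_1": daily_demand[t - 1],                 # yesterday
        "lag_7": daily_demand[t - 7],                 # same weekday last week
        "mean_7": sum(daily_demand[t - 7:t]) / 7,     # short-term level
        "mean_28": sum(daily_demand[t - 28:t]) / 28,  # long-term level
        "dow": t % 7,                                 # day-of-week index
        "population": region_population,              # demographics
    }

# synthetic series with a weekly pattern, for illustration only
demand = [50 + (d % 7) * 3 for d in range(60)]
feats = build_features(demand, region_population=120_000, t=35)
print(feats["lag_7"], feats["dow"])
```

Any of the learners compared in the paper can then be trained on rows of such vectors, one row per (region, day) pair; the value of the comparison is precisely how well each model exploits these different feature families.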


With the blessings of science and technology, the death rate is decreasing and the population is increasing. As a result, land use for urbanization is also increasing, degrading land quality day by day and affecting climate and vegetation. To keep land quality at its best, studies of land cover images acquired from satellites, based on time series, spatial, and colour information, are required to understand how the land can be used in the future. Using NDVI (Normalized Difference Vegetation Index) and machine learning algorithms (supervised or unsupervised), it is now possible to classify areas and predict land utilization in future years. The proposed study enhances the acquired images with a better vegetation index, which segments and classifies the data more efficiently; feeding these data to a machine learning model yields higher accuracy. Hence, a novel approach with a proper model, a machine learning algorithm, and greater accuracy is always desirable.
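NDVI itself is a fixed per-pixel formula over the near-infrared and red reflectance bands; the classification thresholds below are coarse illustrative values, not ones from this study:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - RED) / (NIR + RED), in [-1, 1]; higher means denser,
    healthier vegetation."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def classify(value):
    """Coarse land-cover classes with illustrative thresholds."""
    if value > 0.4:
        return "dense vegetation"
    if value > 0.2:
        return "sparse vegetation"
    if value > 0.0:
        return "bare soil"
    return "water/built-up"

print(classify(ndvi(0.7, 0.1)))
```

In practice the NDVI raster (or an enhanced index derived from it) is computed for every pixel of each acquisition date, and those per-pixel values form the input features that a supervised or unsupervised learner clusters into land-cover classes over time.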


Author(s):  
Abhilasha Rangra ◽  
Vivek Kumar Sehgal ◽  
Shailendra Shukla

Cloud computing represents a new era of using high-quality and smaller quantities of resources across a number of premises. In cloud computing, especially for infrastructure-as-a-service (IaaS) resources, cost is an important factor for the service provider, so cost reduction is the major challenge; at the same time, cost reduction increases execution time, which affects the quality of the service provider. At its core, this challenge concerns the balance between time and cost, resulting in a complex decision-based problem. This analysis motivates the use of learning approaches. In this article, the proposed multi-tasking convolution neural network (M-CNN) learns task deadlines and costs and provides decisions for the task scheduling process. The experimental analysis uses two types of dataset, one of tweets and the other the Genome workflow, and the proposed method has been compared with distinct approaches such as PSO and PSO-GA. Simulated results show significant improvement on both datasets.


Author(s):  
Nirmalan R. ◽  
Gokulakrishnan K. ◽  
Jesu Vedha Nayahi J.

Cloud computing is a modern paradigm for providing services through the internet. Its development has eliminated much of the manpower previously needed for resource management. Within cloud computing, load balancing is vital: it deals with the distribution of workloads and computing resources. Load balancing allows a company to balance the load according to demand by allocating resources across multiple servers or networks. Quality of service (QoS) metrics, including cost, response time, performance, throughput, and resource utilization, are improved by load balancing. In this chapter, the authors survey the literature on load-balancing algorithms in heterogeneous cluster cloud environments, together with a classification, and provide a review of each category. They also offer insight into open issues and guidance for future research work.

