Processing streams in a monitoring cloud cluster

Author(s):  
Alexey N. Nazarov

The creation of monitoring clusters based on cloud computing technologies is a promising direction for developing systems that continuously monitor objects of various purposes in the web space. The Hadoop programming environment is the technological basis for developing algorithmic and software solutions for the synthesis of monitoring clusters, including information security and information counteraction systems. The International Telecommunication Union (ITU) Recommendation Y.3510 sets out requirements for cloud infrastructure, including monitoring the performance of deployed applications based on the collection of real-world statistics. The computing resources of monitoring clusters in cloud data centers are often allocated to continuous parallel processing of high-speed streaming data, which imposes new requirements on monitoring technologies and necessitates the creation and study of new models of parallel computing. Service monitoring plays an important role in the cloud computing industry, especially for SLA/QoS assessment, because an application or service may experience problems even when the virtual machines on which it runs appear to be operational. This calls for studying how to organize the parallel processing of high-speed streaming services that handle huge volumes of bit data and, simultaneously, for estimating the required computational resources. Under highly dynamic changes in the bit rate of information generation at the source, a generally applicable model of the bit rate of Discretized Stream (DStream) formation is proposed. Based on the poly-burst nature of this bit-rate model, a model of the aggregate content traffic of sources of different services processed in the cloud cluster was created. The obtained results made it possible to develop mathematical models of parallel DStreams from sources processed in a cloud cluster with Hadoop technology using the micro-batch architecture of the Spark Streaming module. These models take into account, on the one hand, the flow of service requests from sources of different services and, on the other hand, the bit-rate needs of the services, allowing for the multichannel traffic of sources of various services. Analytical relations are also obtained for calculating the required performance of the Hadoop cluster at a given probability of batch loss.
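As context for the micro-batch DStream processing described above, the following is a minimal PySpark Streaming sketch, not the author's implementation; the socket source host/port and the 2-second batch interval are assumptions chosen only for illustration.

```python
# Minimal Spark Streaming (DStream) sketch: ingest a high-speed text stream
# in micro-batches and count records per batch. Host/port and batch interval
# are illustrative assumptions, not values from the paper.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="MonitoringClusterSketch")
ssc = StreamingContext(sc, batchDuration=2)  # 2-second micro-batches (assumed)

# DStream from a TCP source; in a real monitoring cluster this could instead
# be Kafka, Flume, or HDFS directories.
lines = ssc.socketTextStream("stream-source.example", 9999)

# Per-batch record count as a crude proxy for the incoming bit rate.
lines.count().pprint()

ssc.start()
ssc.awaitTermination()
```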

2015 ◽  
Vol 5 (4) ◽  
pp. 36-55
Author(s):  
Paula Prata ◽  
Samuel Alves

This paper presents a platform to create and manage virtual computing laboratories using Cloud resources. Using this platform, a professor can create a customized laboratory according to the needs of a class. The laboratory is composed of a set of virtual machines that students may use to access the computing resources required for the class. The platform aims to avoid proprietary lock-in and was designed to be agnostic to the cloud infrastructure. The machines of the lab can be accessed through a remote desktop protocol and managed by non-expert users.


Author(s):  
Fargana J. Abdullayeva

The paper proposes a method for predicting the workload of virtual machines in a cloud infrastructure. The reconstruction probabilities of variational autoencoders are used to provide the prediction. Reconstruction probability is a probabilistic criterion that accounts for the variability in the distribution of the variables. In the proposed approach, the values of the reconstruction probabilities of the variational autoencoder indicate the workload level of the virtual machines. Experimental results showed that variational autoencoders outperform simple deep neural networks in predicting virtual machine workload. The generative characteristics of variational autoencoders determine the workload level through data reconstruction.
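For orientation only, here is a minimal sketch of how a reconstruction probability can be scored by Monte Carlo sampling from an encoder/decoder pair; the `encode`/`decode` functions, the Gaussian assumptions, and the sample count are placeholders, not the paper's architecture.

```python
import numpy as np

# Minimal reconstruction-probability sketch (assumed Gaussian encoder/decoder).
# `encode(x)` -> (mu_z, sigma_z) and `decode(z)` -> (mu_x, sigma_x) stand in for
# a trained variational autoencoder; they are NOT defined by the paper.
def reconstruction_probability(x, encode, decode, n_samples=32):
    mu_z, sigma_z = encode(x)
    log_probs = []
    for _ in range(n_samples):
        z = mu_z + sigma_z * np.random.randn(*mu_z.shape)   # sample a latent code
        mu_x, sigma_x = decode(z)
        # Log-likelihood of the observed workload vector under the decoded Gaussian.
        ll = -0.5 * np.sum(np.log(2 * np.pi * sigma_x ** 2)
                           + (x - mu_x) ** 2 / sigma_x ** 2)
        log_probs.append(ll)
    # Higher value = better reconstruction; a low value flags an unusual workload.
    return np.mean(log_probs)
```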


2019 ◽  
pp. 532-552
Author(s):  
Paula Prata ◽  
Samuel Alves

This paper presents a platform to create and manage virtual computing laboratories using Cloud resources. Using this platform, a professor can create a customized laboratory according to the needs of a class. The laboratory is composed of a set of virtual machines that students may use to access the computing resources required for the class. The platform aims to avoid proprietary lock-in and was designed to be agnostic to the cloud infrastructure. The machines of the lab can be accessed through a remote desktop protocol and managed by non-expert users.


The cloud computing paradigm has reached a stable stage. Because of its enormous advantages, services based on cloud computing are gaining ever more attention and adoption across diverse sectors of society. Owing to the pay-per-use model, users prefer to execute data-crunching operations on high-end virtual machines, so optimized resource management becomes critical in such scenarios: poor management of cloud resources may reduce customer satisfaction and waste available cloud infrastructure. An optimized resource-sharing mechanism for collaborative cloud computing environments is suggested here. The suggested resource-sharing technique addresses starvation in the inter-cloud load-balancing context: when starvation occurs, it switches underloaded and overloaded virtual machines between the intra-cloud and inter-cloud computing environments.
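As an illustration of the switching idea only, not the paper's algorithm, the sketch below reassigns VMs between hypothetical intra-cloud and inter-cloud pools based on assumed utilization thresholds.

```python
# Illustrative sketch of starvation-aware VM switching between intra- and
# inter-cloud pools. Thresholds and the VM/load representation are assumptions.
OVERLOAD = 0.85   # assumed upper utilization threshold
UNDERLOAD = 0.20  # assumed lower utilization threshold

def rebalance(intra_cloud: dict, inter_cloud: dict) -> None:
    """Move overloaded intra-cloud VMs out and pull underloaded inter-cloud VMs in."""
    for vm, load in list(intra_cloud.items()):
        if load > OVERLOAD:                        # local tasks risk starvation
            inter_cloud[vm] = intra_cloud.pop(vm)  # offload to the partner cloud
    for vm, load in list(inter_cloud.items()):
        if load < UNDERLOAD:                       # spare capacity available
            intra_cloud[vm] = inter_cloud.pop(vm)  # bring back for local work

# Example with made-up utilization values.
intra = {"vm1": 0.95, "vm2": 0.40}
inter = {"vm3": 0.10}
rebalance(intra, inter)
print(intra, inter)  # vm1 offloaded, vm3 pulled in
```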


Advances in cloud computing have drawn the attention of many researchers to providing on-demand network access to shared resources. Cloud computing is an important research direction that delivers platforms and software to clients over the Internet, but handling a huge number of tasks in a cloud infrastructure is complicated. It therefore needs a load-balancing method that allocates tasks to Virtual Machines (VMs) without degrading system performance. This paper proposes a load-balancing technique named Elephant Herd Grey Wolf Optimization (EHGWO). EHGWO integrates Elephant Herding Optimization (EHO) into the Grey Wolf Optimizer (GWO) to select the optimal VMs for reallocation based on a newly devised fitness function. The proposed load-balancing technique considers different parameters of the VMs and physical machines (PMs) when selecting the tasks that initiate reallocation. Two pick factors, the Task Pick Factor (TPF) and the VM Pick Factor (VPF), are used to allocate tasks and balance the loads.
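The abstract does not define the fitness function, so the following is only a hypothetical sketch of how pick-factor-based scoring might select a task and a target VM; the TPF/VPF formulas, the data structures, and the sample values are assumptions, not the EHGWO design.

```python
# Hypothetical pick-factor sketch (not the EHGWO fitness function from the paper).
def task_pick_factor(task_len: float, vm_load: float) -> float:
    # Prefer moving long tasks off heavily loaded VMs (assumed heuristic).
    return task_len * vm_load

def vm_pick_factor(vm_load: float, vm_capacity: float) -> float:
    # Prefer target VMs with the most free capacity (assumed heuristic).
    return vm_capacity - vm_load

def choose_reallocation(tasks, vms):
    """tasks: list of (task_id, length, vm_id); vms: {vm_id: {"load": x, "cap": y}}."""
    # Highest-TPF task: a long task on a loaded VM is the best candidate to move.
    tid, _length, src = max(tasks, key=lambda t: task_pick_factor(t[1], vms[t[2]]["load"]))
    # Highest-VPF VM (other than the source) receives it.
    dst = max((v for v in vms if v != src),
              key=lambda v: vm_pick_factor(vms[v]["load"], vms[v]["cap"]))
    return tid, src, dst

# Example with illustrative numbers.
tasks = [("t1", 40.0, "vm1"), ("t2", 10.0, "vm2")]
vms = {"vm1": {"load": 0.9, "cap": 1.0}, "vm2": {"load": 0.3, "cap": 1.0}}
print(choose_reallocation(tasks, vms))  # ('t1', 'vm1', 'vm2')
```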


2013 ◽  
Vol 3 (2) ◽  
pp. 47-60 ◽  
Author(s):  
Absalom E. Ezugwu ◽  
Seyed M. Buhari ◽  
Sahalu B. Junaidu

The virtual machine allocation problem is one of the challenges in cloud computing environments, especially in private cloud design. In this environment, each virtual machine is mapped onto a physical host according to the resources available on that host. In particular, quantifying the performance of a scheduling and allocation policy on a cloud infrastructure for different application and service models, under varying performance metrics and system requirements, is an extremely difficult problem. In this paper, the authors present a Virtual Computing Laboratory framework model using the concept of a private cloud, built by extending the open-source IaaS solution Eucalyptus. A rule-based mapping algorithm for Virtual Machines (VMs), formulated on set-theoretic principles, is also presented. The algorithm is designed to automatically adapt the mapping between VMs and the physical hosts' resources. The paper similarly presents a theoretical study and derivations of performance evaluation metrics for the chosen mapping policies, including context switching, waiting time, turnaround time, and response time for the proposed mapping algorithm.
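For reference, the sketch below computes the standard scheduling metrics named in the abstract (waiting, response, and turnaround time) for a simple non-preemptive FCFS queue of VM requests; the FCFS assumption and the sample values are illustrative and are not the paper's mapping policy.

```python
# Standard scheduling metrics on a non-preemptive FCFS queue (illustrative only).
# Each request: (arrival_time, service_time); times in arbitrary units.
def fcfs_metrics(requests):
    clock = 0.0
    results = []
    for arrival, service in sorted(requests):
        start = max(clock, arrival)        # request waits until the host is free
        finish = start + service
        results.append({
            "waiting": start - arrival,     # time spent queued
            "response": start - arrival,    # non-preemptive: first run = start
            "turnaround": finish - arrival, # total time in the system
        })
        clock = finish
    return results

print(fcfs_metrics([(0, 4), (1, 2), (2, 1)]))
```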


2021 ◽  
Vol 11 (16) ◽  
pp. 7379
Author(s):  
Oleg Bystrov ◽  
Ruslan Pacevič ◽  
Arnas Kačeniauskas

The pervasive use of cloud computing has led to many concerns, such as performance challenges in communication- and computation-intensive services on virtual cloud resources. Most evaluations of the infrastructural overhead are based on standard benchmarks, so the impact of communication issues and infrastructure services on the performance of parallel MPI-based computations remains unclear. This paper presents a performance analysis of communication- and computation-intensive software based on the discrete element method, deployed as a service (SaaS) on the OpenStack cloud. The performance measured on KVM-based virtual machines and Docker containers of the OpenStack cloud is compared with that obtained on native hardware. An improved mapping of computations to multicore resources reduced the internode MPI communication by 34.4% and increased the parallel efficiency from 0.67 to 0.78, which shows the importance of communication issues. As the number of parallel processes increased, the overhead of the cloud infrastructure grew to 13.7% and 11.2% of the software execution time on native hardware for the Docker containers and the KVM-based virtual machines of the OpenStack cloud, respectively. The observed overhead was mainly caused by OpenStack service processes, which increased the load imbalance of the parallel MPI-based SaaS.
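To make the quoted figures concrete, the sketch below computes speedup, parallel efficiency, and relative cloud overhead from wall-clock timings; the timing values are made up purely to reproduce ratios of the same order as those quoted and are not the paper's measurements.

```python
# Parallel efficiency and relative infrastructure overhead (illustrative numbers).
def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    speedup = t_serial / t_parallel
    return speedup / n_procs

def relative_overhead(t_cloud: float, t_native: float) -> float:
    # Extra execution time on the cloud, as a fraction of native-hardware time.
    return (t_cloud - t_native) / t_native

print(parallel_efficiency(t_serial=800.0, t_parallel=64.0, n_procs=16))  # 0.78125
print(relative_overhead(t_cloud=72.8, t_native=64.0))                    # 0.1375 (13.7%)
```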


2021 ◽  
Author(s):  
Hung Cong Tran ◽  
Khiet Thanh Bui ◽  
Hung Dac Ho ◽  
Vu Tran Vu

Cloud computing technology provides shared computing that can be accessed over the Internet. When cloud data centers are flooded by end users, efficiently managing virtual machines to balance economic cost and ensure QoS becomes mandatory for service providers. Virtual machine migration brings many benefits to stakeholders, such as cost, energy, performance, stability, and availability. However, stakeholders' objectives usually conflict with each other, and the optimal resource allocation problem in cloud infrastructure is usually in the NP-hard or NP-complete class. In this paper, the virtual machine migration problem is formulated by applying game theory to ensure both load balance and resource utilization. A virtual machine migration algorithm, named V2PQL, is proposed based on a Markov Decision Process and the Q-learning algorithm. Simulation results, divided into a training phase and an extraction phase, demonstrate the efficiency of the proposal. The proposed V2PQL policy is benchmarked against a Round-Robin policy to highlight its strength and feasibility in the policy extraction phase.
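As a generic illustration of the tabular Q-learning update on which V2PQL is said to build (the paper's state, action, and reward design is not given here, so everything below, including the parameters and action names, is an assumption), one migration step might look as follows.

```python
import random
from collections import defaultdict

# Generic tabular Q-learning update (not the V2PQL state/action/reward design).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning parameters
Q = defaultdict(float)                   # Q[(state, action)] -> value

def choose_action(state, actions):
    # Epsilon-greedy choice over candidate migration actions.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step: states describe load levels, actions are target hosts (illustrative).
actions = ["migrate_to_host1", "migrate_to_host2", "no_migration"]
a = choose_action("overloaded", actions)
update("overloaded", a, reward=1.0, next_state="balanced", actions=actions)
```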


Author(s):  
Malini Alagarsamy ◽  
Ajitha Sundarji ◽  
Aparna Arunachalapandi ◽  
Keerthanaa Kalyanasundaram

Balancing the incoming data traffic across servers is termed load balancing. In cloud computing, load balancing means distributing loads across the cloud infrastructure. The performance of cloud computing depends on several factors, including balancing the loads at the data center, which increases server utilization. Proper utilization of resources is termed server utilization. Power consumption decreases as server utilization increases, which in turn reduces the carbon footprint of the virtual machines at the data center. In this paper, a cost-aware ant colony optimization based load-balancing model is proposed to minimize execution time, response time, and cost in a dynamic environment. The model balances the load across the virtual machines in the data center, and its overall performance is evaluated against various load-balancing models. On average, the proposed model reduces the carbon footprint by 45% compared with existing methods.
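As a generic sketch of ant-colony-style VM selection (the paper's cost terms and pheromone rules are not specified here, so the heuristic, parameters, and update rule below are assumptions), one ant might choose a VM as follows.

```python
import random

# Generic ACO-style VM selection (illustrative; not the paper's cost model).
ALPHA, BETA, RHO = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation

def pick_vm(pheromone, free_capacity):
    """Choose a VM with probability proportional to pheromone^ALPHA * capacity^BETA."""
    weights = [(p ** ALPHA) * (c ** BETA) for p, c in zip(pheromone, free_capacity)]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def evaporate_and_deposit(pheromone, chosen, quality):
    # Standard evaporation plus a deposit proportional to the assignment quality.
    return [(1 - RHO) * p + (quality if i == chosen else 0.0)
            for i, p in enumerate(pheromone)]

pheromone = [1.0, 1.0, 1.0]
vm = pick_vm(pheromone, free_capacity=[0.2, 0.7, 0.5])
pheromone = evaporate_and_deposit(pheromone, vm, quality=0.3)
```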


Author(s):  
S. Karthiga Devi ◽  
B. Arputhamary

Today, the volume of healthcare data generated is increasing rapidly because the number of patients in each hospital is growing. These data are essential for decision making and for delivering the best care to patients. Healthcare providers now face collecting, managing, storing, and securing huge amounts of sensitive protected health information. As a result, an increasing number of healthcare organizations are turning to cloud-based services. Cloud computing offers a viable, secure alternative to premises-based healthcare solutions. Cloud infrastructure is characterized by high-volume storage and high throughput. Privacy and security are the two most important concerns in cloud-based healthcare services. A healthcare organization should have electronic medical records in order to use cloud infrastructure. This paper surveys the challenges of the cloud in healthcare and the benefits of cloud techniques in the healthcare industry.

