Mathematical Models and Methods for Monitoring and Predicting the State of Globally Distributed Computing Systems

2021 ◽  
Vol 7 (3) ◽  
pp. 73-78
Author(s):  
D. Shchemelinin

Monitoring events and predicting the behavior of a dynamic information system are becoming increasingly important due to the globalization of cloud services and the sharp increase in the volume of processed data. Well-known monitoring systems are used for the timely detection and prompt correction of anomalies, but they require new, more effective and proactive forecasting tools. At the CMG-2013 conference, a method for predicting memory leaks in Java applications was presented that allows IT teams to release resources automatically by safely restarting services when a certain critical threshold is reached. That solution implements a simple linear mathematical model to describe the historical trend function. In practice, however, the degradation of memory and other computational resources may occur not gradually but very quickly, depending on the workload, so solving the forecasting problem with linear methods is not sufficiently effective.
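The linear-trend approach criticized above can be sketched as follows: fit a straight line to historical memory samples and extrapolate when the trend crosses a critical threshold. This is an illustrative Python sketch, not the paper's implementation; the sample data, units, and function names are invented for the example.

```python
# Minimal sketch of linear-trend memory-leak prediction (illustrative only):
# fit usage ≈ slope * t + intercept, then solve for the threshold crossing.

def fit_line(times, usage):
    """Ordinary least-squares fit of a straight line to (time, usage) samples."""
    n = len(times)
    mean_t = sum(times) / n
    mean_u = sum(usage) / n
    cov = sum((t - mean_t) * (u - mean_u) for t, u in zip(times, usage))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    return slope, mean_u - slope * mean_t

def time_to_threshold(times, usage, threshold):
    """Predicted time at which the linear trend reaches the threshold,
    or None if usage is flat or decreasing (no restart needed)."""
    slope, intercept = fit_line(times, usage)
    if slope <= 0:
        return None
    return (threshold - intercept) / slope

# Example: hourly memory samples (GB) growing linearly toward an 8 GB limit.
hours = [0, 1, 2, 3, 4]
gb = [2.0, 2.5, 3.0, 3.5, 4.0]
print(time_to_threshold(hours, gb, 8.0))  # → 12.0 (hours)
```

The sketch also makes the abstract's objection concrete: a sudden, workload-driven jump in usage violates the straight-line assumption, so the predicted crossing time can be far too optimistic.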

2019 ◽  
Vol 23 (2) ◽  
pp. 153-173
Author(s):  
M. Sadeq Jaafar

Purpose of research. The object of the study is a network cloud service built on the basis of a replicated database. Data in distributed computing systems are replicated to ensure reliable storage, facilitate access to data, and improve storage system performance. This raises the problem of analyzing the effectiveness of processing queries to replicated databases in a network-based cloud environment and, in particular, the problem of organizing priority queues for requests that update database copies (update-requests) and for requests that search and read information in databases (query-requests). The purpose of this work is to study and organize priority modes in a network distributed computing system with a cloud service architecture.

Methods. The study was conducted on the basis of two types of behavioural models: models based on Petri nets, to describe and verify the functioning of a distributed computing system with replicated databases represented as a resource pool with several units, and models based on the GPSS simulation language, to evaluate the queue transit time of each query type depending on query priority.

Results. Based on the two simulation methods, the operation of a cloud system with database replicas was analyzed. In this system two distributed cloud computing systems interact: a MANET Cloud based on a wireless network and an Internet Cloud based on the Internet. Together these databases form the basis of the DBaaSoD (Databases as a Service on Demand) cloud service, in which databases are organized as a service at the user's request. To study this system, models of two classes were developed. The Petri-net model is designed to test the simulated distributed application for proper functioning; decisions on mapping Petri nets onto the architecture of computer networks are discussed. The statistical simulation model is used to compare the priority and non-priority service modes of query- and update-requests by the criterion of average queue transit time.

Conclusion. The Petri-net system models were tested and shown to be live and safe, which makes it possible to move from models to formalized specifications of network applications for cloud services in distributed computing systems with replicated databases. The study of the GPSS model showed that with priority service of update-requests, their queue transit time is reduced by about 2 to 4 times compared with query-requests, depending on the intensity of the query-requests. In the non-priority mode, the service conditions for update-requests deteriorate, and their queue transit time increases by about 2 to 6 times compared with query-requests, depending on the intensity of the query-requests.
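The paper's GPSS model is not reproduced in the abstract; as an illustrative sketch (in Python, not GPSS), a single-server queue with two request classes shows the direction of the effect being measured: giving update-requests priority shortens their queue time at the expense of query-requests. All arrival and service times below are invented for the example.

```python
import heapq

def simulate(arrivals, service_time, priority_mode):
    """Single-server queue with two request classes.
    arrivals: time-sorted list of (arrival_time, kind), kind 'update' or 'query'.
    In priority_mode, waiting update-requests are always served first;
    otherwise service is plain FIFO. Returns mean waiting time per class."""
    waits = {"update": [], "query": []}
    pending = []                # heap of (rank, arrival_time, kind)
    i, clock, n = 0, 0.0, len(arrivals)
    while i < n or pending:
        while i < n and arrivals[i][0] <= clock:   # admit arrived requests
            t, kind = arrivals[i]
            rank = 0 if priority_mode and kind == "update" else 1
            heapq.heappush(pending, (rank, t, kind))
            i += 1
        if not pending:                            # server idle: jump ahead
            clock = arrivals[i][0]
            continue
        _, t, kind = heapq.heappop(pending)        # serve one request
        waits[kind].append(clock - t)
        clock += service_time
    return {k: sum(v) / len(v) for k, v in waits.items()}

jobs = [(0.0, "query"), (0.0, "update"), (1.0, "query"), (1.0, "update")]
print(simulate(jobs, 2.0, priority_mode=False))  # FIFO service
print(simulate(jobs, 2.0, priority_mode=True))   # update-requests first
```

Even this toy model reproduces the qualitative trade-off from the conclusion: priority mode cuts the mean wait of update-requests while lengthening that of query-requests.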


2019 ◽  
Author(s):  
Jaime Freire de Souza ◽  
Hermes Senger ◽  
Fabricio A. B. Silva

Bag-of-Tasks (BoT) applications are parallel applications composed of independent (i.e., embarrassingly parallel) tasks, which do not communicate with each other, may depend upon one or more input files, and can be executed in any order. BoT applications are very common in several scientific areas, and they are the ideal application class for execution on large distributed computing systems composed of hundreds to many thousands of computational resources. This paper focuses on the scalability of BoT applications running on large heterogeneous distributed computing systems organized as a master-slave platform. The results demonstrate that heterogeneous master-slave platforms can achieve higher scalability than homogeneous platforms for the execution of BoT applications when the computational power of individual nodes in the homogeneous platform is fixed. However, when individual nodes of the homogeneous platform can scale up, experiments show that master-slave platforms can achieve near-linear speedups.
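The scheduling scenario described above can be sketched with a greedy list scheduler: each independent task goes to the worker that becomes free earliest, and speedup is the serial makespan divided by the parallel one. This is an illustrative Python sketch with invented worker speeds, and it ignores master coordination and file-transfer costs that the paper's platform model would account for.

```python
import heapq

def makespan(num_tasks, task_cost, speeds):
    """Greedy scheduling of identical, independent tasks (a BoT workload)
    onto workers with the given speeds: each task is assigned to the worker
    that becomes free earliest. Returns completion time of the last task."""
    ready = [(0.0, i) for i in range(len(speeds))]  # (free-at time, worker id)
    heapq.heapify(ready)
    last = 0.0
    for _ in range(num_tasks):
        t, i = heapq.heappop(ready)
        t += task_cost / speeds[i]        # heterogeneous service time
        last = max(last, t)
        heapq.heappush(ready, (t, i))
    return last

# Speedup of a heterogeneous 4-worker platform over one unit-speed node
# for 100 unit-cost tasks (illustrative numbers only).
serial = makespan(100, 1.0, [1.0])
parallel = makespan(100, 1.0, [1.0, 1.0, 2.0, 0.5])
print(serial / parallel)
```

With many more tasks than workers, the achieved speedup approaches the sum of the worker speeds, which is why adding fast heterogeneous nodes can outscale a homogeneous platform whose per-node power is fixed.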

