Predicting runtime of computational jobs in distributed computing environment

2020 ◽  
Author(s):  
A.G. Feoktistov ◽  
O.Yu. Basharina

The paper addresses the relevant problem of predicting the runtime of jobs that execute problem-solving schemes of large-scale applications in a heterogeneous distributed computing environment. Such an environment includes nodes with various hardware architectures, different system software, and diverse computational capabilities. We believe that increasing the accuracy of job runtime prediction can significantly improve the efficiency of problem-solving and the rational use of resources in the heterogeneous environment. To this end, we propose new models that take into account various estimations of the runtime of each module included in the problem-solving scheme. These models were developed using a special computational model of distributed applied software packages (large-scale scientific applications). In addition, we compare the prediction results (job runtimes and their errors) obtained with different estimations, among them estimations obtained through module testing, user estimations, and estimations based on the computational history. These results were obtained during the continuous integration, delivery, and deployment of the applied and system software of a package for solving warehouse logistics problems. They show that the highest accuracy is achieved by module testing.
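A minimal sketch of how per-module estimates from the three sources mentioned above might be aggregated into a job runtime prediction is given below. The module names, the example values, and the simple summation over a sequential scheme are illustrative assumptions, not the models proposed in the paper.

    from statistics import mean

    # Per-module runtime estimates (seconds) from the three sources discussed above.
    estimates = {
        "preprocess": {"testing": 12.0, "user": 15.0, "history": [11.5, 12.8, 12.2]},
        "solve":      {"testing": 95.0, "user": 120.0, "history": [101.0, 97.5]},
        "report":     {"testing": 4.0,  "user": 5.0,  "history": [4.2]},
    }

    def module_runtime(est, source):
        """Return one runtime estimate; the computational history is averaged."""
        if source == "history":
            return mean(est["history"])
        return est[source]

    def predict_job_runtime(scheme, source):
        """Predict the runtime of a job that executes the scheme's modules in sequence."""
        return sum(module_runtime(estimates[m], source) for m in scheme)

    scheme = ["preprocess", "solve", "report"]
    for source in ("testing", "user", "history"):
        print(f"{source:>8}: {predict_job_runtime(scheme, source):.1f} s")

Comparing the three predictions against measured runtimes, as the paper does, would show which estimation source yields the smallest error for a given package.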

2021 ◽  
Vol 33 (1) ◽  
pp. 151-172
Author(s):  
Andrei Nikolaevitch Tchernykh ◽  
Igor Vyacheslavovich Bychkov ◽  
Alexander Gennadevich Feoktistov ◽  
Sergei Alexeevich Gorsky ◽  
Ivan Alexandrovich Sidorov ◽  
...  

The paper presents new means of the Orlando Tools framework, which is used as the basis of an integrated software environment for developing distributed applied software packages. The additional means focus on mitigating various types of uncertainty arising from job distribution in an integrated computing environment. They provide continuous integration, delivery, and deployment of the applied and system software of packages, which helps to significantly reduce the negative impact of uncertainty on problem-solving time, computing reliability, and resource efficiency. An experimental analysis of the results of solving practical problems clearly demonstrates the advantages of applying these means.


2001 ◽  
Vol 11 (01) ◽  
pp. 57-64 ◽  
Author(s):  
CHRISTOPH SIEGELIN ◽  
LAURENT CASTILLO ◽  
ULRICH FINGER

Smart cards are highly successful thanks to their unique combination of mobility and security. Based upon a single-chip microcontroller with volatile and non-volatile memories, a smart card implements a small computer system that is very portable (credit card size), easy to use, and extremely resistant to external attacks. However, today's smart cards use proprietary protocols, application schemes, and development tools. This is due to the limitations of current technology, and it leads to a situation of "splendid isolation" where smart cards are not regarded as an integral part of the overall IT architecture. In this paper, we describe recent research towards "next generation" smart cards. It combines an advanced programming language (Java), novel hardware architectures that provide the required "MIPS budget" (32-bit RISC), and an implementation of key Internet protocols (IP, HTTP) on smart cards. As a result, we show how smart cards can be seamlessly integrated within a distributed computing environment.
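To illustrate what such seamless integration could look like from the host side, here is a purely hypothetical sketch of an application that talks to a card-hosted HTTP stack over IP. The card address and the resource path are invented for illustration and do not come from the paper.

    import http.client

    CARD_HOST = "192.168.0.42"   # hypothetical IP address assigned to the smart card
    CARD_PORT = 80               # hypothetical port of the card's on-chip HTTP stack

    def read_card_resource(path="/holder"):
        """Fetch a resource served by the card's HTTP stack, like any other web peer."""
        conn = http.client.HTTPConnection(CARD_HOST, CARD_PORT, timeout=5)
        try:
            conn.request("GET", path)
            response = conn.getresponse()
            return response.read().decode("utf-8", errors="replace")
        finally:
            conn.close()

    if __name__ == "__main__":
        print(read_card_resource())

The point of the sketch is that nothing card-specific appears on the host: standard IP and HTTP are enough to treat the card as a node of the distributed environment.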


A distributed computing system provides a platform of multiple computing nodes linked in a specified manner. A review of the literature of the last few decades shows that most distributed computing researchers have focused on maintaining load balancing between processors, effective task scheduling, and optimizing the parameters affecting execution cost and throughput. In this context, self-reconfiguration of the CPU is an additional parameter that can augment the efficiency of a distributed computing system. This paper presents a new adaptive scheduling algorithm that combines effective task allocation to the processors involved in computing with self-reconfiguration of those processors according to the needs of the computation. The proposed method optimizes execution cost and service rate and maximizes throughput across the processors of a heterogeneous distributed computing system, resulting in a considerable enhancement of the performance of the distributed computing environment.
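The following sketch illustrates the general idea of combining task allocation with self-reconfiguration. The node parameters, the overload threshold, and the speed boost are assumptions made for illustration; this is not the algorithm proposed in the paper.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        speed: float              # relative service rate of the processor
        queued_work: float = 0.0  # pending work units
        boosted: bool = False

        def finish_time(self, work):
            """Estimated completion time if this node also receives 'work'."""
            return (self.queued_work + work) / self.speed

    def schedule(tasks, nodes, overload=30.0):
        """Allocate each task to the node with the earliest estimated finish time;
        reconfigure (boost) a node once its backlog crosses the overload threshold."""
        placement = []
        for work in tasks:
            node = min(nodes, key=lambda n: n.finish_time(work))
            node.queued_work += work
            placement.append((work, node.name))
            # Self-reconfiguration step: raise the service rate of an overloaded node.
            if not node.boosted and node.queued_work / node.speed > overload:
                node.speed *= 1.5
                node.boosted = True
        return placement

    nodes = [Node("cpu-A", speed=1.0), Node("cpu-B", speed=2.0), Node("cpu-C", speed=0.5)]
    tasks = [10.0, 25.0, 5.0, 40.0, 15.0, 30.0]
    for work, name in schedule(tasks, nodes):
        print(f"task({work:>5.1f}) -> {name}")

In this toy run the fastest node accumulates most of the work until it crosses the threshold and is boosted, after which later tasks are redistributed across the other processors.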


Author(s):  
R. Arokia Paul Rajan

Service request scheduling has a major impact on the performance of the service processing design in a large-scale distributed computing environment such as a cloud system. It is desirable to have a service request scheduling principle that evenly distributes the workload among the servers according to their capacities. The capacities of the servers are termed high or low only relative to one another, so the server capacity needs to be quantified to overcome this subjective assessment. Subsequently, a method to split and distribute the service requests based on this quantified server capacity is also needed. The novelty of this paper lies in addressing these requirements by devising a service request scheduling principle for a heterogeneous distributed system using appropriate statistical methods, namely Conjoint analysis and Z-score. Suitable experiments were conducted, and the results show a considerable improvement in the performance of the designed service request scheduling principle compared to several other existing principles. Areas of further improvement have also been identified and presented.
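A minimal sketch of such a capacity quantification and proportional request splitting is given below. The server attributes, the conjoint-style importance weights, and the share computation are illustrative assumptions rather than the exact principle devised in the paper.

    from statistics import mean, pstdev

    servers = {
        "s1": {"cpu_ghz": 2.4, "ram_gb": 16, "net_gbps": 1},
        "s2": {"cpu_ghz": 3.2, "ram_gb": 64, "net_gbps": 10},
        "s3": {"cpu_ghz": 2.8, "ram_gb": 32, "net_gbps": 1},
    }
    # Relative importance of each attribute, e.g. as a conjoint analysis would estimate.
    weights = {"cpu_ghz": 0.5, "ram_gb": 0.3, "net_gbps": 0.2}

    def z_scores(attr):
        """Z-score of one attribute across the server fleet."""
        values = [s[attr] for s in servers.values()]
        mu, sigma = mean(values), pstdev(values)
        return {name: (s[attr] - mu) / sigma if sigma else 0.0
                for name, s in servers.items()}

    # A weighted sum of z-scores gives a single quantified capacity per server.
    capacity = {name: 0.0 for name in servers}
    for attr, weight in weights.items():
        for name, z in z_scores(attr).items():
            capacity[name] += weight * z

    # Shift the scores to be positive, then split a batch of requests proportionally.
    offset = 1.0 - min(capacity.values())
    shares = {name: c + offset for name, c in capacity.items()}
    total = sum(shares.values())
    requests = 1000
    for name, share in shares.items():
        print(f"{name}: capacity={capacity[name]:+.2f}, requests={round(requests * share / total)}")

The positive shift is only one possible way to turn signed z-score capacities into positive splitting weights; the paper's own splitting rule may differ.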

