Workload Modeling and Workload Management: Recent Theoretical Developments

Author(s):  
Raja Parasuraman ◽  
Ericka Rovira

1997 ◽
Vol 36 (4II) ◽  
pp. 855-862
Author(s):  
Tayyeb Shabir

Well-functioning financial markets can have a positive effect on economic growth by facilitating savings and a more efficient allocation of capital. This paper characterises some of the recent theoretical developments that analyse the relationship between financial intermediation and economic growth, and presents empirical estimates based on a model of the linkage between financially intermediated investment and growth for two separate groups of countries, developing and advanced. Empirical estimates for both groups suggest that financial intermediation, through the efficiency of investment, leads to a higher rate of per capita growth. The relevant coefficient estimates show a higher level of significance for the developing countries. Thus, financial liberalisation in the form of deregulation and the establishment and development of stock markets can be expected to lead to enhanced economic growth.
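The abstract does not give the estimating equation, but the kind of specification it describes can be illustrated with a minimal cross-country regression sketch. All variable names, coefficients, and data below are invented for illustration, not the paper's actual model:

```python
# Hypothetical sketch of a growth regression where per capita growth depends
# on investment channelled through financial intermediation. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # hypothetical country-level observations

invest = rng.uniform(0.10, 0.35, n)       # investment share of GDP
fin_depth = rng.uniform(0.20, 1.20, n)    # intermediation proxy, e.g. private credit / GDP
# Growth responds to investment *channelled through* intermediation:
growth = 0.01 + 0.08 * invest * fin_depth + rng.normal(0, 0.005, n)

X = sm.add_constant(np.column_stack([invest, invest * fin_depth]))
model = sm.OLS(growth, X).fit()
print(model.summary())  # a significant interaction term mirrors the paper's finding
```

In such a specification, a significant coefficient on the interaction term is what "financial intermediation through the efficiency of investment" would look like in the estimates.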


1990 ◽  
Author(s):  
DEPARTMENT OF THE ARMY WASHINGTON DC

2020 ◽  
Vol 961 (7) ◽  
pp. 2-7
Author(s):  
A.V. Zubov ◽  
N.N. Eliseeva

The authors describe a software suite for determining the tilt of tower-type structures from ground-based laser scanning data. The tilt of a pipe is determined from the measured point set by approximating its cross-sections with circles, which are constructed using one of the simplest search-based optimization methods (an evolutionary algorithm). Distorting data are automatically filtered out of the current section's scan by assessing the quality of the constructed models against a least-squares fit. The software was designed using Visual Basic for Applications and consists of several blocks (subprograms), each performing a specific task. The developed suite makes it possible to obtain operational data on the current state of the object with minimal user participation in the calculation process. The software suite is the result of putting into practice theoretical developments on the use of search methods for solving optimization problems in geodetic practice.
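As a rough illustration of the approach the abstract describes (not the authors' VBA implementation), the sketch below fits a circle to a synthetic cross-section scan with a simple evolutionary strategy; the population size, mutation step, and annealing factor are assumptions:

```python
# Fit a circle (xc, yc, r) to scanned section points by minimizing the sum of
# squared radial residuals with a (1 + lambda) evolution strategy.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic section scan: noisy points on a circle centred at (2.0, -1.5), r = 0.75.
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([2.0 + 0.75 * np.cos(theta),
                       -1.5 + 0.75 * np.sin(theta)])
pts += rng.normal(0, 0.01, pts.shape)

def cost(params):
    """Sum of squared radial residuals for circle (xc, yc, r)."""
    xc, yc, r = params
    d = np.hypot(pts[:, 0] - xc, pts[:, 1] - yc)
    return np.sum((d - r) ** 2)

best = np.array([pts[:, 0].mean(), pts[:, 1].mean(), 1.0])  # crude initial guess
sigma = 0.5
for gen in range(200):
    offspring = best + rng.normal(0, sigma, (30, 3))
    offspring[:, 2] = np.abs(offspring[:, 2])   # keep the radius positive
    costs = np.apply_along_axis(cost, 1, offspring)
    if costs.min() < cost(best):
        best = offspring[costs.argmin()]
    sigma *= 0.97  # anneal the mutation step

xc, yc, r = best
print(f"fitted centre ({xc:.3f}, {yc:.3f}), radius {r:.3f}")
# Fitting circles at several heights and comparing their centres yields the tilt.
```

Points whose radial residual is large relative to a least-squares fit of the same section would be the "distorting data" the suite filters out automatically.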


Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 266 ◽  
Author(s):  
Savin Treanţă

A new class of differential variational inequalities (DVIs), governed by a variational inequality and an evolution equation formulated in infinite-dimensional spaces, is investigated in this paper. More precisely, based on Browder's result, optimal control theory, the measurability of set-valued mappings, and the theory of semigroups, we establish that the solution set of the DVI is nonempty and compact. In addition, the theoretical developments are accompanied by an application to differential Nash games.
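For orientation, a DVI of the kind studied here couples an evolution equation with a parametrized variational inequality. The generic form below uses illustrative notation, not necessarily the paper's exact formulation:

```latex
% Generic differential variational inequality (notation illustrative):
% find x(t) in a Banach space X and u(t) in a closed convex set K \subseteq V with
\begin{align*}
  \dot{x}(t) &= A x(t) + f\bigl(t, x(t), u(t)\bigr), \qquad x(0) = x_0,\\
  u(t) &\in \mathrm{SOL}\bigl(K,\, F(t, x(t), \cdot)\bigr) \quad \text{for a.e. } t \in [0, T],
\end{align*}
% where \mathrm{SOL}(K, F) denotes the set of u \in K satisfying the variational
% inequality \langle F(t, x(t), u), v - u \rangle \ge 0 for all v \in K, and A
% generates a C_0-semigroup on X (this is where semigroup theory enters).
```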


2021 ◽  
Vol 11 (3) ◽  
pp. 923
Author(s):  
Guohua Li ◽  
Joon Woo ◽  
Sang Boem Lim

The complexity of high-performance computing (HPC) workflows is an important issue in the provision of HPC cloud services at most national supercomputing centers. This complexity problem is especially critical because it affects HPC resource scalability, management efficiency, and convenience of use. To solve this problem while retaining bare-metal-level performance, container-based cloud solutions have been developed. However, various problems still exist, such as the isolation between HPC and cloud environments, security issues, and workload management issues. We propose an architecture that reduces this complexity by using Docker and Singularity, the container platforms most often used in the HPC cloud field. This HPC cloud architecture integrates image management and job management, the two main elements of HPC cloud workflows. To evaluate the serviceability and performance of the proposed architecture, we developed and implemented a platform on an experimental HPC cluster. Experimental results indicated that the proposed HPC cloud architecture can reduce complexity while providing supercomputing resource scalability, high performance, user convenience, support for various HPC applications, and management efficiency.
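The paper's platform code is not shown in the abstract, but the two workflow elements it integrates can be sketched with standard tooling. The image name, paths, and Slurm directives below are assumptions for illustration, not the authors' implementation:

```python
# Glue sketch: convert a Docker image to a Singularity image (image management),
# then submit a containerized run to the cluster scheduler (job management).
import subprocess

DOCKER_IMAGE = "docker://myregistry/hpc-app:latest"  # hypothetical image
SIF_PATH = "/scratch/images/hpc-app.sif"             # hypothetical shared path

# Image management: build the Singularity image once, reuse it across jobs.
subprocess.run(["singularity", "build", SIF_PATH, DOCKER_IMAGE], check=True)

# Job management: wrap the containerized run in a Slurm batch script.
batch_script = f"""#!/bin/bash
#SBATCH --job-name=hpc-app
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
srun singularity exec {SIF_PATH} ./run_simulation
"""
with open("job.sbatch", "w") as f:
    f.write(batch_script)
subprocess.run(["sbatch", "job.sbatch"], check=True)
```

Keeping the Docker-to-Singularity conversion and the scheduler submission behind one interface is precisely the kind of integration the proposed architecture provides.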


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Christian Ariza-Porras ◽  
Valentin Kuznetsov ◽  
Federica Legger

The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users' and centrally managed production requests. To guarantee the efficient operation of the whole infrastructure, CMS monitors the performance and status of all subsystems. Moreover, we track key metrics to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources. It relies on scalable, open-source solutions tailored to satisfy the experiment's monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
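The abstract does not enumerate the exact toolchain. As one hypothetical illustration of the real-time metric reporting such an infrastructure aggregates, the sketch below exposes workload and transfer metrics with the open-source Prometheus client; the metric and site names are invented and this is not confirmed as the experiment's actual stack:

```python
# Expose subsystem metrics over HTTP for a monitoring scraper to collect.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

start_http_server(8000)  # scrape endpoint at :8000/metrics

jobs_running = Gauge("wm_jobs_running", "Jobs currently running", ["site"])
transfers_total = Counter("dm_transfers_total", "Completed data transfers", ["site"])

while True:
    # In a real subsystem these values would come from the workload and
    # data-management services rather than a random generator.
    jobs_running.labels(site="T2_EXAMPLE").set(random.randint(100, 500))
    transfers_total.labels(site="T2_EXAMPLE").inc(random.randint(0, 5))
    time.sleep(10)
```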

