Controlling the Injection of Best-Effort Tasks to Harvest Idle Computing Grid Resources

Author(s):  
Quentin Guilloteau ◽  
Olivier Richard ◽  
Bogdan Robu ◽  
Eric Rutten


2019 ◽  
Vol 214 ◽  
pp. 03025 ◽  
Author(s):  
Fernando Barreiro Megino ◽  
Alessandro Di Girolamo ◽  
Kaushik De ◽  
Tadashi Maeno ◽  
Rodney Walker

PanDA (Production and Distributed Analysis) is the workload management system for ATLAS across the Worldwide LHC Computing Grid. While analysis tasks are submitted to PanDA by over a thousand users following personal schedules (e.g. PhD or conference deadlines), production campaigns are scheduled by a central Physics Coordination group based on the organization’s calendar. The Physics Coordination group needs to allocate the amount of Grid resources dedicated to each activity, in order to manage the sharing of CPU resources among various parallel campaigns and to make sure that results can be achieved in time for important deadlines. While dynamic and static shares on batch systems have been around for a long time, we are trying to move away from local resource partitioning and to manage shares at a global level in the PanDA system. The global solution is not straightforward, given the different requirements of the activities (number of cores, memory, I/O and CPU intensity), the heterogeneity of Grid resources (site/HW capabilities, batch configuration and queue setup) and constraints on data locality. We have therefore started the Global Shares project, which follows a requirements-driven multi-step execution plan: defining nestable shares, implementing share-aware job dispatch, aligning internal processes with global shares, and finally implementing a pilot stream control over the batch slots that preserves late binding. This contribution explains the development work and architectural changes in PanDA to implement Global Shares, and describes how the Global Shares project has enabled the central control of resources and significantly reduced manual operations.
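
The abstract mentions nestable shares and share-aware job dispatch without giving implementation details. The following is a minimal sketch of how hierarchical shares and a "most under-served share first" dispatch decision could look; all class, field, and share names are illustrative assumptions, not PanDA code.

```python
# Sketch only: a tree of nestable shares with target fractions, and a
# dispatch choice that descends toward the most under-served leaf share.
from dataclasses import dataclass, field

@dataclass
class Share:
    name: str
    target: float                       # fraction of the parent's resources
    used: float = 0.0                   # resources consumed so far (arbitrary units)
    children: list["Share"] = field(default_factory=list)

    def total_used(self) -> float:
        """Usage of this share plus everything nested under it."""
        return self.used + sum(c.total_used() for c in self.children)

    def most_starved_leaf(self) -> "Share":
        """Descend the tree, at each level picking the child whose fraction
        of the siblings' combined usage is furthest below its target."""
        if not self.children:
            return self
        total = sum(c.total_used() for c in self.children) or 1.0
        starved = max(self.children,
                      key=lambda c: c.target - c.total_used() / total)
        return starved.most_starved_leaf()

# Hypothetical example: production gets 70% (split into MC and reprocessing),
# analysis gets 30%; the numbers are invented for illustration.
root = Share("ATLAS", 1.0, children=[
    Share("production", 0.7, children=[
        Share("MC simulation", 0.6, used=500.0),
        Share("reprocessing", 0.4, used=100.0),
    ]),
    Share("analysis", 0.3, used=300.0),
])

print("dispatch next job from share:", root.most_starved_leaf().name)
```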


2012 ◽  
Vol 2012 ◽  
pp. 1-19
Author(s):  
Andrea Bosin

In recent years, the availability and usage models of networked computing resources within reach of e-Science have been changing rapidly, with the coexistence of many disparate paradigms: high-performance computing, grid, and, more recently, cloud. Unfortunately, none of these paradigms is recognized as the ultimate solution, and a convergence of them all should be pursued. At the same time, recent works have proposed a number of models and tools to address the growing needs and expectations in the field of e-Science. In particular, they have shown the advantages and the feasibility of modeling e-Science environments and infrastructures according to the service-oriented architecture. In this paper, we suggest a model to promote the convergence and integration of the different computing paradigms and infrastructures for the dynamic, on-demand provisioning of resources from multiple providers as a cohesive aggregate, leveraging the service-oriented architecture. In addition, we propose a design aimed at supporting a flexible, modular, workflow-based computing model for e-Science. The model is supplemented by a working prototype implementation together with a case study in the application domain of bioinformatics, which is used to validate the presented approach and to carry out some performance and scalability measurements.
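
As an illustration of the service-oriented provisioning idea described above, the sketch below hides heterogeneous providers (grid, cloud) behind a common interface that a workflow can call on demand. The interface, the provider classes, and the naive selection policy are assumptions for illustration, not the paper's prototype.

```python
# Sketch only: a service-oriented façade over heterogeneous resource providers.
from abc import ABC, abstractmethod

class ResourceProvider(ABC):
    @abstractmethod
    def provision(self, cores: int, memory_gb: int) -> str:
        """Return a handle to an execution endpoint."""

class GridProvider(ResourceProvider):
    def provision(self, cores, memory_gb):
        return f"grid-ce://example.org/queue?cores={cores}"

class CloudProvider(ResourceProvider):
    def provision(self, cores, memory_gb):
        return f"cloud-vm://example.org/flavor-{cores}c-{memory_gb}g"

def run_workflow(steps, providers):
    """Dispatch each workflow step to a provider; here, simply the first one."""
    for step, (cores, mem) in steps.items():
        endpoint = providers[0].provision(cores, mem)   # naive selection policy
        print(f"step {step!r} -> {endpoint}")

# Toy bioinformatics-flavored workflow with invented resource requirements.
run_workflow({"align": (8, 16), "assemble": (32, 64)},
             [GridProvider(), CloudProvider()])
```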


2013 ◽  
Vol 23 (3) ◽  
pp. 223-242 ◽  
Author(s):  
Jarmila Škrinárová ◽  
Ladislav Huraj ◽  
Vladimír Siládi

2019 ◽  
Vol 8 (4) ◽  
pp. 12861-12866

The Grid enables the integration of a large number of geographically distributed, heterogeneous resources owned by different organizations for resource sharing and collaboration in solving advanced science and engineering applications. In a distributed, heterogeneous grid environment, scheduling independent tasks on the grid resources is complicated and is an NP-complete problem. Scheduling is the process of mapping tasks to the available resources. In order to utilize grid resources efficiently, this paper presents a heuristic technique for scheduling/mapping tasks to resources. The proposed algorithm (WSSLVA) achieves a reduced makespan as well as better resource utilization. The experimental results indicate that WSSLVA outperforms the Min-min heuristic scheduling algorithm in terms of makespan and resource utilization.
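
The abstract does not describe the internals of WSSLVA, so the sketch below implements only the Min-min baseline it is compared against, together with the makespan and average-utilization metrics; the expected-completion-time matrix is a toy assumption.

```python
# Sketch of the Min-min heuristic baseline; ect[i][j] is the expected
# execution time of task i on resource j.

def min_min(ect):
    """Repeatedly pick the task whose best completion time is smallest,
    assign it to that resource, and update the resource's ready time."""
    n_tasks, n_res = len(ect), len(ect[0])
    ready = [0.0] * n_res                 # when each resource becomes free
    unmapped = set(range(n_tasks))
    mapping = {}
    while unmapped:
        # for each unmapped task, find the resource giving its best completion time
        best = {t: min(range(n_res), key=lambda r: ready[r] + ect[t][r])
                for t in unmapped}
        # pick the task with the overall minimum completion time
        task = min(unmapped, key=lambda t: ready[best[t]] + ect[t][best[t]])
        res = best[task]
        ready[res] += ect[task][res]
        mapping[task] = res
        unmapped.remove(task)
    makespan = max(ready)
    utilization = sum(ready) / (n_res * makespan)   # average resource utilization
    return mapping, makespan, utilization

# Toy example: 4 tasks on 2 resources (invented numbers).
ect = [[4, 6], [3, 7], [8, 2], [5, 5]]
print(min_min(ect))
```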


2020 ◽  
Vol 245 ◽  
pp. 03011
Author(s):  
Maiken Pedersen ◽  
Balazs Konya ◽  
David Cameron ◽  
Mattias Ellert ◽  
Aleksandr Konstantinov ◽  
...  

The Worldwide LHC Computing Grid (WLCG) today comprises a range of different types of resources, such as cloud centers, large and small HPC centers, and volunteer computing, as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the NorduGrid ARC-CE, which is non-intrusive and lightweight, originally developed to cater for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native NorduGrid mode which, contrary to the pilot mode, leaves job data transfers up to ARC. ARC's data transfer capabilities, together with the ARC cache, are its most important features. In this article we describe the data staging and cache functionality of the ARC-CE set up as an edge service to an HPC or cloud resource, and show the gain in efficiency this model provides compared to a traditional pilot model, especially for sites with remote storage.
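
A rough sketch of the cache-before-transfer idea described above: an input file is fetched from remote storage only on a cache miss and is otherwise linked into the job's session directory. The paths, hashing scheme, and hard-linking are illustrative assumptions, not ARC's actual on-disk layout or implementation.

```python
# Sketch only: stage a job input via a local cache instead of re-downloading it.
import hashlib, os, shutil, urllib.request

CACHE_DIR = "./arc-cache-sketch"          # hypothetical cache location

def stage_input(url: str, session_dir: str) -> str:
    os.makedirs(CACHE_DIR, exist_ok=True)
    os.makedirs(session_dir, exist_ok=True)
    cached = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())
    if not os.path.exists(cached):
        # cache miss: the edge service fetches the file once from remote storage
        with urllib.request.urlopen(url) as src, open(cached, "wb") as dst:
            shutil.copyfileobj(src, dst)
    # cache hit (or freshly filled cache): hard-link into the job directory
    dest = os.path.join(session_dir, os.path.basename(url))
    if not os.path.exists(dest):
        os.link(cached, dest)
    return dest
```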


Author(s):  
Priya Mathur ◽  
Amit Kumar Gupta ◽  
Prateek Vashishtha

Cloud computing is an emerging technology by which anyone can access applications as utilities over the internet. It combines characteristics of technologies such as distributed computing, grid computing, and ubiquitous computing, and allows everyone to create, configure, and customize business applications online. Cryptography, the art and science of introducing secrecy into information security in order to protect messages, is the technique used to convert plain text into cipher text by means of various encryption schemes. In this paper we review several recent cryptographic algorithms used to enhance the security of data on cloud servers. We compare Short Range Natural Number Modified RSA (SRNN), the Elliptic Curve Cryptography algorithm, a client-side encryption technique, and a hybrid encryption technique for securing data in the cloud.
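
As a concrete illustration of the hybrid approach mentioned above (a symmetric data key wrapped with an asymmetric key), here is a short sketch using the third-party `cryptography` package; it shows the generic pattern, not the specific scheme evaluated in any of the reviewed papers.

```python
# Sketch only: AES-GCM encrypts the data, RSA-OAEP wraps the data key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# asymmetric key pair kept by the data owner, not the cloud provider
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key per object
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(
        data_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext            # store all three in the cloud

def hybrid_decrypt(wrapped_key, nonce, ciphertext):
    data_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return AESGCM(data_key).decrypt(nonce, ciphertext, None)

wrapped, nonce, ct = hybrid_encrypt(b"sensitive record")
assert hybrid_decrypt(wrapped, nonce, ct) == b"sensitive record"
```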

