grid site
Recently Published Documents


TOTAL DOCUMENTS: 27 (FIVE YEARS: 4)

H-INDEX: 4 (FIVE YEARS: 0)

2020 ◽ Vol 13 (5) ◽ pp. 999-1007
Author(s):  
Karthikeyan Periyasami ◽  
Arul Xavier Viswanathan Mariammal ◽  
Iwin Thanakumar Joseph ◽  
Velliangiri Sarveshwaran

Background: Medical image analysis applications have complex resource requirements, and scheduling them onto grid resources is a complex task. A new model is needed to improve the breast cancer screening process. The proposed novel meta-scheduler algorithm allocates image analysis applications to local schedulers; each local scheduler submits the job to a grid node, which analyses the medical image and sends the result back to the meta-scheduler. Meta-schedulers are distinct from local schedulers, although both aim at resource allocation and management.

Objective: The main objective of the CDAM meta-scheduler is to maximize the number of jobs accepted.

Methods: Initially, users send jobs with deadlines to the global grid resource broker. At fixed intervals, resource providers send the broker information about the available resources connected to the network, such as the valuation of each resource and the number of free resources. CDAM requests the available resource details and the user jobs from the broker, and after receiving this information it matches jobs with resources. CDAM sends each job to a local scheduler, which schedules it onto the local grid site; the local grid site executes the job and sends the result back to CDAM. On successful completion, the job status and resource status are updated in the auction history database. CDAM collects the results from all local grid sites and returns them to the grid users.

Results: CDAM was simulated using a grid simulator. As the number of jobs increases, the percentage of jobs accepted decreases because of the scarcity of resources. CDAM gives 2% to 5% better results than the Fairshare meta-scheduling algorithm. In CDAM, the bid density value is generated from the user requirement and the user history, while the ask value is generated from the resource details. The user with the tightest deadline generates the highest bid value, and the grid resource with the fastest processor generates the lowest ask value. The highest bid is assigned to the lowest ask, meaning that the user with the tightest deadline is assigned to the grid resource with the fastest processor. The deadline represents the time by which the user requires the result; the user defines it, and CDAM tries to find the fastest resource available to meet it. If the scheduler detects that a task cannot be completed before the deadline, it abandons the current resource and tries the next fastest one, repeating until the application can complete within the deadline. CDAM gives 25% better results than the GridWay meta-scheduler because GridWay allocates jobs to resources on a first-come, first-served basis.

Conclusion: The proposed CDAM model was validated through simulation and evaluated on the number of jobs accepted. The experimental results clearly show that the CDAM model accepts more jobs than a conventional meta-scheduler. We conclude that CDAM is a highly effective meta-scheduler and can be used in the extraordinary situation where jobs have combinatorial requirements.
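The Results paragraph describes an auction-style matching rule: the highest bid (tightest deadline) is paired with the lowest ask (fastest processor). Below is a minimal sketch of that pairing, assuming a simple inverse relation between deadline and bid and between processor speed and ask; all class names, fields, and formulas are illustrative assumptions rather than the paper's actual definitions.

```python
# Sketch of the bid/ask matching rule from the abstract: the highest
# bid (tightest deadline) is paired with the lowest ask (fastest
# processor). Scoring formulas here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float   # seconds until the user requires the result

@dataclass
class Resource:
    name: str
    speed: float      # relative processor speed

def bid(job: Job) -> float:
    # Assumption: a tighter deadline yields a higher bid.
    return 1.0 / job.deadline

def ask(res: Resource) -> float:
    # Assumption: a faster processor yields a lower ask.
    return 1.0 / res.speed

def match(jobs, resources):
    """Pair the highest bid with the lowest ask, then the next, etc."""
    by_bid = sorted(jobs, key=bid, reverse=True)   # descending bids
    by_ask = sorted(resources, key=ask)            # ascending asks
    return list(zip(by_bid, by_ask))

if __name__ == "__main__":
    jobs = [Job("mammogram-1", 60.0), Job("mammogram-2", 600.0)]
    nodes = [Resource("slow-node", 1.0), Resource("fast-node", 4.0)]
    for job, node in match(jobs, nodes):
        print(f"{job.name} -> {node.name}")
```

Sorting jobs by descending bid and resources by ascending ask before pairing them is what assigns the tightest deadline to the fastest processor, as the abstract describes.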


2020 ◽ Vol 245 ◽ pp. 09010
Author(s):  
Michal Svatoš ◽  
Jiří Chudoba ◽  
Petr Vokáč

The distributed computing system of the ATLAS experiment at the LHC is allowed to opportunistically use resources at the Czech national HPC center IT4Innovations in Ostrava. Jobs are submitted via an ARC Compute Element (ARC-CE) installed at the grid site in Prague. Scripts and input files are shared between the ARC-CE and a shared file system located at the HPC centre via sshfs. This basic submission system has been in operation since the end of 2017. Several improvements were made to increase the amount of resources that ATLAS can use. The most significant change was migrating the submission system to support pre-emptable jobs, adapting to the HPC management's decision to start pre-empting opportunistic jobs. Another improvement concerned the sshfs connection, which appeared to be a limiting factor of the system: the submission system now consists of several ARC-CE machines, and various sshfs parameters were tested in an attempt to increase throughput. As a result of these improvements, the utilisation of the Czech national HPC center by ATLAS distributed computing increased.
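The sshfs tuning mentioned above might look like the following sketch, which mounts the shared file system with throughput-oriented options from a small wrapper. The host, paths, and option values are assumptions for illustration; the paper does not state which parameters were finally adopted.

```python
# Illustrative sketch (not the site's actual configuration) of mounting
# a remote HPC file system over sshfs with throughput-oriented options.
# Hosts, paths, and option values are assumptions.
import subprocess

def mount_sshfs(remote: str, mountpoint: str) -> None:
    subprocess.run(
        [
            "sshfs", remote, mountpoint,
            "-o", "reconnect",               # survive dropped connections
            "-o", "ServerAliveInterval=15",  # keep the ssh session alive
            "-o", "Ciphers=aes128-ctr",      # cheaper cipher, higher throughput
            "-o", "compression=no",          # CPU-bound compression can limit speed
        ],
        check=True,  # raise if the mount fails
    )

# Hypothetical example: share the ARC-CE session directory with the HPC centre.
mount_sshfs("atlas@login.example-hpc.cz:/scratch/arc", "/var/spool/arc/grid")
```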


2019 ◽ Vol 214 ◽ pp. 03005
Author(s):  
Michal Svatos ◽  
Jiri Chudoba ◽  
Petr Vokac

The Czech national HPC center IT4Innovations, located in Ostrava, provides two HPC systems, Anselm and Salomon. The Salomon HPC has been among the hundred most powerful supercomputers in the world since its commissioning in 2015. Both clusters were tested for use by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim is to use free resources waiting for large parallel jobs of other users. Multiple strategies for ATLAS job execution were tested on the Salomon and Anselm HPCs. The solution described herein is based on the ATLAS experience with other HPC sites. An ARC Compute Element (ARC-CE) installed at the grid site in Prague is used for job submission to Salomon. The ATLAS production system submits jobs to the ARC-CE via the ARC Control Tower (aCT). The ARC-CE processes job requirements from aCT and creates a script for the batch system, which is then executed via ssh. Sshfs is used to share scripts and input files between the site and the HPC cluster. The software used to run jobs is rsynced from the site's CVMFS installation to the HPC's scratch space every day to ensure availability of recent software. Using this setup, opportunistic capacity of the Salomon HPC was exploited.
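The daily software synchronisation described above can be pictured as a small script run from cron; the hosts and paths below are hypothetical stand-ins, since the paper does not publish its exact configuration.

```python
# Rough sketch of the daily software sync described above: copying an
# ATLAS software tree from the site's CVMFS mount to the HPC scratch
# space over ssh with rsync. All paths and hosts are assumptions.
import subprocess

CVMFS_SRC = "/cvmfs/atlas.cern.ch/repo/sw/"               # site-local CVMFS mount
HPC_DEST = "atlas@salomon.example.cz:/scratch/atlas/sw/"  # hypothetical scratch path

def sync_software() -> None:
    subprocess.run(
        ["rsync", "-a", "--delete", CVMFS_SRC, HPC_DEST],
        check=True,  # fail loudly so a stale copy is noticed
    )

if __name__ == "__main__":
    sync_software()  # typically triggered once a day, e.g. from cron
```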


2017 ◽ Vol 898 ◽ pp. 092014
Author(s):  
Gaston Lyons ◽  
Rokas Maciulaitis ◽  
Giuseppe Bagliesi ◽  
Stephan Lammel ◽  
Andrea Sciabà

2014 ◽ Vol 513 (6) ◽ pp. 062048
Author(s):  
C J Walker ◽  
D P Traynor ◽  
D T Rand ◽  
T S Froy ◽  
S L Lloyd
Keyword(s):  
Tier 2

2014 ◽ Vol 513 (3) ◽ pp. 032030
Author(s):  
J Elmsheuser ◽  
F Hönig ◽  
F Legger ◽  
R Medrano LLamas ◽  
F G Sciacca ◽  
...  
