Advances and enhancements in the FabrIc for Frontier Experiments project at Fermilab

2019 ◽  
Vol 214 ◽  
pp. 03059
Author(s):  
Kenneth Herner ◽  
Andres Felipe Alba Hernandez ◽  
Shreyas Bhat ◽  
Dennis Box ◽  
Joseph Boyd ◽  
...  

The FabrIc for Frontier Experiments (FIFE) project within the Fermilab Scientific Computing Division is charged with integrating offline computing components into a common computing stack for the non-LHC Fermilab experiments, supporting experiment offline computing, and consulting on new, novel workflows. We will discuss the general FIFE onboarding strategy, the upgrades and enhancements in the FIFE toolset, and plans for the coming year. These enhancements include: expansion of opportunistic computing resources (including GPU and high-performance computing resources) available to experiments; assistance with commissioning computing resources at European sites for individual experiments; StashCache repositories for experiments; enhanced job monitoring tools; and a custom workflow management service. Additionally we have completed the first phase of a Federated Identity Management system to make it easier for FIFE users to access Fermilab computing resources.
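The abstract mentions expanding the opportunistic computing resources (including GPU and HPC resources) available to experiment jobs. As an illustration only, not the FIFE toolset's actual logic, the core matching idea can be sketched as a greedy placement of jobs onto whichever available site satisfies their requirements; all names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    has_gpu: bool = False
    free_slots: int = 0

@dataclass
class Job:
    name: str
    needs_gpu: bool = False

def match_jobs(jobs, sites):
    """Greedily assign each job to the first site that satisfies its
    requirements and still has free slots (opportunistic use of capacity)."""
    placement = {}
    for job in jobs:
        for site in sites:
            if site.free_slots > 0 and (not job.needs_gpu or site.has_gpu):
                placement[job.name] = site.name
                site.free_slots -= 1
                break
        else:
            placement[job.name] = None  # no matching resource currently available
    return placement

sites = [Site("cpu-pool", free_slots=1), Site("gpu-pool", has_gpu=True, free_slots=1)]
jobs = [Job("sim-cpu"), Job("train-ml", needs_gpu=True), Job("sim-cpu-2")]
print(match_jobs(jobs, sites))
# {'sim-cpu': 'cpu-pool', 'train-ml': 'gpu-pool', 'sim-cpu-2': None}
```

Real systems layer priorities, fair-share accounting, and preemption on top of this basic requirement matching.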

2020 ◽  
Author(s):  
Maria Luiza Mondelli ◽  
Marcelo Monteiro Galheigo ◽  
Vívian Medeiros ◽  
Bruno F. Bastos ◽  
Antônio Tadeu Azevedo Gomes ◽  
...  

Bioinformatics experiments are rapidly and constantly evolving due to improvements in sequencing technologies. These experiments usually demand high-performance computation and produce huge quantities of data. They also require different programs to be executed in a certain order, which allows the experiments to be modeled as workflows. However, users do not always have the infrastructure needed to perform these experiments. Our contribution is the integration of scientific workflow management systems and grid-enabled scientific gateways, providing the user with a transparent way to run these workflows on geographically distributed computing resources. The availability of the workflow through the gateway improves the usability of these experiments.
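The notion of "different programs executed in a certain order" is exactly a dependency graph. As a minimal sketch (the step names are illustrative, not taken from the paper), a workflow can be expressed as step-to-prerequisite mappings and scheduled with a topological sort:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# A toy sequencing workflow: each step maps to the set of steps it depends on.
workflow = {
    "quality_control": set(),
    "alignment": {"quality_control"},
    "variant_calling": {"alignment"},
    "annotation": {"variant_calling"},
}

# static_order() yields the steps so that every prerequisite comes first;
# a workflow engine would launch each program's executable at its turn.
order = list(TopologicalSorter(workflow).static_order())
print(order)
# ['quality_control', 'alignment', 'variant_calling', 'annotation']
```

Steps with no ordering constraint between them could also be dispatched in parallel to the distributed resources the gateway exposes.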


2015 ◽  
pp. 1660-1685
Author(s):  
Vladimir Vujin ◽  
Konstantin Simić ◽  
Borko Kovačević

Existing approaches for management of digital identities within e-learning ecosystems imply defining different access parameters for each service or application. However, this can reduce system security and lead to insufficient usage of the services by end-users. This chapter investigates various approaches for identity management, particularly in a cloud computing environment. Several complex issues are discussed, such as cross-domain authentication, provisioning, multi-tenancy, delegation, and security. The main goal of the research is to provide highly effective, scalable identity management for end-users in an educational private cloud. A federated identity concept was introduced as a solution that enables organizations to implement secure identity management and to share information on the identities of users in the cloud environment. As a proof of concept, the identity management system was implemented in the e-learning system of the Faculty of Organizational Sciences, University of Belgrade.
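At the heart of federated identity is that a service in one domain accepts a signed assertion from an identity provider in another domain instead of keeping its own credential store. A minimal sketch of that trust relationship, using a shared HMAC key purely for illustration (production federations such as SAML or OpenID Connect use public-key signatures and richer assertions):

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"idp-sp-shared-secret"  # illustrative; real deployments use PKI

def issue_assertion(user, attributes):
    """Identity provider: package and sign the user's identity attributes."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"user": user, **attributes}).encode()
    ).decode()
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def verify_assertion(payload, sig):
    """Service provider in another domain: accept the identity only if the
    signature verifies, with no local password database of its own."""
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = issue_assertion("student42", {"role": "learner"})
print(verify_assertion(payload, sig))         # True
print(verify_assertion(payload, "tampered"))  # False
```

This is the property that lets each e-learning service drop its per-service access parameters: authentication happens once, at the identity provider.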


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Álvaro Brandón ◽  
María S. Pérez ◽  
Jesus Montes ◽  
Alberto Sanchez

Monitoring has always been a key element in ensuring the performance of complex distributed systems, being a first step to control quality of service, detect anomalies, or make decisions about resource allocation and job scheduling, to name a few. Edge computing is a new type of distributed computing, where data processing is performed by a large number of heterogeneous devices close to the place where the data is generated. Some of the differences between this approach and more traditional architectures, like cloud or high-performance computing, are that these devices have low computing power, have unstable connectivity, and are geo-distributed or even mobile. These characteristics establish new requirements for monitoring tools, such as customized monitoring workflows or choosing different back-ends for the metrics, depending on the device hosting them. In this paper, we present a study of the requirements that an edge monitoring tool should meet, based on motivating scenarios drawn from the literature. Additionally, we implement these requirements in a monitoring tool named FMonE. This framework allows deploying monitoring workflows that conform to the specific demands of edge computing systems. We evaluate FMonE by simulating a fog environment in the Grid’5000 testbed and we demonstrate that it fulfills the requirements we previously enumerated.
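The requirement of "choosing different back-ends for the metrics, depending on the device hosting them" can be illustrated with a small dispatch rule. This is a hypothetical sketch, not FMonE's actual policy: constrained or poorly connected devices forward their metrics, while capable ones store a local time series:

```python
def pick_backend(device):
    """Route metrics per device: weak or unreliably connected devices
    forward to a gateway; capable devices keep a local time-series store."""
    if device["ram_mb"] < 256 or not device["stable_link"]:
        return "forward-to-gateway"
    return "local-timeseries-db"

devices = [
    {"name": "sensor-a",  "ram_mb": 128,  "stable_link": True},
    {"name": "gateway-1", "ram_mb": 4096, "stable_link": True},
    {"name": "vehicle-7", "ram_mb": 2048, "stable_link": False},
]
plan = {d["name"]: pick_backend(d) for d in devices}
print(plan)
# {'sensor-a': 'forward-to-gateway', 'gateway-1': 'local-timeseries-db',
#  'vehicle-7': 'forward-to-gateway'}
```

A real monitoring workflow would also account for mobility and battery, but the principle is the same: the pipeline is customized per device rather than fixed cluster-wide.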


2021 ◽  
Vol 13 (03) ◽  
pp. 43-59
Author(s):  
Maha Aldosary ◽  
Norah Alqahtani

An efficient identity management system has become one of the fundamental requirements for ensuring safe, secure, and transparent use of identifiable information and attributes. Federated Identity Management (FIdM) allows users to distribute their identity information across security domains, which increases the portability of their digital identities, and it is considered a promising approach to facilitate secure resource sharing among collaborating participants in heterogeneous IT environments. However, it also raises new architectural challenges and significant security and privacy issues that need to be mitigated. In this paper, we provide a comparison between FIdM architectures, present the limitations and risks of FIdM systems, and discuss the results and proposed solutions.


2018 ◽  
Author(s):  
Maria Luiza Mondelli ◽  
Marcelo Monteiro Galheigo ◽  
Vívian Medeiros ◽  
Bruno F. Bastos ◽  
Antônio Tadeu Azevedo Gomes ◽  
...  

Bioinformatics experiments are rapidly and constantly evolving due to improvements in sequencing technologies. These experiments usually demand high-performance computation and produce huge quantities of data. They also require different programs to be executed in a certain order, which allows the experiments to be modeled as workflows. However, users do not always have the infrastructure needed to perform these experiments. Our contribution is the integration of scientific workflow management systems and grid-enabled scientific gateways, providing the user with a transparent way to run these workflows on geographically distributed computing resources. The availability of the workflow through the gateway improves the usability of these experiments.  


2020 ◽  
Vol 245 ◽  
pp. 07023
Author(s):  
Kenyi Hurtado Anampa ◽  
Cody Kankel ◽  
Mike Hildreth ◽  
Paul Brenner ◽  
Irena Johnson ◽  
...  

High Performance Computing (HPC) facilities provide vast computational power and storage, but generally work on fixed environments designed to address the most common software needs locally, making it challenging for users to bring their own software. To overcome this issue, most HPC facilities have added support for HPC-friendly container technologies such as Shifter, Singularity, or Charliecloud. These container technologies are all compatible with the more popular Docker containers; however, the implementation and use of containers differs for each HPC-friendly container technology. These usage differences can make it difficult for an end user to easily submit to and utilize different HPC sites without adjusting their workflows and software. This issue is exacerbated when attempting to utilize workflow management software across sites with differing container technologies. The SCAILFIN project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) that spans multiple sites. The project has extended the CERN-based REANA framework, a platform designed to enable analysis reusability and reproducibility while supporting different workflow engine languages, in order to support submission to different HPC facilities. The work presented here focuses on the development of an abstraction layer that supports different container technologies and different transfer protocols for files and directories between the HPC facility and the REANA cluster edge service, from the user’s workflow application.
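The essence of such an abstraction layer is translating one user-facing request ("run this command in this Docker image") into the invocation each runtime expects. The sketch below is illustrative rather than SCAILFIN's implementation; the command shapes follow each tool's general CLI (`singularity exec docker://…`, `shifter --image=docker:…`, `ch-run` on an unpacked image directory), but exact flags and image paths vary by site:

```python
def container_command(runtime, image, cmd):
    """Map a (runtime, Docker image, command) request onto the concrete
    invocation for that HPC container technology."""
    if runtime == "singularity":
        # Singularity can pull and run Docker images directly.
        return ["singularity", "exec", f"docker://{image}"] + cmd
    if runtime == "shifter":
        # Shifter references Docker images through its image gateway.
        return ["shifter", f"--image=docker:{image}"] + cmd
    if runtime == "charliecloud":
        # Charliecloud runs from an unpacked image directory (path assumed).
        return ["ch-run", f"/images/{image}", "--"] + cmd
    raise ValueError(f"unsupported runtime: {runtime}")

print(container_command("singularity", "python:3.11", ["python", "-V"]))
print(container_command("shifter", "python:3.11", ["python", "-V"]))
```

With this indirection in place, the workflow description stays runtime-agnostic and only the edge service needs to know which technology a given facility provides.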

