WLCG Token Usage and Discovery

2021 ◽  
Vol 251 ◽  
pp. 02028
Author(s):  
Brian Bockelman ◽  
Andrea Ceccanti ◽  
Thomas Dack ◽  
Dave Dykstra ◽  
Maarten Litmaath ◽  
...  

Since 2017, the Worldwide LHC Computing Grid (WLCG) has been working towards enabling token-based authentication and authorisation throughout its entire middleware stack. Following the publication of the WLCG Common JSON Web Token (JWT) Schema v1.0 [1] in 2019, middleware developers have been able to enhance their services to consume and validate the JWT-based [2] OAuth2.0 [3] tokens and process the authorisation information they convey. Complex scenarios, involving multiple delegation steps and command-line flows, are a key challenge to be addressed in order for the system to be fully operational. This paper expands on the anticipated token-based workflows, with a particular focus on local storage of tokens and their discovery by services. The authors include a walk-through of this token flow in the Rucio-managed data-transfer scenario, including delegation to FTS and authorised access to storage elements. Next steps are presented, including the current target of submitting production jobs authorised by tokens within 2021.
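
As an illustration of the local token storage and discovery the paper discusses, the sketch below walks the usual WLCG bearer-token discovery locations in order (environment variable, token file, per-user file in the runtime directory). It is a simplified reading of the WLCG Bearer Token Discovery convention, not a normative implementation.

    import os

    def discover_bearer_token():
        # 1. Token passed directly in the environment
        token = os.environ.get("BEARER_TOKEN")
        if token:
            return token.strip()
        # 2. Explicit path to a token file
        path = os.environ.get("BEARER_TOKEN_FILE")
        if not path:
            # 3. Per-user default file, preferring the XDG runtime directory
            runtime_dir = os.environ.get("XDG_RUNTIME_DIR", "/tmp")
            path = os.path.join(runtime_dir, "bt_u%d" % os.getuid())
        with open(path) as f:
            return f.read().strip()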


2012 ◽  
pp. 1904-1928
Author(s):  
Sriram Krishnan ◽  
Luca Clementi ◽  
Zhaohui Ding ◽  
Wilfred Li

Grid systems provide mechanisms for single sign-on and uniform APIs for job submission and data transfer, in order to allow the coupling of distributed resources in a seamless manner. However, new users face a daunting barrier to entry due to the high cost of deployment and maintenance, and they are often required to learn complex concepts related to Grid infrastructures (credential management, scheduling systems, data staging, etc.). Most scientific users want to run their applications with minimal changes and get results faster, without having to know much about how the resources are used. Hence, a higher level of abstraction must be provided for the underlying infrastructure to be used effectively. For this purpose, we have developed the Opal toolkit for exposing applications on Grid resources as simple Web services. Opal provides a basic set of Application Programming Interfaces (APIs) that allows users to execute their deployed applications, query job status, and retrieve results. Opal also provides a mechanism to define command-line arguments and automatically generates user interfaces for the Web services dynamically. In addition, Opal services can be hooked up to a metascheduler such as CSF4 to leverage a distributed set of resources, and accessed via a multitude of interfaces such as Web browsers, rich desktop environments, workflow tools, and command-line clients.
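
To make the style of interaction concrete, here is a minimal hypothetical client exercising the three operations the abstract names (execute, query status, retrieve results). Opal itself exposes SOAP services, so the REST-style endpoints below are illustrative assumptions only.

    import time
    import requests  # third-party HTTP client

    BASE = "https://example.org/opal2/services/my_app"  # hypothetical service URL

    def run_job(args):
        # Launch the deployed application with command-line arguments
        job_id = requests.post(f"{BASE}/launch", json={"argList": args}).json()["jobID"]
        # Poll the job status until the service reports completion
        while requests.get(f"{BASE}/status/{job_id}").json()["state"] != "DONE":
            time.sleep(10)
        # Retrieve the URLs of the output files produced by the run
        return requests.get(f"{BASE}/outputs/{job_id}").json()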


2011 ◽  
Vol 80-81 ◽  
pp. 1237-1243
Author(s):  
Jian Cheng ◽  
Yi Min Wei ◽  
Xin Xiong ◽  
Chun Biao Gan ◽  
Shi Xi Yang

To address shortcomings of current embedded turbine supervisory instruments, such as limited functionality, weak processing power, and low efficiency, the present design employs a dual-CPU architecture built around the Intel PXA270 and Cyclone II EP2C20 processors. With an embedded Linux operating system and a Qt-based development platform, important functions such as data acquisition, data transfer based on TCP/IP protocols, display of data waveforms, local storage of data, data analysis, and elementary fault diagnosis are all achieved in a single device. After being tested on a Bently Nevada Rotor Kit 4 and applied to a 300 MW turbine at the Power System Dynamic Simulation Laboratory of North China Electric Power University, the newly designed embedded turbine supervisory instrument shows that the dual-CPU technique is effective and that the portable system overcomes the deficiencies mentioned above.
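
As a sketch of the TCP/IP data-transfer function mentioned above, the snippet below streams a block of acquired samples to a remote analysis host. The length-prefixed float32 framing is an assumption for illustration, not the instrument's actual wire format.

    import socket
    import struct

    def send_samples(host, port, samples):
        # Frame: little-endian uint32 sample count, then float32 payload
        payload = struct.pack("<%df" % len(samples), *samples)
        frame = struct.pack("<I", len(samples)) + payload
        with socket.create_connection((host, port)) as sock:
            sock.sendall(frame)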


2019 ◽  
Vol 214 ◽  
pp. 04045
Author(s):  
Brian Bockelman ◽  
Andrew Hanushevsky ◽  
Oliver Keeble ◽  
Mario Lassnig ◽  
Paul Millar ◽  
...  

GridFTP transfers and the corresponding Grid Security Infrastructure (GSI)-based authentication and authorization system have been data transfer pillars of the Worldwide LHC Computing Grid (WLCG) for more than a decade. However, in 2017, the end of support for the Globus Toolkit - the reference platform for these technologies - was announced. This has reinvigorated and expanded efforts to replace these pillars. We present an end-to-end alternative utilizing HTTP-based WebDAV as the transfer protocol and bearer tokens for distributed authorization. This alternative ecosystem, integrating significant pre-existing work and ideas in the area, adheres to common industry standards to the fullest extent possible, with minimal agreed-upon extensions or common interpretations of the core protocols. The bearer token approach allows resource providers to delegate authorization decisions to the LHC experiments for experiment-dedicated storage areas. This demonstration touches the entirety of the stack - from multiple storage element implementations to FTS3 to the Rucio data management system. We show how the traditional production and user workflows can be reworked utilizing bearer tokens, eliminating the need for GSI proxy certificates for storage interactions.
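
A minimal sketch of the core idea: an HTTP PUT to a WebDAV storage endpoint carrying a bearer token in place of a GSI proxy. The storage URL is hypothetical; token acquisition and scoping are out of scope here.

    import requests  # third-party HTTP client

    STORAGE_URL = "https://se.example.org/store/user/file.root"  # hypothetical endpoint

    def upload_with_token(local_path, token):
        # Authorization is carried in the standard Bearer header,
        # so any off-the-shelf HTTP client can drive the transfer
        with open(local_path, "rb") as f:
            resp = requests.put(STORAGE_URL, data=f,
                                headers={"Authorization": "Bearer " + token})
        resp.raise_for_status()
        return resp.status_code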


2017 ◽  
Vol 1 (2) ◽  
pp. 50
Author(s):  
Nidal Hassan Hussein ◽  
Ahmed Khalid ◽  
Khalid Khanfar

Cloud computing is a burgeoning and revolutionary technology that has changed how data are stored and computed in the cloud. This technology incorporates many elements into an innovative architecture, among them autonomic computing, grid computing, and utility computing. Moreover, the rapid storage of data in the clouds has an impact on the security level of organizations. The chief challenge of cloud computing is how to build secure cloud storage. The reason for this difficulty is that, before data transfer, data are usually encrypted in order to achieve high utilization. Another challenging task of cloud computing is how to apply search over encrypted data. As many techniques support only exact keyword matches, we propose a model to search over encrypted data that are written in Arabic. If an exact keyword match fails, our model will return approximate matches as secondary results. Our model also uses a fuzzy keyword search to enhance system usability by returning matching results whenever users input exact keywords or the closest possible matches. To the best of our knowledge, our model is the first research work that applies fuzzy search over Arabic encrypted data.
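
To illustrate the fuzzy-matching behaviour described (exact matches first, closest matches as fallback), here is a plaintext sketch using edit distance over a keyword index. The paper's scheme performs this over encrypted indexes, which this sketch deliberately does not attempt.

    def edit_distance(a, b):
        # Classic dynamic-programming Levenshtein distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def fuzzy_lookup(query, index, max_dist=1):
        # index maps keywords to lists of matching document identifiers
        if query in index:  # exact match wins
            return index[query]
        close = [kw for kw in index if edit_distance(query, kw) <= max_dist]
        return [doc for kw in close for doc in index[kw]]  # secondary results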


2019 ◽  
Vol 15 (10) ◽  
pp. 155014771987899 ◽  
Author(s):  
Changsong Yang ◽  
Xiaoling Tao ◽  
Feng Zhao

With the rapid development of cloud storage, more and more resource-constrained data owners can employ cloud storage services to reduce their heavy local storage overhead. However, data owners thereby lose direct control over their data, and all operations over the outsourced data, such as data transfer and deletion, will be executed by the remote cloud server. As a result, data transfer and deletion have become two security issues, because a selfish remote cloud server might not honestly execute these operations for economic benefits. In this article, we design a scheme that aims to make the data transfer and the transferred data deletion operations more transparent and publicly verifiable. Our proposed scheme is based on vector commitment (VC), which is used to deal with the problem of public verification during data transfer and deletion. More specifically, our new scheme provides the data owner with the ability to verify the data transfer and deletion results. In addition, by using the advantages of VC, our proposed scheme does not require any trusted third party. Finally, we prove that the proposed scheme not only reaches the expected security goals but also satisfies the efficiency and practicality requirements.
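
As a toy illustration of the commit/open/verify interface a vector commitment provides, the sketch below uses plain hashing as a stand-in. Real VC constructions give constant-size position openings and the binding properties the scheme's public verifiability relies on; this shows only the shape of the API.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def commit(blocks):
        # Commit to an ordered vector of data blocks
        leaves = [h(b) for b in blocks]
        return h(b"".join(leaves)), leaves

    def open_at(leaves, i, block):
        # Opening for position i: the block plus the other leaf digests
        return (i, block, leaves[:i] + leaves[i + 1:])

    def verify(commitment, proof):
        # Recompute the commitment with the claimed block in position i
        i, block, others = proof
        leaves = others[:i] + [h(block)] + others[i:]
        return h(b"".join(leaves)) == commitment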


2020 ◽  
Vol 245 ◽  
pp. 03011
Author(s):  
Maiken Pedersen ◽  
Balazs Konya ◽  
David Cameron ◽  
Mattias Ellert ◽  
Aleksandr Konstantinov ◽  
...  

The Worldwide LHC Computing Grid (WLCG) today comprises a range of different types of resources, such as cloud centers, large and small HPC centers, and volunteer computing, as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the NorduGrid ARC-CE, which is non-intrusive and lightweight, originally developed to cater for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native NorduGrid mode, which, contrary to the pilot mode, leaves job data transfers up to ARC. ARC's data transfer capabilities, together with the ARC Cache, are among its most important features. In this article we describe the data staging and cache functionality of an ARC-CE set up as an edge service to an HPC or cloud resource, and show the gain in efficiency this model provides compared to a traditional pilot model, especially for sites with remote storage.
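
A sketch of the caching idea behind ARC's data staging: each input is fetched once into a shared cache keyed by its source URL, and later jobs referencing the same URL reuse the cached copy. Paths and naming below are assumptions, not ARC's actual implementation.

    import hashlib
    import os
    import shutil
    import urllib.request

    CACHE_DIR = "/var/cache/arc-sketch"  # hypothetical cache location

    def stage_input(url, dest):
        key = hashlib.sha256(url.encode()).hexdigest()
        cached = os.path.join(CACHE_DIR, key)
        if not os.path.exists(cached):
            os.makedirs(CACHE_DIR, exist_ok=True)
            urllib.request.urlretrieve(url, cached)  # cache miss: fetch once
        shutil.copy(cached, dest)  # cache hit: reuse without re-downloading
        return dest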


Author(s):  
M.F. Schmid ◽  
R. Dargahi ◽  
M. W. Tam

Electron crystallography is an emerging field for structure determination, as evidenced by a number of membrane proteins that have been solved to near-atomic resolution. Advances in specimen preparation and in data acquisition with a 400 kV microscope by computer-controlled spot scanning mean that our ability to record electron image data will outstrip our capacity to analyze it. The computed Fourier transform of these images must be processed in order to provide a direct measurement of the amplitudes and phases needed for 3-D reconstruction. In anticipation of this processing bottleneck, we have written a program that incorporates a menu- and mouse-driven procedure for auto-indexing and refining the reciprocal lattice parameters in the computed transform from an image of a crystal. It is linked to subsequent steps of image processing by a system of data bases and spawned child processes; data transfer between different program modules no longer requires manual data entry. The progress of the reciprocal lattice refinement is monitored visually and quantitatively. If desired, the processing is carried through the lattice distortion correction (unbending) steps automatically.
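
As a sketch of the refinement step, the least-squares fit below recovers the two reciprocal basis vectors from indexed spot positions in the computed transform. Peak picking, indexing, and the unbending steps are out of scope; the array shapes are assumptions.

    import numpy as np

    def refine_lattice(indices, peaks):
        # indices: (N, 2) integer Miller indices (h, k) of located spots
        # peaks:   (N, 2) observed spot coordinates in the transform
        H = np.asarray(indices, dtype=float)
        X = np.asarray(peaks, dtype=float)
        # Solve H @ B ~= X for B in the least-squares sense;
        # the rows of B are the refined a* and b* vectors
        B, residuals, rank, sv = np.linalg.lstsq(H, X, rcond=None)
        return B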


1982 ◽  
Vol 21 (04) ◽  
pp. 181-186 ◽  
Author(s):  
M. A. A. Moussa

A drug information system (DARIS) has been created for handling reports on suspected drug reactions. The system is suitable for running on desktop computers with minimal hardware requirements: 187 K read/write memory, a flexible or hard disc drive, and a thermal printer. The data base (DRUG) uses the QUERY and IMAGE programming capabilities for data entry and search. The data base to statistics link program (DBSTAT) enables data transfer from the data base into a file for statistical analysis and for signalling suspected adverse drug reactions. The operational, medical, and statistical aspects of the general-population voluntary adverse drug reaction monitoring programme, recently initiated in the State of Kuwait, are described.
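
A small sketch of what a data-base-to-statistics link like DBSTAT does: dump the reaction reports into a flat file that a statistics package can read. DARIS used the QUERY/IMAGE facilities rather than SQL, so the table and column names here are assumptions for illustration.

    import csv
    import sqlite3

    def export_for_stats(db_path, out_path):
        con = sqlite3.connect(db_path)
        # Hypothetical schema: one row per suspected-reaction report
        rows = con.execute(
            "SELECT report_id, drug_name, reaction, onset_date FROM reports")
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["report_id", "drug_name", "reaction", "onset_date"])
            writer.writerows(rows)
        con.close()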


Author(s):  
B. G. Shadrin ◽  
D. E. Zachateyskiy ◽  
V. A. Dvoryanchikov ◽  
...  
