DNS-embedded service endpoint registry for distributed e-Infrastructures

2021 ◽  
Author(s):  
Andrii Salnikov ◽  
Balázs Kónya

Abstract Distributed e-Infrastructure is a key component of modern big science. Service discovery in e-Science environments such as the Worldwide LHC Computing Grid (WLCG) is a crucial functionality that relies on a service registry. In this paper we re-formulate the requirements for a service endpoint registry based on more than ten years of experience with the many systems designed for or used within the WLCG e-Infrastructure. To satisfy those requirements, the paper proposes a novel idea: using the existing, well-established Domain Name System (DNS) infrastructure, together with a suitable data model, as a service endpoint registry. The presented ARC Hierarchical Endpoints Registry (ARCHERY) system consists of a minimalistic data model representing services and their endpoints within e-Infrastructures, a rendering of the data model embedded into DNS records, and a lightweight software layer for DNS record management and client-side data discovery. Our approach to the ARCHERY registry required minimal software development and inherits all the benefits of one of the most reliable distributed information discovery sources on the internet: the DNS infrastructure. In particular, deployment, management and operation of ARCHERY rely fully on DNS. Results from ARCHERY deployment use cases are presented together with a performance analysis.
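The client-side discovery half of such a registry can be pictured with a short sketch. The snippet below is a minimal illustration, not ARCHERY's actual record rendering: it assumes endpoints are serialized as space-separated key=value pairs inside DNS TXT records under an illustrative registry name, and it uses the third-party dnspython package.

```python
# Minimal sketch of DNS-based endpoint discovery in the spirit of ARCHERY.
# Assumption: endpoints live in TXT records as key=value tokens, where
# "u" is the endpoint URL and "t" its type. The registry name below and
# the key names are illustrative, not the actual ARCHERY format.
import dns.resolver

def discover_endpoints(registry_name: str) -> list[dict]:
    """Resolve TXT records and parse them into endpoint dictionaries."""
    endpoints = []
    for rdata in dns.resolver.resolve(registry_name, "TXT"):
        # A TXT record may be split into several character strings; join them.
        txt = b"".join(rdata.strings).decode("utf-8")
        fields = dict(tok.split("=", 1) for tok in txt.split() if "=" in tok)
        if "u" in fields:
            endpoints.append(fields)
    return endpoints

if __name__ == "__main__":
    for ep in discover_endpoints("_archery.example.org"):  # hypothetical name
        print(ep.get("t", "unknown"), ep.get("u"))
```

Because resolution, caching and replication are handled entirely by ordinary DNS servers, a client like this needs no registry-specific infrastructure at all.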

Author(s):  
Priya Mathur ◽  
Amit Kumar Gupta ◽  
Prateek Vashishtha

Cloud computing is an emerging technique by which anyone can access applications as utilities over the internet. It combines characteristics of technologies such as distributed computing, grid computing and ubiquitous computing, and it allows anyone to create, configure and customize business applications online. Cryptography is the art and science of introducing secrecy into information in order to secure messages; it converts plain text into cipher text using various encryption techniques. In this paper we review several recent cryptographic algorithms used to enhance the security of data on cloud servers, comparing Short Range Natural Number modified RSA (SRNN), the Elliptic Curve Cryptography algorithm, a client-side encryption technique and a hybrid encryption technique for securing data in the cloud.
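Of the compared schemes, the hybrid approach is the easiest to picture in code: a random symmetric session key encrypts the bulk data, and an asymmetric key wraps the session key. The sketch below is a generic RSA-OAEP plus AES-GCM illustration of that pattern using the third-party cryptography package; the key sizes and primitives are illustrative defaults, not the specific algorithms evaluated in the paper.

```python
# Hybrid encryption sketch: AES-GCM for the payload, RSA-OAEP for the key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(public_key, plaintext: bytes):
    session_key = AESGCM.generate_key(bit_length=256)   # symmetric bulk key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(session_key, OAEP)  # RSA key wrap
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(private_key, wrapped_key, nonce, ciphertext):
    session_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wk, n, ct = hybrid_encrypt(key.public_key(), b"data stored on the cloud server")
assert hybrid_decrypt(key, wk, n, ct) == b"data stored on the cloud server"
```

The design rationale is the usual one: asymmetric encryption of only a 32-byte key keeps the expensive operation constant-cost, while the symmetric cipher handles arbitrarily large payloads.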


Author(s):  
Kamalendu Pal

The importance of integrating and coordinating supply chain business partners has been appreciated in many industries. In the global manufacturing industry, information integration among supply chain business partners is technically a daunting task due to highly disconnected infrastructures and operations. Information, software applications and services are loosely distributed among participating business partners with heterogeneous operating infrastructures. A secure and flexible information exchange architecture that can interconnect distributed information and share it across global service provision applications is therefore immensely advantageous. This chapter describes the main features of an ontology-based web service framework for integrating distributed business processes in a global supply chain, presenting a Scalable Web Service Discovery Framework (SWSDF) for the material procurement systems of a manufacturing supply chain. Description Logic (DL) is used to represent and explain SWSDF. The framework uses a hybrid knowledge-based system consisting of Case-Based Reasoning (CBR) and Rule-Based Reasoning (RBR). SWSDF includes: (1) a collection of web service descriptions in the Web Ontology Language for Services (OWL-S), (2) service advertisement using complex concepts, and (3) a service concept similarity assessment algorithm. Finally, a business scenario is used to demonstrate the functionality of the described system.
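As a rough picture of the third component, the sketch below ranks advertised services by the overlap of their ontology term sets, a deliberately simplified stand-in for the chapter's DL-based concept similarity assessment. All concept names, the set-based representation and the threshold are invented for illustration.

```python
# Toy service-concept matching: each advertisement is reduced to a set of
# ontology terms and candidates are ranked by Jaccard overlap with the
# request. The real SWSDF algorithm operates on DL concept expressions.

def jaccard(a: set, b: set) -> float:
    """Similarity of two concept term sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_services(request: set, adverts: dict, threshold: float = 0.3):
    """Return advertised services whose concept overlap exceeds the threshold."""
    scored = ((name, jaccard(request, terms)) for name, terms in adverts.items())
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

adverts = {
    "SteelSupplierService": {"Procurement", "RawMaterial", "Steel", "Quote"},
    "LogisticsService":     {"Transport", "Scheduling", "Quote"},
}
request = {"Procurement", "Steel", "Quote"}
print(rank_services(request, adverts))  # SteelSupplierService ranks first
```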


2014 ◽  
Vol 30 (2) ◽  
pp. 91-107 ◽  
Author(s):  
Oluwalani Adeleke ◽  
E.J. Otoo

Purpose – This paper aims to study an integrated metadata access infrastructure for a network of federated curated data repositories. With the increase in collaborative initiatives among diverse scientific disciplines, infrastructure should be in place to facilitate effective information sharing. Scientific data sharing involves provisioning, curation and dissemination of information about the various datasets for discovery and access by peers, which is achieved using metadata services. The heterogeneous nature of the distributed dataset repositories has led to the use of heterogeneous metadata services, which poses challenges for efficient dataset sharing and information retrieval. To allow universal access to these autonomous curated data repositories, it is important to establish cross-integration among them for information sharing.

Design/methodology/approach – The authors address this problem through a universal metadata interface design that can be integrated with popular metadata services such as the integrated Rule-Oriented Data System (iRODS), OpenDAP/THREDDS and MERCURY. Given a network of federated heterogeneous distributed metadata services over autonomous curated data repositories, the authors present an implementation of a universal interface system that can probe and query different metadata databases to access the information needed for data discovery and to enable data migration.

Findings – The authors present the architecture that integrates, and allows communication between, the interface and the various autonomous data repositories. They show how the system can be integrated with THREDDS and iRODS to accomplish data discovery and access operations without altering the implementations of the metadata services at their remote locations.

Originality/value – The system provides a unique architecture for information discovery and metadata searches that employs the application programming interfaces of the respective metadata services and communicates using the ZeroC Internet Communications Engine (Ice) protocol.
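The design reduces to an adapter pattern: one abstract query interface with a backend adapter per metadata service. The sketch below is a hypothetical Python rendering of that idea; the class and method names are invented, and the actual system talks to iRODS and THREDDS through their own APIs over the ZeroC Ice protocol rather than through canned results like these.

```python
# Adapter-pattern sketch of a universal metadata interface over
# heterogeneous repositories. All names here are hypothetical.
from abc import ABC, abstractmethod

class MetadataService(ABC):
    """Uniform facade over one heterogeneous metadata backend."""
    @abstractmethod
    def search(self, query: dict) -> list[dict]:
        """Return dataset descriptions matching the query attributes."""

class IrodsAdapter(MetadataService):
    def search(self, query: dict) -> list[dict]:
        # A real adapter would translate the generic query into iRODS
        # AVU-style conditions; a canned record stands in here.
        return [{"repository": "irods-demo", "dataset": "ocean_temps", **query}]

class ThreddsAdapter(MetadataService):
    def search(self, query: dict) -> list[dict]:
        # A real adapter would crawl the THREDDS catalog service.
        return [{"repository": "thredds-demo", "dataset": "climate_grid", **query}]

def federated_search(backends: list[MetadataService], query: dict) -> list[dict]:
    """Probe every registered repository and merge the results."""
    results = []
    for backend in backends:
        results.extend(backend.search(query))
    return results

print(federated_search([IrodsAdapter(), ThreddsAdapter()], {"variable": "sst"}))
```

The payoff of this structure is the one the paper claims: new repositories join the federation by adding an adapter, with no change to the remote metadata services themselves.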


2021 ◽  
Author(s):  
Sarah Bauermeister ◽  
Joshua R Bauermeister ◽  
Ruth Bridgman ◽  
Caterina Felici ◽  
Mark Newbury ◽  
...  

Abstract Research-ready data (that is, data curated to a defined standard) increases scientific opportunity and rigour by integrating the data environment. The development of research platforms has highlighted the value of research-ready data, particularly for multi-cohort analyses. Following user consultation, a standard data model (C-Surv), optimised for data discovery, was developed using data from 12 Dementias Platform UK (DPUK) population and clinical cohort studies. The model uses a four-tier nested structure based on 18 data themes selected according to user behaviour or technology. Standard variable naming conventions are applied to uniquely identify variables within the context of longitudinal studies. The data model was used to develop a harmonised dataset for 11 cohorts. This dataset populated the Cohort Explorer data discovery tool for assessing the feasibility of an analysis prior to making a data access request. It was concluded that developing and applying a standard data model (C-Surv) for research cohort data is feasible and useful.
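As a toy illustration of what a standard variable naming convention buys in a longitudinal setting, the sketch below parses a tiered variable name back into its components. The theme_domain_variable_wave pattern and the example names are invented for this sketch; they are not the actual C-Surv conventions.

```python
# Hypothetical tiered naming convention for harmonised longitudinal
# variables; a machine-checkable name makes every variable uniquely
# identifiable across cohorts and waves.
import re

VARIABLE_PATTERN = re.compile(
    r"^(?P<theme>[a-z]+)_(?P<domain>[a-z]+)_(?P<variable>[a-z0-9]+)_w(?P<wave>\d+)$"
)

def parse_variable(name: str) -> dict:
    """Split a harmonised variable name into its nested tiers."""
    match = VARIABLE_PATTERN.match(name)
    if not match:
        raise ValueError(f"{name!r} does not follow the naming convention")
    return match.groupdict()

# e.g. cognition theme, memory domain, delayed recall score, wave 2
print(parse_variable("cog_memory_delrecall_w2"))
```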


2020 ◽  
Vol 245 ◽  
pp. 04032
Author(s):  
Andrea Formica ◽  
Nurcan Ozturk ◽  
Millissa Si Amer ◽  
Julio Lozano Bahilo ◽  
Elizabeth J Gallas ◽  
...  

ATLAS event processing requires access to centralized database systems storing information about calibrations, detector status and data-taking conditions. This processing is done at more than 150 computing sites on a worldwide computing grid, which access the database through the Squid-Frontier system. Some processing workflows have been found to overload the Frontier system due to the Conditions data model currently in use, specifically because some Conditions data requests have low caching efficiency: requests that are non-identical as far as the cache is concerned actually retrieve a much smaller number of unique payloads. While ATLAS is undertaking an adiabatic transition during the LHC Long Shutdown 2 and Run 3 from the current COOL Conditions data model to a new data model called CREST for Run 4, it is important to identify the problematic Conditions queries with low caching efficiency and to work with the detector subsystems to improve the storage of such data within the current data model. For this purpose ATLAS put together an information aggregation and analytics system based on data aggregated from the Squid-Frontier logs using Elasticsearch technology. This paper describes the components of this analytics system, from the Flask/Celery-based server application to the user interface; how Spark SQL functionality is used to filter data for making plots; how the caching efficiency results are stored in an Elasticsearch database; and finally how the package is deployed via a Docker container.
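The caching-efficiency measurement itself reduces to a grouped aggregation over parsed log records. The PySpark sketch below is a minimal illustration under assumed column names and an assumed input file, not the paper's actual pipeline: it flags query signatures where many cache-distinct requests returned few unique payloads.

```python
# Sketch of a caching-efficiency aggregation over parsed Squid-Frontier
# log records. "query_signature" and "payload_hash" are assumed columns.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("frontier-cache-efficiency").getOrCreate()

logs = spark.read.json("frontier_logs.json")  # hypothetical aggregated dump

efficiency = (
    logs.groupBy("query_signature")
        .agg(F.count("*").alias("requests"),
             F.countDistinct("payload_hash").alias("unique_payloads"))
        # Many requests per unique payload means the data was highly
        # cacheable in principle; if those requests missed the cache,
        # the query has low caching efficiency and is worth fixing.
        .withColumn("redundancy", F.col("requests") / F.col("unique_payloads"))
        .orderBy(F.desc("redundancy"))
)
efficiency.show(20, truncate=False)
```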


2008 ◽  
pp. 345-363 ◽  
Author(s):  
Christian Platzer ◽  
Florian Rosenberg ◽  
Schahram Dustdar

Web services provide a fundamental technology for developing service-oriented systems by leveraging platform-independent interface descriptions (WSDL) and a flexible message encoding (SOAP). Beyond the functional description, Quality of Service (QoS) attributes are currently not part of the Web service standards stack, although they provide valuable metadata about a Web service, such as performance, dependability, security, or cost and payment. This additional information can greatly enhance service discovery, selection and composition. Reflecting the latest research in this area, this chapter deals with the various ways of describing, bootstrapping and evaluating QoS attributes, with a strong focus on client-side QoS assessment and the problems it raises. Furthermore, a method to analyze Web service interactions with our evaluation tool and to extract important QoS information without any knowledge of the service implementation is presented and explained in detail. Taking performance measurements for a specific Web service usually requires access to the service implementation, or at least to the server machine hosting it. This chapter addresses a way to bootstrap the most important performance and dependability values from the client's perspective, thereby overcoming these restrictions.
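The client-side idea can be illustrated with a small probe that derives performance and dependability figures purely from observed round trips, with no access to the server. The endpoint URL, sample count and the specific metrics below are assumptions of this sketch, not the chapter's evaluation tool.

```python
# Client-side QoS bootstrapping sketch: repeatedly invoke an endpoint and
# derive response-time and availability figures from what the client sees.
import statistics
import time
import urllib.request

def probe(url: str, samples: int = 10) -> dict:
    """Measure round-trip response times from the client's perspective."""
    timings, failures = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                response.read()
            timings.append(time.perf_counter() - start)
        except OSError:
            failures += 1  # feeds a client-observed dependability figure
    return {
        "response_time_avg_s": statistics.mean(timings) if timings else None,
        "response_time_p95_s": (sorted(timings)[int(0.95 * len(timings))]
                                if timings else None),
        "availability": (samples - failures) / samples,
    }

print(probe("https://example.org/service"))  # placeholder endpoint
```

Note what such a probe cannot separate: network latency and server processing time are fused in the round trip, which is exactly the kind of limitation the chapter's bootstrapping method has to work around.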


Author(s):  
Michael Kay

This paper describes a proposal for language extensions to XSLT 3.0, and to the XDM data model, to provide for asynchronous processing. The proposal is particularly motivated by the requirement for asynchronous retrieval of external resources on the JavaScript platform (whether client-side or server-side), but other use cases for asynchronous processing, and other execution platforms, are also considered.


2021 ◽  
Author(s):  
Sarah Bauermeister ◽  
Joshua R Bauermeister ◽  
Ruth Bridgman ◽  
Caterina Felici ◽  
Mark Newbury ◽  
...  

Abstract Research-ready data (that is, data curated to a defined standard) increases scientific opportunity and rigour by integrating the data environment. The development of research platforms has highlighted the value of research-ready data, particularly for multi-cohort analyses. Following user consultation, a standard data model (C-Surv), optimised for data discovery, was developed using data from 12 population and clinical cohort studies. The model uses a four-tier nested structure based on 18 data themes and 137 domains selected according to user behaviour or technology. Standard variable naming conventions are applied to uniquely identify variables within the context of longitudinal studies. The model was used to develop a harmonised dataset for 11 cohorts. This dataset populated the Cohort Explorer data discovery tool for assessing the feasibility of an analysis prior to making a data access request. It was concluded that developing and applying a standard data model (C-Surv) for research cohort data is feasible and useful.

