Gardens4Science: Setting Up a Trusted Network for German Botanic Gardens Using Open Source Technologies

Author(s): Jörg Holetschek, Gabriele Droege, Anton Güntsch, Nils Köster, Jeannine Marquardt, ...

Botanic gardens are an invaluable refuge of plant diversity for conservation, education and research. Worldwide, they manage over 100,000 species, roughly 30% of all plant species diversity, and over 41% of known threatened species; the botanic gardens in Germany alone house approximately 50,000 different species (Marquardt et al. in press). Scientists in need of plant material rely on these resources for their research; ideally, they require a pooled, up-to-date inventory of all accessions of these gardens. Sharing data from (living) specimen collections online has become routine in recent years; initiatives like PlantSearch of Botanic Gardens Conservation International and the Global Biodiversity Information Facility (GBIF) allow requesting specimens of interest. However, these catalogues are accessible to everyone. Legitimate concerns about potential theft and legal issues keep curators of living collections from sharing their full catalogues; in most cases, only filtered views of the data are fed into these networks.

Gardens4Science (http://gardens4science.biocase.org) aims to overcome this issue by creating a trusted network between botanic gardens that allows unfiltered access to the constituents' accession catalogues. This unified data pool needs to be synchronized automatically with the individual gardens' catalogues, irrespective of the collection management systems used locally. For the three-year construction phase of Gardens4Science, the focus is on Cactaceae and Bromeliaceae, since these families are well represented in the collections and are ideal models for studying the origin of biodiversity on an evolutionary time scale.

Gardens4Science's technical architecture (Fig. 1) is based on existing tools for setting up biodiversity networks: The BioCASe (Biological Collections Access Service) Provider Software acts as an interface to the local databases that shields the network from their peculiarities (the database management systems and data models used). BioCASe transforms the data into the Access to Biological Collections Data schema (ABCD) and publishes them as a BioCASe-compliant web service (Holetschek and Döring 2008, Holetschek et al. 2012). The data portal is based on portal software from the Global Genome Biodiversity Network and provides a user-specific view of the data: registered trusted users will be able to display full details of individual accessions, whereas guest users will see only an aggregated view (Droege et al. 2014). The Berlin Harvesting and Indexing Toolkit (B-HIT) is used for harvesting the BioCASe web services of the local catalogues and creating a unified index database (Kelbert et al. 2015). Harvesting is done at regular intervals to keep the index in sync with the source databases and does not require any action on the provider's side.
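To make the harvesting interface concrete, below is a minimal sketch of how a client such as B-HIT might send a "scan" request to a BioCASe provider endpoint and retrieve the distinct scientific names published in ABCD 2.06. The endpoint URL and data source name (dsa) are hypothetical placeholders, and the request skeleton follows the general BioCASe protocol schema; details can vary between protocol versions and installations.

    # Minimal sketch: query a BioCASe provider with a protocol "scan" request.
    # ENDPOINT and DSA are hypothetical placeholders for a real installation.
    import requests

    ENDPOINT = "https://example-garden.org/biocase/pywrapper.cgi"  # hypothetical
    DSA = "living_collection"                                      # hypothetical

    # A BioCASe request is an XML document; this one asks the provider for the
    # distinct values of the ABCD 2.06 full-scientific-name concept.
    scan_request = """<?xml version="1.0" encoding="UTF-8"?>
    <request xmlns="http://www.biocase.org/schemas/protocol/1.3">
      <header><type>scan</type></header>
      <scan>
        <requestFormat>http://www.tdwg.org/schemas/abcd/2.06</requestFormat>
        <concept>/DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/FullScientificNameString</concept>
      </scan>
    </request>"""

    response = requests.get(ENDPOINT, params={"dsa": DSA, "query": scan_request})
    response.raise_for_status()
    print(response.text[:500])  # raw XML response listing the distinct names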
In addition to harvesting, B-HIT performs several data cleaning steps. Foremost, it reconciles scientific names from the source databases with a taxonomic backbone (currently caryophyllales.org for Cactaceae and the Butcher and Gouda checklist for Bromeliaceae), which allows harmonizing the taxonomies of the different sources and correcting outdated species names and orthographic mistakes. Provenance information is validated (for example, the specified geographic coordinates against the stated country) and corrected where possible; date values are parsed and converted into a standard format. The issues found and potential corrections are compiled in reports and sent to the curators, so the mistakes can be rectified in the source databases.

In the construction phase, Gardens4Science consists of seven German botanic gardens that share their accessions of the Bromeliaceae and Cactaceae families. Up to now (March 2019), 19,539 records have been published in Evo-BoGa, with about 3,500 more to be added by the end of the project in January 2020. After the construction phase, it is planned to extend the network to include more botanic gardens, both from Germany and from other countries, as well as additional plant families.
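As a rough illustration of the cleaning steps described above (name reconciliation against a backbone, coordinate plausibility checks, date normalization), the following Python sketch shows the general idea. It is not B-HIT code; the mini-checklist, the bounding-box check and the accepted date formats are invented stand-ins.

    # Simplified illustration of the cleaning steps described above.
    # Not B-HIT code; names and the mini-checklist are invented examples.
    from datetime import datetime
    import difflib

    CHECKLIST = {"Echinocactus grusonii", "Tillandsia usneoides"}  # stand-in backbone

    def reconcile_name(name, cutoff=0.85):
        """Return the closest accepted name, tolerating small spelling errors."""
        match = difflib.get_close_matches(name, CHECKLIST, n=1, cutoff=cutoff)
        return match[0] if match else None

    def validate_coordinates(lat, lon, country_bbox):
        """Check that a record's coordinates fall inside the country's bounding box."""
        min_lat, max_lat, min_lon, max_lon = country_bbox
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

    def normalize_date(value):
        """Parse common date spellings into ISO 8601 (YYYY-MM-DD)."""
        for fmt in ("%Y-%m-%d", "%d.%m.%Y", "%d/%m/%Y"):
            try:
                return datetime.strptime(value, fmt).date().isoformat()
            except ValueError:
                continue
        return None  # unparseable dates go into the curator report

    print(reconcile_name("Echinocactus grusoni"))  # -> 'Echinocactus grusonii'
    print(normalize_date("03.05.2018"))            # -> '2018-05-03'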

2010, Vol 21 (4), pp. 60-90
Author(s): Konstantinos Stamkopoulos, Evaggelia Pitoura, Panos Vassiliadis, Apostolos Zarras

The appropriate deployment of web service operations at the service provider site plays a critical role in the efficient provision of services to clients. In this paper, the authors assume that a service provider has several servers over which web service operations can be deployed. Given a workflow of web services and the topology of the servers, the most efficient mapping of operations to servers must then be discovered. Efficiency is measured in terms of two cost functions that concern the execution time of the workflow and the fairness of the load distribution among the servers. The authors study different topologies for the workflow structure and the server connectivity and propose a suite of greedy algorithms for each combination.
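The paper proposes a suite of greedy algorithms tailored to particular workflow and server topologies; as a generic illustration of the underlying idea only, the sketch below greedily assigns each operation to the server that minimizes a weighted combination of execution time and resulting load. The cost model and weighting are invented for illustration and are not the authors' exact formulation.

    # Illustrative greedy heuristic: place each operation on the server that
    # minimizes a weighted sum of execution time and resulting server load.
    def greedy_deploy(operations, servers, cost, alpha=0.5):
        """operations: op ids; servers: server ids; cost[op][srv]: execution
        time of op on srv; alpha trades execution time against fairness."""
        load = {srv: 0.0 for srv in servers}
        assignment = {}
        for op in operations:
            def combined(srv):
                time_term = cost[op][srv]
                fairness_term = load[srv] + cost[op][srv]  # load after placing op
                return alpha * time_term + (1 - alpha) * fairness_term
            best = min(servers, key=combined)
            assignment[op] = best
            load[best] += cost[op][best]
        return assignment

    cost = {"validate": {"s1": 2, "s2": 3}, "transform": {"s1": 4, "s2": 2}}
    print(greedy_deploy(["validate", "transform"], ["s1", "s2"], cost))
    # -> {'validate': 's1', 'transform': 's2'}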


2011, Vol 08 (04), pp. 291-302
Author(s): Ravi Shankar Pandey

Web services are programs that perform some elementary business process of an application and are distributed over the Internet. These services are described, discovered and executed using the standards WSDL, SOAP and UDDI. The proliferation of web services has resulted in intense competition between providers offering the same service. To survive in such a competitive environment, providers need to advertise the quality of their service, yet the Web Service Description Language (WSDL) does not provide support for describing quality attributes. Recently, D'Ambrogio proposed a QoS model of web services based on a meta-model of WSDL. In this paper, we present a platform for advertising the QoS declared by the service provider. The tool generates a WSDL file from Java code along with its quality-of-service attributes. It accepts Java code and a file containing quality attributes, including reliability, availability, operation demand and operation latency. These attributes are included in the WSDL file as content of the description element.
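As a rough sketch of what such a QoS-annotated WSDL might look like, the following Python snippet assembles a WSDL fragment whose documentation element carries provider-declared quality attributes. The qos:* element names and namespace are illustrative placeholders, not a standardized WSDL extension or the paper's exact output format.

    # Sketch: embed provider-declared QoS attributes in a WSDL document.
    # The qos:* names and namespace are hypothetical placeholders.
    import xml.etree.ElementTree as ET

    WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
    QOS_NS = "http://example.org/qos"  # hypothetical extension namespace

    ET.register_namespace("wsdl", WSDL_NS)
    ET.register_namespace("qos", QOS_NS)

    definitions = ET.Element(f"{{{WSDL_NS}}}definitions")
    service = ET.SubElement(definitions, f"{{{WSDL_NS}}}service", name="StockQuote")
    doc = ET.SubElement(service, f"{{{WSDL_NS}}}documentation")

    # The four attribute kinds named in the abstract, with made-up values.
    for attr, value in {"reliability": "0.99", "availability": "0.995",
                        "operationDemand": "120/h", "operationLatency": "40ms"}.items():
        el = ET.SubElement(doc, f"{{{QOS_NS}}}{attr}")
        el.text = value

    print(ET.tostring(definitions, encoding="unicode"))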


Author(s): Jana Polgar, Robert Mark Braum, Tony Polgar

Web Services are gaining in popularity because of the benefits they provide. One of the major benefits is their support for interoperability in a heterogeneous environment, which makes it possible to add systems and solutions built on different platforms. As long as the various systems are enabled for Web Services, the services can be used to facilitate interoperation. Web Services let enterprise application developers reuse and customize existing information assets. They provide developers with standard ways to access middle-tier and back-end services, such as database management systems and transaction monitors, and to integrate them with other applications.


Author(s): Anurag Choudhary

Abstract: Cloud services are provided by several giant corporations, notably Amazon Web Services, Microsoft Azure, and Google Cloud Platform. In this review, we address the most prominent provider, Amazon Web Services (AWS), and its Elastic Compute Cloud (EC2) functionality. Amazon offers a comprehensive package of computing solutions that lets businesses establish dedicated virtual clouds while maintaining complete configuration control over their working environment. An organization often needs to interact with several other technologies; instead of installing them, the company may simply buy the technology available online as a service. Amazon's Elastic Compute Cloud web service delivers highly customizable computing capacity in the cloud, allowing developers to build applications with high scalability. Explicitly put, an Elastic Compute Cloud is a virtual platform that replicates a physical server on which you may host your applications. Instead of acquiring your own hardware and connecting it to a network, Amazon provides you with almost endless virtual machines on which to deploy your applications while it manages the hardware. This review gives a quick overview of Amazon EC2, covering its features, pricing, and challenges. Finally, open obstacles and future research directions for EC2 are addressed.
Keywords: Cloud Computing, Cloud Service Provider, Amazon Web Services, Amazon Elastic Compute Cloud, AWS EC2
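As a concrete example of the virtual machines EC2 exposes programmatically, the sketch below launches a single instance with boto3, the AWS SDK for Python. The AMI ID, region and instance type are placeholders; valid values depend on your account and region, and the call requires configured AWS credentials.

    # Hedged sketch: provision one EC2 virtual machine via boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")  # assumed region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "demo-app-server"}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}")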


2014, Vol 2014, pp. 1-7
Author(s): Haiteng Zhang, Zhiqing Shao, Hong Zheng, Jie Zhai

In early service transactions, quality of service (QoS) information was published by the service provider and was not always true or credible, so better means of verifying the trustworthiness of the QoS information advertised by a Web service are needed. In this paper, factual QoS runtime data are collected by our WS-QoS measurement tool; based on these objective data, an algorithm compares the offered and the measured quality values of the service and computes their similarity, and a reputation evaluation method then derives the reputation level of the Web service from that similarity. An initial implementation and an experiment with three example Web services show that this approach is feasible and that the resulting values can serve as references for subsequent consumers selecting a service.
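The abstract does not give the exact similarity and reputation formulas, so the sketch below uses invented stand-ins to illustrate the flow: per-attribute similarity between offered and measured QoS values is averaged, then mapped onto a coarse reputation level.

    # Invented stand-in for the compare-then-rate flow described above.
    def qos_similarity(offered, measured):
        """Average per-attribute similarity in [0, 1]; 1 = claims match reality."""
        scores = []
        for attr, claim in offered.items():
            actual = measured[attr]
            scores.append(1 - abs(claim - actual) / max(claim, actual))
        return sum(scores) / len(scores)

    def reputation_level(similarity):
        """Map a similarity score onto a coarse reputation level."""
        if similarity >= 0.95:
            return "high"
        if similarity >= 0.80:
            return "medium"
        return "low"

    offered = {"availability": 0.999, "response_ms": 50}   # provider's claims
    measured = {"availability": 0.990, "response_ms": 80}  # tool's measurements
    sim = qos_similarity(offered, measured)
    print(sim, reputation_level(sim))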


2011, pp. 1929-1950
Author(s): George O.M. Yee

The growth of the Internet has been accompanied by the growth of Web services (e.g., e-commerce, e-health), leading to important provisions put in place to protect the privacy of Web service users. However, it is also important to be able to estimate the privacy protection capability of a Web service provider. Such estimates would benefit both users and providers. Users would benefit from being able to choose (assuming such estimates were made public) the service that best protects their privacy, which would in turn encourage Web service providers to pay more attention to privacy. Web service providers would benefit by being able to adjust their privacy provisions until certain target capability levels of privacy protection are reached. This article presents an approach for estimating the privacy protection capability of a Web service provider and illustrates the approach with an example.


Author(s): George Yee, Larry Korba

The growth of the Internet has been accompanied by the growth of Internet services (e.g., e-commerce, e-health). This proliferation of services and the increasing attacks on them by malicious individuals have highlighted the need for service security. The security requirements of an Internet or Web service may be specified in a security policy. The provider of the service is then responsible for implementing the security measures contained in the policy. However, a service customer or consumer may have security preferences that are not reflected in the provider’s security policy. In order for service providers to attract and retain customers, as well as reach a wider market, a way of personalizing a security policy to a particular customer is needed. We derive the content of an Internet or Web service security policy and propose a flexible security personalization approach that will allow an Internet or Web service provider and customer to negotiate to an agreed-upon personalized security policy. In addition, we present two application examples of security policy personalization, and overview the design of our security personalization prototype.


2011, pp. 2498-2517
Author(s): Zhengping Wu, Alfred C. Weaver

The lack of effective trust establishment mechanisms impedes the deployment of diverse trust models for web services. One issue is that collaborating organizations need mechanisms to bridge extant relationships among cooperating parties. We describe an indirect trust establishment mechanism to bridge and build new trust relationships from extant trust relationships with privacy protection. Another issue is that a trust establishment mechanism for web services must ensure privacy and owner control. Current web service technologies encourage a service requester to reveal all its private attributes in a pre-packaged credential to the service provider to fulfill the requirements for direct trust establishment. This may lead to privacy leakage. We propose a mechanism whereby the service requester discovers the service provider’s requirements from a policy document, then formulates a trust primitive by selectively disclosing attributes in a pre-packaged credential to negotiate a trust relationship. Thus the service requester’s privacy is preserved.
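A minimal sketch of the selective-disclosure step described above: the requester inspects the provider's policy requirements and releases only the required attributes from its credential. The field names are hypothetical, and a real implementation would also carry proofs or signatures over the disclosed subset.

    # Minimal sketch: disclose only the attributes the provider's policy requires.
    def build_trust_primitive(credential, policy_required):
        """Select the required attributes from a pre-packaged credential."""
        missing = policy_required - credential.keys()
        if missing:
            raise ValueError(f"credential lacks required attributes: {missing}")
        return {attr: credential[attr] for attr in policy_required}

    credential = {"name": "Alice", "age": 34, "employer": "Acme", "ssn": "..."}
    policy = {"age", "employer"}  # provider's declared requirements
    print(build_trust_primitive(credential, policy))  # name and ssn stay private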



