Accelerating Web Service Workflow Execution via Intelligent Allocation of Services to Servers

2010 ◽  
Vol 21 (4) ◽  
pp. 60-90 ◽  
Author(s):  
Konstantinos Stamkopoulos ◽  
Evaggelia Pitoura ◽  
Panos Vassiliadis ◽  
Apostolos Zarras

The appropriate deployment of web service operations at the service provider site plays a critical role in the efficient provision of services to clients. In this paper, the authors assume that a service provider has several servers over which web service operations can be deployed. Given a workflow of web services and the topology of the servers, the most efficient mapping of operations to servers must then be discovered. Efficiency is measured in terms of two cost functions that concern the execution time of the workflow and the fairness of the load distribution among the servers. The authors study different topologies for the workflow structure and the server connectivity and propose a suite of greedy algorithms for each combination.
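The paper's own algorithms are not reproduced in the abstract, but the following minimal Python sketch illustrates the general idea of a greedy mapping that trades off execution time against load fairness. The cost model, weights, and names are illustrative assumptions, not the authors' definitions.

```python
# Sketch of a greedy operation-to-server mapping (illustrative only).
# exec_time[op][srv]: estimated cost of running operation `op` on server `srv`.

def greedy_map(operations, servers, exec_time, alpha=0.5):
    load = {srv: 0.0 for srv in servers}   # accumulated load per server
    mapping = {}
    for op in operations:                  # workflow order assumed given
        def cost(srv):
            t = exec_time[op][srv]
            # Fairness penalty: how far this server would drift from the lightest one.
            imbalance = (load[srv] + t) - min(load.values())
            return alpha * t + (1 - alpha) * imbalance
        best = min(servers, key=cost)
        mapping[op] = best
        load[best] += exec_time[op][best]
    return mapping

ops = ["validate", "charge", "ship"]
srvs = ["s1", "s2"]
times = {"validate": {"s1": 2, "s2": 3},
         "charge":   {"s1": 4, "s2": 2},
         "ship":     {"s1": 1, "s2": 5}}
print(greedy_map(ops, srvs, times))  # {'validate': 's1', 'charge': 's2', 'ship': 's1'}
```

Varying `alpha` moves the heuristic between the two cost functions the paper names: pure execution time at `alpha=1` and pure load balance at `alpha=0`.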


2011 ◽  
Vol 08 (04) ◽  
pp. 291-302
Author(s):  
RAVI SHANKAR PANDEY

Web services are programs which perform some elementary business process of an application and are distributed over the Internet. These services are described, discovered and executed using the standard languages WSDL, SOAP and UDDI. The proliferation of web services has resulted in intense competition between providers that offer the same service. To survive in such a competitive environment, they need to advertise the quality of their service. The Web Service Description Language, however, does not support the description of quality attributes. Recently, D'Ambrogio proposed a QoS model of web services based on a metamodel of WSDL. In this paper, we present a platform to advertise QoS as declared by the service provider. This tool generates a WSDL file from Java code along with its quality-of-service attributes. It accepts Java code and a file containing quality attributes, which include reliability, availability, operation demand, and operation latency. These attributes are included in the WSDL file as content of the description element.
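As a rough illustration of attaching provider-declared QoS attributes to a service description, here is a hedged Python sketch; the element names, namespace, and placement are assumptions, since the paper's exact WSDL extension schema is not reproduced in the abstract.

```python
# Sketch: append provider-declared QoS attributes to a WSDL file
# (illustrative element names and namespace; the paper's schema may differ).
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
QOS_NS = "http://example.org/qos"          # hypothetical QoS namespace

def attach_qos(wsdl_path, qos, out_path):
    ET.register_namespace("wsdl", WSDL_NS)
    tree = ET.parse(wsdl_path)
    root = tree.getroot()
    # Place the declared attributes inside a documentation/description element.
    doc = ET.SubElement(root, f"{{{WSDL_NS}}}documentation")
    for name, value in qos.items():
        attr = ET.SubElement(doc, f"{{{QOS_NS}}}{name}")
        attr.text = str(value)
    tree.write(out_path, xml_declaration=True, encoding="utf-8")

attach_qos("service.wsdl",
           {"reliability": 0.98, "availability": 0.995,
            "operationDemand": 120, "operationLatency": "45ms"},
           "service-qos.wsdl")
```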


2010 ◽  
Vol 7 (3) ◽  
pp. 1-29 ◽  
Author(s):  
Maricela Bravo ◽  
Matias Alvarado

Web service substitution is one of the most advanced tasks that a composite Web service developer must achieve. Substitution occurs when, in a composite scenario, a service operation is replaced to improve the composition's performance or to fix a disruption caused by a failing service. To move the automation of substitution forward, a set of measures considering the structure and functionality of Web services is provided. Most current proposals for the discovery and matchmaking of Web services are based on the semantic perspective, which lacks the precise information needed for Web service substitution. This paper describes a set of similarity measures to support this substitution. Similarity measurement accounts for differences and similarities through syntactic comparison of names and data types, followed by comparison of the input and output parameter values of Web service operations. Calculation of these measures was implemented using a filtering process. To evaluate this approach, a software architecture was implemented, and experimental tests were carried out on both private and publicly available Web services. Additionally, as discussed, the application of these measures can be extended to other Web service tasks, such as classification, clustering, and composition.
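A minimal sketch of what such syntax-level similarity measures and a filtering step might look like; the weights, thresholds, and data layout are illustrative assumptions, not the paper's actual measures.

```python
# Sketch: syntax-level similarity between two service operations,
# plus a filtering step over substitution candidates (illustrative only).
from difflib import SequenceMatcher

def name_sim(a, b):
    # Edit-distance-based ratio over lowercased names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def params_sim(ps, qs):
    # Fraction of (name, type) parameter pairs that approximately match.
    if not ps and not qs:
        return 1.0
    hits = sum(1 for (pn, pt) in ps for (qn, qt) in qs
               if pt == qt and name_sim(pn, qn) > 0.7)
    return hits / max(len(ps), len(qs))

def operation_sim(op_a, op_b):
    # Weighted combination of name, input, and output similarity.
    return (0.4 * name_sim(op_a["name"], op_b["name"])
            + 0.3 * params_sim(op_a["inputs"], op_b["inputs"])
            + 0.3 * params_sim(op_a["outputs"], op_b["outputs"]))

def filter_candidates(target, candidates, threshold=0.6):
    # Filtering process: keep only operations similar enough to substitute.
    return [c for c in candidates if operation_sim(target, c) >= threshold]
```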


Author(s):  
Duy Ngan Le ◽  
Karel Mous ◽  
Angela Goh

Web services have been employed in a wide range of applications and have become a key technology in developing business operations on the Web. In order to leverage the use of Web services, operations such as discovery, composition, and interoperability need to be fully supported. Several approaches have been proposed for each of these operations, but they have advantages and disadvantages as well as varying levels of suitability for different applications. This motivates exploring and comparing current approaches, as well as highlighting problems of the operations and their possible solutions. In this chapter, an introduction, a brief survey, problems, and possible solutions for the three Web service operations mentioned above are discussed. Research opportunities and possible future directions for Web services are also presented.


F1000Research ◽  
2014 ◽  
Vol 3 ◽  
pp. 173 ◽  
Author(s):  
Kristina Hettne ◽  
Reinout van Schouwen ◽  
Eleni Mina ◽  
Eelke van der Horst ◽  
Mark Thompson ◽  
...  

The Concept Profile Analysis technology (overlapping co-occurring concept sets based on knowledge contained in biomedical abstracts) has led to new biomedical discoveries, and users have been able to interact with concept profiles through the interactive tool “Anni” (http://biosemantics.org/anni). However, Anni provides no way for users to save their procedures, results, or related provenance. Here we present a new suite of Web Service operations that allows bioinformaticians to design and execute their own Concept Profile Analysis workflow, possibly as part of a larger bioinformatics analysis. The source code can be downloaded from ZENODO at http://www.dx.doi.org/10.5281/zenodo.10963.
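As a rough illustration of concept-profile matching, the sketch below models profiles as weighted concept vectors compared by cosine similarity; this is an assumed formulation for illustration, not necessarily the exact score computed by Anni or the published Web Service operations.

```python
# Sketch: matching two concept profiles modelled as weighted concept vectors
# (illustrative cosine similarity; not necessarily the tool's exact score).
import math

def cosine(p, q):
    shared = set(p) & set(q)
    dot = sum(p[c] * q[c] for c in shared)
    norm = (math.sqrt(sum(w * w for w in p.values()))
            * math.sqrt(sum(w * w for w in q.values())))
    return dot / norm if norm else 0.0

# Toy profiles: concept -> association weight derived from literature.
gene_a = {"apoptosis": 0.8, "p53": 0.6, "DNA repair": 0.3}
gene_b = {"apoptosis": 0.5, "cell cycle": 0.7, "p53": 0.4}
print(f"profile match score: {cosine(gene_a, gene_b):.3f}")
```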


Author(s):  
Jörg Holetschek ◽  
Gabriele Droege ◽  
Anton Güntsch ◽  
Nils Köster ◽  
Jeannine Marquardt ◽  
...  

Botanic gardens are an invaluable refuge for plant diversity for conservation, education and research. Worldwide, they manage over 100,000 species, roughly 30% of all plant species diversity, and over 41% of known threatened species; the botanic gardens in Germany house approximately 50,000 different species (Marquardt et al. in press). Scientists in need of plant material rely upon these resources for their research; they require a pooled, up-to-date inventory of ideally all accessions of these gardens. Sharing data from (living) specimen collections online has become routine in recent years; initiatives like PlantSearch of Botanic Gardens Conservation International and the Global Biodiversity Information Facility (GBIF) allow requesting specimens of interest. However, these catalogues are accessible to everyone. Legitimate concerns about potential theft and legal issues keep curators of living collections from sharing their full catalogues; in most cases, only filtered views of the data are fed into these networks. Gardens4Science (http://gardens4science.biocase.org) aims to overcome this issue by creating a trusted network between botanic gardens that allows unfiltered access to the constituents' accession catalogues. This unified data pool needs to be automatically synchronized with the individual gardens' catalogues, irrespective of the collection management systems used locally. For the three-year construction phase of Gardens4Science, the focus is on Cactaceae and Bromeliaceae, since these families are well represented in the collections and are ideal models for studying the origin of biodiversity on an evolutionary time scale.
Gardens4Science's technical architecture (Fig. 1) is based on existing tools for setting up biodiversity networks. The BioCASe (Biological Collections Access Service) Provider Software acts as an interface to the local databases that shields the network from their peculiarities (the database management systems and data models used). BioCASe transforms the data into the Access to Biological Collections Data schema (ABCD) and publishes them as a BioCASe-compliant web service (Holetschek and Döring 2008, Holetschek et al. 2012). The data portal is based on portal software from the Global Genome Biodiversity Network and provides a user-specific view on the data: registered trusted users are able to display full details of individual accessions, whereas guest users see only an aggregated view (Droege et al. 2014). The Berlin Indexing and Harvesting Toolkit (B-HIT) is used for harvesting the BioCASe web services of the local catalogues and creating a unified index database (Kelbert et al. 2015). Harvesting is done at regular intervals in order to keep the index in sync with the source databases and does not require any action on the provider's side.
In addition to harvesting, B-HIT performs several data cleaning steps. Foremost, it reconciles scientific names from the source databases with a taxonomic backbone (currently caryophyllales.org for Cactaceae and the Butcher and Gouda checklist for Bromeliaceae), which allows harmonizing the taxonomies from the different sources and correcting outdated species names and orthographic mistakes. Provenance information is validated (for example, specified geographic coordinates versus country) and corrected if possible; date values are parsed and converted into a standard format. The issues found and potential corrections are compiled in reports and sent to the curators, so the mistakes can be rectified in the source databases.
In the construction phase, Gardens4Science consists of seven German botanic gardens that share their accessions of the Bromeliaceae and Cactaceae families. Up to now (March 2019), 19,539 records have been published in Evo-BoGa, with about 3,500 to be added until the end of the project in January 2020. After the construction phase, it is planned to extend the network to include more botanic gardens – both from Germany and other countries – as well as additional plant families.
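To make the cleaning steps concrete, here is a hedged sketch of two of them: name reconciliation against a backbone checklist and coordinate-versus-country validation. The checklist, bounding boxes, and matching rule are toy assumptions, not B-HIT's actual implementation.

```python
# Sketch of two B-HIT-style cleaning steps (illustrative logic only):
# reconciling scientific names against a backbone and checking coordinates.
from difflib import get_close_matches

BACKBONE = {"Echinocactus grusonii", "Mammillaria elongata"}  # toy checklist
COUNTRY_BBOX = {"MX": (14.5, 32.7, -118.4, -86.7)}            # lat/lon bounds

def reconcile_name(name):
    if name in BACKBONE:
        return name, None
    match = get_close_matches(name, BACKBONE, n=1, cutoff=0.85)
    if match:
        return match[0], f"corrected '{name}' -> '{match[0]}'"
    return name, f"unresolved name '{name}' reported to curator"

def check_coordinates(lat, lon, country):
    s, n, w, e = COUNTRY_BBOX[country]
    ok = s <= lat <= n and w <= lon <= e
    return ok, None if ok else f"({lat}, {lon}) outside {country}"

print(reconcile_name("Echinocactus grusoni"))   # orthographic slip
print(check_coordinates(19.4, -99.1, "MX"))     # plausible Mexican record
```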


Author(s):  
Anurag Choudhary

Abstract: Cloud services are being provided by various giant corporations, notably Amazon Web Services, Microsoft Azure, Google Cloud Platform, and others. In this scenario, we address the most prominent web service provider, Amazon Web Services, and its Elastic Compute Cloud functionality. Amazon offers a comprehensive package of computing solutions that lets businesses establish dedicated virtual clouds while maintaining complete configuration control over their working environment. An organization needs to interact with several other technologies; however, instead of installing these technologies, the company may simply buy them online as a service. Amazon's Elastic Compute Cloud web service delivers highly customizable computing capacity throughout the cloud, allowing developers to build applications with high scalability. Explicitly put, an Elastic Compute Cloud instance is a virtual platform that replicates a physical server on which you may host your applications. Instead of acquiring your own hardware and connecting it to a network, Amazon provides you with almost endless virtual machines to deploy your applications on while it manages the hardware. This review gives a quick overview of the Amazon Web Services Elastic Compute Cloud, covering its features, pricing, and challenges. Finally, unanswered obstacles and future research directions in Amazon Web Services Elastic Compute Cloud are addressed. Keywords: Cloud Computing, Cloud Service Provider, Amazon Web Services, Amazon Elastic Compute Cloud, AWS EC2
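For readers unfamiliar with EC2, the following sketch shows the basic provision-and-terminate cycle using the boto3 SDK. The AMI ID, region, and key pair name are placeholders, and running this against a real account incurs charges.

```python
# Sketch: launching and terminating an EC2 instance with boto3
# (AMI ID, region, and key pair are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"launched {instance_id}")

# Clean up so the sketch does not leave billable resources running.
ec2.terminate_instances(InstanceIds=[instance_id])
```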


2018 ◽  
Vol 7 (1.9) ◽  
pp. 107
Author(s):  
T. N. Aruna ◽  
Dr Vijivinod

Efficient QoS-based selection from the large number of functionally substitutable web services that can deliver a complex task is a pressing demand from the business world. QoS-based web service selection is a multi-objective optimization problem. Current approaches such as FCFS, Priority, and Multi-queue attempt to solve it. However, the execution time that QoS-based web service selection needs to achieve the maximum fitness value is still a concern for practical distributed applications. This paper proposes an efficient method to solve this problem using the Social Spider Algorithm (SSA).
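The abstract gives no algorithmic detail, so the sketch below shows a deliberately simplified population-based search in the spirit of SSA for picking one service per task; the fitness weights and the movement rule are illustrative assumptions, not the paper's formulation.

```python
# Sketch: pick one candidate service per task to maximize a QoS fitness,
# via a simplified population-based search loosely inspired by SSA
# (not the full vibration model of the Social Spider Algorithm).
import random

def fitness(selection, qos):
    # Weighted sum of normalized QoS values; weights are illustrative.
    return sum(0.6 * qos[t][s]["reliability"] - 0.4 * qos[t][s]["latency"]
               for t, s in selection.items())

def spider_search(tasks, candidates, qos, spiders=20, iters=100):
    # tasks: list of task names; candidates[t]: list of services for task t.
    pop = [{t: random.choice(candidates[t]) for t in tasks}
           for _ in range(spiders)]
    best = max(pop, key=lambda s: fitness(s, qos))
    for _ in range(iters):
        for spider in pop:
            # Each spider moves one task assignment toward the current best.
            t = random.choice(tasks)
            trial = dict(spider, **{t: best[t]})
            if fitness(trial, qos) > fitness(spider, qos):
                spider.update(trial)
        best = max(pop + [best], key=lambda s: fitness(s, qos))
    return best
```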


Author(s):  
Evelina Pencheva

The variety of Machine-to-Machine (M2M) applications, built on very heterogeneous platforms, technologies, and data models, has resulted in vertical solutions where interoperability is very limited. In order to develop horizontal platforms across different business domains, networks and devices, it is necessary to outline generic capabilities. Service Capabilities provide data mediation functions that may be shared by different applications through application programming interfaces. The paper presents an approach to designing RESTful Web Services for access to location and presence status information of M2M devices. The Device Reachability Service Capability provides access to device location and allows device presence information to be registered and obtained. Web Service operations are identified by analysis of typical use cases. M2M device reachability information is modelled as REST resources organized in a tree structure. Web Service performance characteristics are evaluated by simulation.
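A hedged sketch of what such a REST resource tree might look like, here using Flask; the paths and payloads are assumptions for illustration, not the paper's actual interface definitions.

```python
# Sketch: a REST resource tree for M2M device reachability
# (illustrative paths and payloads, not the paper's actual API).
from flask import Flask, jsonify, request

app = Flask(__name__)
devices = {"m2m-001": {"location": {"lat": 42.7, "lon": 23.3},
                       "presence": "reachable"}}

@app.get("/devices/<dev_id>/location")
def get_location(dev_id):
    # Leaf resource: current device location.
    return jsonify(devices[dev_id]["location"])

@app.get("/devices/<dev_id>/presence")
def get_presence(dev_id):
    # Leaf resource: current presence status.
    return jsonify({"status": devices[dev_id]["presence"]})

@app.put("/devices/<dev_id>/presence")
def set_presence(dev_id):
    # Devices register presence updates here.
    devices[dev_id]["presence"] = request.get_json()["status"]
    return "", 204

if __name__ == "__main__":
    app.run()
```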


2018 ◽  
Vol 7 (10) ◽  
pp. 404 ◽  
Author(s):  
Mahdi Farnaghi ◽  
Ali Mansourian

Automatic composition of geospatial web services increases the possibility of taking full advantage of spatial data and processing capabilities that have been published over the internet. In this paper, a multi-agent artificial intelligence (AI) planning solution was proposed, which works within the geoportal architecture and enables the geoportal to compose semantically annotated Open Geospatial Consortium (OGC) Web Services based on users’ requirements. In this solution, the registered Catalogue Service for Web (CSW) services in the geoportal, along with a composition coordinator component, interact to synthesize Open Geospatial Consortium Web Services (OWSs) and generate the composition workflow. A prototype geoportal was developed, a case study of evacuation sheltering was implemented to illustrate the functionality of the algorithm, and a simulation environment, including one hundred simulated OWSs and five CSW services, was used to test the performance of the solution in a more complex circumstance. The prototype geoportal was able to generate the composite web service based on the requested goals of the user. Additionally, in the simulation environment, while the execution time of the composition with two CSW service nodes was 20 s, adding new CSW nodes reduced the composition time exponentially, so that with five CSW nodes the execution time dropped to 0.3 s. Results showed that, owing to the utilization of the computational power of CSW services, the solution was fast, horizontally scalable, and less vulnerable to the exponential growth of the search space of the AI planning problem.
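As a toy illustration of the input/output chaining that underlies many AI-planning composers, the sketch below forward-chains services whose required inputs are already satisfied; the service names and semantics are made up, and the algorithm is far simpler than the paper's multi-agent solution.

```python
# Sketch: a tiny forward-chaining composer that chains OWS-like services by
# matching semantic input/output types (service names are hypothetical).
def compose(services, available, goal):
    """services: name -> (required_inputs, produced_outputs)."""
    plan, facts = [], set(available)
    while goal not in facts:
        # Pick any unused service whose inputs are all satisfied.
        step = next((name for name, (ins, outs) in services.items()
                     if name not in plan and set(ins) <= facts), None)
        if step is None:
            return None                      # goal unreachable
        plan.append(step)
        facts |= set(services[step][1])
    return plan

ows = {"GeocodeWPS": (["address"], ["point"]),
       "BufferWPS":  (["point"], ["buffer_zone"]),
       "ShelterWFS": (["buffer_zone"], ["shelter_sites"])}
print(compose(ows, ["address"], "shelter_sites"))
# -> ['GeocodeWPS', 'BufferWPS', 'ShelterWFS']
```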

