Parallelizing Web Service Upload \ Download files

Author(s):  
Mustapha Mohammed Baua'a

Read/write operations in the I/O file system are among its most significant characteristics, and many researchers focus their work on decreasing the response time of these operations. Most articles, however, concentrate on reading and writing the contents of a file in parallel. In this paper, the author considers parallelizing the reading and writing of whole file bytes, not only file contents. A case study is presented to clarify the idea: it compares two techniques for uploading and downloading files via a Web Service. In the first, traditional technique, files are uploaded and downloaded serially; in the second, files are uploaded and downloaded using Java threads to achieve parallelism. Java NetBeans 8.0.2 was used as the programming environment to implement file upload and download through Web Services. Validation results, produced with the MATLAB platform as a benchmark, are also presented; the resulting figures clearly show that the second technique achieves better response time than the traditional one.
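A minimal sketch of the threaded technique this abstract contrasts with serial transfer: one upload task per file submitted to a bounded thread pool. `uploadFile` is a hypothetical placeholder that simulates the actual web-service call, which the paper does not reproduce here.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: uploading several files concurrently with a thread pool,
// versus the traditional serial loop. uploadFile is a stand-in that
// simulates a fixed-latency web-service transfer of one file's bytes.
public class ParallelUpload {
    static void uploadFile(String name) {
        // placeholder for the real web-service upload of the file's bytes
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // serial baseline: one file after another
    public static void uploadSerially(List<String> files) {
        for (String f : files) uploadFile(f);
    }

    // threaded version: one task per file, bounded pool of workers
    public static void uploadInParallel(List<String> files) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String f : files) pool.submit(() -> uploadFile(f));
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With simulated 50 ms transfers, eight files take roughly eight latency periods serially but about two with four workers, which mirrors the response-time gap the abstract reports.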

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Shengqi Wu ◽  
Huaizhen Kou ◽  
Chao Lv ◽  
Wanli Huang ◽  
Lianyong Qi ◽  
...  

In recent years, the number of web services has grown explosively. With such a large amount of information resources, it is difficult for users to quickly find the services they need, so the design of an effective web service recommendation method has become key to satisfying users' requirements. However, traditional recommendation methods often pay more attention to the accuracy of the results and ignore their diversity, which may lead to redundancy and overfitting, thus reducing user satisfaction. Considering these drawbacks, a novel method called DivMTID is proposed to achieve accurate and diversified recommendations. First, we utilize users' historical scores of web services to explore their preferences, and we use the TF-IDF algorithm to calculate the weight vector of each web service. Second, we utilize cosine similarity to compute the similarity between candidate and historical web services and forecast the ranking scores of the candidates. Finally, a diversification method generates the top-K recommended list for users. Through a case study, we show that DivMTID is an effective, accurate, and diversified web service recommendation method.
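The similarity step described above can be sketched as follows: each web service is represented by a TF-IDF weight vector, and candidates are compared to a user's historical services with cosine similarity. The vectors here are toy values, not the paper's data, and the full DivMTID pipeline (score forecasting and diversification) is not reproduced.

```java
// Sketch of the cosine-similarity step in a TF-IDF-based recommender:
// web services are weight vectors; higher cosine means more similar.
public class ServiceSimilarity {
    public static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        // zero vectors have no direction, so define similarity as 0
        return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

Ranking candidates by this score, then re-ordering for diversity, is the shape of the top-K list construction the abstract describes.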


2008 ◽  
pp. 257-297 ◽  
Author(s):  
Asif Akram ◽  
Rob Allen ◽  
Sanjay Chaudhary ◽  
Prateek Jain ◽  
Zakir Laliwala

This chapter presents a case study based on a distributed market. The requirements of this Grid business process are more demanding than those of a typical business process deployed within a single organization or enterprise. Recently, different specifications built on top of Web service standards have originated from the Grid paradigm to address the limitations of stateless Web services. These emerging specifications are evaluated in the first part of the chapter to capture the requirements of a dynamic business process, i.e., a business process Grid. In the second part of the chapter, a case study with different use cases is presented to simulate various scenarios. The abstract discussion and requirements of the case study are followed by the actual implementation, which is meant as a proof of concept rather than a fully functional application.


2019 ◽  
Vol 9 (1) ◽  
pp. 134-138
Author(s):  
Ford Lumban Gaol ◽  
Rudy Fridian

This research compares the performance of a Loan Approval System web service implemented with a SOAP Web Service and with a REST Web Service. Three Quality of Service parameters are used in this study: throughput, response time, and latency. Four different services are tested to obtain the quality-of-service results: Installment Services, Customer Services, Blacklist Customer Services, and Account Services. The analysis showed a significant difference in Quality of Service between the SOAP and REST implementations of the Loan Approval System web services. The results lead to the conclusion that a REST web service is more appropriate for the integration between the Loan Approval System and the core system.
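The three QoS parameters the study measures can be derived from per-request timings. A minimal sketch, assuming durations are sampled in milliseconds; the numbers and method names are illustrative, not the study's instrumentation.

```java
// Sketch of deriving two of the study's QoS parameters from raw samples:
// response time as the mean per-request duration, and throughput as
// completed requests per second over the measurement window.
public class QosMetrics {
    public static double averageResponseTimeMs(long[] durationsMs) {
        long sum = 0;
        for (long d : durationsMs) sum += d;
        return (double) sum / durationsMs.length;
    }

    // throughput = completed requests per second over the whole window
    public static double throughputPerSec(long[] durationsMs, long windowMs) {
        return durationsMs.length * 1000.0 / windowMs;
    }
}
```

Latency would be measured the same way but from request dispatch to first byte received, rather than to full completion.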


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1597
Author(s):  
Abdessalam Messiaid ◽  
Farid Mokhati ◽  
Rohallah Benaboud ◽  
Hajer Salem

Service-oriented architecture provides the ability to combine several web services to fulfil a user-specific requirement. In dynamic environments, unforeseen events can destabilize a composite web service (CWS) and affect its quality. To deal with these issues, the composite web service must be dynamically reconfigured. Dynamic reconfiguration can be enhanced by avoiding the invocation of degraded web services, which requires predicting the QoS of candidate web services. In this paper, we propose a dynamic reconfiguration method based on HMM (Hidden Markov Model) states to predict imminent QoS degradation and prevent the invocation of partner web services with degraded QoS values. PSO (Particle Swarm Optimization) and SFLA (Shuffled Frog Leaping Algorithm) are used to improve the prediction efficiency of the HMM. Extensive experiments on the real-world WS-Dream dataset demonstrate that the proposed approach achieves better prediction accuracy. Moreover, a case study reveals that the proposed approach outperforms several state-of-the-art methods in terms of execution time.
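A heavily simplified sketch of the Markov-state idea behind such prediction: QoS is discretized into states (here 0 = healthy, 1 = degraded) and the most probable next state is read off a transition matrix. The full HMM with hidden states and PSO/SFLA parameter tuning is not reproduced; the matrix values are made up.

```java
// Simplified Markov-chain sketch of QoS-state prediction: given the
// current discrete QoS state, pick the next state with the highest
// learned transition probability. A reconfiguration manager would avoid
// invoking a partner service whose predicted next state is "degraded".
public class QosStatePredictor {
    public static int predictNextState(double[][] transition, int current) {
        int best = 0;
        for (int s = 1; s < transition[current].length; s++) {
            if (transition[current][s] > transition[current][best]) best = s;
        }
        return best;
    }
}
```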


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 82
Author(s):  
Hassan Tarawneh ◽  
Issam Alhadid ◽  
Sufian Khwaldeh ◽  
Suha Afaneh

Web service composition allows developers to create and deploy applications that take advantage of the capabilities of service-oriented computing. Such applications offer developers reusability opportunities as well as seamless access to a wide range of services that perform simple and complex tasks to meet clients' requests in accordance with service-level agreement (SLA) requirements. Web service composition has been addressed as a significant area of research concerned with selecting the right web services to provide the expected quality of service (QoS) and attain the clients' SLA. The proposed model enhances web service selection and composition by minimizing the number of integrated web services, using Multistage Forward Search (MSF). In addition, the proposed model uses the Spider Monkey Optimization (SMO) algorithm, which improves the provided services with regard to the symmetry and variations fundamental to service composition methods. It does so by minimizing the response time of the service compositions, employing a load balancer to distribute the workload, and finding the right balance between the Virtual Machines' (VM) resources, processing capacity, and service composition capabilities. Furthermore, it enhances the resource utilization of web services and optimizes resource reusability effectively and efficiently. The experimental results are compared with the composition results of the Smart Multistage Forward Search (SMFS) technique to demonstrate the superiority, robustness, and effectiveness of the proposed model. They show that the proposed SMO model decreases the service composition construction time by 40.4% compared to the composition time required by the SMFS technique, and that SMO increases the number of integrated web services in the service composition by 11.7% in comparison with the SMFS results.
In addition, the dynamic behavior of SMO improves the proposed model's throughput: the average number of requests that the service compositions processed successfully increased by 1.25% compared to the throughput of the SMFS technique. Furthermore, the proposed model decreases the service compositions' response time by 0.25 s, 0.69 s, and 5.35 s for the Excellent, Good, and Poor classes respectively, compared to the SMFS service composition response times for the same classes.
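The load-balancing step described above can be sketched as a least-loaded assignment: each incoming composition request goes to the VM with the lowest current load. This is a toy illustration of the workload-distribution idea only; the SMO optimization itself and the VM capacity model are not reproduced.

```java
// Toy sketch of a least-loaded balancer: given each VM's current load,
// return the index of the VM that should receive the next request.
public class LeastLoadBalancer {
    public static int pickVm(int[] currentLoad) {
        int best = 0;
        for (int i = 1; i < currentLoad.length; i++) {
            if (currentLoad[i] < currentLoad[best]) best = i;
        }
        return best;
    }
}
```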


2015 ◽  
Vol 3 (3) ◽  
pp. 57-68 ◽  
Author(s):  
Hiroki Takatsuka ◽  
Sachio Saiki ◽  
Shinsuke Matsumoto ◽  
Masahide Nakamura

Machine-to-Machine (M2M) systems and cloud services provide various kinds of data via distributed Web services. A context-aware service recognizes real-world contexts from such data and behaves autonomously. However, it has been challenging to manage contexts and services defined on heterogeneous and distributed Web services. In this paper, the authors propose a framework, called RuCAS, which systematically creates and manages context-aware services using various Web services. RuCAS describes every context-aware service by an ECA (Event-Condition-Action) rule: the event is a context triggering the service, the condition is a set of contexts to be satisfied for execution, and the action is a set of Web services to be executed by the service. Thus, every context-aware service is managed in a uniform manner. Since RuCAS is published as a Web service, created contexts and services are reusable. As a case study, RuCAS is applied to a real home network system.
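The ECA structure described above can be sketched as a small rule object: a triggering context, conditions that must all hold, and web-service actions to invoke. The field names and event strings are illustrative, not RuCAS's actual rule schema.

```java
import java.util.List;
import java.util.function.Supplier;

// Sketch of an Event-Condition-Action rule for a context-aware service:
// when the triggering context arrives and all condition contexts hold,
// every action (a web-service invocation in RuCAS's setting) is executed.
public class EcaRule {
    final String event;                       // context that triggers the rule
    final List<Supplier<Boolean>> conditions; // contexts that must be satisfied
    final List<Runnable> actions;             // web services to execute

    public EcaRule(String event, List<Supplier<Boolean>> conditions, List<Runnable> actions) {
        this.event = event;
        this.conditions = conditions;
        this.actions = actions;
    }

    // fire the rule for an incoming event; returns true if the actions ran
    public boolean onEvent(String incoming) {
        if (!event.equals(incoming)) return false;
        for (Supplier<Boolean> c : conditions) {
            if (!c.get()) return false;
        }
        actions.forEach(Runnable::run);
        return true;
    }
}
```

Because every service reduces to the same rule shape, a registry of such rules can be managed (and exposed as a Web service) uniformly, which is the point the abstract makes.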


Author(s):  
Murat Gunestas ◽  
Duminda Wijesekera ◽  
Anoop Singhal

Web services are currently a preferred way to architect and provide complex services. This complexity arises due to the composition of new services by choreographing, orchestrating and dynamically invoking existing services. These compositions create service inter-dependencies that can be misused for monetary or other gains. When a misuse is reported, investigators have to navigate through a collection of logs to recreate the attack. In order to facilitate that task, the authors propose creating forensic web services (FWS), a specialized web service that when used would securely maintain transactional records between other web services. These secure records can be re-linked to reproduce the transactional history by an independent agency. Although their work is ongoing, they show the necessary components of a forensic framework for web services and its success through a case study.
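One way to make such transactional records re-linkable and tamper-evident is to hash-chain them: each record's hash covers its content plus the previous hash, so an independent agency can recompute the chain. This is a generic sketch of that idea, not the FWS record format or protocol.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Sketch of a hash-chained transactional log: appending a record stores
// SHA-256(previousHash + record), so altering any earlier record breaks
// verification of every later entry when the chain is recomputed.
public class ForensicLog {
    public final List<String> records = new ArrayList<>();
    public final List<String> hashes  = new ArrayList<>();

    static String sha256Hex(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void append(String record) {
        String prev = hashes.isEmpty() ? "" : hashes.get(hashes.size() - 1);
        records.add(record);
        hashes.add(sha256Hex(prev + record));
    }

    // recompute the whole chain; any tampered record makes this false
    public boolean verify() {
        String prev = "";
        for (int i = 0; i < records.size(); i++) {
            if (!hashes.get(i).equals(sha256Hex(prev + records.get(i)))) return false;
            prev = hashes.get(i);
        }
        return true;
    }
}
```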


Author(s):  
M. Comerio ◽  
F. De Paoli ◽  
S. Grega ◽  
A. Maurino ◽  
Carlo Batini

Web services are increasingly used as an effective means to create and streamline processes and collaborations among governments, businesses, and citizens. As the number of available web services is steadily increasing, there is a growing interest in providing methodologies that address the design of web services according to specific qualities of service (QoS) rather than functional descriptions only. This chapter presents WSMoD (Web Services MOdeling Design), a methodology that explicitly addresses this issue. Furthermore, it exploits general knowledge available on services, expressed by ontologies describing services, their qualities, and the context of use, to help the designer express service requirements in terms of design artifacts. Ontologies are used to acquire and specialize common knowledge among the entities involved in service design, and to check the consistency of the web service model with constraints defined by provider and customer requirements. To improve the effectiveness of the process, the authors propose a Platform Independent Model that includes the description of the specific context of service provision, without considering implementation details. The discussion of a QoS-based web service design within a real case study bears evidence of the potential of WSMoD.


2013 ◽  
Vol 765-767 ◽  
pp. 912-915
Author(s):  
Bao Yu ◽  
Jun Xiao ◽  
Ying Wang

As there is no convenient, customizable XBRL data service on the present market, in this article we design an XBRL Data Service Platform from the perspective that a computer application should be able to obtain the needed XBRL data conveniently. The platform first parses the XBRL taxonomy and instance, and then stores the XBRL data in a combination of a relational database and the file system. The platform is designed in a 3-tier architecture to provide the data service. The Data Access Tier and Model Tier query and encapsulate the XBRL data; the Business Logic Tier completes the main logical work, invoking the Data Access Tier to get the data, then assembling and serializing the data and returning the result to the View Tier; the View Tier exposes the API list to the public as a web service, so users may invoke the interface from an application. Testing shows that the XBRL Data Service Platform's interface is simple and well defined, and that its capability to retrieve XBRL data is powerful. With the platform, users can obtain both specific XBRL data segments and complete XBRL files as needed. Furthermore, all data requests are completed with a short response time.


Author(s):  
Richi Nayak

The business needs, the availability of huge volumes of data, and the continuous evolution of Web services functions drive the need to apply data mining in the Web service domain. This article recommends several data mining applications that can address problems concerning the discovery and monitoring of Web services. It then presents a case study on applying the clustering data mining technique to Web service usage data to improve the Web service discovery process. The article also discusses the challenges that arise when applying data mining to Web services usage data and abstract information.

