A Web Service-based Grid Portal for Edgebreaker Compression

2005 ◽  
Vol 44 (02) ◽  
pp. 233-238 ◽  
Author(s):  
M. C. Barba ◽  
E. Blasi ◽  
M. Cafaro ◽  
S. Fiore ◽  
M. Mirto ◽  
...  

Summary Background: In health applications, and elsewhere, 3D data sets are increasingly accessed through the Internet. To reduce the transfer time while keeping the 3D model unaltered, adequate compression and decompression techniques are needed. Recently, Grid technologies have been integrated with Web Services technologies to provide a framework for interoperable application-to-application interaction. Objectives: The paper describes an implementation of the Edgebreaker compression technique exploiting web services technology and presents a novel approach for using such services in a Grid portal. The Grid portal, developed at the CACT/ISUFI of the University of Lecce, allows the processing and delivery of biomedical images (CT, computerized tomography, and MRI, magnetic resonance imaging) in a distributed environment, using the power and security of computational Grids. Methods: The Edgebreaker Compression Web Service has been deployed on a Grid portal and allows compressing and decompressing 3D data sets using the Globus Toolkit GSI (Grid Security Infrastructure) protocol. Moreover, the classical algorithm has been modified to extend compression to files containing more than one object. Results and Conclusions: An implementation of the Edgebreaker compression technique and related experimental results are presented. A novel approach for using the compression web service in a Grid portal is also described; it allows the storage and preprocessing of huge 3D data sets and the subsequent efficient transmission of results for remote visualization.
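
The abstract does not spell out the Edgebreaker algorithm itself; as background, it encodes a triangle-mesh traversal as a stream of C, L, E, R, S op-codes, and the textbook binary code for these symbols compresses the connectivity to roughly two bits per triangle. The following Java sketch illustrates only that op-code packing, under the textbook symbol-to-code mapping; it is not the portal's actual implementation:

    import java.util.BitSet;

    public class ClersEncoder {
        // Textbook Edgebreaker code: C = 0, S = 100, R = 101, L = 110, E = 111.
        private static String codeFor(char symbol) {
            switch (symbol) {
                case 'C': return "0";
                case 'S': return "100";
                case 'R': return "101";
                case 'L': return "110";
                case 'E': return "111";
                default: throw new IllegalArgumentException("Not a CLERS symbol: " + symbol);
            }
        }

        // Packs a CLERS op-code string (one symbol per triangle) into a bit stream.
        public static BitSet encode(String clers) {
            BitSet bits = new BitSet();
            int pos = 0;
            for (char symbol : clers.toCharArray()) {
                for (char bit : codeFor(symbol).toCharArray()) {
                    if (bit == '1') bits.set(pos);
                    pos++;
                }
            }
            return bits;
        }

        public static void main(String[] args) {
            // "CCRRRSE" is a toy traversal; a real stream comes from walking the mesh.
            System.out.println(encode("CCRRRSE"));
        }
    }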

Author(s):  
Dr. Manish L Jivtode

Web services are applications that allow communication between devices over the Internet independently of the underlying technology. They are built on the standardized eXtensible Markup Language (XML) for information exchange: a client or user invokes a web service by sending an XML message and gets back an XML response message. A number of web service protocols and description languages use the XML format, such as the Web Services Flow Language (WSFL) and the Blocks Extensible Exchange Protocol (BEEP). The Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) are widely used options for accessing web services. The two are not directly comparable: SOAP is a communications protocol, while REST is a set of architectural principles for data transmission. In this paper, data sizes of 1 KB, 2 KB, 4 KB, 8 KB and 16 KB were each tested for audio and video, and results were obtained for the CRUD methods. The encryption and decryption timings, in milliseconds/seconds, were recorded by programming extensibility points of a WCF REST web service in the Azure cloud.
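
As a rough client-side illustration of the kind of measurement described (the service URI is a placeholder, and the paper's own WCF extensibility-point instrumentation is not reproduced here), a REST round trip for each payload size can be timed in Java as follows:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestTiming {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Placeholder URI; the paper's WCF REST service ran in the Azure cloud.
            URI uri = URI.create("https://example.azurewebsites.net/api/items");
            for (int kb : new int[] {1, 2, 4, 8, 16}) {
                byte[] payload = new byte[kb * 1024]; // dummy audio/video payload
                HttpRequest put = HttpRequest.newBuilder(uri)
                        .header("Content-Type", "application/octet-stream")
                        .PUT(HttpRequest.BodyPublishers.ofByteArray(payload))
                        .build();
                long start = System.nanoTime();
                HttpResponse<String> rsp =
                        client.send(put, HttpResponse.BodyHandlers.ofString());
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("PUT %2d KB -> HTTP %d in %d ms%n",
                        kb, rsp.statusCode(), ms);
            }
        }
    }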


Commercial-off-the-shelf (COTS) Simulation Packages (CSPs) are widely used in industry, primarily due to the economic factors associated with developing proprietary software platforms. Despite their widespread use, CSPs have yet to operate across organizational boundaries. The limited reuse and interoperability of CSPs are affected by the same semantic issues that restrict the inter-organizational use of software components and web services. Current representations of Web components are predominantly syntactic in nature, lacking the fundamental semantic underpinning required to support discovery on the emerging Semantic Web. The authors present new research that partially alleviates the problem of limited semantic reuse and interoperability of simulation components in CSPs. Semantic models, in the form of ontologies, utilized by the authors' Web service discovery and deployment architecture, provide one approach to supporting simulation model reuse. Semantic interoperation is achieved through a simulation component ontology that is used to identify required components at varying levels of granularity (including both abstract and specialized components). Selected simulation components are loaded into a CSP, modified according to the requirements of the new model, and executed. The research presented here is based on the development of an ontology, connector software, and a Web service discovery architecture. The ontology is extracted from example simulation scenarios involving airport, restaurant and kitchen service suppliers. The ontology engineering framework and discovery architecture provide a novel approach to inter-organizational simulation by adopting a less intrusive interface between participants. Although specific to CSPs, this work has wider implications for the simulation community: the community as a whole stands to benefit from an increased awareness of the state of the art in Software Engineering (for example, ontology-supported component discovery and reuse, and service-oriented computing), and it is expected that this will eventually lead to a distinctive Software Engineering-inspired methodology for building simulations in the future.
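
By way of illustration only (the class and property names below are invented, not the authors' ontology), discovery over a simulation-component ontology could be implemented as a SPARQL query against an RDF model, for instance with Apache Jena:

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class ComponentDiscovery {
        public static void main(String[] args) {
            // Load the simulation-component ontology (file name is a placeholder).
            Model model = ModelFactory.createDefaultModel();
            model.read("simulation-components.owl");

            // Hypothetical vocabulary: find components providing a queueing service,
            // at whatever level of granularity the ontology models them.
            String query =
                "PREFIX sim: <http://example.org/sim#> " +
                "SELECT ?component WHERE { " +
                "  ?component a sim:SimulationComponent ; " +
                "             sim:providesService sim:QueueingService . " +
                "}";

            try (QueryExecution exec = QueryExecutionFactory.create(query, model)) {
                ResultSet results = exec.execSelect();
                while (results.hasNext()) {
                    System.out.println(results.next().getResource("component"));
                }
            }
        }
    }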


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Ying Jin ◽  
Guangming Cui ◽  
Yiwen Zhang

Service-oriented architecture (SOA) is widely used, which has fueled the rapid growth of Web services and the deployment of tremendous numbers of Web services over the last decades. Finding the proper Web service becomes challenging but crucial as the number of Web services grows. However, it is infeasible to inspect all Web services to check their quality values, since doing so would consume a lot of resources. Thus, developing effective and efficient approaches for predicting the quality values of Web services has become an important research issue. In this paper, we propose UIQPCA, a novel approach for hybrid User- and Item-based Quality Prediction with Covering Algorithm. UIQPCA integrates information on both users and Web services on the basis of users' assessments of the quality of co-invoked Web services. After this integration, users and Web services that are similar to the target user and the target Web service are selected. Then, considering the result of the integration, UIQPCA predicts how a target user will appraise a target Web service. Broad experiments on WS-DREAM, a Web service dataset that is widely used in the real world, are conducted to evaluate the reliability of UIQPCA. According to the experimental results, UIQPCA is substantially better than former approaches, including item-based, user-based, hybrid, and cluster-based approaches.
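
The abstract does not give UIQPCA's exact predictor; for orientation, hybrid user- and item-based quality prediction is conventionally written as a weighted blend of the two neighbourhood estimates, along these lines:

    \hat{q}_{u,s} = \lambda \left( \bar{q}_u + \frac{\sum_{v \in N(u)} \mathrm{sim}(u,v)\,(q_{v,s}-\bar{q}_v)}{\sum_{v \in N(u)} |\mathrm{sim}(u,v)|} \right)
                  + (1-\lambda) \left( \bar{q}_s + \frac{\sum_{t \in N(s)} \mathrm{sim}(s,t)\,(q_{u,t}-\bar{q}_t)}{\sum_{t \in N(s)} |\mathrm{sim}(s,t)|} \right),

where N(u) and N(s) are the similar users and services selected in the integration step, q_{v,s} is the quality value user v observed for service s, and \lambda weights the user-based against the item-based estimate.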


2005 ◽  
Vol 06 (03) ◽  
pp. 209-228 ◽  
Author(s):  
QUSAY H. MAHMOUD ◽  
WASSAM ZAHREDDINE

The modularity of web services has left an open problem in composition: a scenario that involves an amalgamation of two or more web services to fulfill a request that no single web service is able to satisfy. This paper presents a framework for adaptive and dynamic composition of web services, enabling web services to be discovered either statically or dynamically by utilizing a semantic ontology to describe web services and their methods. This novel approach gives greater control over how web services are dynamically discovered by allowing the application developer to specify how matches are made, which goes beyond the present techniques of semantically matching inputs and outputs along with classification taxonomies. We utilize Composite Capability/Preference Profiles (CC/PP) to adapt the interface and content to be compatible with virtually any device. A proof-of-concept implementation has been constructed that enables users of any device to dynamically discover context-based services that are then dynamically composed to satisfy the user's request. In addition, we have designed and implemented a UDDI-like registry to support context-based adaptive composition of web services. Existing web services can be easily adapted, and new web services can be effortlessly deployed.
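
The claim that the application developer specifies how matches are made suggests a pluggable matching strategy. A minimal Java sketch of that idea (the interface and type names are ours, not the framework's):

    import java.util.List;

    // Developer-supplied matching policy: the discovery layer delegates to it
    // instead of hard-coding input/output and taxonomy matching.
    interface MatchStrategy {
        boolean matches(ServiceDescription candidate, Request request);
    }

    record ServiceDescription(String name, List<String> inputs, List<String> outputs) {}
    record Request(List<String> provided, List<String> required) {}

    public class Discovery {
        public static List<ServiceDescription> discover(
                List<ServiceDescription> registry, Request req, MatchStrategy strategy) {
            return registry.stream().filter(s -> strategy.matches(s, req)).toList();
        }

        public static void main(String[] args) {
            // Example policy: a service's outputs must cover everything required.
            MatchStrategy coversOutputs = (s, r) -> s.outputs().containsAll(r.required());
            List<ServiceDescription> registry = List.of(
                    new ServiceDescription("Geocoder",
                            List.of("address"), List.of("latitude", "longitude")));
            Request req = new Request(List.of("address"), List.of("latitude"));
            System.out.println(discover(registry, req, coversOutputs));
        }
    }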


2013 ◽  
Vol 4 (1) ◽  
pp. 8-11
Author(s):  
Mrs. M. Akila Rani ◽  
Dr. D. Shanthi

Web mining is the application of data mining techniques to discover patterns from the Web. Web services define a set of standards such as WSDL (Web Service Description Language), SOAP (Simple Object Access Protocol) and UDDI (Universal Description, Discovery and Integration) to support service description, discovery and invocation in a uniform, interchangeable format between heterogeneous applications. Due to the huge number of Web services and the short content of WSDL descriptions, identifying the correct Web services becomes a time-consuming process and retrieves a vast amount of irrelevant Web services. This creates the need for an efficient Web service mining framework for Web service discovery. Discovery involves matching, assessment and selection. Various complex relationships may introduce incompatibilities in delivering and identifying efficient Web services, so the service requester may not obtain exactly the services that are useful. Research has therefore emerged on methods to improve the accuracy of Web service discovery and match the best services. Two approaches to Web service discovery are available: the semantic-based approach and the syntactic-based approach. The semantic-based approach gives higher accuracy than the syntactic approach but requires more processing time, whereas the syntactic-based approach has high flexibility. Thus, this paper presents a survey of semantic-based and syntactic-based approaches to Web service discovery and proposes a novel approach with better accuracy and flexibility than existing ones. Finally, it compares the existing approaches to Web service discovery.
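
To make the syntactic side of the distinction concrete (a generic illustration, not the paper's proposed approach), syntactic discovery typically reduces to term matching over the short WSDL descriptions, for example cosine similarity between term-frequency vectors:

    import java.util.HashMap;
    import java.util.Map;

    public class SyntacticMatch {
        // Term-frequency vector of a short WSDL description.
        static Map<String, Integer> tf(String text) {
            Map<String, Integer> counts = new HashMap<>();
            for (String term : text.toLowerCase().split("\\W+")) {
                if (!term.isEmpty()) counts.merge(term, 1, Integer::sum);
            }
            return counts;
        }

        // Cosine similarity between two term-frequency vectors.
        static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
            double dot = 0, na = 0, nb = 0;
            for (var e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
                na += e.getValue() * e.getValue();
            }
            for (int v : b.values()) nb += v * v;
            return dot == 0 ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        public static void main(String[] args) {
            double score = cosine(tf("returns current weather forecast for a city"),
                                  tf("weather forecast service by city name"));
            System.out.printf("similarity = %.3f%n", score); // term overlap only
        }
    }

Such purely lexical matching is what gives the syntactic approach its speed and flexibility; its blindness to synonyms is precisely the gap the semantic approach closes, at the cost of processing time.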


Author(s):  
Gaurav Raj ◽  
Manish Mahajan ◽  
Dheerendra Singh

In secure web application development, web services cannot retain their role if they are not trustworthy. Retaining customers is one of the major challenges for applications whose services are not reliable and trustworthy. This article proposes a trust evaluation and decision model in which the authors define an indirect attribute, trust, calculated from the available direct attributes in Quality of Web Service (QWS) data sets. Once trained with such evaluation and decision strategies, both developers and customers can use the knowledge to improve QoS. This research provides web-based learning about web service quality that can be utilized for the prediction, recommendation and selection of trusted web services from the pool of web services available globally. The authors include designs for making decisions about trustworthy web services based on classification, correlation, and curve fitting to improve trust in web service prediction. To empower the web service life cycle, they have developed a quality assessment model that incorporates a security and performance policy.
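
As a simplified illustration of deriving an indirect trust attribute from direct QoS attributes (the weights and normalization bounds below are invented for the example; the article builds its model with classification, correlation, and curve fitting), in Java:

    public class TrustScore {
        // Direct QWS-style attributes for one service.
        record Qos(double responseTimeMs, double availabilityPct, double throughput) {}

        // Min-max normalization to [0, 1]; 'lowerIsBetter' flips the scale.
        static double normalize(double v, double min, double max, boolean lowerIsBetter) {
            double n = (v - min) / (max - min);
            return lowerIsBetter ? 1.0 - n : n;
        }

        // Indirect attribute: a weighted sum of normalized direct attributes.
        static double trust(Qos q) {
            double rt = normalize(q.responseTimeMs(), 0, 2000, true);
            double av = normalize(q.availabilityPct(), 0, 100, false);
            double tp = normalize(q.throughput(), 0, 50, false);
            return 0.4 * rt + 0.4 * av + 0.2 * tp;
        }

        public static void main(String[] args) {
            System.out.printf("trust = %.2f%n", trust(new Qos(350, 97.5, 12)));
        }
    }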


2011 ◽  
pp. 604-622
Author(s):  
Taha Osman ◽  
Dhavalkumar Thakker ◽  
David Al-Dabass

With the rapid proliferation of Web services as the medium of choice for securely publishing application services beyond the firewall, accurate yet flexible matchmaking of similar services gains importance both for the human user and for dynamic composition engines. In this article, we present a novel approach that utilizes the case-based reasoning (CBR) methodology for modelling dynamic Web service discovery and matchmaking, and we investigate the use of case adaptation for service composition. Our framework considers Web service execution experiences in the decision-making process and is highly adaptable to the service requester's constraints. The framework also makes extensive use of OWL semantic descriptions for implementing both the components of the CBR engine and the matchmaking profiles of the Web services.
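
A compressed sketch of the CBR retrieve step (the case structure and similarity measure here are generic stand-ins, not the authors' OWL-based profiles):

    import java.util.Comparator;
    import java.util.List;

    public class CbrRetrieve {
        // A case pairs a past request profile with the service that satisfied it,
        // plus an outcome score drawn from the recorded execution experience.
        record Case(List<String> requestTerms, String serviceId, double outcome) {}

        // Jaccard similarity between the new request and a stored case.
        static double similarity(List<String> query, Case c) {
            long common = query.stream().filter(c.requestTerms()::contains).count();
            int union = query.size() + c.requestTerms().size() - (int) common;
            return union == 0 ? 0 : (double) common / union;
        }

        // Retrieve: rank cases by similarity weighted by past outcome.
        static Case best(List<Case> caseBase, List<String> query) {
            return caseBase.stream()
                    .max(Comparator.comparingDouble(c -> similarity(query, c) * c.outcome()))
                    .orElseThrow();
        }

        public static void main(String[] args) {
            List<Case> cases = List.of(
                    new Case(List.of("weather", "city"), "WeatherSvcA", 0.9),
                    new Case(List.of("stock", "quote"), "StockSvcB", 0.7));
            System.out.println(best(cases, List.of("city", "weather")).serviceId());
        }
    }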


Author(s):  
Douglas L. Dorset

The quantitative use of electron diffraction intensity data for the determination of crystal structures represents the pioneering achievement in the electron crystallography of organic molecules, an effort largely begun by B. K. Vainshtein and his co-workers. However, despite numerous representative structure analyses yielding results consistent with X-ray determination, this entire effort was viewed with considerable mistrust by many crystallographers. This was no doubt due to the rather high crystallographic R-factors reported for some structures and, more importantly, the failure to convince many skeptics that the measured intensity data were adequate for ab initio structure determinations.

We have recently demonstrated the utility of these data sets for structure analyses by direct phase determination based on the probabilistic estimate of three- and four-phase structure invariant sums. Examples include the structure of diketopiperazine using Vainshtein's 3D data, a similar 3D analysis of the room temperature structure of thiourea, and a zonal determination of the urea structure, the latter also based on data collected by the Moscow group.
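
For readers outside the field, the three-phase (triplet) structure invariant underlying such direct phase determination has the standard form

    \Phi_{\mathbf{h}\mathbf{k}} = \varphi_{\mathbf{h}} + \varphi_{\mathbf{k}} + \varphi_{-\mathbf{h}-\mathbf{k}},
    \qquad
    P(\Phi_{\mathbf{h}\mathbf{k}}) \propto \exp\!\left( \frac{2\,|E_{\mathbf{h}} E_{\mathbf{k}} E_{\mathbf{h}+\mathbf{k}}|}{\sqrt{N}} \cos \Phi_{\mathbf{h}\mathbf{k}} \right)

(the Cochran estimate for a structure of N equal atoms), so triplets built from large normalized structure-factor magnitudes |E| are probably near zero; the four-phase (quartet) sums are estimated analogously.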


2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary: Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. Patients, material and method: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where needed, the time for manual realignment after an automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the other 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
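
The entropy-minimizing criterion referred to is, in its standard intensity-based form (the workstation's exact variant is not specified in the abstract), the joint entropy of the two images, or equivalently their mutual information:

    H(A,B) = -\sum_{a,b} p_{AB}(a,b) \log p_{AB}(a,b),
    \qquad
    MI(A,B) = H(A) + H(B) - H(A,B),

where p_{AB} is the joint intensity histogram of the MRI and SPECT volumes under the current alignment; registration searches for the rigid transform that minimizes H(A,B) (equivalently, maximizes MI).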


Author(s):  
Mustapha Mohammed Baua'a

Read/write operations are considered among the most significant characteristics of an I/O file system, and many researchers focus their work on decreasing the response time of I/O read/write operations. Most articles, however, concentrate on reading/writing the contents of a file in parallel. In this paper, the author considers parallelizing the read/write of whole files, not only of their contents. A case study is used to make the idea clearer: it compares two techniques for uploading/downloading files via a Web service. In the first, traditional technique the files are uploaded and downloaded serially, while in the second they are uploaded/downloaded using Java threads to achieve parallelism. Java NetBeans 8.0.2 was used as the programming environment to implement file download/upload through Web services. Validation results, produced using the MATLAB platform as a benchmark, are also presented. The resulting figures clearly show that the second technique achieves better response times than the traditional serial approach.
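
A minimal sketch of the second technique (the URL and file names are placeholders; the paper's NetBeans Web-service code is not reproduced here): each whole-file transfer runs on its own thread, so files move in parallel rather than serially:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelDownload {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            List<String> files = List.of("scan1.dat", "scan2.dat", "scan3.dat");
            ExecutorService pool = Executors.newFixedThreadPool(files.size());

            long start = System.nanoTime();
            for (String name : files) {
                pool.submit(() -> { // one thread per whole-file transfer
                    HttpRequest req = HttpRequest.newBuilder(
                            URI.create("https://example.org/files/" + name)).build();
                    HttpResponse<Path> rsp = client.send(req,
                            HttpResponse.BodyHandlers.ofFile(Path.of(name)));
                    System.out.println(name + " -> HTTP " + rsp.statusCode());
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.printf("total %d ms%n", (System.nanoTime() - start) / 1_000_000);
        }
    }

Running the same loop on a single-threaded executor gives the serial baseline that the paper's traditional technique corresponds to.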

