Large-scale Performance Evaluation of e-Homecare Architectures Using the WS-NS Simulator

2011 ◽  
Vol 50 (05) ◽  
pp. 408-419 ◽  
Author(s):  
B. Volckaert ◽  
B. Dhoedt ◽  
F. De Turck ◽  
S. Van Hoecke

Summary

Background: E-homecare creates opportunities to provide care faster, at lower cost, and with higher levels of convenience for patients. Because e-homecare services are time-critical, stringent requirements are imposed on total response time and reliability, which in turn requires a characterization of their network load and usage behavior. However, it is usually hard to build testbeds at a realistic scale in order to evaluate large-scale e-homecare applications.

Objective: This paper describes the design and evaluation of the Network Simulator for Web Services (WS-NS), an NS2-based simulator capable of accurately modeling service-oriented architectures, which can be used to evaluate the performance of e-homecare architectures.

Methods: WS-NS is applied to the Coplintho e-homecare use case, based on the results of a field-trial prototype targeting diabetes and multiple sclerosis patients. Network-unaware and network-aware service selection algorithms are presented and their performance is tested.

Results: The results show that suboptimal decisions can be made when the service that executes a request is selected solely on the basis of the service's properties and status. Taking the network links interconnecting the services into account leads to better selection strategies. Based on these results, the e-homecare broker is optimized from a centralized design to a hierarchical, region-based design, resulting in a significant decrease in average response times.

Conclusions: The WS-NS simulator can be used to analyze the load and response times of large-scale e-homecare architectures. Optimizing the e-homecare architecture of the Coplintho project reduced network overhead and lowered response times by more than 45%.
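The contrast between the two selection strategies can be sketched as follows: a network-unaware selector minimizes processing time alone, while a network-aware one also accounts for the latency of the link to each service. All service names, latencies, and record fields below are illustrative assumptions, not values from WS-NS.

```python
def select_network_unaware(services):
    """Pick the service with the lowest processing time alone."""
    return min(services, key=lambda s: s["processing_ms"])

def select_network_aware(services, link_latency_ms):
    """Pick the service minimizing processing time plus link latency."""
    return min(services, key=lambda s: s["processing_ms"] + link_latency_ms[s["name"]])

services = [
    {"name": "service_a", "processing_ms": 20},  # fast server behind a congested link
    {"name": "service_b", "processing_ms": 25},  # slightly slower, but on a nearby link
]
latency = {"service_a": 40, "service_b": 5}

unaware = select_network_unaware(services)["name"]
aware = select_network_aware(services, latency)["name"]
print(unaware, aware)  # service_a service_b
```

The network-unaware choice looks optimal in isolation but is slower end to end (60 ms vs. 30 ms), which is the kind of suboptimal decision the paper reports.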

2021 ◽  
Vol 15 (2) ◽  
pp. 1-25
Author(s):  
Amal Alhosban ◽  
Zaki Malik ◽  
Khayyam Hashmi ◽  
Brahim Medjahed ◽  
Hassan Al-Ababneh

Service-Oriented Architectures (SOA) enable the automatic creation of business applications from independently developed and deployed Web services. Because Web services are inherently unknown a priori, delivering reliable Web service compositions is a significant and challenging problem. Services involved in an SOA often do not operate under a single processing environment and need to communicate over a network using different protocols. Under such conditions, designing a fault management system that is both efficient and extensible is a challenging task. In this article, we propose SFSS, a self-healing framework for SOA fault management. SFSS predicts, identifies, and resolves faults in SOAs. In SFSS, we identified a set of high-level exception handling strategies based on the QoS performance of the different component services and the preferences articulated by the service consumers. Multiple recovery plans are generated and evaluated according to the performance of the selected component services, and the best recovery plan is executed. We assess the overall user dependence (i.e., the degree to which a service is independent of other services) using the generated plan and the available invocation information of the component services. The experimental results indicate the applicability of SFSS: the proposed technique enhances service selection quality by choosing the highest-scoring services and improves overall system performance in comparison to similar approaches.
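The plan-evaluation step can be illustrated as a scoring function over the QoS of each candidate recovery plan's component services, with the best-scoring plan selected for execution. The weights, field names, and score formula below are invented for illustration and are not the actual SFSS metric.

```python
def plan_score(plan, w_rel=0.6, w_time=0.4):
    """Score a recovery plan: reward reliability, penalize total response time."""
    reliability = 1.0
    total_ms = 0.0
    for svc in plan:
        reliability *= svc["reliability"]  # plan succeeds only if every service does
        total_ms += svc["resp_ms"]         # component services invoked sequentially
    return w_rel * reliability - w_time * (total_ms / 1000.0)

# Two hypothetical recovery plans for a failed component service.
plans = {
    "retry_same_service": [{"reliability": 0.90, "resp_ms": 120}],
    "substitute_service": [{"reliability": 0.99, "resp_ms": 150}],
}
best = max(plans, key=lambda name: plan_score(plans[name]))
print(best)  # substitute_service
```

Here the more reliable substitute wins despite its higher response time, because reliability carries the larger weight; tuning the weights encodes the consumer preferences mentioned in the abstract.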


Author(s):  
Neven Vrcek ◽  
Ivan Magdalenic

Many benefits from the implementation of e-business solutions are related to network effects, which means that many interconnected parties utilize the same or compatible technologies. The large-scale adoption of e-business practices in the public sector and in economies dominated by small and medium enterprises (SMEs) will succeed only if appropriate support is provided in the form of education, adequate legislation, directions, and open-source applications. This case study describes the adoption of e-business in the public sector and in SMEs using an integrated open-source approach called e-modules. An e-module is a model with process properties, data properties, and technology requirements; it therefore presents a holistic framework for the deployment of e-business solutions. Its structure mandates an approach that requires the reengineering of business processes and the adoption of strong standardization to solve interoperability issues. E-modules are based on the principles of service-oriented architectures, with guidelines for their introduction into business processes and their integration with ERP systems. Such an open-source approach enables compatible software solutions to spread across a given country, thus increasing e-business adoption. This paper presents a methodology for defining and building e-modules.


2015 ◽  
Vol 2015 ◽  
pp. 1-20 ◽  
Author(s):  
Xiao Song ◽  
Yulin Wu ◽  
Yaofei Ma ◽  
Yong Cui ◽  
Guanghong Gong

Big data technology has undergone rapid development and attained great success in the business field. Military simulation (MS) is another application domain producing massive datasets created by high-resolution models and large-scale simulations; it is used to study complicated problems such as weapon systems acquisition, combat analysis, and military training. This paper first reviews several large-scale military simulations producing big data (MS big data) for a variety of usages and summarizes the main characteristics of the resulting data. We then examine the technical details of the generation, collection, processing, and analysis of MS big data. Two frameworks are also surveyed to trace the development of the underlying software platforms. Finally, we identify some key challenges and propose a framework as a basis for future work; this framework considers both simulation and big data management, based on layered and service-oriented architectures. The objective of this review is to help interested researchers learn the key points of MS big data and to provide references for tackling the big data problem and performing further research.
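The collect-then-analyze stages mentioned above can be illustrated with a toy batch aggregation over simulation result logs, reducing raw engagement records to summary statistics. The record layout and field names are assumptions for illustration only, not a format from any surveyed system.

```python
from collections import defaultdict

def aggregate_hit_rates(records):
    """Reduce raw engagement records to a per-weapon-system hit rate."""
    shots = defaultdict(int)
    hits = defaultdict(int)
    for rec in records:
        shots[rec["weapon"]] += 1
        hits[rec["weapon"]] += rec["hit"]  # hit is 0 or 1 per engagement
    return {weapon: hits[weapon] / shots[weapon] for weapon in shots}

# A tiny stand-in for a large simulation result log.
records = [
    {"weapon": "sam", "hit": 1},
    {"weapon": "sam", "hit": 0},
    {"weapon": "aam", "hit": 1},
]
rates = aggregate_hit_rates(records)
print(rates)  # {'sam': 0.5, 'aam': 1.0}
```

At MS-big-data scale the same map-then-reduce shape would run on a distributed platform rather than in a single process, but the aggregation logic is unchanged.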


2010 ◽  
Vol 67 (8) ◽  
pp. 659-675 ◽  
Author(s):  
Daniel A. Menascé ◽  
Emiliano Casalicchio ◽  
Vinod Dubey

Author(s):  
HALUK DEMIRKAN ◽  
MICHAEL GOUL

Service orientation, coupled with dynamic choreography of business processes, service-oriented architectures, and service-oriented infrastructures, is an emerging structure that carries the potential to improve agility in today's complex business environments. But because the concept is new and only a limited number of large-scale organizations are ready or willing to be "early adopters," it is difficult to predict the organizational and technical impacts, understand the critical issues, or perform rigorous research on services computing. So how should a company begin assessing the real impacts of these service-orientation paradigm shifts? In this article, we establish an integrated assessment process for creating an organizational roadmap to realize visions of delivering reliable, scalable enterprise processes built upon services computing.


Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

This chapter introduces macroscopic views of distributed systems' components and their interrelations. The importance of architecture for understanding, designing, implementing, and maintaining distributed systems is presented first. Then the currently used architectures and their derivatives are analyzed. The presentation covers client-server architectures (with details on multi-tiered, REST, remote evaluation, and code-on-demand variants), hierarchical architectures (with insights into the protocol-oriented Grid architecture), service-oriented architectures including OGSA (Open Grid Services Architecture), cloud, cluster, and peer-to-peer architectures (with their hierarchical, decentralized, distributed, and event-based integration variants). Because of the relation between architectures and the application categories they support, the chapter's structure is similar to that of Chapter 1; the focus, however, is different. In the current chapter, the model, advantages, disadvantages, and areas of applicability are presented for each architecture. The chapter also includes concrete cases of use (namely, actual distributed systems and platforms) and clarifies the relation between an architecture and the enabling technology used in its instantiation. Finally, Chapter 2 frames the discussion in the other chapters, which refer to specific components and services for large-scale distributed systems.


Author(s):  
Arcot Rajasekar ◽  
Mike Wan ◽  
Reagan Moore ◽  
Wayne Schroeder

Service-oriented architectures (SOA) enable the orchestration of loosely coupled, interoperable functional software units to develop and execute complex but agile applications. Data management on a distributed data grid can be viewed as a set of operations performed across all stages of a data object's life cycle. The set of such operations depends on the type of object, based on its physical and discipline-centric characteristics. In this chapter, the authors define server-side functions, called micro-services, which are orchestrated into conditional workflows to achieve large-scale data management specific to collections of data. Micro-services communicate with each other through parameter exchange, in-memory data structures, a database-based persistent information store, and a network messaging system that uses a serialization protocol to communicate with remote micro-services. The orchestration of the workflow is done by a distributed rule engine that chains and executes the workflows and maintains transactional properties through recovery micro-services. The authors discuss the micro-service-oriented architecture, compare the micro-service approach with traditional SOA, and describe the use of micro-services for implementing policy-based data management systems.
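The transactional chaining described above, where a rule engine executes a workflow and invokes recovery micro-services when a step fails, can be sketched minimally as follows. All names and the step/recovery pairing are illustrative; this is not the actual rule-engine interface.

```python
def failing_step():
    raise RuntimeError("remote micro-service unavailable")

def run_workflow(steps):
    """Each step is (action, recovery). On failure, undo completed steps in reverse."""
    completed = []
    try:
        for action, recovery in steps:
            action()
            completed.append(recovery)
    except Exception:
        for recovery in reversed(completed):  # restore transactional consistency
            recovery()
        return False
    return True

log = []
steps = [
    (lambda: log.append("copy"), lambda: log.append("undo_copy")),
    (failing_step, lambda: log.append("undo_noop")),
]
ok = run_workflow(steps)
print(ok, log)  # False ['copy', 'undo_copy']
```

Only recoveries for steps that actually completed are run, in reverse order, mirroring how recovery micro-services maintain transactional properties for a partially executed workflow.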


Author(s):  
Vinod K. Dubey ◽  
Daniel A. Menascé

The use of Service Oriented Architectures (SOA) enables the existence of a market of service providers delivering functionally equivalent services at different Quality of Service (QoS) and cost levels. The QoS of composite applications can typically be described in terms of metrics such as response time, availability, and throughput of the services that compose the application. A global utility function of the various QoS metrics is the objective function used to determine a near-optimal selection of service providers that support the composite application. This chapter describes the architecture of a QoS Broker that manages the performance of composite applications. The broker continually monitors the utility of the applications and triggers a new service selection when the utility falls below a pre-established threshold or when a service provider fails. A proof-of-concept prototype of the QoS broker demonstrates how it maintains the average utility of the composite application above the threshold in spite of service provider failures and performance degradation.
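A minimal sketch of the broker's re-selection trigger, assuming a simple weighted global utility over response time, availability, and normalized throughput; the weights, scaling, and threshold below are invented for illustration and are not the paper's actual utility function.

```python
def utility(qos, weights=(0.4, 0.4, 0.2)):
    """Weighted global utility over response time, availability, and throughput."""
    resp_score = 1.0 / (1.0 + qos["resp_ms"] / 100.0)  # maps [0, inf) ms into (0, 1]
    return (weights[0] * resp_score
            + weights[1] * qos["availability"]
            + weights[2] * qos["throughput_norm"])

UTILITY_THRESHOLD = 0.6  # pre-established threshold assumed for the sketch

def needs_reselection(qos):
    """Trigger a new service selection when utility falls below the threshold."""
    return utility(qos) < UTILITY_THRESHOLD

healthy = {"resp_ms": 50, "availability": 0.99, "throughput_norm": 0.8}
degraded = {"resp_ms": 900, "availability": 0.70, "throughput_norm": 0.3}
print(needs_reselection(healthy), needs_reselection(degraded))  # False True
```

In the broker described above this check would run continually against monitored QoS, so a provider failure or performance degradation drives the utility under the threshold and triggers a fresh near-optimal selection.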

