Development of Distributed Systems from Design to Application and Maintenance
Latest Publications

Total documents: 19 (five years: 0)
H-index: 1 (five years: 0)
Published by IGI Global
ISBN: 9781466626478, 9781466626782

Author(s):  
Heiko Thimm ◽  
Karsten Boye Rasmussen

Well-informed network participants are a necessity for successful collaboration in business networks. Widespread knowledge of the many aspects of the network is an effective vehicle to promote trust within the network, resolve conflicts successfully, and build a prospering collaboration climate. Despite the participants' natural interest in being well informed about all the different aspects of the network, limited resources, e.g. time restrictions, often prevent reaching an appropriate level of shared information. This problem can be overcome through an active information provisioning service that allows users to adapt the provisioning of information to their specific needs. This paper presents an extensible information modeling framework, together with complementary concepts, designed to enable such an active provisioning service. Furthermore, a high-level architecture for a system that offers the targeted information provisioning service is described.
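The idea of user-adaptable active provisioning can be illustrated with a minimal sketch. All names (`Subscription`, `ProvisioningService`, the topic labels) are hypothetical, not the paper's framework; the point is that each participant declares what to receive and how much, and the service pushes only matching items:

```python
from dataclasses import dataclass, field

@dataclass
class Subscription:
    """A participant's provisioning profile: which topics, how many items."""
    topics: set
    max_items: int = 5  # respects the participant's limited time budget

@dataclass
class ProvisioningService:
    subscriptions: dict = field(default_factory=dict)  # user -> Subscription

    def subscribe(self, user, topics, max_items=5):
        self.subscriptions[user] = Subscription(set(topics), max_items)

    def provision(self, user, network_events):
        """Actively push only the events matching the user's profile."""
        sub = self.subscriptions.get(user)
        if sub is None:
            return []
        matched = [e for e in network_events if e["topic"] in sub.topics]
        return matched[:sub.max_items]

svc = ProvisioningService()
svc.subscribe("alice", {"conflict", "trust"}, max_items=2)
events = [{"topic": "conflict", "msg": "order dispute"},
          {"topic": "logistics", "msg": "delivery delay"},
          {"topic": "trust", "msg": "new partner rating"}]
print(svc.provision("alice", events))
```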


Author(s):  
Zahid Raza ◽  
Deo P. Vidyarthi

The Computational Grid, characterized by distributed load sharing, has evolved into a platform for large-scale problem solving. A grid is a collection of heterogeneous resources, offering services of varying natures, in which jobs are submitted to any of the participating nodes. Scheduling these jobs in such a complex and dynamic environment poses many challenges. Reliability analysis of the grid gains paramount importance because the grid involves a large number of resources which may fail at any time, making it unreliable. These failures waste both computational power and money on the scarce grid resources. It is normally desired that a job be scheduled in an environment that ensures maximum reliability for its execution. This work presents a reliability-based scheduling model for jobs on the computational grid. The model considers the failure rates of both the software and hardware grid constituents, such as the application demanding execution, the nodes executing the job, and the network links supporting data exchange between the nodes. Job allocation using the proposed scheme becomes trustworthy, as it schedules the job based on an a priori reliability computation.
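An a priori reliability computation of this kind can be sketched as follows, assuming (as is common in reliability modeling, though not necessarily the paper's exact formulation) exponential failure models for the node, the links, and the application, with the scheduler picking the node of highest combined reliability:

```python
import math

def job_reliability(node, job_time, link_failure_rates, app_failure_rate):
    """A priori reliability of running a job on `node`: the product of the
    exponential reliabilities exp(-rate * time) of the node, the application,
    and each network link used (failure rates per unit time; illustrative)."""
    r = math.exp(-node["failure_rate"] * job_time)
    r *= math.exp(-app_failure_rate * job_time)
    for lam in link_failure_rates:
        r *= math.exp(-lam * job_time)
    return r

def schedule(nodes, job_time, app_failure_rate):
    """Pick the node whose a priori reliability is highest."""
    return max(nodes, key=lambda n: job_reliability(
        n, job_time, n["links"], app_failure_rate))

nodes = [
    {"name": "n1", "failure_rate": 0.01,  "links": [0.005]},
    {"name": "n2", "failure_rate": 0.002, "links": [0.001, 0.001]},
]
best = schedule(nodes, job_time=10.0, app_failure_rate=0.001)
print(best["name"])  # n2: lower combined failure rate wins
```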


Author(s):  
Sarsij Tripathi ◽  
Rama Shankar Yadav ◽  
Ranvijay ◽  
Rajib L. Jana

The world has become a global village. Today, applications are developed that require sharing resources dispersed geographically to fulfill the needs of users. In most cases these applications turn out to be time-bound, leading to the Real-Time Distributed System (RTDS). Online banking, online multimedia applications, real-time databases, and missile tracking systems are some examples of these types of applications. These applications face many challenges in the present scenario, particularly in resource management, load balancing, security, and deadlock. The heterogeneous nature of the system exacerbates these challenges. This paper provides a broad survey of research work reported in RTDS. The review covers work done in the fields of resource management, load balancing, deadlock, and security. The challenges involved in tackling these issues are presented and future directions are discussed.


Author(s):  
Stijn Dekeyser ◽  
Jan Hidders

Collaboration on documents has been supported for several decades through a variety of systems and tools; recently, a renewed interest is apparent in the appearance of new collaborative editors and applications. Some distributed groupware systems are plug-ins for standalone word processors, while others have a purely web-based existence. Most exemplars of the new breed of systems are based on operational transformation, although some use traditional version management tools and still others utilize document-level locking techniques. All existing techniques have their drawbacks, creating opportunities for new methods. The authors present a novel collaborative technique for documents based on transactions, schedulers, conflicts, and locks. It is not meant to replace existing techniques; rather, it can be used in specific situations where a strict form of concurrency control is required. While the presentation in this article is highly formal, with an emphasis on proving desirable properties such as guaranteed correctness, the work is part of a project which aims to fully implement the technique.
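The flavor of lock-based concurrency control for documents can be conveyed with a small sketch. This is an illustrative toy, not the authors' formal model: transactions must acquire a lock on a document part before editing, conflicting requests are refused, and commit releases the transaction's locks:

```python
class LockScheduler:
    """Toy document-part lock scheduler: a transaction must hold a lock on
    a part before editing it; conflicting requests are refused until the
    holder commits."""
    def __init__(self):
        self.locks = {}  # part -> transaction id

    def acquire(self, txn, part):
        holder = self.locks.get(part)
        if holder is not None and holder != txn:
            return False  # conflict: another transaction holds the lock
        self.locks[part] = txn
        return True

    def commit(self, txn):
        """Release all locks held by the committing transaction."""
        self.locks = {p: t for p, t in self.locks.items() if t != txn}

s = LockScheduler()
ok1 = s.acquire("T1", "section-2")   # granted
ok2 = s.acquire("T2", "section-2")   # refused: conflict detected
s.commit("T1")
ok3 = s.acquire("T2", "section-2")   # granted after T1's commit
print(ok1, ok2, ok3)
```

The strictness visible here (a writer blocks all concurrent writers on the same part) is exactly the property that distinguishes locking from operational transformation, which instead lets concurrent edits proceed and reconciles them afterwards.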


Author(s):  
José G. Hernández Ramírez ◽  
María J. García García ◽  
Gilberto J. Hernández García

An easy-to-apply multi-criteria technique is the Matrixes Of Weighing (MOW), but many of the professionals who use it in their respective fields do so in an intuitive fashion. Applications are rarely reported in the specialized literature, which explains why few references about them exist. One application area for MOW is the handling of catastrophes, in particular the pre-catastrophe and post-catastrophe phases, where a series of problems are usually handled whose solution leads to a choice, which can be made using multi-criteria techniques. The objective of this investigation is to present the MOW with multiplicative factors and to show its application in the pre-catastrophe phase, when choosing possible shelters, and in the post-catastrophe phase, by helping to prioritize which infrastructures to recover after a catastrophe.
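One plausible reading of a weighing matrix with multiplicative factors can be sketched as follows: each alternative gets a weighted sum of its criterion ratings, then scaled by one or more multiplicative factors. The criteria, weights, and shelter data below are illustrative inventions, not taken from the paper:

```python
def mow_score(ratings, weights, factors):
    """Weighing-matrix score with multiplicative factors (one plausible
    reading of MOW): a weighted sum of criterion ratings, scaled by
    multiplicative factors such as a flood-risk penalty."""
    base = sum(w * r for w, r in zip(weights, ratings))
    for f in factors:
        base *= f
    return base

# Choosing a shelter in the pre-catastrophe phase (illustrative data).
weights = [0.5, 0.3, 0.2]            # capacity, safety, access
shelters = {
    "school":  ([8, 7, 9], [1.0]),   # no penalty factor
    "stadium": ([9, 6, 5], [0.8]),   # flood-prone: 0.8 multiplier
}
ranked = sorted(shelters,
                key=lambda s: mow_score(*shelters[s], weights=weights)
                if False else mow_score(shelters[s][0], weights, shelters[s][1]),
                reverse=True)
print(ranked[0])  # school
```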


Author(s):  
Thomas Moser ◽  
Stefan Biffl ◽  
Wikan Danar Sunindyo ◽  
Dietmar Winkler

The engineering of a complex production automation system involves experts from several backgrounds, such as mechanical, electrical, and software engineering. The production automation experts' knowledge is embedded in their tools and data models, which are, unfortunately, insufficiently integrated across the expert disciplines due to semantically heterogeneous data structures and terminologies. Traditional approaches to data integration using a common repository are limited, as they require agreement on a common data schema by all project stakeholders. This paper introduces the Engineering Knowledge Base (EKB), a semantic-web-based framework which supports the efficient integration of information originating from different expert domains without a complete common data schema. The authors evaluate the proposed approach with data from real-world use cases in the production automation domain, covering data exchange between tools and model checking across tools. Major results are that the EKB framework supports stronger semantic mapping mechanisms than a common repository and is more efficient when data definitions evolve frequently.
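The core trick of mapping without a complete common schema can be sketched in a few lines. The tool names, local terms, and shared concepts below are hypothetical placeholders, not the EKB's actual vocabulary; each tool keeps its own terminology and only maps it onto shared engineering concepts:

```python
# Hypothetical tool vocabularies mapped onto shared engineering concepts,
# so tools can exchange data without agreeing on one global schema.
MAPPINGS = {
    "mechanical_tool": {"Antrieb": "drive", "Fuehler": "sensor"},
    "software_tool":   {"motor_ctrl": "drive", "probe": "sensor"},
}

def translate(term, source_tool, target_tool):
    """Map a source tool's local term to the target tool's local term via
    the shared concept, as an EKB-style mediator might."""
    concept = MAPPINGS[source_tool].get(term)
    if concept is None:
        return None  # term has no mapping to a shared concept
    for local, c in MAPPINGS[target_tool].items():
        if c == concept:
            return local
    return None

print(translate("Antrieb", "mechanical_tool", "software_tool"))  # motor_ctrl
```

Adding a new tool only requires mapping its terms to the shared concepts, not renegotiating a global schema with every stakeholder, which is why such an approach copes better with frequently evolving data definitions.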


Author(s):  
Andrei Lavinia ◽  
Ciprian Dobre ◽  
Florin Pop ◽  
Valentin Cristea

Failure detection is a fundamental building block for ensuring fault tolerance in large-scale distributed systems. It is also a difficult problem: resources under heavy load can be mistaken for failed ones, and while the failure of a network link can be detected by the lack of a response, the same symptom appears when a computational resource fails. Although progress has been made, no existing approach provides a system that covers all essential aspects of a distributed environment. This paper presents a failure detection system based on adaptive, decentralized failure detectors. The system is developed as an independent substrate, working asynchronously and independently of the application flow. It uses a hierarchical protocol, creating a clustering mechanism that ensures dynamic configuration and traffic optimization. It also uses a gossip strategy for failure detection at local levels to minimize detection time and remove wrong suspicions. Results show that the system scales with the number of monitored resources, while still considering the QoS requirements of both applications and resources.
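The adaptive part of such detectors can be illustrated with a simplified sketch (not the paper's exact protocol): instead of a fixed timeout, the suspicion threshold adapts to the observed inter-arrival times of heartbeats, which is what reduces wrong suspicions of merely overloaded resources:

```python
from statistics import mean, pstdev

class AdaptiveDetector:
    """Simplified adaptive heartbeat failure detector: the suspicion timeout
    adapts to observed heartbeat inter-arrival times, so a slow-but-alive
    resource is less likely to be wrongly suspected."""
    def __init__(self, window=10, safety=4.0):
        self.arrivals = []       # recent heartbeat timestamps
        self.window = window     # how many arrivals to remember
        self.safety = safety     # slack in standard deviations

    def heartbeat(self, t):
        self.arrivals.append(t)
        self.arrivals = self.arrivals[-self.window:]

    def suspect(self, now):
        """Suspect the monitored resource if the silence since the last
        heartbeat exceeds the adaptive timeout."""
        if len(self.arrivals) < 2:
            return False  # not enough history to judge
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        timeout = mean(gaps) + self.safety * pstdev(gaps)
        return now - self.arrivals[-1] > timeout

d = AdaptiveDetector()
for t in [0.0, 1.0, 2.0, 3.0, 4.0]:   # steady 1s heartbeats
    d.heartbeat(t)
print(d.suspect(4.5), d.suspect(20.0))  # False True
```

In a gossip-based deployment, each node would run such a detector per neighbor and exchange its suspicion lists, so that one node's wrong suspicion can be overruled by peers that still receive heartbeats.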


Author(s):  
Andreea Visan ◽  
Mihai Istin ◽  
Florin Pop ◽  
Valentin Cristea

The state prediction of resources in large-scale distributed systems is an important aspect of resource allocation, system evaluation, and autonomic control. The paper presents advanced techniques for resource state prediction in large-scale distributed systems, including techniques based on bio-inspired algorithms such as neural networks improved with genetic algorithms. The approach adopted in this paper introduces a new fitness function whose main aim is prediction error minimization. The proposed prediction techniques are based on monitoring data aggregated in a history database. The experimental scenarios consider the ALICE experiment, active at CERN. Compared with classical prediction algorithms based on average or random methods, the authors obtain a 73% improvement in prediction error. This improvement is important for the functionality and performance of resource management systems in large-scale distributed systems in cases such as remote control or advance reservation and allocation.
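The role of an error-minimizing fitness function in such a scheme can be sketched with a toy example. The linear predictor and the data below are illustrative assumptions, not the paper's neural network; the point is that a genetic algorithm would rank candidate parameter sets by turning low prediction error over the monitoring history into high fitness:

```python
def mse(weights, history):
    """Mean squared error of a linear predictor over monitoring history:
    predict history[t] from the previous len(weights) samples."""
    k = len(weights)
    errs = [(sum(w * history[t - k + i] for i, w in enumerate(weights))
             - history[t]) ** 2
            for t in range(k, len(history))]
    return sum(errs) / len(errs)

def fitness(weights, history):
    """GA fitness with prediction-error minimization as the main aim:
    lower error -> higher fitness, bounded in (0, 1]."""
    return 1.0 / (1.0 + mse(weights, history))

history = [1, 2, 3, 4, 5, 6, 7, 8]   # illustrative CPU-load samples
good = [-1.0, 2.0]    # x[t] = 2*x[t-1] - x[t-2]: exact on this linear ramp
naive = [0.0, 1.0]    # just repeat the last observed value
print(fitness(good, history) > fitness(naive, history))  # True
```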


Author(s):  
Nik Bessis ◽  
Eleana Asimakopoulou ◽  
Peter Norrington ◽  
Suresh Thomas ◽  
Ravi Varaganti

Much work is underway within the broad next-generation technologies community on issues associated with the development of services to support interdisciplinary domains. Disaster reduction and emergency management are domains in which the utilization of advanced information and communication technologies (ICT) is critical for sustainable development and livelihoods. In this article, the authors use an exemplar occupational disaster scenario in which advanced ICT utilization could present emergency managers with collective computational intelligence to prioritize their decision making. To achieve this, they adapt concepts and practices from various next-generation technologies, including ad-hoc mobile networks, Web 2.0, wireless sensors, crowdsourcing, and situated computing. On the implementation side, the authors developed a data mashup map which highlights the criticality of victims at a location of interest. With this in mind, the article describes the service architecture in the form of data and process flows, its implementation, and some simulation results.
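The aggregation step behind such a criticality mashup can be sketched simply. The report format, locations, and count-weighted scoring rule below are illustrative assumptions, not the article's implementation:

```python
from collections import defaultdict

# Hypothetical crowd-sourced reports (location, severity 1-5) as might
# arrive from mobile ad-hoc networks and wireless sensors at a site.
reports = [
    ("warehouse-A", 5), ("warehouse-A", 4),
    ("office-B", 2), ("yard-C", 3), ("warehouse-A", 5),
]

def criticality_map(reports):
    """Aggregate reports into a per-location criticality score (here simply
    the sum of reported severities), sorted most-critical first, so that
    emergency managers can prioritize locations."""
    scores = defaultdict(int)
    for loc, severity in reports:
        scores[loc] += severity
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

print(list(criticality_map(reports))[0])  # warehouse-A
```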


Author(s):  
Jaime Santos-Reyes ◽  
Alan N. Beard

This paper presents some aspects of the 'communication' processes within a Systemic Disaster Management System (SDMS) model. Information and communication technology (ICT) plays a key part in managing natural disasters. However, it has been contended that ICT should not be used in 'isolation' but should be seen as 'part' of the 'whole' system for managing disaster risk. Further research is needed to illustrate the full application of ICT within the context of the developed model.

