DATA-DRIVEN COORDINATION IN PEER-TO-PEER INFORMATION SYSTEMS

2004 ◽  
Vol 13 (01) ◽  
pp. 63-89 ◽  
Author(s):  
NADIA BUSI ◽  
ALBERTO MONTRESOR ◽  
GIANLUIGI ZAVATTARO

Peer-to-peer (P2P) has recently emerged as a promising model for supporting scalable networks composed of autonomous and spontaneously cooperating entities. The key concept in P2P is decentralization: resources, services, and control are not the responsibility of specialized nodes in the network; rather, each node (called a peer in this context) is directly involved in the management of all these aspects. Besides the advantages of decentralization (autonomy, adaptability, collaboration, and dynamicity, to mention just a few), one of the main drawbacks is the impossibility of predicting the topology of the network, which defers to run-time any decision about managing the interaction among the peers. For this reason, we consider it useful to provide developers of P2P applications with a high-level coordination language for programming the coordination among the peers. In this paper, we present [Formula: see text], a new data-driven coordination model suitable for P2P networks, and we describe [Formula: see text], an implementation of the [Formula: see text] coordination model based on the JXTA peer-to-peer technology.
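
The primitives of the model itself are elided in this abstract, but data-driven coordination languages in the Linda tradition are usually organized around a shared dataspace accessed through insert/read/withdraw operations. The following minimal sketch (hypothetical names, not the paper's actual API) illustrates that style of coordination, in which peers never address each other directly.

```python
import threading

class DataSpace:
    """Minimal sketch of a Linda-style shared dataspace: peers coordinate
    only through the data they insert and retrieve, never by direct naming."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Publish a tuple into the space (non-blocking)."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        # None acts as a wildcard field in the template.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup)
        )

    def in_(self, template):
        """Withdraw a matching tuple, blocking until one is available."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def rd(self, template):
        """Read (without removing) a matching tuple, blocking until available."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

# Usage: one peer publishes a job description, another withdraws it.
space = DataSpace()
space.out(("job", "resize-image", 42))
print(space.in_(("job", None, None)))   # -> ('job', 'resize-image', 42)
```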

2011 ◽  
pp. 1-27 ◽  
Author(s):  
Detlef Schoder ◽  
Kai Fischbach ◽  
Christian Schmitt

This chapter reviews core concepts of peer-to-peer (P2P) networking. It highlights the management of resources, such as bandwidth, storage, information, files, and processor cycles based on P2P networks. A model differentiating P2P infrastructures, P2P applications, and P2P communities is introduced. This model provides a better understanding of the different perspectives of P2P. Key technical and social challenges that still limit the potential of information systems based on P2P architectures are discussed.


Author(s):  
S. H. Kwok ◽  
Y. M. Cheung ◽  
K. Y. Chan

A recent survey revealed that 18 million American Internet users, or approximately 14% of the total American Internet population, have peer-to-peer (P2P) file-sharing applications running on their computers (Rainie & Madden, 2004). Not surprisingly, P2P applications have become common tools for information sharing and distribution since the appearance of Napster (Napster, 2003) in 1999. P2P systems are distributed systems in which all nodes are equal in terms of functionality and able to communicate directly with each other without the coordination of a powerful server. Anonymity, scalability, fault resilience, decentralization, and self-organization are the distinct characteristics of P2P computing (Milojicic et al., 2002) compared with traditional client-server computing. P2P computing is believed to be capable of overcoming the limitations that the client-server model places on the computing environment. Milojicic et al. (2002), for example, suggested that P2P computing can provide improved scalability by eliminating the limiting factor of client-server computing: the centralized server. In the past few years, P2P computing and its promised characteristics have caught the attention of researchers who have studied existing P2P networks and the advantages and disadvantages of P2P systems. Important findings include the excessive network traffic caused by flooding-based search mechanisms, which must be tackled in order to fully exploit the improved scalability of P2P systems (Matei, Iamnitchi, & Foster, 2002; Portmann & Seneviratne, 2002). Efficient search techniques have been proposed for both structured and unstructured P2P systems. Other research projects have studied, and sought to mitigate, the drawbacks that stem from the distinct characteristics of P2P systems. For example, P2P users' free-riding behavior is generally attributed to the anonymity of this form of communication (Adar & Huberman, 2000). Recent research projects have shifted to a new line of investigation of P2P networks from the economic perspective and to applications of P2P systems in workplaces (Kwok & Gao, 2004; Tiwana, 2003).
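
To make the traffic concern concrete, the sketch below (a hypothetical in-memory overlay, not code from any of the cited systems) runs a Gnutella-style TTL-limited flood and counts the messages it generates; even on a four-peer graph the query traffic exceeds the number of peers.

```python
def flood_search(overlay, start, key, ttl=3):
    """Gnutella-style flooding sketch: forward the query to all neighbours,
    decrementing a time-to-live, and count the messages the overlay carries.
    `overlay["links"]` maps peer -> neighbour list, `overlay["keys"]` maps
    peer -> set of stored keys (illustrative layout only)."""
    neighbours, stored = overlay["links"], overlay["keys"]
    seen, hits, messages = {start}, [], 0
    frontier = [(start, ttl)]
    while frontier:
        peer, hops = frontier.pop()
        if key in stored.get(peer, set()):
            hits.append(peer)
        if hops == 0:
            continue
        for nxt in neighbours.get(peer, []):
            messages += 1                    # every forwarded query is a message
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops - 1))
    return hits, messages

# Tiny example overlay: the flood finds the key on peer D but already
# generates more messages than there are peers.
overlay = {
    "links": {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]},
    "keys": {"D": {"song.mp3"}},
}
print(flood_search(overlay, "A", "song.mp3"))   # (['D'], 8)
```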


2014 ◽  
Author(s):  
Sofia Larissa Da Costa ◽  
Valdemar Vicente Graciano Neto ◽  
Juliano Lopes De Oliveira ◽  
Bruno dos Reis Calçado

This paper presents a model-based approach to building Information Systems User Interfaces (ISUI). In this approach, UI presentation and behavioral aspects are modeled as UI Stereotypes, which are high-level abstractions of UI appearance and interaction features. A taxonomy of ISUI elements is proposed as the basis for the definition of UI stereotypes. These elements are orchestrated by a software architecture that manages model-based UI building and integration with IS applications. The proposed approach reduces software development effort and costs, facilitating maintenance and evolution of ISUI. Moreover, UI stereotypes improve usability, consistency, reuse, and standardization of both the presentation and the behavior of ISUI.
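
As a rough illustration of the idea (not the authors' actual metamodel), a UI stereotype can be sketched as a named, reusable bundle of presentation and behaviour settings that is applied to taxonomy elements to produce concrete UI descriptions.

```python
from dataclasses import dataclass, field

@dataclass
class UIStereotype:
    """Hypothetical sketch of a UI stereotype: a named, reusable bundle of
    presentation and behaviour settings, decoupled from any concrete widget."""
    name: str
    presentation: dict = field(default_factory=dict)   # e.g. layout, label style
    behaviour: dict = field(default_factory=dict)      # e.g. validation, events

    def apply_to(self, element: dict) -> dict:
        """Produce a concrete UI element description by merging stereotype
        defaults with element-specific settings (element settings win)."""
        return {**self.presentation, **self.behaviour, **element}

# Usage: one stereotype standardises every mandatory text field.
mandatory_text = UIStereotype(
    name="MandatoryText",
    presentation={"label_position": "left", "marker": "*"},
    behaviour={"validate": "non-empty"},
)
print(mandatory_text.apply_to({"type": "text", "bind": "customer.name"}))
```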


2013 ◽  
Vol 61 (3) ◽  
pp. 569-579 ◽  
Author(s):  
A. Poniszewska-Marańda

Abstract Nowadays, the growth and complexity of the functionalities of current information systems, especially dynamic, distributed, and heterogeneous ones, make the design and creation of such systems a difficult task that is, at the same time, strategic for businesses. A very important stage of data protection in an information system is the creation of a high-level model, independent of the software, that satisfies the needs of system protection and security. The process of role engineering, i.e. the identification and setting up of roles in an organization, is a complex task. The paper presents the modeling and design stages of the role-engineering process from the perspective of security-schema development for information systems, in particular dynamic, distributed information systems, based on the role concept and the usage concept. Such a schema is created first of all during the design phase of a system. Two actors, the application developer and the security administrator, should cooperate in this creation process to determine the minimal set of user roles in agreement with the security constraints that guarantee the global security coherence of the system.
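
A much-simplified picture of such a security schema (names and structure are illustrative assumptions, not the paper's notation) is a set of roles bundling permissions, plus constraints such as separation of duty that are checked whenever a role is assigned to a user.

```python
class RoleModel:
    """Simplified sketch of a role-based security schema: roles bundle
    permissions, and constraints restrict which roles a user may hold
    together (e.g. separation of duty). Illustrative only."""

    def __init__(self):
        self.role_permissions = {}       # role -> set of permissions
        self.mutually_exclusive = []     # role pairs no user may combine
        self.user_roles = {}             # user -> set of assigned roles

    def define_role(self, role, permissions):
        self.role_permissions[role] = set(permissions)

    def add_sod_constraint(self, role_a, role_b):
        self.mutually_exclusive.append((role_a, role_b))

    def assign(self, user, role):
        held = self.user_roles.setdefault(user, set())
        for a, b in self.mutually_exclusive:
            if {a, b} <= held | {role}:
                raise ValueError(f"{user}: '{role}' violates SoD with '{a}'/'{b}'")
        held.add(role)

    def permissions_of(self, user):
        perms = set()
        for role in self.user_roles.get(user, set()):
            perms |= self.role_permissions.get(role, set())
        return perms

# Usage: the administrator defines roles and constraints; the application
# only ever queries permissions.
m = RoleModel()
m.define_role("clerk", {"create_order"})
m.define_role("auditor", {"review_order"})
m.add_sod_constraint("clerk", "auditor")
m.assign("alice", "clerk")
print(m.permissions_of("alice"))       # {'create_order'}
# m.assign("alice", "auditor")         # would raise: separation-of-duty violation
```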


2017 ◽  
Author(s):  
Seda Gurses ◽  
Joris Vredy Jan van Hoboken

Moving beyond algorithms and big data as starting points for discussions about privacy, the authors of Privacy After the Agile Turn focus our attention on the new modes of production of information systems. Specifically, they look at three shifts that have transformed most of the software industry: software is now delivered as services, software and hardware have moved into the cloud, and software development is ever more agile. These shifts have altered the conditions for privacy governance and rendered the typical mental models underlying regulatory frameworks for information systems out of date. After 'the agile turn', modularity in production processes creates new challenges for allocating regulatory responsibility. Privacy implications of software are harder to address due to the dynamic nature of services and feature development, which undercuts extant privacy regulation that assumes a clear beginning and end of production processes. And the data-driven nature of services, beyond the prospect of monetization, has become part of software development itself. With their focus on production, the authors manage to place known challenges to privacy in a new light and create new avenues for privacy research and practice.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2085
Author(s):  
Xue-Bo Jin ◽  
Ruben Jonhson Robert Jeremiah ◽ 
Ting-Li Su ◽  
Yu-Ting Bai ◽  
Jian-Lei Kong

State estimation is widely used in various automated systems, including IoT systems, unmanned systems, robots, etc. In traditional state estimation, measurement data are instantaneous and processed in real time. As modern systems develop, sensors can acquire and store more and more signals. Therefore, how to use this measurement big data to improve the performance of state estimation has become a hot research issue in this field. This paper reviews the development of state estimation and its future development trends. First, we review model-based state estimation methods, including the Kalman filter and its variants, such as the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the cubature Kalman filter (CKF). Particle filters and Gaussian mixture filters that can handle mixed Gaussian noise are also discussed. These methods place high demands on the model, yet accurate system models are not easy to obtain in practice. The emergence of robust filters, the interacting multiple model (IMM), and adaptive filters is also mentioned here. Secondly, the current research status of data-driven state estimation methods, based on network learning, is introduced. Finally, the main research results obtained in recent years for hybrid filters, which combine model-based and data-driven methods, are summarized and discussed. This paper builds on state estimation research results and provides a more detailed overview of model-driven, data-driven, and hybrid-driven approaches. The main algorithm of each method is provided so that beginners can gain a clearer understanding. Additionally, it discusses future development trends for researchers in state estimation.
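
For reference, the model-based family surveyed here builds on the standard linear Kalman filter, whose predict/update cycle fits in a few lines; the constant-velocity example below is a generic textbook illustration, not a model taken from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (linear) Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances"""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                         # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy constant-velocity tracking example (illustrative values only).
dt = 1.0
F = np.array([[1, dt], [0, 1]])                # position-velocity dynamics
H = np.array([[1, 0]])                         # only the position is measured
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:                 # noisy position readings
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)                                       # estimated [position, velocity]
```

The EKF, UKF, and CKF mentioned in the abstract generalize this same predict/update structure to nonlinear F and H by linearization or deterministic sampling.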

