SNAP-STABILIZING PREFIX TREE FOR PEER-TO-PEER SYSTEMS

2010 ◽  
Vol 20 (01) ◽  
pp. 15-30 ◽  
Author(s):  
EDDY CARON ◽  
FRÉDÉRIC DESPREZ ◽  
FRANCK PETIT ◽  
CÉDRIC TEDESCHI

Several factors still hinder the deployment of computational grids over large-scale platforms. Among them, resource discovery is one crucial issue. New approaches, based on peer-to-peer technologies, tackle this issue. Because they efficiently allow range queries, Tries (a.k.a. Prefix Trees) appear to be among the promising ways to design distributed data structures for indexing resources. Despite their lack of robustness in dynamic settings, trie-structured approaches outperform other peer-to-peer technologies by efficiently supporting range queries. In recent trie-based approaches, fault tolerance is handled by preventive mechanisms that make intensive use of replication. However, replication can be very costly in terms of computing and storage resources, and it does not ensure recovery of the system after arbitrary failures. Self-stabilization is an efficient approach to the design of reliable solutions for dynamic systems. It ensures that a system converges to its intended behavior, regardless of its initial state, in finite time. A snap-stabilizing algorithm guarantees that it always behaves according to its specification once the protocol is launched. In this paper, we provide the first snap-stabilizing protocol for trie construction. We design particular tries called Proper Greatest Common Prefix (PGCP) Trees. The proposed algorithm arranges the n label values stored in the tree in, on average, O(h + h′) rounds, where h and h′ are the initial and final heights of the tree, respectively. In the worst case, the algorithm requires O(n) extra space on each node, O(n) rounds, and O(n²) actions. However, simulations show that this worst case is far from being reached and confirm the average complexities, demonstrating the practical efficiency of this protocol.
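
To make the data structure concrete, the following is a minimal, sequential Python sketch of a Proper Greatest Common Prefix tree, in which every internal node is labeled with the greatest common prefix of the labels stored in its subtree. It only illustrates the indexing structure, not the paper's distributed, snap-stabilizing protocol; the names (PGCPNode, greatest_common_prefix) and the split logic are illustrative assumptions.

```python
def greatest_common_prefix(a: str, b: str) -> str:
    """Longest common prefix of two label strings."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]


class PGCPNode:
    def __init__(self, label: str, is_value: bool = False):
        self.label = label          # greatest common prefix of the labels below
        self.is_value = is_value    # True if the label itself is an indexed value
        self.children = []

    def insert(self, value: str) -> None:
        if value == self.label:
            self.is_value = True
            return
        for i, child in enumerate(self.children):
            gcp = greatest_common_prefix(child.label, value)
            if len(gcp) > len(self.label):        # value belongs under this child
                if gcp == child.label:
                    child.insert(value)
                else:                              # split on the common prefix
                    split = PGCPNode(gcp, gcp == value)
                    split.children = [child]
                    if gcp != value:
                        split.children.append(PGCPNode(value, True))
                    self.children[i] = split
                return
        self.children.append(PGCPNode(value, True))  # no shared prefix: new leaf


# Example: indexing resource names so prefix/range queries can walk the tree.
root = PGCPNode("")
for name in ["cluster-a", "cluster-b", "cloud-1"]:
    root.insert(name)
```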

Author(s):  
Lu Liu ◽  
Duncan Russell ◽  
Jie Xu

Peer-to-peer (P2P) networks attract attention worldwide with their great success in file-sharing networks (e.g., Napster, Gnutella, BitTorrent, and Kazaa). In the last decade, numerous studies have been devoted to the problem of resource discovery in P2P networks. Recent research on structured and unstructured P2P systems provides a series of useful solutions for improving the scalability and performance of service discovery in large-scale service-based systems. In this chapter, the authors systematically review recent research studies on P2P search techniques and explore the potential roles and influence of P2P networking in dependable service-based military systems.


2014 ◽  
Vol 2 (3) ◽  
pp. 341-366
Author(s):  
ROMAIN HOLLANDERS ◽  
DANIEL F. BERNARDES ◽  
BIVAS MITRA ◽  
RAPHAËL M. JUNGERS ◽  
JEAN-CHARLES DELVENNE ◽  
...  

Peer-to-peer systems have drawn a lot of attention in the past decade as they have become a major source of Internet traffic. The amount of data flowing through the peer-to-peer network is huge and hence challenging both to comprehend and to control. In this work, we take advantage of a new and rich dataset recording peer-to-peer activity at a remarkable scale to address these difficult problems. After extracting the relevant and measurable properties of the network from the data, we develop two models that aim to link the low-level properties of the network, such as the proportion of peers that do not share content (i.e., free riders) or the distribution of files among the peers, to its high-level properties, such as the Quality of Service or the diffusion of content, which are of interest for supervision and control purposes. We observe a significant agreement between the high-level properties measured on the real data and on the synthetic data generated by our models, which is encouraging for our models to be used in practice as large-scale prediction tools. Relying on them, we demonstrate that spending effort to reduce the number of free riders indeed helps to improve the availability of files on the network. However, we observe a saturation of this phenomenon beyond 60% of free riders.
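
As a toy illustration of the kind of relationship such models capture (not the models from the paper themselves), the Python sketch below estimates file availability as the proportion of free riders grows; every parameter and the uniform-placement assumption are illustrative choices made only to show the shape of the curve.

```python
import random


def availability(num_peers=1000, num_files=2000, copies_per_sharer=10,
                 free_rider_fraction=0.5, trials=2000, seed=0):
    """Fraction of requests for a random file that at least one sharing peer can serve."""
    rng = random.Random(seed)
    sharers = int(num_peers * (1 - free_rider_fraction))
    held = set()
    for _ in range(sharers):
        # each sharing peer holds a few files drawn uniformly at random
        held.update(rng.randrange(num_files) for _ in range(copies_per_sharer))
    hits = sum(rng.randrange(num_files) in held for _ in range(trials))
    return hits / trials


for f in [0.0, 0.2, 0.4, 0.6, 0.8]:
    print(f"free riders {f:.0%}: availability ~ {availability(free_rider_fraction=f):.2f}")
```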


Author(s):  
Wael Abdulkarim Habeeb ◽  
Abdulkarim Assalem

Publish/subscribe (pub/sub) is a popular communication paradigm in the design of large-scale distributed systems. We are witnessing increasingly widespread use of pub/sub for a wide array of applications in industry, academia, financial data dissemination, and business process management, as well as in social networking sites, which account for a large share of user interest and of the network bandwidth used. Social network interactions have grown exponentially in recent years, to the order of billions of notifications generated by millions of users every day. It has therefore become very important to advance publish/subscribe networks, especially peer-to-peer (P2P) ones, in several respects, such as the publication speed of events and the percentage of loss in the events received by subscribers. Peer-to-peer systems can be very large and include millions of nodes; those nodes join and leave the network continuously, and these characteristics are difficult to handle. Evaluating a new protocol in a real environment, particularly in the early stages, is considered impractical, hence the need for a simulator: PeerSim, an open-source simulator running within the Eclipse environment, fulfils this role. In this research we adopt a new method of selecting nodes within the table of the Vicinity protocol: a far node has a higher probability of being included in the table than an adjacent node. The proposed network, which uses the PolderCast protocol, was modelled with PeerSim, so that the event delivery service is a peer-to-peer network and subscriptions are topic-based. Experimental results showed a noticeable improvement in the publication speed of events of 51.11% compared to the original design of the protocol, and the percentage of event loss was reduced by 20%.
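
The distance-biased selection idea described above can be sketched in Python as follows. This is only an illustration, not PolderCast's, Vicinity's, or PeerSim's actual API; the function name, the distance metric, and the proportional weighting are assumptions.

```python
import random


def fill_view(candidates, distance, view_size, rng=None):
    """Pick view_size neighbours; farther candidates get a larger selection weight."""
    rng = rng or random.Random(0)
    pool = list(candidates)
    view = []
    while pool and len(view) < view_size:
        # weight each remaining candidate by its distance, so far nodes are favoured
        picked = rng.choices(pool, weights=[distance(c) for c in pool], k=1)[0]
        view.append(picked)
        pool.remove(picked)
    return view


# Usage: keep 5 neighbours out of 20 candidates, biased toward distant ones.
nodes = list(range(20))
print(fill_view(nodes, distance=lambda n: n + 1, view_size=5))
```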

