Data-driven traffic and diffusion modeling in peer-to-peer networks: A real case study

2014 ◽  
Vol 2 (3) ◽  
pp. 341-366
Author(s):  
ROMAIN HOLLANDERS ◽  
DANIEL F. BERNARDES ◽  
BIVAS MITRA ◽  
RAPHAËL M. JUNGERS ◽  
JEAN-CHARLES DELVENNE ◽  
...  

Abstract
Peer-to-peer systems have drawn a lot of attention in the past decade as they have become a major source of Internet traffic. The amount of data flowing through a peer-to-peer network is huge and hence challenging both to comprehend and to control. In this work, we take advantage of a new and rich dataset recording peer-to-peer activity at a remarkable scale to address these difficult problems. After extracting the relevant and measurable properties of the network from the data, we develop two models that link the low-level properties of the network, such as the proportion of peers that do not share content (i.e., free riders) or the distribution of files among the peers, to its high-level properties, such as the Quality of Service or the diffusion of content, which are of interest for supervision and control purposes. We observe significant agreement between the high-level properties measured on the real data and on the synthetic data generated by our models, which is encouraging for their use in practice as large-scale prediction tools. Relying on them, we demonstrate that spending effort to reduce the proportion of free riders does help to improve the availability of files on the network. We observe, however, that this effect saturates beyond 60% of free riders.
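The qualitative link the abstract describes, between the free-rider fraction and file availability, can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (peer count, catalogue size, copies per sharing peer), not the paper's models:

```python
import random

def availability(n_peers, n_files, copies_per_sharer, free_rider_frac, seed=0):
    """Estimate the fraction of files held by at least one sharing peer
    when a given fraction of peers are free riders (share nothing)."""
    rng = random.Random(seed)
    sharers = [p for p in range(n_peers) if rng.random() >= free_rider_frac]
    held = set()
    for _ in sharers:
        # each sharing peer holds a few files drawn uniformly at random
        held.update(rng.sample(range(n_files), copies_per_sharer))
    return len(held) / n_files

for frac in (0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"free riders {frac:.0%}: availability ~ "
          f"{availability(2000, 5000, 10, frac):.2f}")
```

Even this crude model reproduces the saturation effect: once most peers share, further reductions in free riding add few previously unavailable files.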

2018 ◽  
Vol 7 (2.7) ◽  
pp. 1051
Author(s):  
Gera Jaideep ◽  
Bhanu Prakash Battula

A Peer-to-Peer (P2P) network in the real world is a class of systems made up of thousands of nodes in distributed environments. The nodes are decentralized in nature. P2P networks are widely used for sharing resources and information with ease; Gnutella is one of the well-known examples of such networks. Since these networks spread across the globe with large-scale deployment of nodes, adversaries use them as a vehicle to launch DDoS attacks. P2P networks are exploited to mount attacks on hosts that provide critical services to a large number of clients across the globe. As the attacker does not make a direct attack, such attacks are hard to detect and are considered a high-risk threat to Internet-based applications. Many techniques have been proposed to defeat such attacks. Still, it remains an open problem, as flooding-based DDoS is difficult to handle: a huge number of nodes are compromised to mount the attack, and source-address spoofing is employed. In this paper, we propose a framework to identify and secure P2P communications from DDoS attacks in a distributed environment. The Time-to-Live value and the distance between source and victim are considered in the proposed framework. A special agent handles information about nodes, their capacity, and bandwidth for efficient traceback. A simulation study has been made using NS2, and the experimental results reveal the significance of the proposed framework in defending the P2P network and target hosts from high-risk DDoS attacks.
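The idea of combining the Time-to-Live value with source-victim distance can be sketched as a hop-count check: the observed TTL implies a hop count, and a packet whose implied distance disagrees with the distance previously learned for that source is suspect. The hop-count table, packet fields, and tolerance below are illustrative assumptions, not the paper's framework:

```python
# Routers typically start packets at one of a few common initial TTL values.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hops_from_ttl(observed_ttl):
    """Infer hop count: gap to the nearest common initial TTL at or above it."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src, observed_ttl, learned_hops, tolerance=1):
    """Flag a packet whose inferred distance disagrees with the distance
    previously learned for that source address."""
    if src not in learned_hops:
        return False  # no baseline yet; cannot judge
    return abs(hops_from_ttl(observed_ttl) - learned_hops[src]) > tolerance

learned = {"10.0.0.5": 12}  # hypothetical baseline: 12 hops to this source
print(looks_spoofed("10.0.0.5", 52, learned))   # 64 - 52 = 12 hops: consistent
print(looks_spoofed("10.0.0.5", 245, learned))  # 255 - 245 = 10 hops: suspect
```

A real deployment would build and age the baseline table from legitimate traffic; here it is a static dictionary for clarity.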


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243475
Author(s):  
David Mödinger ◽  
Jan-Hendrik Lorenz ◽  
Rens W. van der Heijden ◽  
Franz J. Hauck

The cryptocurrency system Bitcoin uses a peer-to-peer network to distribute new transactions to all participants. For risk estimation and usability aspects of Bitcoin applications, it is necessary to know the time required to disseminate a transaction within the network. Unfortunately, this time is neither immediately obvious nor easy to acquire: measuring the dissemination latency requires many connections into the Bitcoin network, wasting network resources. Some third parties operate that way and publish large-scale measurements, but relying on these measurements introduces a dependency and requires additional trust. This work describes how to unobtrusively acquire reliable estimates of the dissemination latencies of transactions without involving a third party. The dissemination latency is modelled with a lognormal distribution, and we estimate its parameters using a Bayesian model that can be updated dynamically. Our approach provides reliable estimates even when using only eight connections, the minimum number of connections used by the default Bitcoin client. We provide an implementation of our approach as well as datasets for modelling and evaluation. Our approach, while slightly underestimating the latency distribution, is largely congruent with observed dissemination latencies.
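A dynamically updatable Bayesian estimate for a lognormal can be sketched with a conjugate Normal update on the log-latencies. This simplified version assumes a known log-scale variance and hypothetical latency observations; it is not the paper's full model:

```python
import math

def update_lognormal_mu(prior_mu, prior_var, log_samples, obs_var=0.25):
    """Conjugate Normal update for the lognormal's log-scale mean mu,
    assuming the log-scale variance (obs_var) is known for simplicity."""
    n = len(log_samples)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + sum(log_samples) / obs_var)
    return post_mu, post_var

# Eight observed dissemination latencies in seconds (hypothetical numbers,
# one per connection of a default Bitcoin client).
latencies = [1.2, 0.8, 1.5, 1.1, 0.9, 1.4, 1.0, 1.3]
mu, var = update_lognormal_mu(prior_mu=0.0, prior_var=1.0,
                              log_samples=[math.log(x) for x in latencies])
median_estimate = math.exp(mu)  # the median of a lognormal is exp(mu)
print(f"posterior mu={mu:.3f}, median latency ~ {median_estimate:.2f}s")
```

Because the posterior is again Normal, each new batch of observations can be folded in by calling the update with the previous posterior as the prior, which is what makes the estimate dynamically updatable.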


Geophysics ◽  
1990 ◽  
Vol 55 (9) ◽  
pp. 1166-1182 ◽  
Author(s):  
Irshad R. Mufti

Finite‐difference seismic models are commonly set up in 2-D space. Such models must be excited by a line source which leads to different amplitudes than those in the real data commonly generated from a point source. Moreover, there is no provision for any out‐of‐plane events. These problems can be eliminated by using 3-D finite‐difference models. The fundamental strategy in designing efficient 3-D models is to minimize computational work without sacrificing accuracy. This was accomplished by using a (4,2) differencing operator which ensures the accuracy of much larger operators but requires many fewer numerical operations as well as significantly reduced manipulation of data in the computer memory. Such a choice also simplifies the problem of evaluating the wave field near the subsurface boundaries of the model where large operators cannot be used. We also exploited the fact that, unlike the real data, the synthetic data are free from ambient noise; consequently, one can retain sufficient resolution in the results by optimizing the frequency content of the source signal. Further computational efficiency was achieved by using the concept of the exploding reflector which yields zero‐offset seismic sections without the need to evaluate the wave field for individual shot locations. These considerations opened up the possibility of carrying out a complete synthetic 3-D survey on a supercomputer to investigate the seismic response of a large‐scale structure located in Oklahoma. The analysis of results done on a geophysical workstation provides new insight regarding the role of interference and diffraction in the interpretation of seismic data.
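The (4,2) differencing operator, fourth-order accurate in space and second-order in time, can be sketched for the 1-D acoustic wave equation. The grid spacing, velocity, and source pulse below are illustrative assumptions, not the paper's 3-D Oklahoma model:

```python
import numpy as np

def step_wave_42(u_prev, u_curr, c, dt, dx):
    """One time step of the 1-D acoustic wave equation with a (4,2) scheme:
    fourth-order central difference in space, second-order in time."""
    # fourth-order Laplacian stencil: (-1, 16, -30, 16, -1) / (12 dx^2)
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx ** 2)
    u_next = u_curr.copy()
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + (c * dt) ** 2 * lap
    return u_next

# Propagate a Gaussian pulse; dt satisfies the scheme's stability condition.
nx, dx, dt, c = 400, 5.0, 0.001, 2000.0
x = np.arange(nx) * dx
u0 = np.exp(-((x - 1000.0) / 40.0) ** 2)
u1 = u0.copy()  # zero initial particle velocity
for _ in range(200):
    u0, u1 = u1, step_wave_42(u0, u1, c, dt, dx)
```

The five-point spatial stencil is the reason the operator matches the accuracy of much wider low-order stencils while touching fewer grid points per update, which is the efficiency argument the abstract makes.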


2011 ◽  
pp. 131-144
Author(s):  
Sridhar Asvathanarayanan

Computing strategies have constantly undergone changes, from being completely centralized to client-servers and now to peer-to-peer networks. Databases on peer-to-peer networks offer significant advantages in terms of providing autonomy to data owners, to store and manage the data that they work with and, at the same time, allow access to others. The issue of database security becomes a lot more complicated and the vulnerabilities associated with databases are far more pronounced when considering databases on a peer-to-peer network. Issues associated with database security in a peer-to-peer environment could be due to file sharing, distributed denial of service, and so forth, and trust plays a vital role in ensuring security. The components of trust in terms of authentication, authorization, and encryption offer methods to ensure security.
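Of the trust components listed, authentication is the simplest to sketch: a shared-secret challenge-response handshake between two peers. This is a minimal illustration assuming keys are already distributed; authorization and encryption are out of scope here:

```python
import hmac
import hashlib
import os

def make_challenge():
    """A fresh random nonce, so responses cannot be replayed."""
    return os.urandom(16)

def respond(shared_key, challenge):
    """The responding peer proves knowledge of the key via an HMAC."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)
chal = make_challenge()
print(verify(key, chal, respond(key, chal)))              # genuine peer
print(verify(key, chal, respond(os.urandom(32), chal)))   # wrong key fails
```

In a real P2P database, key distribution itself is the hard part, which is why the chapter frames trust as a combination of mechanisms rather than any single one.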


Author(s):  
Thomas Repantis ◽  
Vana Kalogeraki

In this chapter the authors study the problems of data dissemination and query routing in mobile peer-to-peer networks. They provide a taxonomy and discussion of existing literature, spanning overlay topologies, query routing, and data propagation. They then propose content-driven routing and adaptive data dissemination algorithms for intelligently routing search queries in a peer-to-peer network that supports mobile users. In their mechanism, nodes build content synopses of their data and adaptively disseminate them to their most appropriate peers. Based on the content synopses, a routing mechanism is built to forward queries to those peers that have a high probability of providing the desired results. The authors provide an experimental evaluation of different dissemination strategies, which shows that content-driven routing with adaptive data dissemination is highly scalable and significantly improves resource usage.
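Synopsis-based routing can be sketched with a Bloom-filter-style summary per peer and a scoring step that forwards queries to the best-matching peers. The filter size, hash scheme, and peer names are illustrative assumptions, not the chapter's design:

```python
import hashlib

class Synopsis:
    """Tiny Bloom-filter-style content synopsis (illustrative parameters)."""
    def __init__(self, size=256):
        self.size = size
        self.bits = 0
    def _positions(self, term):
        h = hashlib.sha256(term.encode()).digest()
        return [int.from_bytes(h[i:i + 4], "big") % self.size for i in (0, 4, 8)]
    def add(self, term):
        for p in self._positions(term):
            self.bits |= 1 << p
    def may_contain(self, term):
        return all(self.bits >> p & 1 for p in self._positions(term))

def route(query_terms, peer_synopses, k=2):
    """Forward the query to the k peers whose synopses match the most terms."""
    scores = {peer: sum(s.may_contain(t) for t in query_terms)
              for peer, s in peer_synopses.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

peers = {}
for name, terms in {"A": ["jazz", "blues"], "B": ["rock"]}.items():
    s = Synopsis()
    for t in terms:
        s.add(t)
    peers[name] = s
print(route(["jazz"], peers, k=1))
```

The synopsis is cheap to disseminate (a fixed-size bit vector), which is the property that makes adaptive dissemination to the most appropriate peers affordable; false positives only cost an occasional wasted query forward.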


2005 ◽  
Vol 62 (1-4) ◽  
pp. 1-16 ◽  
Author(s):  
R. Gaeta ◽  
G. Balbo ◽  
S. Bruell ◽  
M. Gribaudo ◽  
M. Sereno
