Application of Game Theory for Network Recovery After Large-Scale Disasters

Author(s):  
Bo Gu ◽  
Osamu Mizuno

In recent years, large-scale disasters have occurred frequently and often cause severe damage to network infrastructures. Due to this damage, the available network resources are usually insufficient to meet users' data transmission requirements after a disaster. Moreover, users tend to behave selfishly, consuming as many network resources as possible. Incentive mechanisms are therefore essential for users to voluntarily cooperate with each other and improve system performance. In commercial networks, this can be achieved efficiently through pricing: by selecting an appropriate pricing policy, users can be incentivized to choose the service that best matches their data transmission demands. In this chapter, assuming that a time-dependent pricing scheme is imposed on network users, a Stackelberg leader-follower game is formulated to study the joint utility optimization problem of the users in a disaster region, subject to maximum delay and storage constraints. The equilibrium of the Stackelberg leader-follower game is also investigated.
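The leader-follower structure described above can be sketched by backward induction: the operator (leader) announces a price, each user (follower) best-responds with a demand that maximizes its own utility, and the leader searches over prices. The concave utility `w*log(1+d) - p*d`, the candidate price grid, and all parameter names below are illustrative assumptions, not the chapter's actual model.

```python
def follower_best_response(w, p, d_max):
    """Follower's demand for price p, assuming the illustrative concave
    utility u(d) = w*log(1+d) - p*d.  Setting u'(d) = w/(1+d) - p = 0
    gives d* = w/p - 1, clipped to the feasible range [0, d_max]."""
    d = w / p - 1.0
    return min(max(d, 0.0), d_max)

def stackelberg_equilibrium(weights, d_max, prices):
    """Leader enumerates candidate prices; each user best-responds.
    Returns the (price, total demand, revenue) maximizing leader revenue."""
    best = None
    for p in prices:
        demand = sum(follower_best_response(w, p, d_max) for w in weights)
        revenue = p * demand
        if best is None or revenue > best[2]:
            best = (p, demand, revenue)
    return best
```

In a real time-dependent scheme the leader would optimize a price per time slot, but the one-shot search above already shows the backward-induction order of play.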

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243475
Author(s):  
David Mödinger ◽  
Jan-Hendrik Lorenz ◽  
Rens W. van der Heijden ◽  
Franz J. Hauck

The cryptocurrency system Bitcoin uses a peer-to-peer network to distribute new transactions to all participants. For risk estimation and for usability aspects of Bitcoin applications, it is necessary to know the time required to disseminate a transaction within the network. Unfortunately, this time is neither immediately observable nor easy to acquire: measuring the dissemination latency requires many connections into the Bitcoin network, wasting network resources. Some third parties operate that way and publish large-scale measurements, but relying on these measurements introduces a dependency and requires additional trust. This work describes how to unobtrusively acquire reliable estimates of transaction dissemination latencies without involving a third party. The dissemination latency is modelled with a lognormal distribution, and its parameters are estimated using a Bayesian model that can be updated dynamically. Our approach provides reliable estimates even when using only eight connections, the minimum number of connections used by the default Bitcoin client. We provide an implementation of our approach as well as datasets for modelling and evaluation. Our approach, while slightly underestimating the latency distribution, is largely congruent with observed dissemination latencies.
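The dynamic-update idea can be illustrated with a much simpler stand-in for the paper's Bayesian model: keep running statistics of the log-latencies (Welford's online algorithm), from which the lognormal parameters μ and σ² follow directly. This is a sketch under that simplifying assumption, not the authors' estimator.

```python
import math

class LognormalLatencyEstimator:
    """Online estimate of lognormal parameters from observed latencies.
    Simplification: running mean/variance of log-latencies via Welford's
    algorithm, standing in for the paper's full Bayesian posterior update."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0   # running mean of log-latency (estimate of mu)
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, latency):
        x = math.log(latency)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def params(self):
        """Return (mu, sigma^2) of the fitted lognormal."""
        sigma2 = self.m2 / (self.n - 1) if self.n > 1 else 0.0
        return self.mean, sigma2

    def median_latency(self):
        # The median of a lognormal distribution is exp(mu).
        return math.exp(self.mean)
```

Each newly observed dissemination time refines the estimate in O(1), which is what makes a dynamically updated model feasible over only eight connections.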


2020 ◽  
Vol 3 (2) ◽  
pp. 128-139
Author(s):  
I Gusti Made Ngurah Desnanjaya ◽  
Mohammad Dwi Alfian

Wireless sensor network (WSN) is a wireless network technology comprising sensor nodes and embedded systems. WSNs have several advantages: they are cheaper for large-scale applications, can withstand extreme environments, and offer relatively stable data transmission. One widely used WSN transceiver is the nRF24L01+, whose specified maximum communication distance is 1.1 km. However, its most effective transmission distance under line-of-sight and non-line-of-sight conditions is still unknown, so testing and analysis are needed for the nRF24L01+ to be used optimally for communication and data transmission. Tests were conducted under line-of-sight conditions at Kuta beach in Bali and under non-line-of-sight conditions on the STMIK STIKOM Indonesia campus. Under line of sight, the effective communication distance of the nRF24L01+ module is between 1 and 1000 meters; at 1000 meters, the limit of the effective range, the packet loss rate stays below 15%, which falls into the medium category. Under non-line of sight, the effective distance is 20 meters, where the packet loss approaches 15%, the limit of the medium category. This analysis can serve as a reference for determining the effective distance of the nRF24L01+ in WSN-based remote-control and data-communication applications.
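The packet-loss metric behind these results is straightforward to compute. The category thresholds below are assumed TIPHON-style bands chosen to match the abstract's use of 15% as the "medium" limit; the paper's exact band boundaries may differ.

```python
def packet_loss_rate(sent, received):
    """Packet loss as a percentage of packets sent."""
    if sent <= 0:
        raise ValueError("no packets sent")
    return (sent - received) / sent * 100.0

def loss_category(rate_percent):
    """Classify a loss rate. Assumed TIPHON-style bands; the abstract
    treats ~15% as the upper limit of the 'medium' category."""
    if rate_percent <= 3.0:
        return "good"
    if rate_percent <= 15.0:
        return "medium"
    return "poor"
```

For example, 90 packets received out of 100 sent gives a 10% loss rate, which lands in the medium band, consistent with the 1000-meter line-of-sight result.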


Author(s):  
Lang Ruan ◽  
Jin Chen ◽  
Qiuju Guo ◽  
Xiaobo Zhang ◽  
Yuli Zhang ◽  
...  

In scenarios such as natural disasters and military strikes, it is common for unmanned aerial vehicles (UAVs) to form groups to execute reconnaissance and surveillance. To ensure the effectiveness of UAV communications, repeated resource-acquisition issues and transmission-mechanism design need to be addressed urgently. In this paper, we build an information interaction scenario in a flying ad hoc network (FANET). The data transmission problem, with the goal of throughput maximization, is modeled in a coalition game framework, and a novel mechanism of coalition selection and data transmission based on group buying is investigated. Since large numbers of UAVs generate high transmission overhead due to overlapping resource requirements, we propose a resource allocation optimization method based on distributed data content. In contrast to existing works, a joint data transmission and coalition formation mechanism is designed, and the system model is decomposed into a graph game and a coalition formation game. Through the design of the utility function, we prove that both games have stable solutions, and we prove the convergence of the proposed approach under both the coalition order and the Pareto order. A binary log-linear learning based coalition selection algorithm (BLL-CSA) is proposed to explore the stable coalition partition of the system model. Simulation results show that the proposed data transmission and coalition formation mechanism achieves higher data throughput than the contrast algorithms.
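The core loop of binary log-linear learning can be sketched generically: at each step one agent compares its current coalition with one randomly drawn alternative and switches with a Boltzmann probability, so higher-utility coalitions are chosen more often as the temperature parameter β grows. This is a generic sketch of the technique, not the paper's exact BLL-CSA; the utility function, β, and iteration budget are all assumptions.

```python
import math
import random

def bll_coalition_selection(n_uavs, n_coalitions, utility,
                            beta=5.0, iters=2000, seed=0):
    """Binary log-linear learning over coalition choices.

    utility(i, c, choice) -> float gives UAV i's payoff for joining
    coalition c when the full assignment vector is `choice`.
    Returns the final coalition index chosen by each UAV."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_coalitions) for _ in range(n_uavs)]
    for _ in range(iters):
        i = rng.randrange(n_uavs)          # one UAV updates per step
        cur = choice[i]
        alt = rng.randrange(n_coalitions)  # one random alternative
        u_cur = utility(i, cur, choice)
        trial = list(choice)
        trial[i] = alt
        u_alt = utility(i, alt, trial)
        # Boltzmann switching rule of binary log-linear learning.
        p_switch = math.exp(beta * u_alt) / (
            math.exp(beta * u_cur) + math.exp(beta * u_alt))
        if rng.random() < p_switch:
            choice[i] = alt
    return choice
```

With a congestion-style utility such as `1 / |coalition|`, the dynamics tend to spread UAVs across coalitions, mirroring how overlapping resource demands are penalized in the paper's model.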


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Dashmeet Anand ◽  
Hariharakumar Narasimhakumar ◽  
...

Service function chaining (SFC) is a capability that links multiple network functions to deploy end-to-end network services. By virtualizing these network functions as virtual network functions (VNFs), the dependency on traditional hardware can be removed, making it easier to deploy dynamic service chains in a cloud environment. Before deploying service chains at large scale, it is necessary to understand the performance overhead introduced by each VNF, owing to their varied characteristics. This paper attempts to gain insight into the server and networking overhead incurred when a service chain is deployed on a cloud orchestration platform such as OpenStack. Specifically, we measure the CPU utilization, RAM usage, and system load of the server hosting OpenStack, and monitor each VNF's performance parameters when it is subjected to different kinds of traffic. Our focus lies on acquiring performance parameters of the entire system for different service chains and comparing the throughput, latency, and VNF statistics of the virtual network. Insights from this research can be used in industry to achieve optimal use of hardware and network resources when deploying service chains.
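Comparing service chains of this kind ultimately reduces to aggregating each metric's time series into comparable summary statistics. The sketch below assumes metrics have already been sampled into lists (the collection mechanism, metric names, and chain labels are hypothetical); it computes mean, maximum, and an approximate 95th percentile per metric per chain.

```python
import statistics

def summarize(samples):
    """Summarize one metric's time series (e.g. CPU %, latency in ms)
    into mean, max, and an approximate 95th percentile."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(0.95 * (len(s) - 1))))
    return {"mean": statistics.fmean(s), "max": s[-1], "p95": s[idx]}

def compare_chains(metrics_by_chain):
    """metrics_by_chain maps a chain name to {metric_name: [samples]}.
    Returns the same structure with each sample list summarized."""
    return {chain: {m: summarize(v) for m, v in metrics.items()}
            for chain, metrics in metrics_by_chain.items()}
```

Feeding per-chain CPU, RAM, load, throughput, and latency samples into `compare_chains` yields a table-like structure from which the overhead of one VNF composition versus another can be read off directly.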


Author(s):  
Vladanka S. Acimovic-Raspopovic ◽  
Mirjana D. Stojanovic

In order to reduce costs and make it easier to integrate disparate systems, networked and virtual organizations (NVOs) should adopt open technology standards throughout the entire organization—standard computing architectures, standard networks, and standard application interfaces. The Internet protocol (IP) technology has been foreseen as a basic networking infrastructure that supports the communication requirements of NVOs. With the growing demand for the integration of heterogeneous telecommunication services (e.g., voice, data, video and multimedia), there is a strong need for deploying quality of service (QoS) in IP-based networks. Under such circumstances, the flat pricing models that have sufficed in the traditional best-effort Internet do not encourage users to make reasonable use of resources. QoS differentiation introduces a clear need for incentives to be offered to users to encourage them to choose the service that is most appropriate for their needs. In commercial networks, this can be most effectively achieved through pricing. Falkner, Devetsikiotis, and Lambadaris (2000) and Da Silva (2000) supplied comprehensive reviews and evaluations of pricing schemes developed during the nineties, mainly related to per-flow IP QoS approaches such as the integrated services (IntServ) framework specified by Braden, Clark, and Shenker (1994). Proliferation of the differentiated services framework since the late 1990s has posed a number of new issues and resulted in novel proposals for pricing IP QoS and network resources.


2020 ◽  
Author(s):  
Huibin Jia ◽  
Yonghe Gai ◽  
Dongfang Xu ◽  
Yincheng Qi ◽  
Hongda Zheng

Author(s):  
Xiaoling Li ◽  
Xinwei Zhou

Data security is very important in multi-path transmission networks (MTNs), and efficient data-security measures in MTNs are crucial to ensure the reliability of data transmission. To this end, this paper presents an improved algorithm using a single-single minimal path based back-up path (SSMP-BP), designed to keep data transmission running when the second path fails. Simulation results show that the proposed algorithm achieves better network reliability than the existing double minimal path based back-up path (DMP-BP) approach. Moreover, the proposed algorithm uses fewer back-up paths than DMP-BP and therefore consumes fewer network resources, such as nodes.
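The general back-up-path idea can be illustrated with a minimal sketch: find a shortest primary path, then find a second path that avoids the primary's intermediate nodes, so a node failure on the primary leaves the backup intact. This is an illustration of node-disjoint backup routing in the spirit of SSMP-BP, not the paper's algorithm; the graph, its adjacency-dict representation, and the BFS routine are all assumptions.

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path in an adjacency-dict graph, skipping any
    intermediate node listed in `banned`. Returns a node list or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev and (v == dst or v not in banned):
                prev[v] = u
                q.append(v)
    return None

def primary_and_backup(adj, src, dst):
    """Compute a primary path plus one node-disjoint back-up path that
    is used only if the primary fails (a single back-up, unlike the
    multiple back-ups implied by a DMP-BP-style scheme)."""
    primary = shortest_path(adj, src, dst)
    if primary is None:
        return None, None
    banned = frozenset(primary[1:-1])     # exclude intermediate nodes
    return primary, shortest_path(adj, src, dst, banned)
```

Because only one extra path is reserved per flow, fewer nodes are tied up in standby routes, which is the resource saving the abstract attributes to SSMP-BP over DMP-BP.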

