Game Theory for Wireless Network Resource Management

Game Theory ◽  
2017 ◽  
pp. 383-399
Author(s):  
Sungwook Kim

Computer network bandwidth can be viewed as a limited resource for which the users of the network compete, and their competition can be modeled using game theory. Because network resources are under diverse ownership, no centralized regulation of network usage is possible, so the problem becomes one of ensuring the fair sharing of network resources. If a centralized system governing the use of the shared resources could be developed, each user would be assigned a network usage time or bandwidth, limiting each person's usage of network resources to his or her fair share. As yet, however, such a system remains an impossibility, making the sharing of network resources a competitive game between the users of the network, one that decreases everyone's utility. This chapter explores this competitive game.
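The kind of competition the chapter studies can be sketched as a bandwidth-sharing game in the spirit of Kelly's proportional-share mechanism (our illustration, not the chapter's own model): each user submits a bid, receives a share of the link proportional to it, and values bandwidth logarithmically; repeated best responses approach a Nash equilibrium. The capacity and valuation numbers below are invented.

```python
# Sketch of a bandwidth-sharing game: user i bids w_i, receives
# CAPACITY * w_i / sum(w), and has utility a_i*log(share) - w_i.
# Best-response dynamics (grid search) approximate the Nash equilibrium.
import math

CAPACITY = 10.0             # shared link capacity (Mbps), hypothetical
VALUES = [1.0, 2.0, 4.0]    # per-user valuation weights, hypothetical

def utility(i, bids):
    """Log-utility of the received share minus the bid (the payment)."""
    share = CAPACITY * bids[i] / sum(bids)
    return VALUES[i] * math.log(share + 1e-9) - bids[i]

def best_response(i, bids, grid=2000, hi=5.0):
    """Grid-search user i's utility-maximizing bid, others held fixed."""
    best_w, best_u = bids[i], utility(i, bids)
    for k in range(1, grid + 1):
        trial = bids[:]
        trial[i] = hi * k / grid
        u = utility(i, trial)
        if u > best_u:
            best_w, best_u = trial[i], u
    return best_w

bids = [1.0] * len(VALUES)
for _ in range(30):                 # best-response dynamics
    for i in range(len(bids)):
        bids[i] = best_response(i, bids)
alloc = [CAPACITY * w / sum(bids) for w in bids]
```

At equilibrium the link is fully allocated, and users who value bandwidth more bid more and therefore receive larger shares, which is the sense in which unregulated competition shapes the outcome.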



2019 ◽  
Vol 9 (1) ◽  
pp. 137
Author(s):  
Zhiyong Ye ◽  
Yuanchang Zhong ◽  
Yingying Wei

Data center workloads are complex and their requirements vary over time, yet in practice the attributes of network workloads are rarely used by resource schedulers. Failing to schedule network resources dynamically as workloads change inevitably prevents network resource allocation from achieving optimal throughput and performance. There is therefore an urgent need for a scheduling framework, built on network I/O virtualization, that is workload-aware and allocates network resources on demand. However, none of the current mainstream I/O virtualization methods can provide workload awareness while meeting the performance requirements of virtual machines (VMs). We therefore propose a method that dynamically senses VM workloads to allocate network resources on demand, ensuring the scalability of VMs while improving system performance. We combine the advantages of I/O para-virtualization and SR-IOV technology, using the limited number of virtual functions (VFs) to guarantee the performance of network-intensive VMs and thereby improve the overall network performance of the system. For non-network-intensive VMs, the scalability of the system is preserved by using para-virtualized Network Interface Cards (NICs), whose number is not limited. Furthermore, to allocate bandwidth according to each VM's network workload, we divide the VF's network bandwidth hierarchically and switch dynamically between VF and para-virtualized NICs through the bonding driver's active-backup strategy and ACPI hotplug technology, ensuring the dynamic allocation of VFs. Experiments show that the allocation framework effectively improves system network performance: average request delay is reduced by more than 26%, and system bandwidth throughput improves by about 5%.
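The core assignment decision can be sketched as follows (our simplification, with invented names and thresholds, not the paper's implementation): rank VMs by measured network throughput, give the scarce VFs to the most network-intensive VMs, and fall back to para-virtualized NICs for the rest.

```python
# Workload-aware NIC assignment sketch: SR-IOV VFs are hardware-limited,
# para-virtualized NICs are not, so the top network-intensive VMs get VFs.
NUM_VFS = 2  # number of available VFs, hypothetical

def assign_nics(vm_throughput_mbps, threshold=100.0):
    """Return {vm: 'VF' | 'paravirt'} given per-VM measured throughput.

    VMs above `threshold` are ranked by throughput; the top NUM_VFS
    receive a VF, and everyone else gets a para-virtualized NIC.
    """
    ranked = sorted(vm_throughput_mbps, key=vm_throughput_mbps.get,
                    reverse=True)
    plan, vfs_left = {}, NUM_VFS
    for vm in ranked:
        if vfs_left > 0 and vm_throughput_mbps[vm] >= threshold:
            plan[vm] = "VF"          # network-intensive: pass-through NIC
            vfs_left -= 1
        else:
            plan[vm] = "paravirt"    # scalable software NIC
    return plan

plan = assign_nics({"vm1": 850.0, "vm2": 30.0, "vm3": 420.0, "vm4": 15.0})
```

Re-running this classification as measured throughput changes, and switching a VM's active path between VF and para-virtual NIC, is what the bonding/hotplug machinery in the paper makes transparent to the guest.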


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3444 ◽  
Author(s):  
Cheol-Ho Hong ◽  
Kyungwoon Lee ◽  
Minkoo Kang ◽  
Chuck Yoo

Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which act as small data centers. In fog computing, lightweight virtualization (e.g., containers) is widely used to achieve low overhead on performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers are weak at controlling network bandwidth for outbound traffic, which poses a challenge to fog computing: existing solutions for containers fail to achieve desirable network bandwidth control, leaving bandwidth-sensitive applications with unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework for containers that limits the rate of outbound traffic in fog computing. qCon aims to provide both proportional share scheduling and bandwidth shaping to satisfy the varied performance demands of containers while remaining a lightweight framework. To this end, qCon supports three scheduling policies that can be applied to containers simultaneously: proportional share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing qCon's scheduling interface on the bridge's frame-processing function. To show qCon's effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device, a Raspberry Pi 3 Model B board.
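One way the three policies can compose (a sketch under our own assumptions, not qCon's actual scheduler) is to divide the link by proportional share, clamp each container into its [min, max] band, and redistribute the remainder by weight. The example assumes the minimum reservations are feasible, i.e. their sum does not exceed the link.

```python
# Compose proportional share, minimum reservation, and maximum limitation
# when splitting one link's bandwidth among containers.
def allocate(link_mbps, containers):
    """containers: name -> (weight, min_mbps, max_mbps)."""
    alloc, pending = {}, set(containers)
    while pending:
        budget = link_mbps - sum(alloc.values())
        total_w = sum(containers[n][0] for n in pending)
        changed = False
        for n in list(pending):
            w, lo, hi = containers[n]
            share = budget * w / total_w
            if share < lo:
                alloc[n] = lo          # minimum bandwidth reservation
            elif share > hi:
                alloc[n] = hi          # maximum bandwidth limitation
            else:
                continue
            pending.discard(n)
            changed = True
        if not changed:
            break
    budget = link_mbps - sum(alloc.values())
    total_w = sum(containers[n][0] for n in pending) or 1.0
    for n in pending:                  # proportional share for the rest
        alloc[n] = budget * containers[n][0] / total_w
    return alloc

alloc = allocate(100.0, {"web": (2, 10, 100), "db": (1, 30, 100),
                         "batch": (1, 0, 15)})
```

Here "db" is lifted to its 30 Mbps reservation, "batch" is capped at 15 Mbps, and "web" absorbs the remaining 55 Mbps by weight, which is the interplay the three simultaneous policies are meant to produce.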


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142091729 ◽  
Author(s):  
Yan Wang

As big data technology matures, many colleges and universities have begun to use it to analyze their operations. The campus network is present throughout daily life: in class, in study, and in entertainment. The purpose of this article is to study the online behavior of users by analyzing students' use of the campus network, both to gain a clear picture of students' network access and to provide feedback for the operation and maintenance of the campus network. Building on big data, this article applies a distributed clustering algorithm to study the online behavior of users, taking the online users of one college as the research object. The study found that second-year students' network usage is as high as 330,000, 60.98% more than that of seniors. In addition, most student users spend the bulk of their online time on the weekend, with little difference across other days, and session durations concentrate in three bands: within 1 h, 1–2 h, and 2–3 h. Studying users' online behavior reveals the utilization of campus network bandwidth resources and the distribution of network use, helps prevent students from becoming absorbed in the virtual network world, and ensures that network users can enjoy a better online experience on the campus network while accessing network resources reasonably. The research provides a reference for network administrators to adjust network bandwidth and optimize the network.
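The clustering step can be illustrated with a toy example (our own, with invented data): grouping session durations in hours into k = 3 clusters with one-dimensional k-means, mirroring the under-1 h, 1–2 h, and 2–3 h bands the article reports.

```python
# Plain Lloyd's algorithm on scalar session durations.
def kmeans_1d(values, centers, rounds=20):
    """Iteratively assign values to nearest center, then recenter."""
    for _ in range(rounds):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

durations = [0.3, 0.5, 0.8, 1.2, 1.5, 1.8, 2.2, 2.5, 2.9]  # hours, invented
centers, groups = kmeans_1d(durations, centers=[0.5, 1.5, 2.5])
```

A production analysis would run a distributed variant of this over the full access logs, but the grouping logic is the same.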


2019 ◽  
Vol 10 (3) ◽  
pp. 33-48 ◽  
Author(s):  
Emilia Rosa Jimson ◽  
Kashif Nisar ◽  
Mohd Hanafi Ahmad Hijazi

Software-defined networking (SDN) has been shown to make the management of the current network architecture simpler and more flexible. The key idea of SDN is to simplify network management by introducing centralized control, through which dynamic updates of forwarding rules, simplification of network device tasks, and flow abstractions can be realized. In this article, the researchers discuss the complex design of the current network architecture, which has inevitably resulted in poor management of network resources such as bandwidth. An SDN-based network model is proposed to simplify the management of a network's limited bandwidth. The proposed model utilizes the limited network bandwidth systematically by giving real-time traffic higher priority than non-real-time traffic in accessing the limited resource. The experimental results showed that the proposed model helped ensure real-time traffic was given greater priority to access the limited bandwidth, with the major portion of the limited bandwidth allocated to real-time traffic.
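The prioritization policy can be sketched as strict priority within a per-interval bandwidth budget (a hedged illustration with invented packet classes and sizes, not the article's controller logic): real-time packets are served first, and non-real-time packets only consume whatever budget remains.

```python
# Strict-priority scheduling: serve real-time ('rt') packets before
# non-real-time ('nrt') packets within one interval's bandwidth budget.
def schedule(packets, budget_kb):
    """packets: list of (cls, size_kb); returns the list actually sent."""
    rt = [p for p in packets if p[0] == "rt"]
    nrt = [p for p in packets if p[0] == "nrt"]
    sent = []
    for p in rt + nrt:              # real-time traffic drains first
        if p[1] <= budget_kb:
            sent.append(p)
            budget_kb -= p[1]
    return sent

sent = schedule([("nrt", 40), ("rt", 30), ("rt", 50), ("nrt", 20)],
                budget_kb=100)
```

With a 100 KB budget, both real-time packets are admitted and only 20 KB remains for non-real-time traffic, which is the "major portion to real-time" behavior the experiments report.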


2019 ◽  
Vol 10 (1) ◽  
pp. 78-95 ◽  
Author(s):  
Hindol Bhattacharya ◽  
Samiran Chattopadhyay ◽  
Matangini Chattopadhyay ◽  
Avishek Banerjee

The distributed storage allocation problem is an important optimization problem in reliable distributed storage, which aims to minimize storage cost while maximizing the probability of error recovery through optimal placement of data on distributed storage nodes. A key characteristic of distributed storage is that data is stored on remote servers across a network; thus network resources, especially communication links, are an expensive and non-trivial resource that should be optimized as well. In this article, the authors present a simulation-based study of the network characteristics of a distributed storage network under several allocation patterns. By varying the allocation patterns, the authors demonstrate the interdependence between network bandwidth, defined in terms of link capacity, and allocation pattern, using network throughput as the metric. Motivated by the importance of network resources as a cost metric, the authors formalize an optimization problem that jointly minimizes both the storage cost and the cost of network resources. A hybrid metaheuristic algorithm is employed to solve this optimization problem by allocating data in a distributed storage system. Experimental results validate the efficacy of the algorithm.
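A small instance makes the joint objective concrete (our simplified formulation with invented costs, not the article's model or its metaheuristic): spread one unit of data across nodes so that storage cost plus network-transfer cost is minimized, subject to the surviving nodes holding enough of the data with high enough probability.

```python
# Brute-force the joint storage + network cost minimization on a tiny
# instance; each node survives independently with probability Q, and the
# data is recoverable if the surviving allocations sum to >= 1 unit.
from itertools import product

NODES = 3
Q = 0.9                       # per-node survival probability, hypothetical
STORE_COST = [1.0, 2.0, 3.0]  # per-unit storage cost per node, invented
LINK_COST = [3.0, 1.0, 1.0]   # per-unit transfer cost per node, invented

def recovery_prob(alloc):
    """P(surviving nodes together hold >= 1 unit of the data)."""
    p = 0.0
    for outcome in product([0, 1], repeat=NODES):  # 1 = node survives
        if sum(a for a, up in zip(alloc, outcome) if up) >= 1.0 - 1e-9:
            w = 1.0
            for up in outcome:
                w *= Q if up else (1 - Q)
            p += w
    return p

def best_allocation(step=0.5, budget=2.0, target=0.97):
    levels = [i * step for i in range(int(budget / step) + 1)]
    best, best_cost = None, float("inf")
    for alloc in product(levels, repeat=NODES):
        if sum(alloc) > budget or recovery_prob(alloc) < target:
            continue
        cost = sum(a * (s + l)
                   for a, s, l in zip(alloc, STORE_COST, LINK_COST))
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best, best_cost

best, best_cost = best_allocation()
```

On this instance the cheapest reliable pattern spreads half a unit on every node rather than replicating on the cheapest pair, showing how the network-cost term changes the optimal allocation; realistic instances are too large for brute force, which is why the article turns to a hybrid metaheuristic.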


2014 ◽  
Vol 23 (01) ◽  
pp. 1440002 ◽  
Author(s):  
André Pessoa Negrão ◽  
João Costa ◽  
Paulo Ferreira ◽  
Luís Veiga

Cooperative editing applications enable geographically distributed users to concurrently edit a shared document space over a computer network. These applications present several technical challenges related to the scalability of the system and the promptness with which relevant updates are disseminated to the concerned users. This paper presents Cooperative Semantic Locality Awareness (CoopSLA), a consistency model for cooperative editing applications that is scalable and efficient with regard to user needs. In CoopSLA, updates to different parts of the document have different priorities, depending on the relative interest of the user in the region in which the update is performed; updates that are considered relevant are sent to the user promptly, while less important updates are postponed. As a result, the system makes more intelligent use of network resources, since (1) it saves bandwidth by merging postponed updates and (2) it issues fewer accesses to the network as a result of both update merging and message aggregation. We have implemented a collaborative version of the open-source TeX editor Texmaker using the CoopSLA approach. We present evaluation results that support our claim that CoopSLA is very effective regarding network usage while fulfilling user needs (e.g. ensuring that relevant updates are disseminated in time).
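The prioritize-and-merge idea can be sketched in a few lines (our simplification, with hypothetical region names): updates inside the user's region of interest are forwarded at once, while updates elsewhere are buffered per region, with later edits replacing earlier ones, so a single aggregated flush replaces many messages.

```python
# Interest-based update dissemination: prompt delivery for the focus
# region, merge-and-postpone for everything else.
class InterestFilter:
    def __init__(self, focus_region):
        self.focus = focus_region
        self.pending = {}              # region -> latest buffered update

    def on_update(self, region, update, send):
        if region == self.focus:
            send(region, update)       # relevant: disseminate promptly
        else:
            self.pending[region] = update  # merge: keep only the latest

    def flush(self, send):
        for region, update in self.pending.items():
            send(region, update)       # one aggregated message per region
        self.pending.clear()

sent = []
f = InterestFilter(focus_region="sec2")
f.on_update("sec2", "edit-a", lambda r, u: sent.append((r, u)))
f.on_update("sec5", "edit-b", lambda r, u: sent.append((r, u)))
f.on_update("sec5", "edit-c", lambda r, u: sent.append((r, u)))
f.flush(lambda r, u: sent.append((r, u)))
```

Two of the three updates target the out-of-interest region, yet only one message for it ever leaves the node, which is the bandwidth saving CoopSLA's merging exploits.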


2022 ◽  
Vol 6 ◽  
pp. 857-876
Author(s):  
Yin Sheng Zhang

Purpose – This study explores a way to retain the strengths and eliminate the weaknesses of the existing architectures of the local OS and the cloud OS, and to create an innovative one, referred to as the semi-network OS architecture. Method – The elements of the semi-network OS architecture are network resources, local resources, and a semi-mobile hardware resource. Among them, network resources are the expanded portion of the OS, used to ensure the scalability of the OS; local resources are the base portion of the OS, used to ensure the stability of local computing as well as the autonomy of user operations; and the semi-mobile hardware resource is the OSPU, used to ensure the positioning and security of data flow. Results – The fat-client OS relies on network shared resources, local exclusive resources, and the semi-mobile hardware resource (OSPU), rather than solely on a single resource, to perform its tasks on a fat client. In this architecture, most of the system files of the OS on a fat client are derived from the OS server, which is a network shared resource, and the rest are derived from the OSPU of the fat client, which is a non-network resource, so the architecture of the OS has a "semi-network" attribute. The OSPU is a key subordinate component for data processing and security verification, while the OS server is a storage place, rather than an operating place, for system files; system files stored on the server can only be downloaded to a fat client to carry out their mission. Conclusion – A complete OS is divided into a base portion and an expanded portion, and this division enables a fat client to be dually supported by remote network resources and local non-network resources. It is therefore expected to make a fat client more flexible, safer, more reliable, and more convenient to operate.


2014 ◽  
Vol 484-485 ◽  
pp. 799-802
Author(s):  
Ning Li

Resource sharing here refers to network resource sharing, in which many Internet enthusiasts share the information they have collected with the world through various platforms without pursuing any profit. With the wide adoption of computer networks among user groups, good communication platforms between the Internet and its users have emerged, and network resources have gradually trended toward shared use rather than being enjoyed by a single user. To analyze and better understand the advantages of computer network resource sharing, it is necessary to consider actual application conditions, which this paper does.


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 258
Author(s):  
Saleem Karmoshi ◽  
Shuo Wang ◽  
Naji Alhusaini ◽  
Jing Li ◽  
Ming Zhu ◽  
...  

Allocating bandwidth guarantees to applications in the cloud has become increasingly demanded and essential as applications compete for shared cloud network resources. However, cloud-computing providers offer no network bandwidth guarantees in a cloud environment, which predictably deters tenants from running their applications there. Existing schemes offer tenants cluster abstractions that emulate the underlying physical network resources, but these have proven impractical; nevertheless, providing virtual network abstractions remains an essential step in the right direction. In this paper, we consider the requirements for enabling an application-aware network with bandwidth guarantees in a Virtual Data Center (VDC). We design GANA-VDC, a network virtualization framework supporting application-aware VDC networking with bandwidth guarantees in a cloud datacenter. GANA-VDC achieves scalability by using an interceptor to translate OpenFlow features into fine-grained Quality of Service (QoS). To facilitate the expression of diverse network resource demands, we also propose a new Virtual Network (VN) to Physical Network (PN) mapping approach, the Graph Abstraction Network Architecture (GANA), which we introduce in this paper; it allows tenants to provide applications with a cloud networking environment, thereby improving preservation performance. Our results show that GANA-VDC can provide bandwidth guarantees and achieve low time complexity, yielding higher network utility.
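The admission side of a bandwidth guarantee can be sketched simply (our own illustration with invented topology, not GANA's mapping algorithm): a tenant's virtual links are admitted onto physical links only if every physical link on the chosen routes retains enough residual capacity to honor the requested guarantee.

```python
# All-or-nothing admission of virtual links onto physical links with
# residual-capacity bookkeeping.
def admit(residual, vlinks, route):
    """residual: physical link -> free Mbps; vlinks: (src, dst, mbps);
    route(src, dst) -> list of physical links on the path."""
    needed = {}
    for src, dst, mbps in vlinks:
        for link in route(src, dst):
            needed[link] = needed.get(link, 0.0) + mbps
    if any(needed[l] > residual.get(l, 0.0) for l in needed):
        return False               # some link would be oversubscribed
    for l, mbps in needed.items():
        residual[l] -= mbps        # reserve the guarantee
    return True

residual = {"A-B": 100.0, "B-C": 50.0}
route = lambda s, d: {("h1", "h2"): ["A-B"],
                      ("h1", "h3"): ["A-B", "B-C"]}[(s, d)]
ok = admit(residual, [("h1", "h2", 60.0), ("h1", "h3", 30.0)], route)
```

Because the reservation is all-or-nothing, an admitted tenant's guarantee can never be eroded by later arrivals, which is the property a VDC abstraction with bandwidth guarantees must preserve.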

