Game Theory for Collaboration in Future Networks

Author(s):  
José André Moura ◽  
Rui Neto Marinheiro ◽  
João Carlos Silva

Cooperative strategies have great potential to improve network performance and spectrum utilization in future networking environments. This new network-management paradigm, however, requires a novel design and analysis framework targeting a highly flexible networking solution with a distributed architecture. Game Theory is well suited to this task, since it is a comprehensive mathematical tool for modeling the highly complex interactions among distributed and intelligent decision makers. In this way, the most convenient management policies for the diverse players (e.g., content providers, cloud providers, home providers, brokers, network providers, or users) should be found to optimize the performance of the overall network infrastructure. The authors discuss in this chapter several Game Theory models/concepts that are highly relevant for enabling collaboration among the diverse players, using different ways to incentivize it, namely through pricing or reputation. In addition, the authors highlight several related open problems, such as the lack of proper models for dynamic and incomplete-information games in this area.


Author(s):  
Jose Moura ◽  
Rui Neto Marinheiro ◽  
Joao Carlos Silva

Cooperative strategies amongst network players can improve network performance and spectrum utilization in future networking environments. Game Theory is well suited to these emerging scenarios, since it models highly complex interactions among distributed decision makers. It also finds the most convenient management policies for the diverse players (e.g., content providers, cloud providers, edge providers, brokers, network providers, or users). These management policies optimize the performance of the overall network infrastructure with a fair utilization of its resources. This chapter discusses relevant theoretical models that enable cooperation amongst the players in distinct ways, namely through pricing or reputation. In addition, the authors highlight open problems, such as the lack of proper models for dynamic and incomplete-information scenarios. These upcoming scenarios are associated with computing and storage at the network edge, as well as the deployment of large-scale IoT systems. The chapter concludes by discussing a business model for future networks.


2018 ◽  
Vol 8 (12) ◽  
pp. 2530
Author(s):  
Nan Nie ◽  
Xin Zhang ◽  
Chu Fang ◽  
Qiu Zhu ◽  
Jiao Lu ◽  
...  

Game theory—the scientific study of interactive, rational decision making—describes the interaction of two or more players, from macroscopic organisms down to the cellular and subcellular levels. Life based on molecules is the highest and most complex expression of molecular interactions. However, using simple molecules to extend game theory to molecular decision-making remains challenging. Herein, we demonstrate a proof-of-concept molecular game-theoretical system (a molecular prisoner's dilemma) that relies on the formation of the thymine–Hg2+–thymine hairpin structure specifically induced by Hg2+, together with the fluorescence-quenching and molecular-adsorption capacities of cobalt oxyhydroxide (CoOOH) nanosheets, resulting in changes in the fluorescence intensity and distribution of a polythymine oligonucleotide of 33 repeated thymines (T33). The "bait" molecule, T33, interacted with two molecular players, CoOOH and Hg2+, whose different states (absence = silence, presence = betrayal) were regarded as strategies. We created conflicts (sharing or self-interest) in the fluorescence distribution of T33, quantifiable in a 2 × 2 payoff matrix. In addition, the molecular game-theoretical system based on T33 and CoOOH was used for sensing Hg2+ over the range of 20 to 600 nM with a detection limit of 7.94 nM (3σ), and for the determination of Hg2+ in pond water. Inspired by this proof of concept for molecular game theory, various molecular decision-making systems could be developed, which would help promote molecular information processing and generate novel intelligent molecular decision systems for environmental monitoring, molecular diagnosis, and therapy.
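The 2 × 2 payoff structure described above is the classical prisoner's dilemma. As a minimal sketch (the payoff numbers below are the textbook values, not the fluorescence-derived payoffs from this work), a pure-strategy Nash equilibrium can be found by checking unilateral deviations:

```python
from itertools import product

# Classical prisoner's dilemma payoffs (row player, column player).
# Strategies: 0 = cooperate ("silence"), 1 = defect ("betrayal").
# These numbers are the textbook example, not values from the paper.
PAYOFF = {
    (0, 0): (3, 3),  # both stay silent
    (0, 1): (0, 5),  # row silent, column betrays
    (1, 0): (5, 0),  # row betrays, column silent
    (1, 1): (1, 1),  # both betray
}

def pure_nash_equilibria(payoff):
    """Return strategy pairs where neither player gains by deviating alone."""
    eqs = []
    for r, c in product((0, 1), repeat=2):
        u_r, u_c = payoff[(r, c)]
        row_ok = all(payoff[(r2, c)][0] <= u_r for r2 in (0, 1))
        col_ok = all(payoff[(r, c2)][1] <= u_c for c2 in (0, 1))
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(PAYOFF))  # → [(1, 1)]: the dilemma, mutual betrayal
```

Mutual betrayal is the unique equilibrium even though mutual silence pays both players more, which is exactly the conflict between sharing and self-interest the abstract maps onto the fluorescence distribution.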


2013 ◽  
Vol 15 (03) ◽  
pp. 1340015 ◽  
Author(s):  
Vito Fragnelli ◽  
Stefano Gagliardo

Location problems describe situations in which one or more facilities have to be placed in a region so as to optimize a suitable objective function. Game theory has been used as a tool to solve location problems, and this paper describes the state of the art of research on location problems through the tools of game theory. Particular attention is given to the problems that are still open in the field of cooperative location game theory.
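In cooperative location games, a central question is how players should share the cost or benefit of jointly placed facilities. A standard allocation rule is the Shapley value; the sketch below computes it for a hypothetical three-player game whose coalition values are illustrative only, not drawn from the surveyed literature:

```python
from itertools import permutations

# Hypothetical 3-player cooperative game: v maps each coalition to its value.
# These coalition values are illustrative, not from any surveyed paper.
v = {
    frozenset(): 0,
    frozenset({'A'}): 1, frozenset({'B'}): 1, frozenset({'C'}): 2,
    frozenset({'A', 'B'}): 3, frozenset({'A', 'C'}): 4, frozenset({'B', 'C'}): 4,
    frozenset({'A', 'B', 'C'}): 6,
}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v[frozenset(coalition)]
            coalition.add(p)
            phi[p] += v[frozenset(coalition)] - before  # marginal contribution
    return {p: phi[p] / len(orders) for p in players}

print(shapley(['A', 'B', 'C'], v))
```

By construction the allocations sum to the grand-coalition value (efficiency), one of the axioms that makes the Shapley value attractive for sharing facility costs among cooperating players.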


Author(s):  
Stojan Kitanov ◽  
Borislav Popovski ◽  
Toni Janevski

Because of the increased computing and intelligent networking demands of 5G networks, cloud computing alone encounters too many limitations, such as the requirements for reduced latency, high mobility, high scalability, and real-time execution. A new paradigm called fog computing has emerged to resolve these issues. Fog computing distributes computing, data processing, and networking services to the edge of the network, closer to end users. Fog applied in 5G significantly improves network performance in terms of spectral and energy efficiency, enables direct device-to-device wireless communications, and supports the growing trend of network function virtualization and the separation of network control intelligence from radio network hardware. This chapter evaluates the quality of cloud and fog computing services in a 5G network and proposes five algorithms for an optimal selection of the 5G RAN according to the service requirements. The results demonstrate that fog computing is a suitable technology solution for 5G networks.


Author(s):  
José Juan Pazos-Arias ◽  
Martín López-Nores

We are witnessing the development of new communication technologies (e.g., DTV [digital TV] networks, 3G [third-generation] telephony, and DSL [digital subscriber line]) and a rapid growth in the amount of information available. In this scenario, users were supposed to benefit extensively from services delivering news, entertainment, education, commercial functionalities, and so forth. However, the current situation may be better described as information overload, as users are frequently faced with an overwhelming amount of information. A similar situation was noticeable in the 1990s with the exponential growth of the Internet, which made users feel disoriented among the myriad of contents available through their PCs. This gave birth to search engines (e.g., Google and Yahoo) that would retrieve relevant Web pages in response to user-entered queries. These tools proved effective, with millions of people using them to find pieces of information and services. However, the advent of new devices (DTV receivers, mobile phones, media players, etc.) introduces consumption and usage habits that render the search-engine paradigm insufficient. It is no longer realistic to think that users will bother to visit a site, enter queries describing what they want, and select particular contents from among those in a list. The reasons may relate to users adopting a predominantly passive role (e.g., while driving or watching TV), the absence of bidirectional communication (as in broadcasting environments), or users feeling uneasy with the interfaces provided. To tackle these issues, a large body of research is nowadays devoted to the design and provision of personalized information services, with a new paradigm of recommender systems proactively selecting the contents that match the interests and needs of each individual at any time.
This article describes the evolution of these services, followed by an overview of the functionalities available in diverse areas of application and a discussion of open problems.


Author(s):  
J. Joaquín Escudero-Garzás ◽  
Ana García-Armada

The goal of this chapter is to introduce the novel concept of cognitive radio (CR) for wireless telecommunications. Cognitive radios are a new type of radio device that includes cognition and reconfigurability features. The rising number of studies in different areas of research shows their potential and the expectations created among the telecommunications community. In this chapter, the authors first introduce the reader to the new paradigm that cognitive radio networks have created; more specifically, they explain in detail the new next-generation networks. Given that the intention is to introduce cognitive radio, the authors focus on the challenges in the PHY layer and MAC sublayer and on the most relevant studies in these fields. Finally, the integration of game theory and cognitive radio creates a new paradigm where the advantages of both technologies merge to solve complex problems.


2014 ◽  
Vol 12 (1) ◽  
pp. 29-38
Author(s):  
Silvanus Teneng Kiyang ◽  
Robert Van Zyl

Purpose – The purpose of this work is to assess empirically the influence of ambient noise on the performance of wireless sensor networks (WSNs) and, based on these findings, to develop a mathematical tool that helps technicians determine the maximum inter-node separation before deploying a new WSN.
Design/methodology/approach – A WSN test platform is set up in an electromagnetically shielded environment (RF chamber) to accurately control and quantify the ambient noise level. The test platform is subsequently placed in an operational laboratory to record network performance in typical unshielded spaces. Results from the RF chamber and the real-life environments are analysed.
Findings – The minimum signal-to-noise ratio (SNR) at which the network still functions was found to be of the order of 30 dB. In the real-life scenarios (machines, telecommunications, and computer laboratories), the measured SNR exceeded this minimum value by more than 20 dB, owing to the low ambient industrial noise levels observed in the 2.4 GHz ISM band for typical environments found at academic institutions. This suggests that WSNs are less prone to industrial interference than anticipated.
Originality/value – A predictive mathematical tool is developed that technicians can use to determine the maximum inter-node separation before the WSN is deployed. The tool yields reliable results and promises to save installation time.
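The abstract does not give the predictive tool's formula, but a common way to sketch such a maximum-separation predictor is a link budget under a log-distance path-loss model; every parameter value below is an illustrative assumption, not a measurement from the paper:

```python
import math

def max_separation_m(tx_power_dbm, noise_floor_dbm, snr_min_db,
                     pl_d0_db=40.0, d0_m=1.0, n=2.5):
    """Max node separation under a log-distance path-loss model.

    Path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0).
    The received power must stay above noise_floor + required SNR.
    Defaults (reference loss, path-loss exponent) are illustrative
    assumptions, not values from the paper.
    """
    # Largest path loss the link can tolerate while meeting the SNR target.
    max_path_loss = tx_power_dbm - (noise_floor_dbm + snr_min_db)
    # Invert the log-distance model for distance.
    exponent = (max_path_loss - pl_d0_db) / (10.0 * n)
    return d0_m * 10.0 ** exponent

# e.g., 0 dBm transmitter, -95 dBm noise floor, 30 dB required SNR
print(round(max_separation_m(0.0, -95.0, 30.0), 1))  # → 10.0 (metres)
```

The 30 dB minimum SNR reported above would enter as `snr_min_db`; the path-loss exponent `n` would have to be fitted to the environment (free space is about 2, cluttered indoor spaces higher), which is presumably where the empirical measurements feed the tool.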

