network utility
Recently Published Documents

TOTAL DOCUMENTS: 226 (five years: 50)
H-INDEX: 20 (five years: 3)

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xiaoge Huang ◽  
Xuesong Deng ◽  
Chengchao Liang ◽  
Weiwei Fan

To address data security and user privacy issues in the task offloading and resource allocation process of fog computing networks, this paper proposes a blockchain-enabled fog computing network task offloading model. Furthermore, to reduce the network utility, defined here as the sum of the total energy consumption of the fog computing network and the total delay of the blockchain network, a blockchain-enabled fog computing network task offloading and resource allocation algorithm (TR-BFCN) is proposed to jointly optimize the task offloading decision and the resource allocation. The original nonconvex optimization problem is decomposed into two subproblems: the task offloading decision and the computational resource allocation. Moreover, a two-stage Stackelberg game model is designed to obtain the optimal amount of purchased resources and the optimal resource pricing. Simulation results show that the proposed TR-BFCN algorithm effectively reduces the network utility compared with other algorithms.
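The two-stage Stackelberg structure (a provider sets a resource price, and fog nodes respond with a purchase amount) can be sketched with backward induction. The logarithmic benefit function, unit cost, and closed-form solutions below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def follower_best_response(price, benefit):
    """Fog node (follower) maximizes benefit * log(1 + x) - price * x over x >= 0.
    The first-order condition benefit / (1 + x) = price gives x = benefit/price - 1."""
    return max(benefit / price - 1.0, 0.0)

def leader_optimal_price(cost, benefit):
    """Provider (leader) maximizes (price - cost) * x*(price) by backward induction;
    its first-order condition gives price* = sqrt(cost * benefit)."""
    return math.sqrt(cost * benefit)

# Toy numbers (assumed): follower benefit coefficient 4.0, provider unit cost 1.0.
benefit, cost = 4.0, 1.0
p_star = leader_optimal_price(cost, benefit)       # 2.0
x_star = follower_best_response(p_star, benefit)   # 1.0
```

Solving the follower's problem first and substituting its best response into the leader's objective is the standard backward-induction recipe for any two-stage Stackelberg pricing game.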


2021 ◽  
Author(s):  
Yi Shi ◽  
Parisa Rahimzadeh ◽  
Maice Costa ◽  
Tugba Erpek ◽  
Yalin E. Sagduyu

The paper presents a reinforcement learning solution to dynamic admission control and resource allocation for 5G radio access network (RAN) slicing requests when the spectrum is potentially shared between 5G and an incumbent user, as in Citizens Broadband Radio Service (CBRS) scenarios. Available communication resources (frequency-time resource blocks and transmit powers) and computational resources (processor power) not used by the incumbent user can be allocated to stochastic arrivals of network slicing requests. Each request arrives with priority (weight), throughput, computational resource, and latency (deadline) requirements. As online algorithms, greedy and myopic solutions that ignore the heterogeneity of future requests and their arrival process are ineffective for network slicing. Therefore, reinforcement learning solutions (Q-learning and deep Q-learning) are presented to maximize the network utility, measured as the total weight of granted network slicing requests over a time horizon, subject to communication and computational constraints. Results show that reinforcement learning improves the 5G network utility relative to myopic, greedy, random, and first-come-first-served solutions. In particular, deep Q-learning reduces complexity and allows practical implementation as the state-action space grows, and effectively admits or rejects requests when 5G must share the spectrum with incumbent users that may dynamically occupy some of the frequency-time blocks. Furthermore, the robustness of deep reinforcement learning is demonstrated in the presence of misdetection/false-alarm errors in detecting the incumbent user's activity.
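The admit/reject decision can be illustrated with a minimal tabular Q-learning sketch. The state here is an assumed toy pair (remaining capacity, request weight); the paper's actual state also covers throughput, latency, and incumbent occupancy, and its reward is the weighted sum over the horizon:

```python
import random
from collections import defaultdict

random.seed(0)

CAPACITY = 5            # free resource blocks at the start of an episode (toy)
WEIGHTS = [1, 5]        # low- and high-priority request weights (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(free_blocks, request_weight, action)]

def step(free, weight, action):
    """Admit (1) consumes one block and earns the request's weight;
    reject (0) earns nothing and keeps the block."""
    if action == 1 and free > 0:
        return weight, free - 1
    return 0, free

for episode in range(5000):
    free = CAPACITY
    for _ in range(10):                      # 10 request arrivals per episode
        w = random.choice(WEIGHTS)
        s = (free, w)
        a = random.choice([0, 1]) if random.random() < EPS else \
            max([0, 1], key=lambda act: Q[s + (act,)])
        r, free_next = step(free, w, a)
        w_next = random.choice(WEIGHTS)      # sample the next arrival to bootstrap
        best_next = max(Q[(free_next, w_next, 0)], Q[(free_next, w_next, 1)])
        Q[s + (a,)] += ALPHA * (r + GAMMA * best_next - Q[s + (a,)])
        free = free_next
```

Because capacity is scarce, the learned table ends up valuing admission of high-weight requests over low-weight ones; replacing the table with a neural approximator is what makes the deep Q-learning variant scale as the state-action space grows.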


2021 ◽  
Vol 48 (3) ◽  
pp. 63-70
Author(s):  
Shiva Raj Pokhrel ◽  
Carey Williamson

Network utility maximization (NUM) for Multipath TCP (MPTCP) is a challenging task, since there is no well-defined utility function for MPTCP [6]. In this paper, we identify the conditions under which we can use Kelly's NUM mechanism, and explicitly compute the equilibrium. We obtain this equilibrium by using Tullock's rent-seeking framework from game theory to define a utility function for MPTCP. This approach allows us to design MPTCP algorithms with common delay and/or loss constraints at the subflow level. Furthermore, this utility function has diagonal strict concavity, which guarantees a globally unique (normalized) equilibrium.
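Tullock's rent-seeking framework assigns each player a share of a contested rent proportional to its own effort. The sketch below shows only the standard symmetric contest and its textbook closed-form equilibrium, as an illustration of the framework the authors build on; their MPTCP subflow utility and delay/loss constraints are more elaborate:

```python
def tullock_payoff(x_i, x_others, rent, cost=1.0):
    """Standard Tullock contest payoff: a share of the rent proportional
    to own effort x_i, minus a linear effort cost."""
    total = x_i + x_others
    return rent * x_i / total - cost * x_i if total > 0 else 0.0

def symmetric_equilibrium(n, rent, cost=1.0):
    """Closed-form symmetric Nash equilibrium effort for the n-player
    Tullock contest: x* = rent * (n - 1) / (cost * n**2)."""
    return rent * (n - 1) / (cost * n ** 2)

# Two subflows contending for a resource of value 1.0 (toy numbers):
x_star = symmetric_equilibrium(2, 1.0)   # 0.25
```

At x* = 0.25 neither player gains by unilaterally raising or lowering its effort, which is the uniqueness-of-equilibrium property that diagonal strict concavity generalizes to the networked setting.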


2021 ◽  
Vol 11 (4) ◽  
pp. 1884
Author(s):  
Shuai Liu ◽  
Jing He ◽  
Jiayun Wu

Dynamic spectrum access (DSA) has been considered a promising technology to address spectrum scarcity and improve spectrum utilization. In practice, channels are often correlated with each other, and collisions inevitably arise when multiple primary users (PUs) or multiple secondary users (SUs) transmit simultaneously in a real DSA environment. Considering these factors, deep multi-user reinforcement learning (DMRL) is proposed by introducing a cooperative strategy into a dueling deep Q network (DDQN). Without requiring prior information about the system dynamics, the DDQN can efficiently learn the correlations between channels and reduce the computational complexity in the large state space of the multi-user environment. To reduce conflicts and further maximize the network utility, a cooperative channel strategy is explored that utilizes acknowledgment (ACK) signals without exchanging spectrum information. In each time slot, each user selects a channel and transmits a packet with a certain probability; the presence or absence of an ACK then indicates whether the transmission succeeded. Compared with other popular models, simulation results show that the proposed DMRL achieves better performance in enhancing spectrum utilization and reducing the conflict rate in dynamic cooperative spectrum sensing.
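The ACK-based cooperative idea (each user learns only from its own acknowledgments, with no spectrum-information exchange) can be illustrated with a toy multi-user simulation. The per-slot transmission probability, score update, and exploration rate below are assumptions, and the paper's DMRL uses a dueling DQN rather than the simple per-user score table here:

```python
import random

random.seed(1)

N_USERS, N_CHANNELS, ROUNDS = 3, 4, 2000
TX_PROB = 0.8           # per-slot transmission probability (assumed)

# Each user keeps private channel scores, updated only from its own ACKs.
scores = [[0.0] * N_CHANNELS for _ in range(N_USERS)]

def choose(user, eps=0.1):
    """Epsilon-greedy channel choice over the user's private scores."""
    if random.random() < eps:
        return random.randrange(N_CHANNELS)
    row = scores[user]
    return max(range(N_CHANNELS), key=lambda c: row[c])

successes = 0
for t in range(ROUNDS):
    picks = {u: choose(u) for u in range(N_USERS) if random.random() < TX_PROB}
    for u, c in picks.items():
        # An ACK arrives only if the user was alone on its channel.
        ack = sum(1 for c2 in picks.values() if c2 == c) == 1
        scores[u][c] += 0.1 * ((1.0 if ack else 0.0) - scores[u][c])
        if ack and t >= ROUNDS // 2:
            successes += 1
```

With more channels than users, this ACK-only feedback is enough for the users to settle on disjoint channels, which is the conflict-reduction effect the cooperative strategy exploits.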

