Learning to Harness Bandwidth with Multipath Congestion Control and Scheduling

2021 ◽  
Author(s):  
Shiva Raj Pokhrel ◽  
Anwar Walid

Multipath TCP (MPTCP) has emerged as a facilitator for harnessing and pooling available bandwidth in wireless/wireline communication networks and in data centers. Existing implementations of MPTCP, such as the Linked Increase Algorithm (LIA), Opportunistic LIA (OLIA), and BAlanced LInked Adaptation (BALIA), include separate algorithms for congestion control and packet scheduling, with pre-selected control parameters. We propose a Deep Q-Learning (DQL) based framework for joint congestion control and packet scheduling in MPTCP. At the heart of the solution is an intelligent agent for interfacing, learning, and actuation, which learns an optimal congestion control and scheduling mechanism from experience using DQL techniques with policy gradients. We provide a rigorous stability analysis of the system dynamics, which yields important practical design insights. In addition, the proposed DQL-MPTCP algorithm utilizes a recurrent neural network with long short-term memory for continuously i) learning the dynamic behavior of subflows (paths) and ii) responding promptly to their behavior using prioritized experience replay. With extensive emulations, we show that the proposed DQL-based MPTCP algorithm outperforms the MPTCP LIA, OLIA, and BALIA algorithms. Moreover, the DQL-MPTCP algorithm is robust to time-varying network characteristics and provides dynamic exploration and exploitation of paths. The revised version is to appear in IEEE Transactions on Mobile Computing.
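To make the joint decision concrete, the sketch below shows the kind of control loop such an agent runs: at each interval it observes per-subflow statistics and picks both congestion-window adjustments and a traffic split. It is purely illustrative; a simple tabular, epsilon-greedy Q-learner stands in for the LSTM-based deep Q-network with prioritized experience replay described in the abstract, and the class name, state discretization, and reward shaping are assumptions rather than the authors' implementation.

```python
import random
from collections import defaultdict

class MptcpAgent:
    # Joint actions: cwnd multipliers for subflows 0 and 1, plus the share of
    # new packets scheduled on subflow 0.
    ACTIONS = [(up0, up1, share)
               for up0 in (0.5, 1.0, 1.5)
               for up1 in (0.5, 1.0, 1.5)
               for share in (0.25, 0.5, 0.75)]

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, action_index)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _state(self, stats):
        # Discretize per-subflow (RTT in ms, loss rate) into coarse buckets.
        return tuple((min(int(rtt // 20), 9), min(int(loss * 100), 9))
                     for rtt, loss in stats)

    def act(self, stats):
        s = self._state(stats)
        if random.random() < self.epsilon:   # exploration
            return random.randrange(len(self.ACTIONS))
        return max(range(len(self.ACTIONS)), key=lambda a: self.q[(s, a)])

    def learn(self, stats, action, reward, next_stats):
        s, s2 = self._state(stats), self._state(next_stats)
        best_next = max(self.q[(s2, a)] for a in range(len(self.ACTIONS)))
        td_target = reward + self.gamma * best_next
        self.q[(s, action)] += self.alpha * (td_target - self.q[(s, action)])

# One control interval: the (assumed) reward favours aggregate sending rate
# and penalizes RTT inflation across both subflows.
agent = MptcpAgent()
stats = [(40.0, 0.01), (120.0, 0.03)]        # (RTT ms, loss rate) per subflow
a = agent.act(stats)
cwnd_scale0, cwnd_scale1, share0 = agent.ACTIONS[a]
reward = 10.0 * (cwnd_scale0 + cwnd_scale1) - 0.01 * (stats[0][0] + stats[1][0])
agent.learn(stats, a, reward, [(42.0, 0.01), (110.0, 0.02)])
```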


2021 ◽  
Author(s):  
Mohammed Yahya Asiri

Today, mobile devices such as smartphones are equipped with multiple wireless radio interfaces, including cellular (3G/4G/LTE) and Wi-Fi (IEEE 802.11) [46]. Legacy devices can communicate over only one interface at a time, and the Transmission Control Protocol (TCP) cannot change connection settings without breaking the connection. The Multipath TCP (MPTCP) protocol has been proposed to overcome this single-interface limitation of TCP and can substantially improve application performance by using multiple paths transparently (automatic path switching). The last mile is the final network segment, which carries all of the network traffic. The available bandwidth of the last-mile link can severely limit network throughput, as it caps the amount of data that can be transmitted, and the quality of last-mile networks largely determines the reliability and quality of the overall network. MPTCP can provide a convenient solution to the last-mile problem. An MPTCP scheduler needs to produce effective packet schedules based on the current status of the paths (subflows) in terms of loss rate, bandwidth, and jitter, in a way that maximizes network goodput. MPTCP extends TCP by splitting a single byte stream into multiple byte streams and transferring them over multiple disjoint network paths, or subflows. An MPTCP connection combines a set of subflows, where each subflow's performance depends on the condition of its path (including packet loss rate, queuing delay, and throughput capacity). Poor packet scheduling may lead to critical networking issues such as head-of-line (HoL) blocking, where packets scheduled on a low-latency path must wait for packets on a high-latency path to ensure in-order delivery, and out-of-order (OFO) packets, for which the receiver must maintain a large queue to reorder the received data. In this project, we aim to study and experiment with MPTCP scheduling on dynamic networks (such as cellular networks) and to propose an MPTCP scheduling scheme that overcomes the performance limitations of dynamic networks.
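As a concrete illustration of the scheduling decision described above, the sketch below implements a minimal earliest-delivery-time scheduler: each packet is assigned to the subflow expected to deliver it first, a common heuristic for reducing head-of-line blocking. The Subflow fields and the delivery-time estimate are simplified assumptions, not the scheme this project proposes.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    rtt_ms: float          # smoothed round-trip time
    bandwidth_mbps: float  # estimated available bandwidth
    queued_bytes: int = 0  # bytes already scheduled but not yet acknowledged

    def est_delivery_ms(self, pkt_bytes: int) -> float:
        # Time to drain the queue plus this packet, plus half an RTT of propagation.
        bytes_per_ms = self.bandwidth_mbps * 1e6 / 8 / 1000
        return (self.queued_bytes + pkt_bytes) / bytes_per_ms + self.rtt_ms / 2

def schedule(packets, subflows):
    """Greedily map each packet to the subflow that delivers it earliest."""
    assignment = []
    for pkt_bytes in packets:
        best = min(subflows, key=lambda sf: sf.est_delivery_ms(pkt_bytes))
        best.queued_bytes += pkt_bytes
        assignment.append(best.name)
    return assignment

wifi = Subflow("wifi", rtt_ms=20, bandwidth_mbps=50)
lte = Subflow("lte", rtt_ms=60, bandwidth_mbps=20)
print(schedule([1500] * 10, [wifi, lte]))   # mostly 'wifi' until its queue builds up
```

A dynamic network would keep refreshing rtt_ms and bandwidth_mbps from measurements, which is exactly where a learned scheduler can improve on a fixed heuristic.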


2020 ◽  
Vol 28 (2) ◽  
pp. 653-666 ◽  
Author(s):  
Wenjia Wei ◽  
Kaiping Xue ◽  
Jiangping Han ◽  
David S. L. Wei ◽  
Peilin Hong


2021 ◽  
Vol 13 (3) ◽  
pp. 82
Author(s):  
Swarna Bindu Chetty ◽  
Hamed Ahmadi ◽  
Sachin Sharma ◽  
Avishek Nag

With the emergence of various types of applications, such as delay-sensitive applications, future communication networks are expected to be increasingly complex and dynamic. Network Function Virtualization (NFV) provides the necessary support for efficiently managing such complex networks by virtualizing network functions and placing them on shared commodity servers. However, one of the critical issues in NFV is resource allocation for highly complex services, a problem that is NP-hard. To solve this problem, our work investigates the potential of Deep Reinforcement Learning (DRL) as a swift yet accurate approach (compared to integer linear programming) for deploying Virtualized Network Functions (VNFs) under several Quality-of-Service (QoS) constraints, such as latency, memory, CPU, and failure recovery requirements. More specifically, the failure recovery requirements focus on the node-outage problem, where an outage can be due either to a disaster or to the unavailability of network topology information (e.g., because of proprietary and ownership issues). Within DRL, we adopt a Deep Q-Learning (DQL) based algorithm in which a single primary network estimates both the action-value function Q and its prediction target, which causes divergence in the Q-value updates. This divergence grows with the size of the action and state spaces, causing inconsistency in learning and inaccurate output. To overcome this divergence, our work adopts a well-known approach: introducing a Target Neural Network and Experience Replay into DQL. The constructed model is simulated on two real network topologies (Netrail and BtEurope) with various node capacities (e.g., CPU cores, VNFs per core), link capacities (e.g., bandwidth and latency), several VNF Forwarding Graph (VNF-FG) complexities, and degrees of nodal outage from 0% to 50%. We conclude that, as network density, nodal capacity, or VNF-FG complexity increases, the model requires substantially more computation time to produce the desired results. Moreover, as VNF-FG complexity rises, the available resources are consumed much faster. In terms of nodal outage, our model achieved a Service Acceptance Rate (SAR) of almost 70–90% even with a 50% nodal outage for certain combinations of scenarios.
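The sketch below illustrates the two stabilization mechanisms mentioned above, a target network and an experience-replay buffer, inside a generic DQL update loop written with PyTorch. The state dimension, number of candidate placement actions, random transitions, and hyperparameters are placeholders, not values from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 8      # hypothetical: feature vector size, candidate nodes
GAMMA, BATCH, SYNC_EVERY = 0.99, 32, 100

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

policy_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(policy_net.state_dict())     # start the two networks in sync
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                           # experience-replay buffer

def train_step(step):
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)                # decorrelate transitions
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q = policy_net(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():                               # frozen target supplies the bootstrap value
        target = r + GAMMA * target_net(s2).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % SYNC_EVERY == 0:                          # periodic hard update of the target network
        target_net.load_state_dict(policy_net.state_dict())

# Fill the buffer with random (state, action, reward, next_state, done) transitions
# and run a few updates; a real agent would generate these from VNF placements.
for step in range(200):
    replay.append((torch.randn(STATE_DIM), torch.randint(N_ACTIONS, ()),
                   torch.randn(()), torch.randn(STATE_DIM), torch.zeros(())))
    train_step(step)
```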


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Sanguk Ryu ◽  
Inwhee Joe ◽  
WonTae Kim

Named Data Networking (NDN) is a future network architecture that replaces IP-oriented communication with content-oriented communication and introduces new features such as in-network caching, multiple paths, and multiple sources. Services such as video streaming, to which NDN can be applied in the future, can cause congestion if data becomes concentrated on one node during periods of high demand. To solve this problem, sending-rate control methods such as TCP congestion control have been proposed, but they do not adequately reflect the characteristics of NDN. Therefore, we use reinforcement learning and deep learning to propose a congestion control method that takes advantage of NDN's multipath features. The intelligent forwarding strategy for congestion control in NDN proposed in this paper, based on Q-learning and long short-term memory (LSTM), is divided into two phases. The first phase trains an LSTM model to predict the pending interest table (PIT) entry rate, which reflects the amount of data being returned and can therefore serve as a congestion indicator. In the second phase, traffic is forwarded via Q-learning to an alternative, uncongested path based on the PIT entry rate predicted by the trained LSTM model. The simulation results show that the proposed method increases the data reception rate by 6.5% and 19.5% and decreases the packet drop rate by 7.3% and 17.2% compared with the adaptive SRTT-based forwarding strategy (ASF) and BestRoute, respectively.
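As an illustration of the second phase only, the sketch below shows a tabular Q-learning forwarder that chooses an outgoing face from a per-face congestion indicator. The LSTM-predicted PIT entry rates are stubbed as constants, and the face names, state discretization, and reward are assumptions for the sake of a self-contained example rather than the paper's design.

```python
import random

FACES = ["face0", "face1"]                 # candidate next hops for a prefix
q_table = {}                               # Q[(state, face)] -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def discretize(pit_rates):
    """Bucket each face's (predicted) PIT entry rate into low/medium/high."""
    return tuple(min(int(r * 3), 2) for r in pit_rates)

def choose_face(state):
    if random.random() < EPSILON:          # exploration
        return random.choice(FACES)
    return max(FACES, key=lambda f: q_table.get((state, f), 0.0))

def update(state, face, reward, next_state):
    best_next = max(q_table.get((next_state, f), 0.0) for f in FACES)
    old = q_table.get((state, face), 0.0)
    q_table[(state, face)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One decision round with stubbed predictions in [0, 1]; higher means more congested.
pit_rates = [0.8, 0.2]                     # stand-in for the trained LSTM's output
s = discretize(pit_rates)
face = choose_face(s)
reward = 1.0 if face == "face1" else -1.0  # data returned quickly on the uncongested face
update(s, face, reward, discretize([0.7, 0.3]))
```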

