TCP Connections
Recently Published Documents

Total documents: 155 (five years: 13)
H-index: 14 (five years: 1)

2021, pp. 91-194
Author(s): Sloan Kelly, Khagendra Kumar

2021
Author(s): Junho Lee, Gyeongsik Yang, Zhixiong Niu, Peng Cheng, Yongqiang Xiong, et al.

Sensors, 2020, Vol 20 (24), pp. 7289
Author(s): Monika Prakash, Atef Abdrabou

The multipath transmission control protocol (MPTCP) is considered a promising wireless multihoming solution, and the 3rd generation partnership project (3GPP) includes it as a standard feature in fifth-generation (5G) networks. Currently, ns-3 (Network Simulator-3) is widely used to evaluate the performance of wireless networks and protocols, including the emerging MPTCP. This paper investigates the fidelity of the Linux kernel implementation of MPTCP in the ns-3 direct code execution (DCE) module. The fidelity of the MPTCP simulation is tested by comparing its performance with a real Linux stack implementation of MPTCP on a hardware testbed for two different setups. One setup emulates the existence of a bottleneck link between the sending and receiving networks, whereas the other setup does not have such a bottleneck. The fidelity of the ns-3 simulation is tested for four congestion control algorithms, namely Cubic, the linked-increases algorithm (LIA), opportunistic LIA (OLIA), and wVegas, for relatively short and long data flows. It is found that the uplink MPTCP throughput exhibited by the ns-3 simulator matches the hardware testbed results only if the flows are long-lived and share no common bottleneck link. Likewise, the MPTCP throughputs achieved in a downlink scenario using the ns-3 simulator and the hardware testbed are close to each other across all algorithms except wVegas, regardless of the flow size, if there is no bottleneck link. Moreover, it is observed that the impact of LTE handover on MPTCP throughput is less significant in the simulator than in the real hardware testbed, and it is setup-dependent.
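As a complementary illustration of how an application obtains an MPTCP connection in the first place, the sketch below opens an MPTCP socket on a Linux kernel with upstream MPTCP support (protocol number IPPROTO_MPTCP = 262, kernels 5.6 and later). This is an assumption for context only: the paper evaluates the multipath-tcp.org kernel implementation inside ns-3 DCE and on a hardware testbed, and congestion control algorithms such as LIA, OLIA and wVegas are selected inside that stack, not through this socket call. The function name and the endpoint address are illustrative.

```python
import socket

# IPPROTO_MPTCP (262) is available on Linux kernels >= 5.6 with upstream
# MPTCP enabled (sysctl net.mptcp.enabled=1). The constant was added to
# Python's socket module in 3.10, so fall back to the raw number otherwise.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_mptcp_connection(host: str, port: int) -> socket.socket:
    """Open an MPTCP connection; subflow creation and scheduling are
    handled by the kernel path manager, not by the application."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    # Hypothetical endpoint (documentation address), used only for illustration.
    with open_mptcp_connection("192.0.2.10", 5001) as s:
        s.sendall(b"hello over MPTCP\n")
```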


2019
Author(s): Ram P Rustagi, Viraj Kumar

In the 21st century, the internet has become an essential part of everyday tasks including banking, interacting with government services, education, entertainment, text/voice/video communication, etc. Individuals access the internet using client-side applications such as a browser or an app on their mobile phone or laptop/desktop. This client-side application communicates with a server-side application, typically running on a web server, which in turn may interact with other business applications. The underlying protocol is typically HTTP [1] running on top of the TCP/IP protocol [2][3]. A typical web server supports a large number (hundreds or thousands) of concurrent TCP connections. The most commonly deployed web servers in use today are Apache server [4], Nginx [5], and Microsoft Internet Information Server (IIS) [6]. Nginx is mostly used on Linux and IIS runs only on Windows OS. In contrast, the Apache web server (which is almost as old as the web itself) is supported on all platforms (Linux, Windows, MacOS, etc.). In its initial release in 1995 (version 1.3), Apache server could serve only a few concurrent clients, but its current release (2.4.41) can support a huge number of concurrent clients. In this article (as well as Part II that will follow), we will present a simplified view of this evolution that nevertheless explains how current web servers manage such high levels of concurrency. To do so, we will delve into socket programming, which is at the heart of managing TCP connections, and we will examine the key role that it plays in delivering high performance. We have studied both transport layer protocols, i.e., TCP [2] and UDP [7], in detail in the last few articles, and we have developed a basic understanding of the working of the transport layer. This is a communication-enabling layer used by applications to exchange application-level data. Simple working examples of applications using TCP (providing reliable delivery) and UDP (providing best-effort delivery) socket programming are provided in [8]. In this article, however, we will discuss increasingly complex levels of socket programming, from simple socket connections to the complex connection management that is necessary to attain high TCP performance. We will focus on TCP socket programming only, since UDP provides only best-effort delivery and its socket implementation does not significantly impact application communication performance.
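To make the connection-management theme concrete, here is a minimal sketch, not taken from the article itself, of a single-process TCP echo server that multiplexes many concurrent connections with non-blocking sockets and an I/O readiness loop (Python's selectors module). Event-driven servers such as Nginx scale to thousands of connections on the same basic pattern; the port, backlog, and buffer size below are arbitrary.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock: socket.socket) -> None:
    conn, addr = server_sock.accept()          # new TCP connection
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if data:
        conn.send(data)                        # best-effort echo; a real server
                                               # would buffer unsent bytes
    else:                                      # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen(128)                             # backlog of pending connections
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():                # wait until some socket is ready
        callback = key.data                    # accept() or handle()
        callback(key.fileobj)
```

The point of the pattern is that a single thread services every established connection, so the per-connection cost is a socket and a little state rather than a whole thread or process.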


2019, Vol 2019, pp. 1-12
Author(s): Marcos Talau, Mauro Fonseca, Emilio C. G. Wille

In the absence of losses, TCP constantly increases the amount of data sent per unit of time. This behavior leads to problems that affect its performance, especially when multiple devices share the same gateway. Several studies have been done to mitigate such problems, but many of them require changes on the TCP side or a meticulous configuration. Some approaches have shown promise, such as gateway techniques that change the receiver's advertised window in ACK segments based on the amount of memory in the gateway; in this work, we use the term "network-return" to refer to these techniques. In this paper, we present a new network-return technique called early window tailoring (EWT). EWT does not require any modification of the TCP implementations at the end hosts, nor does it require that all routers in the path use the same congestion control mechanism; deploying it at the gateway is sufficient. Using the ns-3 simulator and following the recommendations of RFC 7928, the new approach was tested in multiple scenarios. EWT was compared with drop-tail, RED, ARED, and two other network-return techniques, explicit window adaptation (EWA) and active window management (AWM). The results show that EWT is effective at congestion control. Its use avoided segment losses, bringing significant gains in transfer latency and goodput while maintaining fairness between the flows. Unlike the other approaches, however, the most prominent feature of EWT is its ability to maintain a very high number of active flows at a given segment loss rate. EWT sustained a number of active flows that is, on average, 49.3% higher than its best competitor and 75.8% higher than when no AQM scheme was used.
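EWT's window-sizing policy is defined in the paper and not reproduced here, but the packet-level operation shared by network-return techniques (EWA, AWM, EWT) can be sketched: the gateway rewrites the receiver's advertised window carried in ACK segments, shrinking it to a value of its own choosing, and patches the TCP checksum incrementally (RFC 1624). A rough sketch under assumptions, given the raw TCP header bytes and ignoring the window-scale option that a real implementation must account for; the offsets are the standard TCP header layout, while the policy input window_limit is hypothetical:

```python
import struct

def csum16_update(old_csum: int, old_word: int, new_word: int) -> int:
    """Incrementally update a 16-bit one's-complement checksum (RFC 1624, Eq. 3)."""
    s = (~old_csum & 0xFFFF) + (~old_word & 0xFFFF) + new_word
    s = (s & 0xFFFF) + (s >> 16)
    return ~((s & 0xFFFF) + (s >> 16)) & 0xFFFF

def clamp_advertised_window(tcp_header: bytearray, window_limit: int) -> None:
    """Shrink (never grow) the advertised window of a TCP segment in place.

    The 16-bit window field sits at offset 14 of the TCP header and the
    checksum at offset 16; window_limit would come from the gateway's own
    window-sizing policy (e.g. its free buffer memory and flow count).
    """
    old_win = struct.unpack_from("!H", tcp_header, 14)[0]
    if window_limit >= old_win:
        return                                    # only ever reduce the window
    old_csum = struct.unpack_from("!H", tcp_header, 16)[0]
    struct.pack_into("!H", tcp_header, 14, window_limit)
    struct.pack_into("!H", tcp_header, 16,
                     csum16_update(old_csum, old_win, window_limit))
```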


2019, Vol 2019, pp. 1-10
Author(s): Armir Bujari, Andrea Marin, Claudio E. Palazzi, Sabina Rossi

We consider the scenario in which several TCP connections share the same access point (AP) and a congestion avoidance/control mechanism is adopted with the aim of assigning the available bandwidth to the clients with a certain fairness. When UDP traffic with real-time requirements is present, the problem becomes even more challenging. Well-known congestion avoidance mechanisms are Random Early Detection (RED) and Explicit Congestion Notification (ECN). More recently, the Smart Access Point with Limited Advertised Window (SAP-LAW) has been proposed. Its main idea is to compute the maximum TCP rate for each connection at the bottleneck, taking the UDP traffic into account, so as to keep the queue small while maintaining reasonable bandwidth utilization. In this paper, we propose a new congestion control mechanism, namely Smart-RED, inspired by the SAP-LAW heuristic formula. We study its performance using mean field models and compare the behaviours of ECN/RED, SAP-LAW, and Smart-RED under different scenarios. We show that while Smart-RED maintains some of the desirable properties of SAP-LAW, it solves the problems the latter may have in the case of bursty UDP traffic or TCP connections with very different bandwidth needs.
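The SAP-LAW heuristic referenced here is usually summarized as dividing the bottleneck capacity left unused by UDP traffic equally among the active TCP connections and advertising a window sized to that per-flow rate; the exact formula and Smart-RED's refinement of it are given in the paper. A rough sketch of that arithmetic, under the assumption that capacity and UDP throughput are expressed in bytes per second (all names are illustrative):

```python
def sap_law_window(capacity_bytes_s: float, udp_bytes_s: float,
                   n_tcp_flows: int, rtt_s: float,
                   mss: int = 1460) -> int:
    """Per-flow advertised window (bytes) in the SAP-LAW style heuristic:
    share the bandwidth not used by UDP equally among the TCP flows, then
    convert the per-flow rate into a window via the bandwidth-delay product.
    """
    if n_tcp_flows == 0:
        return 0
    per_flow_rate = max(capacity_bytes_s - udp_bytes_s, 0.0) / n_tcp_flows
    window = int(per_flow_rate * rtt_s)          # bandwidth-delay product
    return max(window, mss)                      # never advertise less than one MSS

# Example: 100 Mb/s bottleneck (~12.5e6 bytes/s), 2 MB/s of UDP,
# 20 TCP flows, 40 ms RTT -> 21000-byte window per flow.
print(sap_law_window(12_500_000, 2_000_000, 20, 0.040))
```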


Author(s): Atef Abdrabou, Monika Prakash, Ahmed S. AlShehi, Sirag-Eldin Ahmed, Mohamed Darwish

Author(s): Đặng Văn Tuyên, Trương Thu Hương

The SDN/OpenFlow architecture opens new opportunities for effective solutions to network security problems; however, it also brings new security challenges compared to traditional networks. One of these is the reactive installation of new flow entries, which can make the data plane and the control plane easy targets for resource-saturation attacks that use spoofing techniques such as SYN flood. There are a number of solutions to this problem, such as the Connection Migration (CM) mechanism in the Avant-Guard solution. However, most of them increase the load on the commodity switches and/or split benign TCP connections, which can increase packet latency and disable some features of the TCP protocol. This paper presents a solution called SDN-based SYN Flood Guard (SSG), which takes advantage of OpenFlow's ability to match the TCP flags field and uses the RST cookie technique to authenticate the three-way handshake of TCP connections in a device separate from the SDN/OpenFlow switches. The experimental results reveal that SSG solves the aforementioned problems and improves resilience to SYN flood attacks.
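The abstract does not spell out the cookie construction, so the following is only one common way an RST-cookie check can be realized, sketched under assumed details rather than SSG's actual design: the guard answers a client's first SYN with a SYN-ACK whose acknowledgment number is a keyed hash of the connection 4-tuple; a genuine (non-spoofed) client replies with an RST whose sequence number equals that acknowledgment number (RFC 793), so the guard can verify reachability statelessly before whitelisting the address and letting a real handshake through. The helper names and cookie format are illustrative.

```python
import hashlib
import hmac
import os
import struct

SECRET = os.urandom(16)   # per-boot key; a deployment would rotate it periodically

def rst_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """32-bit cookie used as the deliberately wrong ACK number in the
    SYN-ACK sent back to an unverified client."""
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return struct.unpack("!I", digest[:4])[0]

def rst_is_valid(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 rst_seq: int) -> bool:
    """A real client that received our bogus SYN-ACK replies with an RST whose
    sequence number equals the bogus ACK number (RFC 793); matching it against
    the recomputed cookie proves the source address is reachable."""
    expected = rst_cookie(src_ip, src_port, dst_ip, dst_port)
    return hmac.compare_digest(struct.pack("!I", rst_seq),
                               struct.pack("!I", expected))
```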

