packet arrival
Recently Published Documents


TOTAL DOCUMENTS: 92 (five years: 19) · H-INDEX: 5 (five years: 1)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 437
Author(s):  
Sungsoo Kim ◽  
Joon Yoo ◽  
Jaehyuk Choi

Distinguishing between wireless and wired traffic at a network middlebox is an essential ingredient of numerous applications, including security monitoring and quality-of-service (QoS) provisioning. The majority of existing approaches exploit the larger delay statistics, such as round-trip time and inter-packet arrival time, observed in wireless traffic to infer whether traffic originates from Ethernet (i.e., wired) or Wi-Fi (i.e., wireless), based on the assumption that the capacity of the wireless link is much lower than that of the wired link. However, this underlying assumption is no longer valid given the multi-gigabit wireless data rates enabled by recent Wi-Fi technologies such as 802.11ac/ax. In this paper, we revisit the problem of identifying Wi-Fi traffic at network middleboxes as wireless link capacity approaches that of the wired link. We present Weigh-in-Motion, a lightweight online detection scheme that analyzes the traffic patterns observed at the middlebox and infers whether the traffic originates from high-speed Wi-Fi devices. To this end, we introduce the concept of an ACKBunch, which captures the unique characteristics of high-speed Wi-Fi and is further utilized to distinguish whether the observed traffic originates from a wired or wireless device. The effectiveness of the proposed scheme is evaluated via extensive real-world experiments, demonstrating its capability to accurately identify wireless traffic from/to Gigabit 802.11 devices.
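The classical delay-statistics approach this paper revisits can be sketched as follows: compute inter-packet arrival gaps and threshold on their mean. The 0.5 ms threshold and the traces below are hypothetical illustrations, not values from the paper; the paper's point is precisely that this heuristic breaks down for gigabit Wi-Fi.

```python
from statistics import mean, pstdev

def inter_arrival_stats(timestamps):
    """Mean and std-dev of inter-packet arrival times (seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(gaps), pstdev(gaps)

def classify_delay_based(timestamps, gap_threshold=0.5e-3):
    """Classical heuristic: flag traffic as 'wireless' when the mean
    inter-arrival gap exceeds a threshold (hypothetical 0.5 ms here)."""
    mu, _ = inter_arrival_stats(timestamps)
    return "wireless" if mu > gap_threshold else "wired"

# Wired-like burst: 100 us gaps; legacy-wireless-like: 2 ms gaps
wired = [i * 1e-4 for i in range(10)]
wireless = [i * 2e-3 for i in range(10)]
print(classify_delay_based(wired))      # wired
print(classify_delay_based(wireless))   # wireless
```

With 802.11ac/ax, high-speed Wi-Fi traffic falls on the "wired" side of any such gap threshold, which is why the paper turns to ACK-pattern features instead.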


The article aims to develop a model for forecasting the characteristics of traffic flows in real time, based on the classification of applications using machine-learning methods, to ensure quality of service. It is shown that the model can forecast the mean rate and frequency of packet arrival for the entire flow of each class separately. The prediction is based on information about previous flows of the same class and the first 15 packets of the active flow. The Random Forest regression method reduces the prediction error by approximately 1.5 times compared to the standard mean estimate for transmitted packets reported at the switch interface.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Debabrata Singh ◽  
Jyotishree Bhanipati ◽  
Anil Kumar Biswal ◽  
Debabrata Samanta ◽  
Shubham Joshi ◽  
...  

Wireless sensor networks (WSNs) have attracted much attention in recent years and are now considered one of the most popular technologies in the networking field. Their growing adoption is largely due to their adaptability: nodes run on energy-efficient batteries, a characteristic that has won them a wide market worldwide. Transmission collision is one of the key causes of performance degradation in WSNs, resulting in excessive delay and packet loss. The collision range should be minimized in order to mitigate the risk of these packet collisions. The statistics show that the collision area, which grows with transmission power, is significantly reduced by the proposed technique. This paper reduces power consumption and data loss through proper routing of packets and a congestion-detection method. WSNs typically require high data reliability to preserve identification and responsiveness capacity while also improving data reliability, transmission, and redundancy. Retransmission is determined by the probability of packet arrival as well as the average energy consumption.
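The last sentence's link between packet-arrival probability, retransmissions, and energy can be made concrete with a standard geometric-retry model. This is a textbook simplification offered for illustration, not the paper's exact analysis; the 0.8 success probability and 2 mJ per-transmission cost are assumed numbers.

```python
def expected_transmissions(p_arrival):
    """Expected number of transmissions until success when each attempt
    independently succeeds with probability p_arrival (geometric model)."""
    if not 0 < p_arrival <= 1:
        raise ValueError("p_arrival must be in (0, 1]")
    return 1.0 / p_arrival

def expected_energy(p_arrival, energy_per_tx_mj):
    """Average energy spent delivering one packet, counting retries."""
    return expected_transmissions(p_arrival) * energy_per_tx_mj

print(expected_transmissions(0.8))   # 1.25 transmissions on average
print(expected_energy(0.8, 2.0))     # 2.5 mJ per delivered packet
```

The model shows why shrinking the collision area pays twice: a higher arrival probability cuts both the retransmission count and the per-packet energy budget.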


2021 ◽  
Vol 5 (4 (113)) ◽  
pp. 12-19
Author(s):  
Tansaule Serikov ◽  
Ainur Zhetpisbayeva ◽  
Sharafat Mirzakulova ◽  
Kairatbek Zhetpisbayev ◽  
Zhanar Ibrayeva ◽  
...  

Time-series analysis and forecasting of network traffic data are essential for providing good-quality network services, including network monitoring, resource management, and threat detection. Increasingly, the behavior of network traffic is described by the theory of deterministic chaos. Modern network traffic has a complex structure and an uneven rate of packet arrival for service by network devices. Predicting network traffic remains an important task, as forecast data provide the information needed to manage network flows. Numerous studies of actually measured data confirm that such data are nonstationary and multicomponent in structure. This paper presents modeling using the Nonlinear Autoregressive Exogenous (NARX) algorithm for predicting network traffic datasets. NARX is one of the models that can be used to represent nonlinear systems, especially time-series datasets; such models belong to the category of dynamic feedback networks spanning several network layers. An artificial neural network (ANN) was developed, trained, and tested using the Levenberg-Marquardt (LM) learning algorithm. The input data for prediction are actually measured packet rates of network traffic. The study obtained the smallest mean squared error (MSE) at epoch 18. As for the regression R, the ANN outputs relative to the targets for training, validation, and testing were 0.97743, 0.9638, and 0.94907, respectively, with an overall regression value of 0.97134, indicating a close fit across all datasets. The experimental results (MSE, R) demonstrate the method's ability to accurately estimate and predict network traffic.
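The two evaluation metrics the abstract reports, MSE and the regression value R, are standard and easy to state precisely. A stdlib-only sketch, with invented packet-rate series in place of the paper's measured traffic:

```python
from math import sqrt
from statistics import mean

def mse(y_true, y_pred):
    """Mean squared error between targets and predictions."""
    return mean((t - p) ** 2 for t, p in zip(y_true, y_pred))

def pearson_r(y_true, y_pred):
    """Regression value R: Pearson correlation of outputs vs targets."""
    mt, mp = mean(y_true), mean(y_pred)
    num = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    den = sqrt(sum((t - mt) ** 2 for t in y_true) *
               sum((p - mp) ** 2 for p in y_pred))
    return num / den

# Hypothetical measured packet rates vs one-step-ahead predictions
actual = [100, 120, 130, 125, 140, 150]
predicted = [98, 118, 133, 122, 138, 152]
print(round(mse(actual, predicted), 2))
print(round(pearson_r(actual, predicted), 4))
```

An R near 1 across training, validation, and test splits, as the paper reports, means the ANN outputs track the targets almost linearly on unseen data as well.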


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wenliang Xu ◽  
Futai Zou

Tor is an anonymous communication network used to hide the identities of both parties in a communication. Apart from those who use Tor to browse the web anonymously for benign purposes, criminals can use Tor for criminal activities. Because plain Tor traffic is easily intercepted by censorship mechanisms, Tor employs a series of obfuscation mechanisms to avoid censorship, such as Meek, Format-Transforming Encryption (FTE), and Obfs4. To detect Tor traffic, we collect three kinds of obfuscated Tor traffic and then use a sliding window to extract 12 features from each stream, grouped by five-tuple, including packet length, packet inter-arrival time, and the proportion of bytes sent and received. Finally, we use XGBoost, Random Forest, and other machine-learning algorithms to identify obfuscated Tor traffic and its type. Our work provides a feasible method for countering obfuscated Tor networks: it can identify the three kinds of obfuscated Tor traffic with approximately 99% precision and recall.
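The sliding-window feature extraction can be sketched as below. The feature names and the sample window are illustrative; the paper's full set has 12 features per window, of which this shows the three families the abstract names (length, inter-arrival time, sent/received proportion).

```python
from statistics import mean, pstdev

def window_features(packets):
    """Extract flow features from one sliding window of packets,
    each given as (timestamp, length, direction) with direction
    'out' or 'in' relative to the flow's five-tuple."""
    lengths = [l for _, l, _ in packets]
    times = [t for t, _, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    sent = sum(l for _, l, d in packets if d == "out")
    recv = sum(l for _, l, d in packets if d == "in")
    total = sent + recv
    return {
        "len_mean": mean(lengths),
        "len_std": pstdev(lengths),
        "iat_mean": mean(gaps),
        "sent_ratio": sent / total if total else 0.0,
    }

win = [(0.00, 583, "out"), (0.01, 1460, "in"),
       (0.03, 60, "out"), (0.05, 1460, "in")]
print(window_features(win))
```

Feature vectors like this, one per window, are what a gradient-boosted or random-forest classifier consumes to label a stream as Meek, FTE, or Obfs4 traffic.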


Author(s):  
Ioannis Avgouleas ◽  
Nikolaos Pappas ◽  
Vangelis Angelakis

Multimedia content streaming from Internet-based sources is emerging as one of the services most in demand by wireless users. To alleviate the excessive traffic caused by multimedia content transmission, many architectures (e.g., small cells, femtocells) have been proposed to offload such traffic to the nearest (or strongest) access point, also called a "helper". However, deploying more helpers is not necessarily beneficial, since they can increase interference. In this work, we evaluate a wireless system that serves both cacheable and non-cacheable traffic. More specifically, we consider a general system in which a wireless user with limited cache storage requests cacheable content from a data center that can be accessed directly through a base station. The user can be assisted by a pair of wireless helpers that also exchange non-cacheable content with each other. Files not available from the helpers are transmitted by the base station. We analyze the system throughput and the delay experienced by the cache-enabled user, and we show by means of numerical results how these performance metrics are affected by the packet arrival rate at the source helper, the availability of caching helpers, the caches' parameters, and the user's request rate.
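A first-order feel for the offloading trade-off can be given with two textbook formulas: the base-station load after cache hits are absorbed by the helpers, and an M/M/1 sojourn time as a stand-in for delay. Both models and all numbers here are simplifications chosen for illustration; the paper's actual throughput and delay analysis is more detailed.

```python
def base_station_load(request_rate, hit_prob):
    """Requests not served from the helpers' caches fall back to the
    base station (simple offloading model)."""
    return request_rate * (1.0 - hit_prob)

def mm1_delay(service_rate, arrival_rate):
    """Mean sojourn time of an M/M/1 queue, a standard stand-in for
    the delay seen by packets queued at the base station."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical numbers: 50 requests/s, 70% of content cached at helpers
residual = base_station_load(50.0, 0.7)
print(residual)                  # requests/s that still reach the BS
print(mm1_delay(60.0, residual)) # mean delay at the BS, in seconds
```

Raising the hit probability lowers both the residual load and, through the queueing term, the delay, which is the qualitative effect the paper's numerical results quantify against arrival rates and cache parameters.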


2021 ◽  
Vol 13 (3) ◽  
pp. 69
Author(s):  
Yi-Bing Lin ◽  
Chien-Chao Tseng ◽  
Ming-Hung Wang

Network slicing is considered a key technology in enabling the underlying 5G mobile network infrastructure to meet diverse service requirements. In this article, we demonstrate how transport network slicing accommodates the various network service requirements of Massive IoT (MIoT), Critical IoT (CIoT), and Mobile Broadband (MBB) applications. Given that most prior research measuring 5G network slicing has been conducted through simulations, we utilized SimTalk, an IoT application traffic emulator, to emulate large amounts of realistic traffic patterns in order to study the effects of transport network slicing on IoT and MBB applications. Furthermore, we developed several MIoT, CIoT, and MBB applications that operate sustainably on several campuses and directed both real and emulated traffic into a Programming Protocol-Independent Packet Processors (P4)-based 5G testbed. We then examined performance in terms of throughput, packet loss, and latency. Our study indicates that applications with different traffic characteristics need different Committed Information Rate (CIR) ratios. The CIR ratio is the CIR setting for a P4 meter in physical switch hardware divided by the aggregated data rate of applications of the same type. A low CIR ratio adversely affects application performance because P4 switches dispatch application packets to the low-priority queue when the packet arrival rate exceeds the CIR setting for that application type. In our testbed, both exemplar MBB applications required a CIR ratio of 140% to achieve, respectively, a near 100% throughput percentage with a 0.0035% loss rate and an approximate 100% throughput percentage with a 0.0017% loss rate. However, the exemplar CIoT and MIoT applications required CIR ratios of 120% and 100%, respectively, to reach a 100% throughput percentage without any packet loss.
With the proper CIR settings for the P4 meters, the proposed transport network slicing mechanism can enforce the committed rates and fulfill the latency and reliability requirements for 5G MIoT, CIoT, and MBB applications in both TCP and UDP.
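The CIR-ratio definition and the meter's queue-dispatch behaviour described above can be sketched directly. This is a simplified two-outcome view of a P4 meter (real meters are token-bucket based and color packets, not whole flows); the 100 Mb/s aggregate is a hypothetical figure, while the 140% ratio is the article's MBB setting.

```python
def cir_ratio(cir_bps, aggregated_rate_bps):
    """CIR ratio as defined in the article: the P4 meter's CIR over the
    aggregated data rate of applications of the same type."""
    return cir_bps / aggregated_rate_bps

def dispatch(offered_rate_bps, cir_bps):
    """Simplified meter behaviour: traffic above the CIR is dispatched
    to the low-priority queue."""
    return "high-priority" if offered_rate_bps <= cir_bps else "low-priority"

# MBB example: a 140% CIR ratio keeps the aggregate in-profile
agg = 100_000_000              # hypothetical 100 Mb/s aggregated MBB traffic
cir = int(agg * 1.4)
print(cir_ratio(cir, agg))     # 1.4
print(dispatch(agg, cir))      # high-priority
print(dispatch(int(agg * 1.5), cir))  # low-priority: burst exceeds CIR
```

This makes the article's finding concrete: bursty MBB traffic needs headroom above its average rate (ratio > 1), while smooth MIoT traffic stays in-profile at a ratio of exactly 1.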


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1593
Author(s):  
Ismael Amezcua Valdovinos ◽  
Patricia Elizabeth Figueroa Millán ◽  
Jesús Arturo Pérez-Díaz ◽  
Cesar Vargas-Rosales

The Industrial Internet of Things (IIoT) is considered a key enabler for Industry 4.0. Modern wireless industrial protocols such as IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) deliver the high reliability required in IIoT by following strict schedules computed by a Scheduling Function (SF) to avoid collisions and provide determinism. The standard does not define how such schedules are built. The SF plays an essential role in 6TiSCH networks since it dictates when and where nodes communicate according to the application requirements, thus directly influencing the reliability of the network. Moreover, typical industrial environments contain heavy machinery and complementary wireless communication systems that can create interference. Hence, we propose a distributed SF, the Channel Ranking Scheduling Function (CRSF), for IIoT networks supporting IPv6 over the IEEE 802.15.4e TSCH mode. CRSF computes the number of cells required for each node using a buffer-based bandwidth allocation mechanism with a Kalman filtering technique to avoid sudden allocation/deallocation of cells. CRSF also ranks channel quality using Exponentially Weighted Moving Averages (EWMAs) of the Received Signal Strength Indicator (RSSI), Background Noise (BN) level measurements, and the Packet Delivery Rate (PDR) to select the best available channel for communication. We compare the performance of CRSF with Orchestra and the Minimal Scheduling Function (MSF) in scenarios resembling industrial environments. Performance is evaluated in terms of PDR, end-to-end latency, Radio Duty Cycle (RDC), and the elapsed time until first packet arrival. Results show that CRSF achieves high PDR and low RDC across all scenarios with periodic and burst traffic patterns, at the cost of increased end-to-end latency. Moreover, CRSF delivers the first packet earlier than Orchestra and MSF in all scenarios.
We conclude that CRSF is a viable option for IIoT networks with a large number of nodes and interference. The main contributions of our paper are threefold: (i) a bandwidth allocation mechanism that uses Kalman filtering techniques to effectively calculate the number of cells required for a given time, (ii) a channel ranking mechanism that combines metrics such as the PDR, RSSI, and BN to select channels with the best performance, and (iii) a new Key Performance Indicator (KPI) that measures the elapsed time from network formation until the first packet reception at the root.
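The EWMA-based channel ranking in contribution (ii) can be sketched as follows. The smoothing factor, the score weights, and the channel measurements are all assumptions for illustration; CRSF's actual combination of PDR, RSSI, and BN is defined in the paper itself.

```python
def ewma(prev, sample, alpha=0.2):
    """Exponentially weighted moving average of a link-quality metric
    (alpha = 0.2 is an assumed smoothing factor)."""
    return alpha * sample + (1 - alpha) * prev

def channel_score(pdr, rssi_dbm, noise_dbm):
    """Illustrative ranking score: higher PDR and higher SNR rank a
    channel higher. The 0.7/0.3 weights are assumptions, not CRSF's
    actual formula."""
    snr = rssi_dbm - noise_dbm
    return 0.7 * pdr + 0.3 * (snr / 40.0)  # SNR normalised to roughly [0, 1]

# Two hypothetical IEEE 802.15.4 channels with smoothed measurements
channels = {
    15: channel_score(pdr=0.95, rssi_dbm=-70, noise_dbm=-95),
    20: channel_score(pdr=0.60, rssi_dbm=-80, noise_dbm=-90),
}
best = max(channels, key=channels.get)
print(best)  # channel 15 ranks first
```

Smoothing each metric with an EWMA before scoring, as CRSF does, keeps a single noisy measurement from flipping the ranking, which matters in the interference-heavy industrial scenarios the paper targets.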
