The length of measurement period to determine the application profile for traffic classification in the Internet

Author(s):  
M. Ilvesmaki ◽  
S. Kaikkonen
Author(s):  
Marios Iliofotou ◽  
Hyun-chul Kim ◽  
Michalis Faloutsos ◽  
Michael Mitzenmacher ◽  
Prashanth Pappu ◽  
...  

2011 ◽  
Vol 22 (05) ◽  
pp. 1073-1098
Author(s):  
SHLOMI DOLEV ◽  
YUVAL ELOVICI ◽  
ALEX KESSELMAN ◽  
POLINA ZILBERMAN

As more and more services are provided by servers over the Internet, Denial-of-Service (DoS) attacks pose an increasing threat to the Internet community. A DoS attack overloads the target server with a large volume of adverse requests, rendering the server unavailable to "well-behaved" users. In this paper, we propose two algorithms that allow attack targets to dynamically filter their incoming traffic based on a distributed policy. The proposed algorithms defend the target against DoS and distributed DoS (DDoS) attacks while ensuring that it continues to serve "well-behaved" users. In a nutshell, a target defines a filtering policy consisting of a set of traffic classification rules and the amount of traffic allowed under each rule. A filtering algorithm is enforced by the ISP's routers when the target is being overloaded with traffic. The goal is to maximize the amount of traffic that the ISP forwards to the target in accordance with the filtering policy. The first proposed algorithm is a collaborative algorithm that computes and delivers to the target the best possible traffic mix in polynomial time. The second is a distributed non-collaborative algorithm for which we prove a lower bound on its worst-case performance.
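A minimal sketch of what such a target-defined filtering policy could look like when enforced at an ISP router: each rule pairs a traffic-classification predicate with a byte allowance, and a packet is forwarded only while some matching rule still has budget. All names and the simple greedy check below are illustrative assumptions; this is not the paper's collaborative or non-collaborative algorithm.

```python
# Hypothetical sketch of a target-defined filtering policy enforced at an
# ISP router. Rule names, fields and the greedy budget check are assumptions
# for illustration, not the algorithms proposed in the paper.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FilterRule:
    name: str                       # human-readable label, e.g. "https"
    match: Callable[[dict], bool]   # predicate over a packet/flow descriptor
    allowance: int                  # max bytes the target accepts under this rule
    used: int = 0                   # bytes already forwarded under this rule

@dataclass
class FilteringPolicy:
    rules: list[FilterRule] = field(default_factory=list)

    def admit(self, packet: dict) -> bool:
        """Forward the packet iff some matching rule still has budget left."""
        for rule in self.rules:
            if rule.match(packet) and rule.used + packet["size"] <= rule.allowance:
                rule.used += packet["size"]
                return True
        return False  # drop: no rule matches or all matching rules are exhausted

# Example policy: prefer traffic to port 443, cap everything else tightly.
policy = FilteringPolicy(rules=[
    FilterRule("https", lambda p: p["dport"] == 443, allowance=10_000_000),
    FilterRule("other", lambda p: True, allowance=1_000_000),
])

print(policy.admit({"dport": 443, "size": 1500}))  # True while budget remains
```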


Classification of network traffic is becoming ever more relevant to understanding and addressing security issues in Internet applications. Virtual Private Networks (VPNs) have become one of the most popular forms of communication on the Internet. In this study, a new model for classifying traffic as VPN or non-VPN is proposed. The XGBoost algorithm is used to rank features and to build the classification model. The proposed model outperformed the other classification algorithms, achieving 91.6% accuracy, the highest reported accuracy for the selected dataset. To illustrate the merit of the proposed model, it was compared with sixteen different classification algorithms.
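A minimal sketch of the XGBoost workflow described above (train a VPN vs. non-VPN classifier, then rank features by importance), assuming the xgboost and scikit-learn packages. The synthetic feature matrix is a placeholder; the 91.6% figure comes from the study's own dataset, not from this snippet.

```python
# Minimal sketch: XGBoost for VPN vs. non-VPN flow classification with
# feature ranking. The random data below is a stand-in for the study's
# flow features (e.g. durations, inter-arrival times).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))        # placeholder flow features
y = rng.integers(0, 2, size=2000)      # placeholder labels: 1 = VPN, 0 = non-VPN

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Rank features by the booster's importance scores, mirroring the
# feature-ranking step described in the abstract.
ranking = np.argsort(model.feature_importances_)[::-1]
print("top features:", ranking[:5])
```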


2011 ◽  
Vol 55 (8) ◽  
pp. 1909-1920 ◽  
Author(s):  
Marios Iliofotou ◽  
Hyun-chul Kim ◽  
Michalis Faloutsos ◽  
Michael Mitzenmacher ◽  
Prashanth Pappu ◽  
...  

2015 ◽  
Vol 72 (5) ◽  
Author(s):  
Tony Antonio ◽  
Adi Suryaputra Paramita

Feature selection plays an important role in internet traffic classification. It yields more accurate data and more accurate traffic classification, which in turn provides precise information for bandwidth optimization. One important consideration in feature selection is how to choose the right features so that the classification process delivers better and more precise results. This research compares feature selection algorithms for Internet traffic whose features are correlated and could fall into the same class. An Internet traffic dataset is collected, formatted, classified and analyzed using Naïve Bayes. Previously, Correlation Feature Selection (CFS) was used to find the best subsets of the existing data, but without identifying the discriminant and principal features of the dataset. We use Principal Component Analysis (PCA) to find the discriminant and principal features for internet traffic classification. Moreover, this paper also studies how the selected features fit the classification task. The results show that internet traffic classification using Naïve Bayes with Correlation Feature Selection (CFS) achieves more than 90% accuracy, while classification accuracy reaches 75% with feature selection using Principal Component Analysis (PCA).
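A minimal sketch of the two pipelines compared in the abstract (feature selection followed by Naïve Bayes), assuming scikit-learn. Since scikit-learn has no built-in CFS, a univariate SelectKBest filter is used as a stand-in, which is an assumption rather than the paper's exact CFS implementation; the dataset is a synthetic placeholder for the captured traffic flows, so the accuracies printed here are not the paper's results.

```python
# Sketch of the two pipelines: feature selection + Naive Bayes.
# SelectKBest stands in for CFS (an assumption, not the paper's CFS);
# make_classification stands in for the captured internet-traffic dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=3000, n_features=30, n_informative=8,
                           random_state=0)   # placeholder flow features / classes

# Pipeline 1: correlation-style univariate selection, then Naive Bayes.
cfs_like = make_pipeline(SelectKBest(score_func=f_classif, k=10), GaussianNB())

# Pipeline 2: PCA down to 10 principal components, then Naive Bayes.
pca_nb = make_pipeline(PCA(n_components=10), GaussianNB())

for name, clf in [("CFS-like + NB", cfs_like), ("PCA + NB", pca_nb)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```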

