Evaluation
Increasing the accuracy of classification has been a constant challenge in the network security area. With the rapid growth in network traffic volume and advances in network bandwidth, many classification algorithms used for intrusion detection and prevention face high false positive and false negative rates. A stream of network traffic data with many positive predictors might not necessarily represent a true attack, and a seemingly anomaly-free stream could represent a novel attack. Depending on the infrastructure of a network system, traffic data can become very large. As a result of such large volumes of data, even a very low misclassification rate can yield a large number of alarms; for example, a system handling 22 million connections per hour with a 1% misclassification rate could raise roughly 61 alarms every second (excluding repeated connections). Reviewing every such alarm is not practical.

To address this challenge, we can improve the data collection process and develop more robust algorithms. Unlike other research areas, such as the life sciences, healthcare, or economics, where an analysis can often be achieved with a single statistical approach, a robust intrusion detection scheme needs to be constructed hierarchically with multiple algorithms; for example, we might profile and classify user behavior hierarchically using hybrid algorithms (e.g., combining statistical methods with artificial intelligence). We can also improve the precision of classification by carefully evaluating the results. Several key elements are important for the statistical evaluation of classification and prediction, including reliability, sensitivity, specificity, misclassification, and goodness-of-fit. We also need to evaluate the goodness of the data (consistency and repeatability), the goodness of the classification, and the goodness of the model. We will discuss these topics in this chapter.
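As a minimal sketch of the evaluation measures named above, the snippet below computes sensitivity, specificity, and the misclassification rate from a confusion matrix, and reproduces the alarm-volume arithmetic from the text. The confusion-matrix counts are hypothetical, chosen only for illustration; they do not come from a real intrusion data set.

```python
# Hypothetical confusion-matrix counts for an intrusion detector
# (illustrative numbers only, not from a real data set).
tp, fn = 180, 20      # attacks correctly flagged / attacks missed
tn, fp = 9700, 100    # normal traffic passed / normal traffic falsely flagged

sensitivity = tp / (tp + fn)                         # true positive rate
specificity = tn / (tn + fp)                         # true negative rate
misclassification = (fp + fn) / (tp + fn + tn + fp)  # overall error rate

print(f"sensitivity       = {sensitivity:.3f}")        # 0.900
print(f"specificity       = {specificity:.3f}")        # 0.990
print(f"misclassification = {misclassification:.3f}")  # 0.012

# Alarm-volume arithmetic from the text: 22 million connections per hour
# at a 1% misclassification rate, converted to alarms per second.
alarms_per_second = 22_000_000 * 0.01 / 3600
print(f"alarms per second = {alarms_per_second:.0f}")  # ~61
```

Even with 99% accuracy, the detector above would still mislabel 1.2% of traffic; at the traffic volumes discussed in the text, that translates into dozens of alarms per second, which is why the evaluation criteria in this chapter matter in practice.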