Comparative Study of Outlier Detection Algorithms via Fundamental Analysis Variables

Author(s):  
Senol Emir ◽  
Hasan Dincer ◽  
Umit Hacioglu ◽  
Serhat Yuksel

In a data set, an outlier refers to a data point that is considerably different from the others. Detecting outliers provides useful application-specific insights and leads to choosing the right prediction models. Outlier detection (also known as anomaly detection or novelty detection) has been studied in statistics and machine learning for a long time and is an essential preprocessing step of the data mining process. In this study, the outlier detection step of the data mining process is applied to identify the top 20 outlier firms. Three outlier detection algorithms are applied to fundamental analysis variables of firms listed on Borsa Istanbul for the 2011-2014 period. The results of each algorithm are presented and compared. Findings show that 15 different firms are identified by the three outlier detection methods. Among these firms, KCHOL and SAHOL have the greatest number of appearances, with 12 observations. Investigating the results shows that each of the three algorithms produces a different list of outlier firms because of differences in its approach to outlier detection.
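
A minimal sketch of this kind of comparison, assuming a feature matrix of fundamental-analysis variables per firm; the synthetic data, the placeholder firm labels, the choice of scikit-learn detectors, and k = 20 are illustrative assumptions, not the authors' exact algorithms:

```python
# Sketch: rank firms by outlier score with several detectors and compare the top-20 lists.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # placeholder: firms x fundamental ratios
firms = [f"FIRM{i:03d}" for i in range(len(X))]    # hypothetical firm labels
X = StandardScaler().fit_transform(X)

scores = {
    "iforest": IsolationForest(random_state=0).fit(X).score_samples(X),
    "lof": LocalOutlierFactor().fit(X).negative_outlier_factor_,
    "ocsvm": OneClassSVM(nu=0.1).fit(X).score_samples(X),
}

# Lower score = more outlying, so the 20 smallest scores give each detector's top-20 list.
top20 = {name: {firms[i] for i in np.argsort(s)[:20]} for name, s in scores.items()}

common = set.intersection(*top20.values())
print("firms flagged by all three detectors:", sorted(common))
```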

Author(s):  
Fabrizio Angiulli

Data mining techniques can be grouped into four main categories: clustering, classification, dependency detection, and outlier detection. Clustering is the process of partitioning a set of objects into homogeneous groups, or clusters. Classification is the task of assigning objects to one of several predefined categories. Dependency detection searches for pairs of attribute sets which exhibit some degree of correlation in the data set at hand. The outlier detection task can be defined as follows: “Given a set of data points or objects, find the objects that are considerably dissimilar, exceptional or inconsistent with respect to the remaining data”. These exceptional objects are also referred to as outliers. Most of the early methods for outlier identification were developed in the field of statistics (Hawkins, 1980; Barnett & Lewis, 1994). Hawkins’ definition of outlier clarifies the approach: “An outlier is an observation that deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism”. Indeed, statistical techniques assume that the given data set follows a distribution model. Outliers are those points that satisfy a discordancy test, that is, that are significantly far from their expected position under the hypothesized distribution. Many clustering, classification and dependency detection methods produce outliers as a by-product of their main task. For example, in classification, mislabeled objects are considered outliers and are removed from the training set to improve the accuracy of the resulting classifier, while in clustering, objects that do not strongly belong to any cluster are considered outliers. Nevertheless, it must be said that searching for outliers with techniques designed for tasks other than outlier detection may not be advantageous. As an example, clusters can be distorted by outliers, so the quality of the returned outliers is affected by their presence. Moreover, besides returning a solution of higher quality, outlier detection algorithms can be vastly more efficient than non-ad-hoc algorithms. While in many contexts outliers are considered noise that must be eliminated, as pointed out elsewhere, “one person’s noise could be another person’s signal”, and thus outliers themselves can be of great interest. Outlier mining is used in telecom and credit card fraud detection to spot atypical usage of telecom services or credit cards, in intrusion detection to detect unauthorized accesses, in medical analysis to test abnormal reactions to new medical therapies, in marketing and customer segmentation to identify customers spending much more or much less than the average customer, in surveillance systems, in data cleaning, and in many other fields.
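
As an illustration of the distribution-based discordancy test described above, a minimal sketch assuming a univariate Gaussian model; the 3-sigma threshold and the synthetic data are illustrative conventions, not a prescription from the entry:

```python
# Sketch: z-score discordancy test under an assumed Gaussian distribution model.
import numpy as np

def gaussian_outliers(x, threshold=3.0):
    """Flag points lying more than `threshold` standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.where(np.abs(z) > threshold)[0]

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 1000), [8.5, -9.2]])  # two injected outliers
print(gaussian_outliers(data))  # expected: indices 1000 and 1001 (with high probability)
```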


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3536
Author(s):  
Jakub Górski ◽  
Adam Jabłoński ◽  
Mateusz Heesch ◽  
Michał Dziendzikowski ◽  
Ziemowit Dworakowski

Condition monitoring is an indispensable element of the operation of rotating machinery. In this article, a monitoring system for a parallel gearbox is proposed. The novelty detection approach is used to develop the condition assessment support system, which requires data collection for a healthy structure. The measured signals were processed to extract quantitative indicators sensitive to the type of damage occurring in this type of structure. The indicators’ values were used for the development of four different novelty detection algorithms. The presented novelty detection models operate on three principles: feature space distance, probability distribution, and input reconstruction. One of the distance-based models is adaptive, adjusting to new data arriving in the form of a stream. The authors test the developed algorithms on experimental and simulation data with a similar distribution, using a training set consisting mainly of samples generated by the simulator. The results presented in the article demonstrate the effectiveness of the trained models on both data sets.
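
A minimal sketch of the feature-space-distance principle mentioned above, assuming indicator vectors extracted from vibration signals of a healthy gearbox; the synthetic indicators, the k-nearest-neighbour rule, and the percentile threshold are illustrative assumptions, not the authors' models:

```python
# Sketch: novelty detection by distance in feature space, trained on healthy data only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 6))   # placeholder: indicators from a healthy gearbox
nn = NearestNeighbors(n_neighbors=5).fit(healthy)

# Threshold = 99th percentile of the mean kNN distance over the healthy training set.
train_dist, _ = nn.kneighbors(healthy)
threshold = np.percentile(train_dist.mean(axis=1), 99)

def is_novel(sample):
    """Flag a sample whose mean distance to the healthy set exceeds the learned threshold."""
    dist, _ = nn.kneighbors(sample.reshape(1, -1))
    return dist.mean() > threshold

print(is_novel(rng.normal(0.0, 1.0, 6)))  # healthy-like sample: usually False
print(is_novel(np.full(6, 5.0)))          # far from the healthy region: True
```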


2021 ◽  
Vol 50 (1) ◽  
pp. 138-152
Author(s):  
Mujeeb Ur Rehman ◽  
Dost Muhammad Khan

Recently, anomaly detection has received a realistic response from data mining scientists as its reputation has grown steadily in various practical domains such as product marketing, fraud detection, medical diagnosis, fault detection and many other fields. High dimensional data subjected to outlier detection poses exceptional challenges for data mining experts because of the inherent problems of the curse of dimensionality and the resemblance of distant and adjoining points. Traditional algorithms and techniques perform outlier detection on the full feature space. Customary methodologies concentrate largely on low dimensional data and hence prove ineffective at discovering anomalies in a data set comprising a high number of dimensions. Digging out the anomalies present in a high dimensional data set becomes a very difficult and tiresome job when all subsets of projections need to be explored. All data points in high dimensional data behave like similar observations because of an intrinsic feature of such data: the distance between observations approaches zero as the number of dimensions tends towards infinity. This research work proposes a novel technique that explores the deviation among all data points and embeds its findings inside well established density-based techniques. It is a state-of-the-art technique in that it opens a new breadth of research towards resolving the inherent problems of high dimensional data, where outliers reside within clusters having different densities. A high dimensional dataset from the UCI Machine Learning Repository is chosen to test the proposed technique, and its results are then compared with those of density-based techniques to evaluate its efficiency.
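
A minimal sketch of the kind of density-based baseline (Local Outlier Factor) against which such a technique would be compared; the synthetic high-dimensional data, the injected anomalies, and the LOF parameters are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: Local Outlier Factor as a density-based baseline on high-dimensional data.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 50))   # placeholder for a high-dimensional data set
X[:10] += 6.0                     # inject a few obvious anomalies

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
labels = lof.fit_predict(X)       # -1 = outlier, 1 = inlier
print("flagged indices:", np.where(labels == -1)[0])
```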


Data Mining ◽  
2013 ◽  
pp. 142-158
Author(s):  
Baoying Wang ◽  
Aijuan Dong

Clustering and outlier detection are important data mining areas. Online clustering and outlier detection generally work with continuous data streams generated at a rapid rate and have many practical applications, such as network intrusion detection and online fraud detection. This chapter first reviews the related background of online clustering and outlier detection. Then, an incremental clustering and outlier detection method for market-basket data is proposed and presented in detail. The proposed method consists of two phases: weighted affinity measure clustering (WC clustering) and outlier detection. Specifically, given a data set, the WC clustering phase analyzes the data set and groups data items into clusters. The outlier detection phase then examines each newly arrived transaction against the item clusters formed in the WC clustering phase and determines whether the new transaction is an outlier. Periodically, the newly collected transactions are analyzed using WC clustering to produce an updated set of clusters, against which transactions arriving afterwards are examined. The process is carried out continuously and incrementally. Finally, future research trends in online data mining are explored at the end of the chapter.
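
A minimal sketch of that two-phase flow for market-basket data; the greedy co-occurrence grouping and the coverage-based scoring rule below are simplified stand-ins for the chapter's WC clustering and outlier test, not the authors' method:

```python
# Sketch: (1) cluster items from a batch of transactions, (2) score new transactions
# against those clusters; re-running phase (1) periodically gives the incremental update.
from collections import defaultdict

def cluster_items(transactions, min_cooccur=2):
    """Greedily group items that frequently co-occur (stand-in for WC clustering)."""
    cooccur = defaultdict(int)
    for t in transactions:
        items = sorted(set(t))
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                cooccur[(items[i], items[j])] += 1
    clusters = []
    for (a, b), count in sorted(cooccur.items(), key=lambda kv: -kv[1]):
        if count < min_cooccur:
            continue
        for c in clusters:
            if a in c or b in c:
                c.update({a, b})
                break
        else:
            clusters.append({a, b})
    return clusters

def is_outlier(transaction, clusters, min_overlap=0.5):
    """Flag a transaction whose items are not well covered by any single cluster."""
    items = set(transaction)
    best = max((len(items & c) / len(items) for c in clusters), default=0.0)
    return best < min_overlap

history = [["milk", "bread"], ["milk", "bread", "eggs"], ["beer", "chips"],
           ["beer", "chips", "salsa"], ["milk", "eggs"]]
clusters = cluster_items(history)
print(clusters)
print(is_outlier(["milk", "bread"], clusters))            # fits a cluster -> False
print(is_outlier(["motor oil", "antifreeze"], clusters))  # no cluster coverage -> True
```

In the incremental setting, the newly collected transactions would periodically be fed back into cluster_items to refresh the cluster set.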


2018 ◽  
Vol 64 ◽  
pp. 08006
Author(s):  
Kummerow André ◽  
Nicolai Steffen ◽  
Bretschneider Peter

The scope of this survey is the uncovering of potential critical events from mixed PMU data sets. An unsupervised procedure using different outlier detection methods is introduced. For that, different signal analysis techniques are used to generate features in the time and frequency domains, together with linear and non-linear dimension reduction techniques. This approach enables the exploration of critical grid dynamics in power systems without prior knowledge about existing failure patterns. Furthermore, new failure patterns can be extracted for the creation of training data sets used for online detection algorithms.
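
A minimal sketch of such a pipeline, assuming windowed PMU measurements; the specific time/frequency features, the use of PCA for dimension reduction, and Isolation Forest for scoring are illustrative choices, not the survey's exact methods:

```python
# Sketch: unsupervised screening of PMU windows: time/frequency features -> PCA -> outlier scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
windows = rng.normal(size=(300, 256))   # placeholder: windowed PMU measurements

def features(w):
    spectrum = np.abs(np.fft.rfft(w))
    return [w.mean(), w.std(), w.max() - w.min(),              # time-domain features
            spectrum.argmax(), spectrum.max(), spectrum.mean()]  # frequency-domain features

F = np.array([features(w) for w in windows])
F2 = PCA(n_components=2).fit_transform(F)                      # linear dimension reduction
scores = IsolationForest(random_state=0).fit(F2).score_samples(F2)
print("most atypical windows:", np.argsort(scores)[:5])       # lowest score = most atypical
```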


2014 ◽  
Vol 635-637 ◽  
pp. 1723-1728
Author(s):  
Shi Bo Zhou ◽  
Wei Xiang Xu

Local outlier detection is an important issue in data mining. By analyzing the limitations of the existing outlier detection algorithms, a local outlier detection algorithm based on the coefficient of variation is introduced. The algorithm applies K-means, which is strong at searching for outliers, to divide the data set into sections, puts outliers and their neighbouring clusters into a local neighbourhood, and then computes the local deviation factor of each local neighbourhood using the coefficient of variation; as a result, local outliers are more likely to be found. Theoretical analysis and experimental results indicate that the method is effective and efficient.
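
A minimal sketch of this idea, assuming K-means partitions followed by a coefficient-of-variation statistic per neighbourhood; the scoring rule below is a simplified stand-in for the paper's local deviation factor, not its exact definition:

```python
# Sketch: partition with K-means, then characterise each cluster neighbourhood by the
# coefficient of variation of distances to its centroid and flag far-away points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(5, 0.3, (100, 2)),
               [[2.5, 2.5]]])                 # one point sitting between the clusters

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

candidates = []
for k in range(km.n_clusters):
    idx = np.where(km.labels_ == k)[0]
    d = dist[idx]
    cv = d.std() / d.mean()                   # coefficient of variation of the neighbourhood
    print(f"cluster {k}: coefficient of variation = {cv:.2f}")
    candidates.extend(idx[d > d.mean() + 3 * d.std()])  # far-from-centroid points
print("local outlier candidates:", candidates)
```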


Cardiovascular disease is one of the focus areas in medicine because it causes sickness and death among populations across the entire world. Data mining techniques play an important role in converting large amounts of raw data into meaningful information that helps in the prediction of cardiovascular disease and in related decision-making. The prediction models were developed using diverse feature combinations and classification techniques such as k-NN, Naive Bayes, LR, SVM, Neural Network, and Decision Tree. Choosing the exact combination of significant features is essential for the performance of the prediction models. The main aim of the proposed system is to develop an intelligent system using data mining modeling techniques. The proposed system retrieves the data set and compares it with the predefined trained data set. Existing decision support systems cannot handle the complex queries involved in diagnosing heart disease, whereas the proposed system answers such complex queries and thereby assists healthcare practitioners in taking appropriate decisions. The proposed system aims to provide a web platform to predict the occurrence of disease on the basis of various symptoms. The user can select various symptoms and find the corresponding diseases together with their probability figures.
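
A minimal sketch of how the listed classifiers might be compared on a labelled data set; the synthetic data and the scikit-learn models are stand-ins for the system's actual training data and implementation:

```python
# Sketch: compare several classifiers on a labelled data set via cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder for patient records with symptom/measurement features and a disease label.
X, y = make_classification(n_samples=500, n_features=13, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```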


2021 ◽  
Author(s):  
Peng Ni ◽  
Haili Jiang ◽  
Wurong Fu ◽  
Ye Xia ◽  
Limin Sun

As the demand for the detection of outliers in structural health monitoring data sets increases, numerous approaches have been presented for it. However, the characteristics of the existing methods when dealing with different kinds of measured data are not yet clear enough for practical use. Therefore, this paper conducts a comparative study of several popular rule-based methods based on monitoring data of an arch-tied bridge in China. For measured data, outliers are not known in advance. Accordingly, this study evaluates and compares detection performance using two indicators: the quantity of detected outliers and the extreme value by which the outliers deviate from the mean of the data. Conclusions on the features and applicable situations of the involved methods are given. Additionally, combining the results of different methods proves to be beneficial. Finally, software incorporating the research results is developed for outlier detection.
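
A minimal sketch of one rule-based method (the 3-sigma rule) reporting the two comparison indicators described above; the rule and the synthetic monitoring signal are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: 3-sigma rule on a monitoring signal, reporting the two comparison indicators:
# (1) the number of detected outliers, (2) the largest deviation of an outlier from the mean.
import numpy as np

rng = np.random.default_rng(5)
signal = rng.normal(0.0, 1.0, 10_000)             # placeholder monitoring channel
signal[[100, 5000, 9000]] += [7.0, -6.0, 8.5]     # injected outliers

mean, std = signal.mean(), signal.std()
mask = np.abs(signal - mean) > 3 * std

print("detected outliers:", mask.sum())
print("extreme deviation from the mean:", np.abs(signal - mean)[mask].max())
```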

