Incremental Discriminant Analysis on Interval-Valued Parameters for Emitter Identification

2015 ◽  
Vol 2015 ◽  
pp. 1-11
Author(s):  
Xin Xu ◽  
Zhaohua Xiong ◽  
Wei Wang

Emitter identification is widely recognized as a crucial issue for communication, electronic reconnaissance, and radar intelligence analysis. However, measurements of emitter signal parameters typically take the form of uncertain intervals rather than precise values, and they generally accumulate dynamically and continuously. As a result, an urgent task is to carry out discriminant analysis of interval-valued parameters incrementally for emitter identification. Existing machine learning approaches for interval-valued data analysis are unfit for this purpose, as they generally assume a uniform distribution and are usually restricted to static data analysis. To address these problems, we propose an incremental discriminant analysis method on interval-valued parameters (IDAIP) for emitter identification. Extensive experiments on both synthetic and real-life data sets validate the efficiency and effectiveness of our method.
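The two ingredients the abstract combines — representing each measurement as an interval and updating class statistics incrementally — can be illustrated with a minimal sketch. This is not the authors' IDAIP method; it is a toy nearest-centroid classifier over interval midpoints and half-widths, with running-mean updates so no past data need be stored.

```python
# Illustrative sketch only (NOT the IDAIP method): an incremental
# nearest-centroid classifier for interval-valued measurements [lo, hi].
# Each interval is encoded as (midpoint, half-width); class centroids are
# maintained with one-pass running-mean updates.

class IncrementalIntervalClassifier:
    def __init__(self):
        self.counts = {}      # per-class sample counts
        self.mid_means = {}   # per-class mean of interval midpoints
        self.rad_means = {}   # per-class mean of interval half-widths

    def update(self, label, interval):
        lo, hi = interval
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        n = self.counts.get(label, 0) + 1
        self.counts[label] = n
        m = self.mid_means.get(label, 0.0)
        r = self.rad_means.get(label, 0.0)
        # running-mean update: m_new = m_old + (x - m_old) / n
        self.mid_means[label] = m + (mid - m) / n
        self.rad_means[label] = r + (rad - r) / n

    def classify(self, interval):
        lo, hi = interval
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        # nearest class centroid in (midpoint, half-width) space
        return min(self.counts,
                   key=lambda c: (self.mid_means[c] - mid) ** 2
                               + (self.rad_means[c] - rad) ** 2)
```

Because only counts and means are kept, each new measurement is absorbed in constant time, which is the incremental behaviour the abstract motivates.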

Interval data mining extracts unknown patterns, hidden rules, associations, and similar knowledge from interval-based data. The extraction of closed intervals is important because, by mining the set of closed intervals and their support counts, the support count of any interval can be computed easily. In this work, an incremental algorithm for computing closed intervals together with their support counts from an interval dataset is proposed. Many methods for mining closed intervals are available, but most of them assume a static data set as input and hence are non-incremental, whereas real life data sets are dynamic by nature. An efficient incremental algorithm called CI-Tree has already been proposed for computing the closed intervals present in dynamic interval data; however, it cannot compute their support values. The proposed algorithm, called SCI-Tree, extracts all closed intervals together with their support values incrementally from the given interval data. In addition, all frequent closed intervals can be computed for any user-defined minimum support with a single scan of the SCI-Tree, without revisiting the dataset. The proposed method has been tested on real life and synthetic datasets and the results are reported.
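To make the support-count notion concrete, here is a brute-force sketch, not the SCI-Tree algorithm: the support of a query interval is taken here as the number of dataset intervals that fully contain it (one common convention). Tree-based methods such as CI-Tree/SCI-Tree exist precisely to avoid this naive version's full rescan per query.

```python
# Illustrative brute force (NOT SCI-Tree): support of a query interval,
# defined here as how many dataset intervals fully contain it.

def support(query, intervals):
    qlo, qhi = query
    return sum(1 for lo, hi in intervals if lo <= qlo and qhi <= hi)

data = [(1, 5), (2, 6), (3, 4), (2, 5)]
print(support((3, 4), data))  # contained in all four intervals -> 4
print(support((1, 5), data))  # only (1, 5) itself contains it  -> 1
```

An incremental structure amortizes this work: when a new interval arrives, only the affected counts are updated instead of rescanning the whole dataset.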


2008 ◽  
pp. 1231-1249
Author(s):  
Jaehoon Kim ◽  
Seong Park

Much of the research on streaming data has focused only on real-time querying and analysis of the recent data stream that fits in memory. However, as data stream mining, or tracking of past data streams, is often required, it becomes necessary to store large volumes of streaming data in stable storage. Moreover, as stable storage has restricted capacity, past data streams must be summarized. The summarization must be performed periodically because streaming data flows continuously, quickly, and endlessly. Therefore, in this paper, we propose an efficient periodic summarization method with flexible storage allocation. It improves the overall estimation error by flexibly adjusting the size of the summarized data for each local time section. Additionally, as the processing overhead of compression and the disk I/O cost of decompression can be important factors for quick summarization, we also consider setting the proper size of the data stream to be summarized at a time. Experimental results on artificial data sets as well as real life data show that our flexible approach is more efficient than the existing fixed approach.
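The core idea of flexible allocation can be sketched as a budget split: give each local time section a share of the total summary storage proportional to an estimate of how hard it is to summarize. The allocation rule and the per-section error scores below are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch of flexible storage allocation (not the paper's method):
# split a total summary budget across time sections in proportion to a
# per-section error score, handing rounding leftovers to the sections
# with the largest fractional shares.

def allocate(budget, error_scores):
    total = sum(error_scores)
    if total == 0:
        return [budget // len(error_scores)] * len(error_scores)
    raw = [budget * e / total for e in error_scores]
    alloc = [int(r) for r in raw]
    leftover = budget - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i],
                   reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

print(allocate(100, [1.0, 3.0, 6.0]))  # -> [10, 30, 60]
```

A fixed approach would instead give every section `budget // k` slots regardless of how variable its data is, which is the baseline the abstract compares against.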


2008 ◽  
Vol 20 (4) ◽  
pp. 1042-1064
Author(s):  
Maciej Pedzisz ◽  
Danilo P. Mandic

A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. This is achieved by a two-layer feedforward architecture with an exponential hidden layer and logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with some practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
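The architecture described above (logarithmic preprocessing, exponential hidden layer, linear output) admits a compact forward pass; a minimal sketch follows. The weight values are arbitrary, inputs must be positive for the log step, and this is only the forward computation, not the gradient-based learning the paper develops. Note the equivalent product form: y = Σᵢ w2[i] · Πⱼ x[j] ** w1[i][j], which is why the network can be read as a generalized Volterra model.

```python
import math

# Minimal HFFN forward pass as described in the text:
# log preprocessing -> exponential hidden units -> linear output.
# Inputs must be positive; weights here are arbitrary illustration values.

def hffn_forward(x, w1, w2):
    z = [math.log(v) for v in x]                       # log preprocessing
    h = [math.exp(sum(w * v for w, v in zip(row, z)))  # exp hidden units
         for row in w1]
    return sum(w * u for w, u in zip(w2, h))           # linear output layer

w1 = [[1.0, 0.0], [0.5, 0.5]]
w2 = [2.0, 1.0]
# unit 1: 4**1 * 9**0 = 4; unit 2: 4**0.5 * 9**0.5 = 6; y = 2*4 + 1*6 = 14
print(hffn_forward([4.0, 9.0], w1, w2))  # ~ 14
```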


Author(s):  
SANGHAMITRA BANDYOPADHYAY ◽  
UJJWAL MAULIK ◽  
MALAY KUMAR PAKHIRA

An efficient partitional clustering technique, called SAKM-clustering, that integrates the power of simulated annealing for obtaining a minimum-energy configuration with the searching capability of the K-means algorithm is proposed in this article. The clustering methodology searches for appropriate clusters in multidimensional feature space such that a similarity metric of the resulting clusters is optimized. Data points are redistributed among the clusters probabilistically, so that points farther from a cluster center have higher probabilities of migrating to other clusters than points closer to it. The superiority of the SAKM-clustering algorithm over the widely used K-means algorithm is extensively demonstrated on artificial and real life data sets.
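The probabilistic redistribution step can be sketched with a Boltzmann-style rule, where a point joins cluster k with probability proportional to exp(−d_k²/T); points far from their current center are then more likely to migrate, and the temperature T makes the assignment greedier as annealing proceeds. This is a hedged illustration of that one step, not the full SAKM-clustering algorithm.

```python
import math, random

# Hedged sketch of a probabilistic reassignment step (not full SAKM):
# assign a point to cluster k with probability proportional to
# exp(-squared_distance_k / T). Low T makes assignment nearly greedy.

def reassign(point, centers, T, rng):
    d2 = [sum((p - c) ** 2 for p, c in zip(point, ctr)) for ctr in centers]
    weights = [math.exp(-d / T) for d in d2]
    r = rng.random() * sum(weights)
    for k, w in enumerate(weights):
        r -= w
        if r <= 0:
            return k
    return len(centers) - 1

rng = random.Random(0)
centers = [(0.0, 0.0), (10.0, 10.0)]
# a point sitting on the first center essentially always stays there at low T
labels = [reassign((0.0, 0.0), centers, T=1.0, rng=rng) for _ in range(100)]
print(labels.count(0))
```

Raising T flattens the weights, so distant clusters get a real chance of claiming the point, which is how the annealing schedule trades exploration for convergence.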


2015 ◽  
Vol 25 ◽  
pp. S496
Author(s):  
M. Eftekhari ◽  
A. Berntsson ◽  
S. Svensson ◽  
J. Hjortsberg ◽  
E. Jedenius ◽  
...  

Author(s):  
Mohamed Ibrahim Mohamed ◽  
Laba Handique ◽  
Subrata Chakraborty ◽  
Nadeem Shafique Butt ◽  
Haitham M. Yousof

In this article, a new extension of the Fréchet model, called the Xgamma Fréchet model, is introduced. Some of its properties are derived, and the estimation of its parameters via different estimation methods is discussed. The performance of the proposed estimation methods is investigated through simulations as well as real life data sets. The potential of the proposed model is established through the modelling of two real life data sets, with the results showing a clear preference for the proposed model over several known competing ones.
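For orientation, the baseline being extended is the two-parameter Fréchet distribution, whose CDF is F(x) = exp(−(s/x)^a) for x > 0 (shape a, scale s). The exact form of the proposed Xgamma Fréchet extension is not reproduced here; this sketch only pins down the baseline.

```python
import math

# Baseline Fréchet CDF (shape a, scale s), for x > 0:
#   F(x) = exp(-(s / x) ** a)
# This is the parent model the Xgamma Fréchet extension builds on; the
# extension itself is not reproduced here.

def frechet_cdf(x, a, s):
    if x <= 0:
        return 0.0
    return math.exp(-((s / x) ** a))

print(round(frechet_cdf(1.0, 1.0, 1.0), 6))  # exp(-1) ~ 0.367879
```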


2021 ◽  
Vol 17 (2) ◽  
pp. 59-74
Author(s):  
S. Qurat Ul Ain ◽  
K. Ul Islam Rather

Abstract In this article, an extension of the exponentiated exponential distribution is introduced by adding an extra parameter to the parent distribution using the alpha power technique. The new distribution is referred to as the Alpha Power Exponentiated Exponential Distribution. Various statistical properties of the proposed distribution, such as the mean, variance, central and non-central moments, reliability functions, and entropies, have been derived. Two real life data sets have been applied to check the flexibility of the proposed model. The new density model provides a better fit than other related statistical models.
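The alpha power technique maps a baseline CDF F to F_APT(x) = (α^F(x) − 1)/(α − 1) for α > 0, α ≠ 1. A minimal sketch, applying it to the exponentiated exponential baseline F(x) = (1 − e^{−λx})^θ; the parameter names here are illustrative, not the paper's notation.

```python
import math

# Alpha power transformation of a baseline CDF F:
#   F_APT(x) = (alpha ** F(x) - 1) / (alpha - 1),  alpha > 0, alpha != 1
# Baseline: exponentiated exponential, F(x) = (1 - exp(-lam * x)) ** theta.

def exp_exp_cdf(x, lam, theta):
    return (1.0 - math.exp(-lam * x)) ** theta if x > 0 else 0.0

def alpha_power_cdf(x, alpha, lam, theta):
    F = exp_exp_cdf(x, lam, theta)
    return (alpha ** F - 1.0) / (alpha - 1.0)

# sanity checks: a valid CDF runs from 0 to 1
print(alpha_power_cdf(0.0, 2.0, 1.0, 1.5))   # -> 0.0
print(alpha_power_cdf(50.0, 2.0, 1.0, 1.5))  # close to 1.0
```

Setting α → 1 recovers the baseline CDF, which is why the extra parameter adds flexibility without discarding the parent model.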


Author(s):  
Adebisi Ade Ogunde ◽  
Gbenga Adelekan Olalude ◽  
Donatus Osaretin Omosigho

In this paper we introduce the Gompertz Gumbel II (GG II) distribution, which generalizes the Gumbel II distribution. The new distribution is a flexible exponential-type distribution that can be used to model real life data with varying degrees of asymmetry. Unlike the Gumbel II distribution, which exhibits a monotone decreasing failure rate, the new distribution is useful for modeling unimodal (bathtub-shaped) failure rates, which sometimes characterise real life data. Structural properties of the new distribution, namely the density function, hazard function, moments, quantile function, moment generating function, order statistics, stochastic ordering, and Rényi entropy, are obtained. For the main formulas related to our model, we present numerical studies that illustrate the practicality of computational implementation using statistical software. We also present a Monte Carlo simulation study to evaluate the performance of the maximum likelihood estimators for the GG II model. Three life data sets are used for applications in order to illustrate the flexibility of the new model.
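Since the abstract's comparison hinges on failure-rate shape, a quick numerical way to inspect a model's hazard h(x) = f(x)/(1 − F(x)) is shown below, using the Gumbel II baseline CDF F(x) = exp(−a·x^(−b)) and a central-difference estimate of the density. The parameter values are illustrative, and the GG II CDF itself is not reproduced here.

```python
import math

# Numeric hazard-rate inspection from any CDF:
#   h(x) = f(x) / (1 - F(x)), with f estimated by central differences.
# Baseline used here: Gumbel II (inverse-Weibull type),
#   F(x) = exp(-a * x ** (-b)) for x > 0.

def gumbel2_cdf(x, a, b):
    return math.exp(-a * x ** (-b)) if x > 0 else 0.0

def hazard(cdf, x, eps=1e-6):
    f = (cdf(x + eps) - cdf(x - eps)) / (2 * eps)  # numeric density
    return f / (1.0 - cdf(x))

cdf = lambda x: gumbel2_cdf(x, a=1.0, b=2.0)
for x in (0.5, 1.0, 2.0, 4.0):
    print(round(hazard(cdf, x), 4))
```

The same helper works for any candidate CDF, so it is a cheap way to compare the failure-rate shapes of a baseline and its generalization.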


Author(s):  
Aliya Syed Malik ◽  
S.P. Ahmad

In this paper, a new generalization of the log-logistic distribution using the alpha power transformation is proposed. The new distribution is named the Alpha Power Log-Logistic Distribution. A comprehensive account of some of its statistical properties is derived, and the maximum likelihood estimation procedure is used to estimate the parameters. The importance and utility of the proposed model are demonstrated empirically using two real life data sets.
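One convenient consequence of the alpha power construction F_APT(x) = (α^F(x) − 1)/(α − 1) is that it inverts in closed form: solving for F gives F(x) = log(1 + u(α − 1))/log α, so any baseline quantile function can be reused for inverse-transform sampling. A hedged sketch for the log-logistic baseline, whose quantile is Q(p) = scale · (p/(1 − p))^(1/β); parameter names are illustrative.

```python
import math, random

# Inverse-transform sampling from an alpha power family:
# undo the alpha power map, then apply the baseline quantile.
#   p = log(1 + u * (alpha - 1)) / log(alpha)
#   Q_loglogistic(p) = scale * (p / (1 - p)) ** (1 / beta)

def ap_loglogistic_quantile(u, alpha, scale, beta):
    p = math.log(1.0 + u * (alpha - 1.0)) / math.log(alpha)  # undo APT
    return scale * (p / (1.0 - p)) ** (1.0 / beta)           # baseline Q

def sample(n, alpha, scale, beta, rng):
    return [ap_loglogistic_quantile(rng.random(), alpha, scale, beta)
            for _ in range(n)]

rng = random.Random(1)
xs = sample(5, alpha=2.0, scale=1.0, beta=3.0, rng=rng)
print([round(x, 3) for x in xs])
```

Sampling like this is typically how the Monte Carlo studies for such models generate data when assessing estimator performance.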

