A Homomorphic Neural Network for Modeling and Prediction

2008 ◽  
Vol 20 (4) ◽  
pp. 1042-1064
Author(s):  
Maciej Pedzisz ◽  
Danilo P. Mandic

A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. This is achieved by a two-layer feedforward architecture with an exponential hidden layer and logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with some practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
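The input-output relationship described above can be sketched as follows (a minimal illustration of the architecture, not the authors' implementation; variable names are ours, and inputs are assumed positive as the logarithmic preprocessing requires):

```python
import numpy as np

def hffn_forward(x, V, w):
    """Homomorphic feedforward network: logarithmic preprocessing,
    an exponential hidden layer, and a linear output layer.

    y = sum_i w_i * exp(sum_j V_ij * ln(x_j))
      = sum_i w_i * prod_j x_j ** V_ij   (a generalized Volterra-style expansion)
    """
    x = np.asarray(x, dtype=float)       # inputs must be positive
    hidden = np.exp(V @ np.log(x))       # exponential hidden layer
    return w @ hidden

# With integer exponents the network reduces to a polynomial in the inputs:
V = np.array([[1.0, 0.0],   # x1
              [0.0, 1.0],   # x2
              [1.0, 1.0]])  # x1 * x2 (a Volterra cross term)
w = np.array([2.0, 3.0, 0.5])
y = hffn_forward([2.0, 4.0], V, w)  # 2*2 + 3*4 + 0.5*(2*4) = 20.0
```

Because each hidden unit computes a product of powers of the inputs, the exponent matrix V selects which Volterra terms the bank of homomorphic filters realizes.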

2008 ◽  
pp. 1231-1249
Author(s):  
Jaehoon Kim ◽  
Seong Park

Much of the research on streaming data has focused only on real-time querying and analysis of the recent portion of a data stream that fits in memory. However, as data stream mining, or tracking of past data streams, is often required, it becomes necessary to store large volumes of streaming data in stable storage. Moreover, as stable storage has restricted capacity, past data streams must be summarized. The summarization must be performed periodically because streaming data flows continuously, quickly, and endlessly. Therefore, in this paper, we propose an efficient periodic summarization method with flexible storage allocation. It improves the overall estimation error by flexibly adjusting the size of the summarized data for each local time section. Additionally, as the processing overhead of compression and the disk I/O cost of decompression can be important factors for quick summarization, we also consider setting the proper size of the data stream to be summarized at a time. Experimental results with artificial data sets as well as real-life data show that our flexible approach is more efficient than the existing fixed approach.



Author(s):  
CHANGHUA YU ◽  
MICHAEL T. MANRY ◽  
JIANG LI

In the neural network literature, many preprocessing techniques, such as feature de-correlation, input unbiasing and normalization, are suggested to accelerate multilayer perceptron training. In this paper, we show that a network trained with an original data set and one trained with a linear transformation of the original data will go through the same training dynamics, as long as they start from equivalent states. Thus preprocessing techniques may not be helpful and are merely equivalent to using a different weight set to initialize the network. Theoretical analyses of such preprocessing approaches are given for conjugate gradient, backpropagation and the Newton method. In addition, an efficient Newton-like training algorithm is proposed for hidden layer training. Experiments on various data sets confirm the theoretical analyses and verify the improvement offered by the new algorithm.
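The equivalence claim can be illustrated numerically on a toy linear least-squares problem (our stand-in for the paper's analysis, not its actual experiments): training on X and on a linearly transformed XA from "equivalent" initial weights (w versus A⁻¹w) yields identical input-output maps after a Newton step, so the transformation amounts to a re-initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # original inputs
y = rng.normal(size=50)               # targets
A = rng.normal(size=(3, 3))           # invertible linear preprocessing
Xt = X @ A                            # "preprocessed" inputs

def newton_step(X, y, w):
    # One Newton step on the least-squares loss 0.5 * ||Xw - y||^2
    H = X.T @ X                       # Hessian
    g = X.T @ (X @ w - y)             # gradient
    return w - np.linalg.solve(H, g)

w0 = rng.normal(size=3)
w1 = newton_step(X, y, w0)                        # trained on original data
w1t = newton_step(Xt, y, np.linalg.solve(A, w0))  # equivalent start, transformed data

# Both trained models compute the same input-output map:
print(np.allclose(X @ w1, Xt @ w1t))  # True
```

Newton's method is affine-invariant, which is why the two trajectories coincide exactly here; for gradient-based methods the paper's equivalence requires the correspondingly transformed weight sets at every step.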


Author(s):  
SANGHAMITRA BANDYOPADHYAY ◽  
UJJWAL MAULIK ◽  
MALAY KUMAR PAKHIRA

An efficient partitional clustering technique, called SAKM-clustering, that integrates the power of simulated annealing for obtaining minimum energy configuration, and the searching capability of K-means algorithm is proposed in this article. The clustering methodology is used to search for appropriate clusters in multidimensional feature space such that a similarity metric of the resulting clusters is optimized. Data points are redistributed among the clusters probabilistically, so that points that are farther away from the cluster center have higher probabilities of migrating to other clusters than those which are closer to it. The superiority of the SAKM-clustering algorithm over the widely used K-means algorithm is extensively demonstrated for artificial and real life data sets.


Author(s):  
Mohamed Ibrahim Mohamed ◽  
Laba Handique ◽  
Subrata Chakraborty ◽  
Nadeem Shafique Butt ◽  
Haitham M. Yousof

In this article an attempt is made to introduce a new extension of the Fréchet model called the Xgamma Fréchet model. Some of its properties are derived. The estimation of the parameters via different estimation methods is discussed. The performance of the proposed estimation methods is investigated through simulations as well as real-life data sets. The potential of the proposed model is established by modelling two real-life data sets. The results show a clear preference for the proposed model over several known competing ones.


2021 ◽  
Vol 17 (2) ◽  
pp. 59-74
Author(s):  
S. Qurat Ul Ain ◽  
K. Ul Islam Rather

Abstract In this article, an extension of the exponentiated exponential distribution is introduced by adding an extra parameter to the parent distribution using the alpha power technique. The new distribution is referred to as the Alpha Power Exponentiated Exponential Distribution. Various statistical properties of the proposed distribution, such as the mean, variance, central and non-central moments, reliability functions and entropies, have been derived. Two real-life data sets have been used to check the flexibility of the proposed model. The new density model provides a better fit when compared with other related statistical models.
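The alpha power technique referred to above maps a baseline CDF F(x) to G(x) = (α^F(x) − 1)/(α − 1) for α > 0, α ≠ 1. A minimal sketch with the exponentiated exponential baseline F(x) = (1 − e^(−λx))^a (our notation for the shape and rate parameters; function names are illustrative):

```python
import math

def exp_exp_cdf(x, a, lam):
    """Exponentiated exponential baseline CDF: F(x) = (1 - exp(-lam*x))**a."""
    return (1.0 - math.exp(-lam * x)) ** a

def apee_cdf(x, alpha, a, lam):
    """Alpha power exponentiated exponential CDF:
    G(x) = (alpha**F(x) - 1) / (alpha - 1), alpha > 0, alpha != 1."""
    F = exp_exp_cdf(x, a, lam)
    return (alpha ** F - 1.0) / (alpha - 1.0)

# The transform preserves the CDF limits: G(0) = 0 and G(x) -> 1 as x -> infinity,
# while the extra parameter alpha reshapes the distribution in between.
print(apee_cdf(0.0, alpha=2.0, a=1.5, lam=0.5))    # 0.0
print(apee_cdf(50.0, alpha=2.0, a=1.5, lam=0.5))   # ~1.0
```

Since α^F(x) is an increasing function of F(x), G inherits monotonicity from the baseline and is a valid CDF for every admissible α.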


Author(s):  
Adebisi Ade Ogunde ◽  
Gbenga Adelekan Olalude ◽  
Donatus Osaretin Omosigho

In this paper we introduce the Gompertz Gumbel II (GG II) distribution, which generalizes the Gumbel II distribution. The new distribution is a flexible exponential-type distribution which can be used in modeling real-life data with varying degrees of asymmetry. Unlike the Gumbel II distribution, which exhibits a monotone decreasing failure rate, the new distribution is useful for modeling unimodal (bathtub-shaped) failure rates, which sometimes characterize real-life data. Structural properties of the new distribution, namely the density function, hazard function, moments, quantile function, moment generating function, order statistics, stochastic ordering and Rényi entropy, were obtained. For the main formulas related to our model, we present numerical studies that illustrate the practicality of computational implementation using statistical software. We also present a Monte Carlo simulation study to evaluate the performance of the maximum likelihood estimators for the GG II model. Three life data sets were used in applications to illustrate the flexibility of the new model.


Author(s):  
Aliya Syed Malik ◽  
S.P. Ahmad

In this paper, a new generalization of the log-logistic distribution using the alpha power transformation is proposed. The new distribution is named the Alpha Power Log-Logistic Distribution. A comprehensive account of some of its statistical properties is derived. The maximum likelihood estimation procedure is used to estimate the parameters. The importance and utility of the proposed model are demonstrated empirically using two real-life data sets.


2005 ◽  
Vol 2 (2) ◽  
Author(s):  
Matej Francetič ◽  
Mateja Nagode ◽  
Bojan Nastav

Clustering methods are among the most widely used methods in multivariate analysis. Two main groups of clustering methods can be distinguished: hierarchical and non-hierarchical. Due to the nature of the problem examined, this paper focuses on hierarchical methods such as the nearest neighbour, the furthest neighbour, Ward's method, between-groups linkage, within-groups linkage, centroid and median clustering. The goal is to assess the performance of different clustering methods when applied to concave sets of data, and to determine in which types of data structures these methods can reveal and correctly assign group membership. The simulations were run in two- and three-dimensional space. Each of the two original shapes was further modified using different standard deviations of points around the skeleton. In this manner, sets of various shapes with different inter-cluster distances were generated. Generating the data sets provides the essential knowledge of cluster membership needed to compare the clustering methods' performance. The conclusions are important and interesting, since real-life data seldom follow a simple convex-shaped structure; however, further work is needed, such as applying the bootstrap, including dendrogram-based analysis, or considering other data structures. This paper can therefore serve as a basis for further study of hierarchical clustering performance with concave sets.


2021 ◽  
Vol 40 (1) ◽  
pp. 1597-1608
Author(s):  
Ilker Bekmezci ◽  
Murat Ermis ◽  
Egemen Berki Cimen

Social network analysis offers an understanding of our modern world, and it affords the ability to represent, analyze and even simulate complex structures. While an unweighted model can be used for online communities, trust or friendship networks should be analyzed with weighted models. To analyze social networks, it is essential to produce realistic social models. However, there are serious differences between social network models and real-life data in terms of their fundamental statistical parameters. In this paper, a genetic algorithm (GA)-based social network improvement method is proposed to produce social networks more similar to real-life data sets. First, it creates a social model based on existing studies in the literature, and then it improves the model with the proposed GA-based approach based on the similarity of the average degree, the k-nearest neighbor, the clustering coefficient, degree distribution and link overlap. This study can be used to model the structural and statistical properties of large-scale societies more realistically. The performance results show that our approach can reduce the dissimilarity between the created social networks and the real-life data sets in terms of their primary statistical properties. It has been shown that the proposed GA-based approach can be used effectively not only in unweighted networks but also in weighted networks.


2013 ◽  
Vol 3 (4) ◽  
pp. 1-14 ◽  
Author(s):  
S. Sampath ◽  
B. Ramya

Cluster analysis is a branch of data mining which plays a vital role in bringing out hidden information in databases. Clustering algorithms help medical researchers identify the presence of natural subgroups in a data set. Different types of clustering algorithms are available in the literature, the most popular among them being k-means clustering. Even though k-means clustering is widely used, its application requires knowledge of the number of clusters present in the given data set; several solutions are available in the literature to overcome this limitation. The k-means clustering method creates a disjoint and exhaustive partition of the data set. However, in some situations one can come across objects that belong to more than one cluster. In this paper, a clustering algorithm capable of automatically producing rough clusters, without requiring the user to supply the number of clusters as input, is proposed. The efficiency of the algorithm in detecting the number of clusters present in the data set has been studied with the help of some real-life data sets. Further, a nonparametric statistical analysis of the results of the experimental study has been carried out in order to assess the efficiency of the proposed algorithm in automatically detecting the number of clusters, using a rough version of the Davies-Bouldin index.
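The Davies-Bouldin index mentioned above averages, over clusters, the worst-case ratio of within-cluster scatter to between-centroid separation (lower is better). The paper uses a rough-set variant, but the classical index can be sketched as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def davies_bouldin(points, labels):
    """Classical Davies-Bouldin index: mean over clusters i of
    max_{j != i} (S_i + S_j) / d(c_i, c_j), where S_k is the mean
    distance of cluster k's points to its centroid c_k. Lower is better."""
    ids = np.unique(labels)
    centers = np.array([points[labels == k].mean(axis=0) for k in ids])
    scatter = np.array([np.mean(np.linalg.norm(points[labels == k] - c, axis=1))
                        for k, c in zip(ids, centers)])
    worst = []
    for i in range(len(ids)):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(len(ids)) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))

# Well-separated blobs score lower (better) than the same-sized blobs placed closer:
rng = np.random.default_rng(0)
far = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(10, 1, (100, 2))])
near = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(davies_bouldin(far, labels) < davies_bouldin(near, labels))  # True
```

Scanning this index over candidate values of k is one standard way to detect the number of clusters automatically, which is the role its rough version plays in the paper.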

