Identification of second-order Volterra filters driven by non-Gaussian stationary processes

1992 ◽  
Author(s):  
Abdelhak M. Zoubir


1993 ◽  
Vol 119 (2) ◽  
pp. 344-364 ◽  
Author(s):  
Sau‐Lon James Hu ◽  
Dongsheng Zhao

Author(s):  
Seyed Fakoorian ◽  
Mahmoud Moosavi ◽  
Reza Izanloo ◽  
Vahid Azimi ◽  
Dan Simon

Non-Gaussian noise may degrade the performance of the Kalman filter because the Kalman filter uses only second-order statistical information and is therefore not optimal in non-Gaussian noise environments. In addition, many systems are subject to equality or inequality state constraints that are not directly included in the system model and thus are not incorporated in the Kalman filter. To address these combined issues, we propose a robust Kalman-type filter for non-Gaussian noise that also uses the information in the state constraints. The proposed filter, called the maximum correntropy criterion constrained Kalman filter (MCC-CKF), uses a correntropy metric to capture not only second-order information but also higher-order moments of the non-Gaussian process and measurement noise, and it enforces constraints on the state estimates. We prove analytically that the MCC-CKF is an unbiased estimator and, under certain conditions, has a smaller error covariance than the standard Kalman filter. Simulation results show the superiority of the MCC-CKF over other estimators when the system measurement is disturbed by non-Gaussian noise and the states are constrained.
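A minimal sketch of the idea behind such a filter, assuming a NumPy implementation in which the function name `mcc_ckf_update`, the kernel bandwidth `sigma`, and the equality constraint `D x = d` are illustrative choices rather than the paper's exact equations:

```python
import numpy as np

def mcc_ckf_update(x_pred, P_pred, y, H, R, D, d, sigma=2.0):
    """One measurement update combining a correntropy (Gaussian-kernel)
    weight with an equality-constraint projection. A sketch of the
    general idea only, not the exact MCC-CKF derivation."""
    innov = y - H @ x_pred                          # measurement residual
    S = H @ P_pred @ H.T + R                        # innovation covariance
    m2 = float(innov @ np.linalg.solve(S, innov))   # squared Mahalanobis distance
    w = np.exp(-m2 / (2.0 * sigma ** 2))            # kernel weight: near 1 for
                                                    # small residuals, near 0 for outliers
    # Inflating R by 1/w de-emphasizes outlier measurements; w = 1
    # recovers the standard Kalman update.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R / max(w, 1e-8))
    x_upd = x_pred + K @ innov
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred

    # Project the updated estimate onto the equality constraint D x = d.
    r = D @ x_upd - d
    x_con = x_upd - P_upd @ D.T @ np.linalg.solve(D @ P_upd @ D.T, r)
    return x_con, P_upd
```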


1986 ◽  
Vol 23 (02) ◽  
pp. 529-535 ◽  
Author(s):  
R. J. Martin

For a sufficiently long finite second-order stationary time series on a line, the eigenvalues and eigenvectors of its dispersion matrix are approximately the same as those of its counterpart on a circle. It is shown here that this result extends to second-order stationary processes on a d-dimensional lattice.
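A hedged numerical illustration of the one-dimensional case, assuming an AR(1)-type autocovariance (the covariance model and sizes are illustrative, not from the paper): the Toeplitz dispersion matrix on the line and its circulant counterpart have nearly coincident spectra.

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

n, rho = 256, 0.6
gamma = rho ** np.arange(n)              # autocovariance gamma(h) = rho^|h|
T = toeplitz(gamma)                      # dispersion matrix on the line

# Wrapped (circular) autocovariance: c(h) = gamma(h) + gamma(n - h)
c = gamma + np.concatenate(([0.0], gamma[1:][::-1]))
C = circulant(c)                         # dispersion matrix on the circle

ev_T = np.sort(np.linalg.eigvalsh(T))
ev_C = np.sort(np.linalg.eigvalsh(C))
print(np.max(np.abs(ev_T - ev_C)))       # small: the sorted spectra nearly coincide
```

The difference T - C is a low-rank perturbation concentrated in the matrix corners, which is why the sorted eigenvalues shift only on the order of the local eigenvalue spacing as n grows.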


Author(s):  
Juan J. González De la Rosa ◽  
Carlos G. Puntonet ◽  
A. Moreno-Muñoz

Power quality (PQ) event detection and classification is gaining importance due to the worldwide use of delicate electronic devices. Lightning, large switching loads, non-linear load stresses, inadequate or incorrect wiring and grounding, and accidents involving electric lines can all create problems for sensitive equipment if it is designed to operate within narrow voltage limits, or if it does not incorporate the capability of filtering fluctuations in the electrical supply (Gerek et al., 2006; Moreno et al., 2006). Solving a PQ problem implies acquiring and monitoring long data records from the energy distribution system, along with an automated detection and classification strategy that allows the cause of these voltage anomalies to be identified. Signal processing tools have been widely used for this purpose and are mainly based on spectral analysis and wavelet transforms. These second-order methods, the most familiar to the scientific community, rely on the independence of the spectral components and the evolution of the spectrum in the time domain. Other tools are threshold-based algorithms, linear classifiers, and Bayesian networks. The goal of the signal processing analysis is to obtain a feature vector from the data record under study, which constitutes the input to the computational intelligence module that performs the classification.

Some recent works take a different strategy, based on higher-order statistics (HOS), for the analysis of transients within PQ analysis (Gerek et al., 2006; Moreno et al., 2006) and in other fields of science (De la Rosa et al., 2004, 2005, 2007). Without perturbation, the 50-Hz voltage waveform exhibits Gaussian behaviour; deviations from Gaussianity can be detected and characterized via HOS. Non-Gaussian processes need third- and fourth-order statistical characterization in order to be recognized; in other words, second-order moments and cumulants may not be capable of differentiating non-Gaussian events. This situation matches the problem of differentiating between a long-duration transient, named a fault (within a signal period), and a short-duration transient (about 25 per cent of a cycle). The latter can bring the 50-Hz voltage to zero instantly and generally affects the sinusoid dramatically; by contrast, the long-duration transient can be considered a modulating signal (the 50-Hz signal being the carrier). These transients are intrinsically non-stationary, so a battery of observations (sample registers) is necessary to obtain a reliable characterization.

The main contribution of this work is the application of higher-order central cumulants to characterize PQ events, along with the use of a competitive layer as the classification tool. Results reveal that two different clusters, associated with the two types of transients, can be recognized in the 2D graph. These successful results convey the idea that the physical processes underlying the analyzed transients generate deviations that differ from the typical effects of noise on the 50-Hz sinusoidal voltage waveform. The paper is organized as follows: the section on higher-order cumulants summarizes the main equations of the cumulants used in the paper; we then recall the foundations of the competitive layer, along with the Kohonen learning rule; the experiments are then described, and the conclusions are drawn.
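As a hedged illustration of the feature extraction step (the `hos_features` helper, signal parameters, and transient models below are hypothetical, not taken from the paper), third- and fourth-order central cumulants computed on segments of a noisy 50-Hz carrier separate a short burst from a long amplitude-modulated fault:

```python
import numpy as np

def hos_features(x):
    """Third- and fourth-order central cumulants of a signal segment,
    a minimal version of an HOS feature vector (hypothetical helper)."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    m2 = np.mean(xc ** 2)
    c3 = np.mean(xc ** 3)                   # third-order central cumulant
    c4 = np.mean(xc ** 4) - 3.0 * m2 ** 2   # fourth-order central cumulant
    return c3, c4

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.1, 5000)             # five cycles of a 50-Hz carrier
carrier = np.sin(2.0 * np.pi * 50.0 * t)
noise = 0.02 * rng.normal(size=t.size)

# Short transient: a burst spanning roughly a quarter of one cycle
short = carrier + noise
short[2400:2650] += 2.0 * rng.normal(size=250)

# Long-duration "fault": the 50-Hz carrier amplitude-modulated over the record
long_fault = carrier * (1.0 + 0.5 * np.sin(2.0 * np.pi * 10.0 * t)) + noise

print("short transient:", hos_features(short))
print("long fault:     ", hos_features(long_fault))
```

Plotting the (c3, c4) pairs of many such segments in 2D is the kind of input a competitive layer can then cluster.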


Author(s):  
Deniz Erdogmus ◽  
Jose C. Principe

Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion, and in the neural network literature this is the least addressed component, yet it has a decisive influence on generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated.

Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural networks and other classifier topologies to approximate correct labels. The main motivation for using least squares in regression comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications, which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error, combined with the maximum likelihood principle, can be invoked to promote this criterion. In nonparametric regression, the least squares principle leads to the conditional expectation solution, which is intuitively appealing.

Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems, and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?
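A minimal sketch of this argument, assuming a one-parameter linear regression under heavy-tailed noise; the correntropy-style Gaussian-kernel weighting used here is one example of a non-second-order criterion, not a prescription from the text, and the `fit` helper is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1.0, 1.0, n)
impulsive = rng.standard_t(df=1.5, size=n)    # heavy-tailed, non-Gaussian noise
y = 2.0 * x + 0.3 * impulsive                 # true slope is 2.0

def fit(weight, steps=3000, lr=0.05):
    """Gradient search for the slope w of y = w*x; weight(e) scales each
    error's contribution (weight = 1 recovers plain least squares)."""
    w = 0.0
    for _ in range(steps):
        e = y - w * x
        w += lr * np.mean(weight(e) * e * x)  # weighted LMS-style update
    return w

sigma = 1.0
mse_w = lambda e: np.ones_like(e)                        # second-order criterion
mcc_w = lambda e: np.exp(-e ** 2 / (2.0 * sigma ** 2))   # kernel down-weights outliers

print("least squares:", fit(mse_w))   # typically biased by the outliers
print("correntropy:  ", fit(mcc_w))   # typically closer to the true slope 2.0
```

The second-order criterion weights every error equally, so the heavy-tailed samples dominate the solution; the kernel-weighted criterion uses higher-order information in the error distribution to suppress them.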

