Identification of Second-Order Volterra Filters in the Non-Gaussian Case

Frequenz ◽  
1995 ◽  
Vol 49 (7-8) ◽  
Author(s):  
Abdelhak M. Zoubir

1993 ◽  
Vol 119 (2) ◽  
pp. 344-364 ◽  
Author(s):  
Sau‐Lon James Hu ◽  
Dongsheng Zhao

Author(s):  
Seyed Fakoorian ◽  
Mahmoud Moosavi ◽  
Reza Izanloo ◽  
Vahid Azimi ◽  
Dan Simon

Non-Gaussian noise can degrade the performance of the Kalman filter, which uses only second-order statistical information and is therefore not optimal in non-Gaussian noise environments. In addition, many systems are subject to equality or inequality state constraints that are not captured by the system model and hence are not incorporated in the Kalman filter. To address these combined issues, we propose a robust Kalman-type filter for non-Gaussian noise that exploits the information in the state constraints. The proposed filter, called the maximum correntropy criterion constrained Kalman filter (MCC-CKF), uses a correntropy metric to quantify not only second-order information but also higher-order moments of the non-Gaussian process and measurement noise, and it enforces constraints on the state estimates. We analytically prove that the newly derived MCC-CKF is an unbiased estimator and, under certain conditions, has a smaller error covariance than the standard Kalman filter. Simulation results show the superiority of the MCC-CKF over other estimators when the system measurement is disturbed by non-Gaussian noise and the states are constrained.
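To make the mechanism concrete, the sketch below shows one common way a correntropy weight can enter a Kalman measurement update, followed by the standard estimate-projection step for equality constraints. It is a minimal illustration under assumed names (x_prior, P_prior, C, R, D, d are hypothetical), not the paper's exact MCC-CKF derivation: a Gaussian kernel of the innovation shrinks the gain for outlier-like measurements, which is how information beyond second-order statistics enters.

```python
import numpy as np

def gaussian_kernel(r, sigma):
    """Correntropy (Gaussian) kernel evaluated at a residual norm r."""
    return np.exp(-r**2 / (2.0 * sigma**2))

def mcc_kf_update(x_prior, P_prior, y, C, R, sigma):
    """One correntropy-weighted Kalman measurement update (illustrative).

    A small kernel value (large, outlier-like innovation) inflates the
    effective measurement covariance and thus shrinks the gain.
    """
    innov = y - C @ x_prior
    r = np.sqrt(innov @ np.linalg.solve(R, innov))   # innovation norm under R
    L = max(gaussian_kernel(r, sigma), 1e-12)        # correntropy weight in (0, 1]
    S = C @ P_prior @ C.T + R / L                    # weighted innovation covariance
    K = P_prior @ C.T @ np.linalg.inv(S)
    x_post = x_prior + K @ innov
    P_post = (np.eye(len(x_prior)) - K @ C) @ P_prior
    return x_post, P_post

def project_onto_constraint(x, P, D, d):
    """Classical estimate projection onto the equality constraint D x = d."""
    W = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    return x - W @ (D @ x - d)
```

The projection step is the standard constrained-Kalman device; composing it with a correntropy-weighted update mirrors the structure, though not necessarily the details, of the MCC-CKF described above.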


2009 ◽  
Vol 25 (5) ◽  
pp. 1180-1207 ◽  
Author(s):  
Norbert Christopeit

We consider weak convergence of sample averages of nonlinearly transformed stochastic triangular arrays satisfying a functional invariance principle. Integrated processes constitute a fundamental paradigm for such processes. The results obtained extend recent work in the literature to the multivariate and non-Gaussian case. As admissible nonlinear transformations, a new class of functions (so-called locally p-integrable functions) is introduced that adapts the concept of locally integrable functions in Pötscher (2004, Econometric Theory 20, 1–22) to the multidimensional setting.
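For orientation, one natural way to state local p-integrability in the multivariate setting, adapting Pötscher's notion of local integrability, is the following (the paper's exact definition may differ in detail):

```latex
% f : R^m -> R is locally p-integrable (p >= 1) if it is p-integrable
% over every compact set:
f \in L^{p}_{\mathrm{loc}}(\mathbb{R}^{m})
\quad\Longleftrightarrow\quad
\int_{K} \lvert f(x) \rvert^{p}\, dx < \infty
\quad \text{for every compact } K \subset \mathbb{R}^{m}.
```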


Entropy ◽  
2018 ◽  
Vol 21 (1) ◽  
pp. 22 ◽  
Author(s):  
Jordi Belda ◽  
Luis Vergara ◽  
Gonzalo Safont ◽  
Addisson Salazar

Conventional partial correlation coefficients (PCC) were extended to the non-Gaussian case, in particular to independent component analysis (ICA) models of the observed multivariate samples. The usual methods that define the pairwise connections of a graph from the precision matrix were correspondingly extended. The basic concept is to replace the implicit linear estimation of conventional PCC with a nonlinear (conditional-mean) estimation under the ICA model. In this way, the correlation between a given pair of nodes that is induced by the remaining nodes is better removed, and hence the specific connectivity weights can be better estimated. Some synthetic and real data examples illustrate the approach in a graph signal processing context.
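For reference, the conventional linear baseline that the paper generalizes can be sketched in a few lines: the pairwise partial correlations follow directly from the precision matrix, and graph edges are then typically obtained by thresholding them. The ICA-based nonlinear estimation itself is not reproduced here.

```python
import numpy as np

def partial_correlations(X):
    """Conventional PCC matrix from the precision matrix (the linear,
    Gaussian baseline that the ICA-based method extends).

    X : (n_samples, n_variables) data matrix.
    """
    prec = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcc = -prec / np.outer(d, d)                    # -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(pcc, 1.0)
    return pcc

# Graph connectivity is then commonly defined by thresholding |pcc|.
```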


Author(s):  
Juan J. González De la Rosa ◽  
Carlos G. Puntonet ◽  
A. Moreno-Muñoz

Power quality (PQ) event detection and classification is gaining importance due to the worldwide use of delicate electronic devices. Lightning, large switching loads, non-linear load stresses, inadequate or incorrect wiring and grounding, or accidents involving electric lines can create problems for sensitive equipment if it is designed to operate within narrow voltage limits, or if it does not incorporate the capability of filtering fluctuations in the electrical supply (Gerek et al., 2006; Moreno et al., 2006). Solving a PQ problem implies the acquisition and monitoring of long data records from the energy distribution system, along with an automated detection and classification strategy that allows identification of the cause of these voltage anomalies.

Signal processing tools have been widely used for this purpose and are mainly based on spectral analysis and wavelet transforms. These second-order methods, the most familiar to the scientific community, are based on the independence of the spectral components and the evolution of the spectrum in the time domain. Other tools are threshold-based algorithms, linear classifiers and Bayesian networks. The goal of the signal processing analysis is to obtain a feature vector from the data record under study, which constitutes the input to the computational intelligence module that performs the classification.

Some recent works adopt a different strategy, based on higher-order statistics (HOS), for the analysis of transients within PQ analysis (Gerek et al., 2006; Moreno et al., 2006) and in other fields of science (De la Rosa et al., 2004, 2005, 2007). Without perturbation, the 50-Hz voltage waveform exhibits Gaussian behaviour. Deviations from Gaussianity can be detected and characterized via HOS: non-Gaussian processes need third- and fourth-order statistical characterization in order to be recognized. In other words, second-order moments and cumulants may not be capable of differentiating non-Gaussian events. The situation described matches the problem of differentiating between a long-duration transient, named a fault (within a signal period), and a short-duration transient (25 per cent of a cycle). The latter can also bring the 50-Hz voltage to zero instantly and generally affects the sinusoid dramatically. On the contrary, the long-duration transient can be considered a modulating signal (the 50-Hz signal being the carrier). These transients are intrinsically non-stationary, so a battery of observations (sample registers) is necessary to obtain a reliable characterization.

The main contribution of this work is the application of higher-order central cumulants to characterize PQ events, along with the use of a competitive layer as the classification tool. Results reveal that two different clusters, associated with the two types of transients, can be recognized in the 2D graph. These successful results convey the idea that the physical processes underlying the analyzed transients generate deviations that differ from the typical effects that noise causes in the 50-Hz sinusoidal voltage waveform. The paper is organized as follows: the section on higher-order cumulants summarizes the main equations of the cumulants used in the paper; we then recall the foundations of the competitive layer, along with the Kohonen learning rule; the experiments are then described, and the conclusions are drawn.
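As a rough illustration of the HOS feature extraction described above (the paper's exact estimators, normalizations and lags may differ), the zero-lag second-, third- and fourth-order central cumulants of a signal segment can be computed as follows; for a Gaussian segment the third- and fourth-order cumulants vanish asymptotically, so nonzero values flag a non-Gaussian event:

```python
import numpy as np

def hos_features(x):
    """Zero-lag higher-order central cumulants of a signal segment,
    of the kind used to build feature vectors for PQ transients.
    """
    xc = x - np.mean(x)
    c2 = np.mean(xc**2)                 # variance (2nd-order cumulant)
    c3 = np.mean(xc**3)                 # 3rd-order cumulant (skewness-like)
    c4 = np.mean(xc**4) - 3.0 * c2**2   # 4th-order cumulant (kurtosis-like)
    return np.array([c2, c3, c4])

# Each sample register yields one feature vector; the set of vectors
# forms the cloud that the competitive layer clusters into event types.
```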


Author(s):  
Deniz Erdogmus ◽  
Jose C. Principe

Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but optimal with respect to what? Optimality is characterized by the criterion, and in the neural network literature this is the least addressed component, yet it has a decisive influence on generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated.

Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural networks and other classifier topologies to approximate correct labels. The main motivation to use least squares in regression comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications, which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error, combined with the maximum likelihood principle, can be invoked to promote this criterion. In nonparametric regression, the least squares principle leads to the conditional expectation solution, which is intuitively appealing.

Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This raises the question: what other alternative cost functions could be used to train adaptive systems, and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?
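One concrete alternative in this spirit, from the information-theoretic learning literature associated with these authors, is to minimize an entropy of the training errors rather than their power. The sketch below gives a Parzen-window estimator of Renyi's quadratic error entropy under an assumed kernel bandwidth sigma; it is an illustration of the idea, not a prescription:

```python
import numpy as np

def quadratic_error_entropy(errors, sigma=1.0):
    """Parzen-window estimate of Renyi's quadratic entropy of the
    training errors. Minimizing it (minimum error entropy) uses the
    full error distribution rather than only its second moment.
    """
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]          # all pairwise error differences
    s = sigma * np.sqrt(2.0)                # bandwidth for the pairwise form
    # Gaussian kernel evaluated at every pairwise difference
    k = np.exp(-diff**2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))
    information_potential = k.mean()        # (1/N^2) sum_i sum_j G(e_i - e_j)
    return -np.log(information_potential)
```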


Author(s):  
O. G. SMOLYANOV ◽  
H. v. WEIZSÄCKER

We compare different notions of differentiability of a measure along a vector field on a locally convex space. In the L2-space of a differentiable measure we consider the analogs of the classical concepts of gradient, divergence and Laplacian (the latter coincides with the Ornstein–Uhlenbeck operator in the Gaussian case). We use these operators to extend the basic results of Malliavin and Stroock on the smoothness of finite-dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite straightforward and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.
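For concreteness, in the finite-dimensional Gaussian case (standard Gaussian measure gamma on R^n) the Laplacian referred to above reduces to the familiar Ornstein–Uhlenbeck operator; the following display is a standard illustration of that case, not a statement taken from the paper:

```latex
% Ornstein–Uhlenbeck operator for the standard Gaussian measure \gamma on R^n:
L f(x) \;=\; \Delta f(x) \;-\; \langle x, \nabla f(x) \rangle ,
\qquad f \in C_b^{2}(\mathbb{R}^{n}),
% self-adjoint in L^2(\gamma), with the integration-by-parts identity
\int \lvert \nabla f \rvert^{2}\, d\gamma \;=\; -\int f \, L f \, d\gamma .
```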

