Probabilistic analysis of a learning matrix

1988 ◽  
Vol 20 (4) ◽  
pp. 695-705 ◽  
Author(s):  
William G. Faris ◽  
Robert S. Maier

A learning matrix is defined by a set of input and output pattern vectors whose entries are zeros and ones. The matrix is the entrywise maximum of the outer products of the input and output pattern vectors, so its entries are also zeros and ones. The product of this matrix with a selected input pattern vector defines an activity vector. It is shown that when the patterns are taken to be random, central limit and large deviation theorems hold for the activity vector. These give conditions under which the activity vector can be used to reconstruct the output pattern vector corresponding to the selected input pattern vector.
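The construction described above can be sketched numerically. The pattern dimensions, number of stored pairs, and random patterns below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 8, 8, 3                      # output dim, input dim, stored pairs (assumed)
X = rng.integers(0, 2, size=(k, n))    # binary input pattern vectors
Y = rng.integers(0, 2, size=(k, m))    # binary output pattern vectors

# Learning matrix: entrywise maximum of the outer products y_p x_p^T,
# so M again has 0/1 entries.
M = np.max(Y[:, :, None] * X[:, None, :], axis=0)

# Activity vector for a selected input pattern: ordinary matrix product.
a = M @ X[0]

# Thresholding at the number of active input bits recovers the stored
# output pattern when interference between stored patterns is small.
y_hat = (a >= X[0].sum()).astype(int)
```

Every position where the stored output has a one reaches the threshold exactly, so reconstruction fails only through spurious ones caused by overlap between patterns, which is the regime the paper's limit theorems quantify.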



Author(s):  
Roberto A. Vazquez ◽  
Humberto Sossa

An associative memory (AM) is a special kind of neural network that recalls an output pattern given an input pattern as a key, where the key might be altered by some kind of noise (additive, subtractive, or mixed). Most of these models have several constraints that limit their applicability to complex problems such as face recognition (FR) and 3D object recognition (3DOR). Despite the power of these approaches, they cannot reach their full potential without new mechanisms based on current and future study of biological neural networks. In this direction, we present a brief summary of a new associative model based on some neurobiological aspects of the human brain. In addition, we describe how this dynamic associative memory (DAM), combined with some aspects of the infant vision system, can be applied to two of the most important problems of pattern recognition: FR and 3DOR.



Author(s):  
Z. Q. Wang ◽  
L. Q. An ◽  
Z. Z. Peng

A probabilistic analysis method is developed for the frequency analysis of a turbine blade with an uncertain boundary condition at the blade root. The Ritz method is used to derive the eigenvalue equation of the rotating blade with the uncertain root boundary condition. The matrix perturbation technique is employed in the probabilistic analysis to obtain the deterministic part of the natural frequencies and vibration modes, the sensitivity matrix, the covariance matrix, and the coefficient of variation (COV) of the natural frequencies. The effects of variations in the expectation and variance of the joint stiffness on the expectation and variance of the natural frequencies are investigated.
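The variance-propagation step of such a matrix perturbation analysis can be illustrated with a generic first-order scheme. The 2-DOF stiffness matrix and the statistics of the joint stiffness below are hypothetical, not the blade model of the paper:

```python
import numpy as np

# Hypothetical 2-DOF model: the uncertain joint stiffness q enters
# additively on the first diagonal entry, K(q) = K0 + q * E.
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
E = np.array([[1.0, 0.0], [0.0, 0.0]])

q_mean, q_std = 2.0, 0.2   # assumed expectation and std of joint stiffness

# Deterministic part: eigenvalues and modes at the mean stiffness.
lam, phi = np.linalg.eigh(K0 + q_mean * E)

# First-order sensitivities d(lam_i)/dq = phi_i^T E phi_i (unit-norm modes).
s = np.array([phi[:, i] @ E @ phi[:, i] for i in range(2)])

# Propagated variance and coefficient of variation (COV) of each eigenvalue.
lam_var = s**2 * q_std**2
cov = np.sqrt(lam_var) / lam
```

Because the modes are orthonormal, the sensitivities sum to the trace of E, which is a convenient sanity check on the perturbation step.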



1996 ◽  
Vol 28 (4) ◽  
pp. 1051-1071 ◽  
Author(s):  
Mike Steel ◽  
Larry Goldstein ◽  
Michael S. Waterman

In phylogenetic analysis it is useful to study the distribution of the parsimony length of a tree under the null model, in which the leaves are independently assigned letters according to prescribed probabilities. Except in one special case, this distribution is difficult to describe exactly. Here we analyze this distribution by providing a recursive and readily computable description, establishing large deviation bounds for the parsimony length of a fixed tree on a single site and for the minimum-length (maximum parsimony) tree over several sites. We also show that, under very general conditions, the former distribution converges asymptotically to the normal, thereby settling a recent conjecture. Furthermore, we show how the mean and variance of this distribution can be efficiently calculated. The proof of normality requires a number of new and recent results, as the parsimony length is not directly expressible as a sum of independent random variables, so normality does not follow immediately from a standard central limit theorem.
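The parsimony length of a fixed tree on a single site is readily computable by Fitch's algorithm, a standard recursion (the small example tree below is illustrative, not from the paper):

```python
# Fitch's algorithm: parsimony length of a binary tree for one site.
# A tree is either a leaf letter (str) or a pair (left, right) of subtrees.

def fitch(tree):
    """Return (state set, parsimony length) for the subtree."""
    if isinstance(tree, str):          # leaf: observed letter, zero changes
        return {tree}, 0
    (sl, nl), (sr, nr) = fitch(tree[0]), fitch(tree[1])
    inter = sl & sr
    if inter:                          # children agree: no extra change
        return inter, nl + nr
    return sl | sr, nl + nr + 1        # disagreement: one more change

# Example: minimum number of character changes on ((A,A),(C,(A,C))).
states, length = fitch((("A", "A"), ("C", ("A", "C"))))
```

Running this recursion over leaves drawn independently from the prescribed letter probabilities gives samples from exactly the null distribution the abstract studies.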



Symmetry ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 638
Author(s):  
Xianjie Gao ◽  
Chao Zhang ◽  
Hongwei Zhang

Random matrices play an important role in many fields, including machine learning, quantum information theory, and optimization. One of the main research focuses is deviation inequalities for the eigenvalues of random matrices. Although large-deviation inequalities for random matrices have been studied intensively, only a few works discuss the small-deviation behavior of random matrices. In this paper, we present small-deviation inequalities for the largest eigenvalues of sums of random matrices. Since the resulting inequalities are independent of the matrix dimension, they are applicable to high-dimensional and even infinite-dimensional cases.



2002 ◽  
Vol 39 (4) ◽  
pp. 829-838 ◽  
Author(s):  
Wen-Ming Hong

Moderate deviation principles are established in dimensions d ≥ 3 for super-Brownian motion with random immigration, where the immigration rate is governed by the trajectory of another super-Brownian motion. This fills the gap between the central limit theorem and the large deviation principles for this model, which were obtained by Hong and Li (1999) and Hong (2001).



1991 ◽  
Vol 4 (4) ◽  
pp. 575-588 ◽  
Author(s):  
Stanley Xi Wang ◽  
Edward C. Waymire


Author(s):  
Ighodaro Osarobo ◽  
Akaeze Chika

It is a common occurrence that the transportation of petroleum products via pipelines is susceptible to failure, whether natural or intentional. This paper treats the prediction of pipeline failures as a diagnostic pattern-recognition problem with continuous inputs. Our problem is to design a neural network that recognizes failure events in pipelines when fed an input pattern denoting such a scenario. A neural network paradigm is selected, and the input is encoded to obtain the input pattern. The selected model is simulated and trained to recognize the output pattern and, after training, goes into operational mode. The neural network is fully implemented on a Pentium II MMX computer with Borland C++ Builder.



Author(s):  
Phanuel Mariano ◽  
Hugo Panzo

We prove a central limit theorem (CLT) for the product of a class of random singular matrices related to a random Hill's equation studied by Adams–Bloch–Lagarias. The CLT features an explicit formula for the variance in terms of the distribution of the matrix entries, which allows exact calculation in some examples. Our proof relies on a novel connection to the theory of [Formula: see text]-dependent sequences, which also leads to an interesting and precise nondegeneracy condition.
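The flavor of such a result can be checked by simulation: for products of i.i.d. random matrices, the log of the product's norm, suitably centered and scaled, is approximately normal. The generic Gaussian 2×2 ensemble below is an assumption for illustration, not the singular-matrix class of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_norm_of_product(n=200):
    """log-norm of a product of n i.i.d. random Gaussian 2x2 matrices."""
    P = np.eye(2)
    total_log = 0.0
    for _ in range(n):
        P = P @ rng.normal(size=(2, 2))
        # Renormalize at every step to avoid overflow/underflow,
        # accumulating the log of the stripped-off scale factor.
        scale = np.linalg.norm(P)
        total_log += np.log(scale)
        P /= scale
    return total_log

samples = np.array([log_norm_of_product() for _ in range(300)])
z = (samples - samples.mean()) / samples.std()   # standardized log-norms
```

A histogram or normality test on `z` exhibits the Gaussian fluctuations that a CLT of this type makes precise, including an explicit variance formula in the paper's setting.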



2019 ◽  
Vol 9 (2) ◽  
pp. 2050001
Author(s):  
Renjie Feng ◽  
Gang Tian ◽  
Dongyi Wei

In [Spectrum of SYK model, preprint (2018), arXiv:1801.10073], we proved the almost sure convergence of the eigenvalues of the SYK model, which can be viewed as a type of law of large numbers in probability theory; in [Spectrum of SYK model II: Central limit theorem, preprint (2018), arXiv:1806.05714], we proved that the linear statistics of the eigenvalues satisfy the central limit theorem. In this paper, we continue with another important theorem of probability theory, the concentration of measure theorem, especially for the Gaussian SYK model. We prove a large deviation principle (LDP) for the normalized empirical measure of the eigenvalues when [Formula: see text], in which case the eigenvalues can be expressed in terms of those of Gaussian random antisymmetric matrices. Such an LDP result is of independent interest in random matrix theory. For general [Formula: see text] we cannot prove the LDP; instead, we prove a concentration of measure theorem by estimating the Lipschitz norm of the Gaussian SYK model.
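In the special case mentioned, the eigenvalue statistics reduce to those of Gaussian random antisymmetric matrices, whose spectrum is purely imaginary and easy to sample (the matrix dimension below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200
G = rng.normal(size=(n, n))
A = (G - G.T) / np.sqrt(2)         # Gaussian random antisymmetric matrix

# A real antisymmetric matrix has purely imaginary eigenvalues +/- i*t_k;
# the empirical measure of the t_k concentrates as n grows, which is the
# regime where an LDP for the normalized empirical measure lives.
eigs = np.linalg.eigvals(A)
t = np.sort(np.abs(eigs.imag))
```

Histogramming `t` across independent draws of `A` gives a concrete picture of the empirical measure whose large deviations the paper studies.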


