"DYNAMICAL CONFINEMENT" IN NEURAL NETWORKS
We propose randomising a well-known mathematical model, the Hopfield model for neural networks, in order to facilitate the study of its asymptotic behaviour: we replace the determination of the attractors and of the boundaries of their stability basins by the study of a single invariant measure, whose density maxima (respectively, percentile contour lines) correspond to the locations of the attractors (respectively, to the boundaries of their stability basins). We call this localisation of the mass of the invariant measure "confinement". We intend to show that the study of the confinement is in certain cases easier than the study of the underlying attractors, in particular when the latter are numerous and possess small stability basins. For example, we compute for the first time the invariant measure of the random Hopfield model in a case where the deterministic version exhibits many attractors, and then in a case of phase transition.
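The idea of reading attractors off an invariant measure can be illustrated with a minimal sketch. This is not the paper's construction: the network size, the three stored Hebbian patterns, and the choice of Glauber (heat-bath) dynamics as the randomisation are illustrative assumptions. At low temperature the empirical distribution of the pattern overlaps concentrates ("confines") near the stored patterns, where the deterministic attractors sit.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                         # number of neurons (illustrative)
patterns = rng.choice([-1, 1], size=(3, N))    # three stored patterns (hypothetical)

# Hebbian coupling matrix with zero diagonal
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def glauber_step(s, beta):
    """One asynchronous Glauber update at inverse temperature beta."""
    i = rng.integers(N)
    h = W[i] @ s                               # local field at neuron i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    s[i] = 1 if rng.random() < p_up else -1

def empirical_overlaps(beta, steps=50_000, burn_in=5_000):
    """Sample the chain and record the overlaps m_mu = (1/N) xi_mu . s.

    The histogram of these overlaps approximates the invariant measure
    projected onto the overlap coordinates; at low temperature its mass
    is confined near the stored patterns (overlaps close to +-1)."""
    s = rng.choice([-1, 1], size=N)
    ms = []
    for t in range(steps):
        glauber_step(s, beta)
        if t >= burn_in:
            ms.append(patterns @ s / N)
    return np.array(ms)

m_low_T = empirical_overlaps(beta=4.0)         # low temperature: confinement
# Mean, over samples, of the largest absolute overlap with any pattern
print(np.abs(m_low_T).max(axis=1).mean())
```

Locating the maxima of this empirical measure replaces an explicit search for fixed points and their basins, which is the trade-off the abstract describes.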