PHASE MIXING AT THE EARLIER EVOLUTION STAGES OF SELF-GRAVITATING SYSTEMS. I. DISK-LIKE SYSTEMS

2019 ◽  
Vol 21 (2) ◽  
pp. 65-69
Author(s):  
A.A. Muminov ◽  
S.N. Nuritdinov ◽  
F.U. Botirov

We study the strongly non-stationary stochastic processes that take place in the phase space of disk-like self-gravitating systems at the early stage of their evolution. Numerical calculations were carried out based on a model of chaotic effects, according to which the selected phase volume experiences random pushes of a diverse and complicated character.

2021 ◽  
pp. 83-88
Author(s):  
S. N. NURITDINOV ◽  
A. A. MUMINOV ◽  
F. U. BOTIROV

In this paper, we study the strongly non-stationary stochastic processes that take place in the phase space of self-gravitating systems at the early non-stationary stage of their evolution. Numerical calculations of the compulsive phase-mixing process were carried out according to the model of chaotic impacts, in which the initially selected phase volume experiences random pushes of a diverse and complex nature. The method for studying random impacts on a volume element is then applied to the case of three-dimensional space.
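The chaotic-impacts picture can be illustrated with a toy Monte Carlo sketch (this is an assumption for illustration, not the authors' numerical scheme): a small cloud of points representing the selected phase-volume element streams freely in 3D position-velocity space while receiving random velocity pushes, and its coarse-grained phase-space spread grows over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper): ensemble size,
# number of steps, time step, and amplitude of the random pushes.
n_points, n_steps, dt, push_amp = 500, 200, 0.01, 0.05

# Initially selected phase-volume element: a tight cloud of points
# in 3D position + 3D velocity space.
pos = rng.normal(0.0, 0.01, size=(n_points, 3))
vel = rng.normal(0.0, 0.01, size=(n_points, 3))

spreads = []
for _ in range(n_steps):
    # Free streaming: shear in phase space stretches the element.
    pos += vel * dt
    # Random pushes of diverse direction and magnitude act on velocities.
    vel += push_amp * rng.normal(size=(n_points, 3)) * dt**0.5
    # Coarse-grained "size" of the element: RMS spread over all six
    # phase-space coordinates.
    phase = np.hstack([pos, vel])
    spreads.append(float(np.sqrt(np.mean(np.var(phase, axis=0)))))
```

On average the recorded spread grows monotonically, which is the qualitative signature of forced (compulsive) phase mixing; self-gravity and the detailed push statistics of the actual model are deliberately omitted here.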


2016 ◽  
Vol 28 (12) ◽  
pp. 2853-2889 ◽  
Author(s):  
Hanyuan Hang ◽  
Yunlong Feng ◽  
Ingo Steinwart ◽  
Johan A. K. Suykens

This letter investigates the supervised learning problem with observations drawn from certain general stationary stochastic processes. Here, by general, we mean that many stationary stochastic processes are included. We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment for analyzing learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established. The obtained oracle inequality is then applied to derive convergence rates for several learning schemes, such as empirical risk minimization (ERM), least squares support vector machines (LS-SVMs) using given generic kernels, and SVMs using Gaussian kernels for both least squares and quantile regression. It turns out that for independent and identically distributed (i.i.d.) processes, our learning rates for ERM recover the optimal rates. For non-i.i.d. processes, including geometrically α-mixing Markov processes, geometrically α-mixing processes with restricted decay, φ-mixing processes, and (time-reversed) geometrically C-mixing processes, our learning rates for SVMs with Gaussian kernels match, up to some arbitrarily small extra term in the exponent, the optimal rates. For the remaining cases, our rates are at least close to the optimal rates. As a by-product, the assumed generalized Bernstein-type inequality also provides an interpretation of the so-called effective number of observations for various mixing processes.
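The LS-SVM estimator that the rates above apply to coincides with Gaussian-kernel ridge regression, which can be sketched in a few lines (a minimal sketch on synthetic i.i.d. data; the kernel width `gamma` and regularizer `lam` are illustrative choices, and the theory's dependent-sampling setting only changes the analysis, not the estimator itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(X, Y, gamma):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic regression data: noisy observations of sin(3x) on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)

# Regularized ERM with least squares loss over the RKHS:
#   minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2,
# whose representer-theorem solution is f = sum_i alpha_i K(., x_i)
# with alpha = (K + n * lam * I)^{-1} y.
lam, gamma = 1e-3, 5.0
n = len(X)
K = gaussian_kernel(X, X, gamma)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# Evaluate the learned function on a test grid.
X_test = np.linspace(-1.0, 1.0, 50)[:, None]
y_pred = gaussian_kernel(X_test, X, gamma) @ alpha
mse = float(np.mean((y_pred - np.sin(3 * X_test[:, 0])) ** 2))
```

Swapping the least squares loss for the pinball loss yields the quantile-regression variant covered by the same oracle inequality.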
