Finite Dimensionality: Recently Published Documents


TOTAL DOCUMENTS: 124 (five years: 21)
H-INDEX: 14 (five years: 1)

2021, Vol 32 (1)
Author(s): Hongyong Cui, Arthur C. Cunha, José A. Langa

Abstract: Finite-dimensional attractors play an important role in the finite-dimensional reduction of PDEs in mathematical modeling and numerical simulation. For non-autonomous random dynamical systems, Cui and Langa (J Differ Equ, 263:1225–1268, 2017) developed the random uniform attractor as a minimal compact random set that provides a certain description of the forward dynamics of the underlying system by forward attraction in probability. In this paper, we study conditions ensuring that a random uniform attractor has finite fractal dimension. Two main criteria are given, one by a smoothing property and the other by a squeezing property of the system, and neither of the two implies the other. The upper bound on the fractal dimension consists of two parts: the fractal dimension of the symbol space plus a number arising from the smoothing/squeezing property. As an illustrative application, the random uniform attractor of a stochastic reaction–diffusion equation with scalar additive noise is studied, for which finite-dimensionality in $L^2$ is established by the squeezing approach and that in $H_0^1$ by the smoothing framework. In addition, a random absorbing set that absorbs itself after a deterministic period of time is also constructed.
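For orientation, the fractal dimension in question is the standard box-counting dimension, and a smoothing property is typically a Lipschitz estimate into a compactly embedded space. A schematic version (our notation, not necessarily the paper's) is:

```latex
% Fractal (box-counting) dimension of a compact set A in a metric space,
% where N_\varepsilon(A) is the minimal number of \varepsilon-balls covering A:
\dim_F(A) = \limsup_{\varepsilon \to 0^{+}}
            \frac{\log N_\varepsilon(A)}{\log(1/\varepsilon)}.
% A typical smoothing estimate, with Y \hookrightarrow\hookrightarrow X a
% compact embedding and S the solution operator at some fixed time t_{*}:
\| S(t_{*})u - S(t_{*})v \|_{Y} \le \kappa \, \| u - v \|_{X}.
```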


2021, pp. 1-26
Author(s): M.M. Freitas, A.J.A. Ramos, M.J. Dos Santos, L.G.R. Miranda, J.L.L. Almeida

We investigate the asymptotic dynamics of a nonlinear system modeling a binary mixture of solids with a delay term. Using the recent quasi-stability methods introduced by Chueshov and Lasiecka, we prove the existence, smoothness, and finite dimensionality of a global attractor. We also prove the existence of exponential attractors. Moreover, we study the upper semicontinuity of global attractors with respect to small perturbations of the delay terms.
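For context, a quasi-stability estimate in the sense of Chueshov and Lasiecka is, schematically, a contraction up to a compact term. The generic form below (with $b$, $c$, and the compact seminorm $\mu$ left unspecified) is an illustration, not the estimate proved in the paper:

```latex
\| S(t)y_1 - S(t)y_2 \|_{H}^{2}
  \le b(t)\,\| y_1 - y_2 \|_{H}^{2}
  + c(t)\,\sup_{0 \le s \le t}\big[\mu\big(u^{1}(s) - u^{2}(s)\big)\big]^{2},
% with b(t) \to 0 as t \to \infty and \mu a seminorm compact with respect
% to the norm of H; such estimates yield finite-dimensional attractors.
```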


Axioms, 2021, Vol 10 (4), pp. 275
Author(s): Alexander A. Balinsky, Anatolij K. Prykarpatski

Finding effective finite-dimensionality criteria for closed subspaces of $L^p$, endowed with some additional functional constraints, is a well-known and interesting problem. In this work, we are interested in sufficient constraints on closed functional subspaces $S_p \subset L^p$ whose finite dimensionality is not fixed a priori and cannot be checked directly. This is often the case in diverse applications, when a closed subspace $S_p \subset L^p$ is constructed by means of additional conditions and constraints on $L^p$ with no direct exemplification of the functional structure of its elements. We consider a closed topological subspace $S_p(q)$ of the functional Banach space $L^p(M, d\mu)$ and additionally assume that $S_p(q) \subset L^q(M, d\nu)$ for a probability measure $\nu$ on $M$. We then show that closed subspaces of $L^p(M, d\mu) \cap L^q(M, d\nu)$ for $q > \max\{1, p\}$, $p > 0$, are finite dimensional. The finite-dimensionality question in the case $q > p > 0$ remains open and needs more sophisticated techniques, mainly based on analysis of the subspaces complementary to $L^p(M, d\mu) \cap L^q(M, d\nu)$.


Nonlinearity, 2021, Vol 34 (9), pp. 6173-6209
Author(s): Elie Abdo, Mihaela Ignatova

2021, Vol 285, pp. 383-428
Author(s): Hongyong Cui, Alexandre N. Carvalho, Arthur C. Cunha, José A. Langa

Author(s): Marat V. Markin

We provide a characterization of the finite dimensionality of vector spaces in terms of the right-sided invertibility of linear operators on them.
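The infinite-dimensional phenomenon behind such a characterization can be illustrated with the shift operators on $\ell^2$ (our example, not taken from the paper):

```latex
% Right shift R(x_1, x_2, \dots) = (0, x_1, x_2, \dots) and
% left shift  L(x_1, x_2, \dots) = (x_2, x_3, \dots) on \ell^2 satisfy
LR = I \quad \text{but} \quad RL \neq I,
% so L is right-invertible yet not invertible (it is not injective);
% on a finite-dimensional space, LR = I would already force RL = I.
```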


2021, Vol 7 (1), pp. 1445-1459
Author(s): Yiyuan Cheng, Yongquan Zhang, Xingxing Zha, Dongyin Wang, ...

In this paper, we consider stochastic approximation algorithms for least-squares and logistic regression with no strong-convexity assumption on the convex loss functions. We develop two algorithms with varied step size, motivated by the accelerated gradient algorithm originally developed for convex stochastic programming. We show that the developed algorithms achieve a rate of $O(1/n^{2})$, where $n$ is the number of samples, which is tighter than the best convergence rate $O(1/n)$ achieved so far on non-strongly-convex stochastic approximation with constant step size for classic supervised learning problems. Our analysis is based on a non-asymptotic analysis of the empirical risk (in expectation) with fewer assumptions than existing analyses: it requires neither the finite-dimensionality assumption nor the Lipschitz condition. We carry out controlled experiments on synthetic and standard machine-learning data sets. Empirical results support our theoretical analysis and show a faster convergence rate than existing methods.
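As a rough, generic illustration of constant-step stochastic approximation with iterate averaging for a least-squares problem (our own sketch: the function name, step size, and synthetic data are assumptions, and this is not the paper's accelerated algorithm):

```python
import numpy as np

def averaged_sgd_least_squares(X, y, step=0.05, n_passes=1, seed=0):
    """Constant-step SGD for least squares with Polyak-Ruppert averaging.

    Each step uses a single sample (x_i, y_i); the returned estimate is
    the running average of the iterates, the device behind improved
    rates in non-strongly-convex stochastic approximation analyses.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)       # current iterate
    w_bar = np.zeros(d)   # running average of iterates
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):
            # Stochastic gradient of the sample loss 0.5 * (x_i . w - y_i)^2
            grad = (X[i] @ w - y[i]) * X[i]
            w = w - step * grad
            t += 1
            w_bar += (w - w_bar) / t  # online update of the average
    return w_bar

# Synthetic least-squares problem with a known ground-truth vector.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(2000, 3))
y = X @ w_true + 0.01 * rng.normal(size=2000)

w_hat = averaged_sgd_least_squares(X, y, step=0.05, n_passes=5)
print(w_hat)
```

Averaging the iterates, rather than returning the last one, is what suppresses the variance injected by the constant step size.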

