function space
Recently Published Documents

TOTAL DOCUMENTS: 730 (FIVE YEARS: 105)
H-INDEX: 36 (FIVE YEARS: 3)

Author(s):  
Emiel Lorist ◽  
Zoe Nieraeth

We prove that scalar-valued sparse domination of a multilinear operator implies vector-valued sparse domination for tuples of quasi-Banach function spaces, for which we introduce a multilinear analogue of the UMD condition. This condition is characterized by the boundedness of the multisublinear Hardy–Littlewood maximal operator; it goes beyond examples in which a UMD condition is assumed on each individual space and includes, e.g., iterated Lebesgue, Lorentz, and Orlicz spaces. Our method allows us to obtain sharp vector-valued weighted bounds directly from scalar-valued sparse domination, without the use of a Rubio de Francia type extrapolation result. We apply our result to obtain new vector-valued bounds for multilinear Calderón–Zygmund operators as well as to recover the old ones with a new sharp weighted bound. Moreover, in the Banach function space setting we improve upon recent vector-valued bounds for the bilinear Hilbert transform.
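For orientation, scalar-valued sparse domination of an m-(sub)linear operator T is commonly stated in the following form (this display is the standard formulation from the sparse-domination literature, not quoted from the paper): for suitable functions $$f_1,\ldots,f_m$$ and $$g$$ there exists a sparse family $$\mathcal{S}$$ of cubes such that

$$|\langle T(f_1,\ldots,f_m),g\rangle| \le C \sum_{Q\in\mathcal{S}}\Big(\prod_{i=1}^m \langle |f_i|\rangle_Q\Big)\,\langle |g|\rangle_Q\,|Q|,$$

where $$\langle f\rangle_Q = |Q|^{-1}\int_Q f$$ denotes the average over the cube Q. The paper's result upgrades estimates of this scalar-valued form to vector-valued ones.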


Author(s):  
Elias Polak ◽  
Cristina Elizabeth González-Espinoza ◽  
Martin J. Gander ◽  
Tomasz A. Wesolowski

Author(s):  
Habib Rebei ◽  
Slaheddine Wannes

We introduce the quadratic analogue of the Bogolyubov endomorphisms of the canonical commutation relations (CCR) associated with the renormalized square of white noise algebra (RSWN-algebra). We focus on the structure of a subclass of these endomorphisms: each of them is uniquely determined by a quadruple [Formula: see text], where [Formula: see text] are linear transformations from a test-function space [Formula: see text] into itself, while [Formula: see text] is anti-linear on [Formula: see text] and [Formula: see text] is real. Precisely, we prove that [Formula: see text] and [Formula: see text] are uniquely determined by two arbitrary complex-valued Borel functions of modulus [Formula: see text] and two maps of [Formula: see text] into itself. Under some additional analytic conditions on [Formula: see text] and [Formula: see text], we discover that there are only two equivalence classes of Bogolyubov endomorphisms: one corresponds to the case [Formula: see text] and the other to the case [Formula: see text]. Finally, we close the paper by constructing some examples in the one- and multi-dimensional cases.


Author(s):  
Masaki Kurokiba ◽  
Takayoshi Ogawa

We consider a singular limit problem for the Cauchy problem for the Patlak–Keller–Segel equation in a scaling-critical function space. It is shown that a solution to the Patlak–Keller–Segel system in a scaling-critical function space involving the class of bounded mean oscillation converges strongly, as the relaxation time parameter $$\tau \rightarrow \infty$$, to a solution to the drift-diffusion system of parabolic-elliptic type (the simplified Keller–Segel model). For the proof, we show generalized maximal regularity for the heat equation in the homogeneous Besov spaces and the class of bounded mean oscillation, and we utilize these systematically, as well as the continuous embeddings between the interpolation spaces $$\dot{B}^s_{q,\sigma}({\mathbb{R}}^n)$$ and $$\dot{F}^s_{q,\sigma}({\mathbb{R}}^n)$$, for the proof of the singular limit. In particular, end-point maximal regularity in BMO and the space-time modified class introduced by Koch–Tataru are utilized in our proof.
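For context, one common scaling of the system behind this limit reads as follows (our notation; the authors' precise formulation may differ):

$$\partial_t u_\tau = \Delta u_\tau - \nabla\cdot(u_\tau\nabla\psi_\tau),\qquad \tau^{-1}\partial_t \psi_\tau = \Delta\psi_\tau - \psi_\tau + u_\tau,$$

so that formally letting $$\tau \rightarrow \infty$$ removes the time derivative in the chemoattractant equation and leaves the parabolic-elliptic (simplified) model

$$\partial_t u = \Delta u - \nabla\cdot(u\nabla\psi),\qquad 0 = \Delta\psi - \psi + u.$$

The content of the paper is to justify this formal limit rigorously in scaling-critical spaces.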


2021 ◽  
Vol 2021 (12) ◽  
pp. 124010
Author(s):  
Ryo Karakida ◽  
Kazuki Osawa

Natural gradient descent (NGD) helps to accelerate the convergence of gradient descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher information converge sufficiently fast in practice. Nevertheless, it remains unclear from the theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In function space, the training dynamics with the approximate Fisher information are identical to those with the exact Fisher information, and they converge quickly. The fast convergence holds for layer-wise approximations: for instance, for the block-diagonal approximation in which each block corresponds to a layer, as well as for the block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these different approximations have an isotropic gradient in function space, and this plays a fundamental role in achieving the same convergence properties in training. Thus, the current study gives a novel and unified theoretical foundation with which to understand NGD methods in deep learning.
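A minimal, self-contained sketch of the function-space picture, using a linearized model as a stand-in for the infinite-width (NTK-regime) network; the dimensions and variable names here are our own illustration, not the paper's setup. For squared loss the Fisher is F = JᵀJ, and exact NGD moves the outputs isotropically toward the targets, reaching them in one step with unit learning rate when J has full row rank:

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_params = 5, 50  # overparameterized, so J has full row rank

J = rng.standard_normal((n_data, n_params))  # fixed Jacobian (NTK regime)
y = rng.standard_normal(n_data)              # regression targets
f = rng.standard_normal(n_data)              # initial network outputs

# Squared loss: gradient = J^T (f - y); Fisher F = J^T J is rank-deficient
# here, so we use the Moore-Penrose pseudoinverse.
F_pinv = np.linalg.pinv(J.T @ J)

# The parameter update theta -= eta * F^+ J^T (f - y) acts on the outputs as
# f -> f - eta * (J F^+ J^T) (f - y), and J F^+ J^T = I_n when J has full
# row rank: the function-space gradient is isotropic, so eta = 1 lands
# exactly on the targets.
eta = 1.0
f_next = f - eta * (J @ F_pinv @ J.T) @ (f - y)
print(np.max(np.abs(f_next - y)))  # ≈ 0 up to numerical precision
```

The paper's point is that suitable approximate Fishers (block-diagonal, block tri-diagonal, K-FAC, unit-wise) preserve this isotropy in function space, and hence the same fast convergence, even though they differ from F in parameter space.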


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Awad A. Bakery ◽  
Mustafa M. Mohammed

This article investigates the topological and geometric behavior of the variable exponent formal power series space, as well as the prequasi-ideal construction by s-numbers and this function space of complex variables. Upper bounds for the s-numbers of infinite series of the weighted nth-power forward and backward shift operators on this function space are investigated, with applications to some entire functions.


2021 ◽  
Author(s):  
Bowen Dai ◽  
Daniel E Mattox ◽  
Chris Bailey-Kellogg

Glycans are found across the tree of life with remarkable structural diversity enabling critical contributions to diverse biological processes, ranging from facilitating host-pathogen interactions to regulating mitosis & DNA damage repair. While functional motifs within glycan structures are largely responsible for mediating interactions, the contexts in which the motifs are presented can drastically impact these interactions and their downstream effects. Here, we demonstrate the first deep learning method to represent both local and global context in the study of glycan structure-function relationships. Our method, glyBERT, encodes glycans with a branched biochemical language and employs an attention-based deep language model to learn biologically relevant glycan representations focused on the most important components within their global structures. Applying glyBERT to a variety of prediction tasks confirms the value of capturing rich context-dependent patterns in this attention-based model: the same monosaccharides and glycan motifs are represented differently in different contexts and thereby enable improved predictive performance relative to the previous state-of-the-art approaches. Furthermore, glyBERT supports generative exploration of context-dependent glycan structure-function space, moving from one glycan to "nearby" glycans so as to maintain or alter predicted functional properties. In a case study application to altering glycan immunogenicity, this generative process reveals the learned contextual determinants of immunogenicity while yielding both known and novel, realistic glycan structures with altered predicted immunogenicity. In summary, modeling the context dependence of glycan motifs is critical for investigating overall glycan functionality and can enable further exploration of glycan structure-function space to inform new hypotheses and synthetic efforts.


2021 ◽  
Author(s):  
Craig Poskanzer ◽  
Stefano Anzellotti

In this paper we propose a novel technique to investigate the nonlinear interactions between brain regions that captures both the strength and the type of the functional relationship. Inspired by the field of functional analysis, we propose that the relationship between activity in two different brain areas can be viewed as a point in function space, identified by coordinates along an infinite set of basis functions. Using Hermite polynomials as basis functions, we estimate from fMRI data a truncated set of coordinates that serve as a "computational fingerprint," characterizing the interaction between two brain areas. We provide a proof of the convergence of the estimates in the limit, and we validate the method with simulations in which the ground truth is known, additionally showing that computational fingerprints detect statistical dependence even when correlation ("functional connectivity") is near zero. We then use computational fingerprints to examine the neural interactions with a seed region of choice: the Fusiform Face Area (FFA). Using k-means clustering on each voxel's computational fingerprint, we illustrate that the addition of the nonlinear basis functions allows for the discrimination of inter-regional interactions that are otherwise grouped together when only linear dependence is used. Finally, we show that regions in V5 and the medial occipital and temporal lobes exhibit significant nonlinear interactions with the FFA.
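A toy sketch of the core idea (our own construction, not the authors' pipeline; signals and names are illustrative): a purely quadratic coupling between two signals is invisible to correlation, but shows up directly in the Hermite coefficients that make up the fingerprint.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)                # "seed region" signal
y = x**2 + 0.1 * rng.standard_normal(2000)   # purely nonlinear coupling

# Truncated expansion y ≈ sum_k c_k He_k(x); the coefficient vector c is
# the "computational fingerprint" of the x -> y interaction.
V = hermevander(x, 3)                        # columns: He_0(x) .. He_3(x)
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# Linear correlation misses the dependence entirely...
print(abs(np.corrcoef(x, y)[0, 1]))          # near 0
# ...but the quadratic Hermite coefficient exposes it
# (y = He_0(x) + He_2(x) + noise, since He_2(x) = x^2 - 1).
print(np.round(c, 2))                        # ≈ [1, 0, 1, 0]
```

Probabilists' Hermite polynomials (`hermite_e`) are the natural choice here because they are orthogonal under the Gaussian weight, so for Gaussian inputs each coefficient isolates one order of dependence.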


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Awad A. Bakery ◽  
Elsayed A. E. Mohamed

In this article, we develop and study a new complex function space formed by varying the weights and exponents under a definite function. We investigate the geometric and topological characteristics of mapping ideals created using s-numbers and this complex function space. We also discuss the action of shift mappings on this complex function space. Finally, we introduce an extension of Caristi's fixed point theorem on this space.

