2020, Vol 2020 (2), pp. 76-84
Author(s):  
G.P. Ismatullaev ◽  
S.A. Bakhromov ◽  
R. Mirzakabilov

Author(s):  
Michael T Jury ◽  
Robert T W Martin

Abstract: We extend the Lebesgue decomposition of positive measures with respect to Lebesgue measure on the complex unit circle to the non-commutative (NC) multi-variable setting of (positive) NC measures. These are positive linear functionals on a certain self-adjoint subspace of the Cuntz–Toeplitz $C^{\ast}$-algebra, the $C^{\ast}$-algebra of the left creation operators on the full Fock space. This theory is fundamentally connected to the representation theory of the Cuntz and Cuntz–Toeplitz $C^{\ast}$-algebras; any $\ast$-representation of the Cuntz–Toeplitz $C^{\ast}$-algebra is obtained, up to unitary equivalence, by applying a Gelfand–Naimark–Segal construction to a positive NC measure. Our approach combines the theory of Lebesgue decomposition of sesquilinear forms in Hilbert space, Lebesgue decomposition of row isometries, free semigroup algebra theory, NC reproducing kernel Hilbert space theory, and NC Hardy space theory.
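For orientation, the commutative statement being generalized can be recorded in a short LaTeX sketch (classical case only; the paper's contribution replaces $\mu$ by a positive NC measure as described above):

```latex
% Classical Lebesgue decomposition on the unit circle: any positive
% Borel measure \mu splits against normalized Lebesgue measure m as
\[
  \mu \;=\; \mu_{\mathrm{ac}} + \mu_{\mathrm{s}},
  \qquad \mu_{\mathrm{ac}} \ll m,
  \qquad \mu_{\mathrm{s}} \perp m,
\]
% with both parts positive and the decomposition unique; the paper
% proves an analogue of this splitting for positive NC measures.
```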


2021, Vol 14 (2), pp. 201-214
Author(s):  
Danilo Croce ◽  
Giuseppe Castellucci ◽  
Roberto Basili

In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to achieve high performance while relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. In Machine Learning this problem is usually mitigated by semi-supervised methods or, more recently, by Transfer Learning in the context of deep architectures. One recent promising method to enable semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in the context of NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating on expressive low-dimensional embeddings; these are derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces and the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying this adversarial schema to a simple Multi-Layer Perceptron, a classifier trained on a subset amounting to 1% of the original training material achieves 92% accuracy. Moreover, on a complex classification schema, e.g., involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
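As a concrete illustration of the SS-GAN schema described above, here is a minimal PyTorch sketch operating on pre-computed sentence embeddings. The sizes (EMB_DIM, N_CLASSES, NOISE, HIDDEN), the learning rates, and the MLP shapes are illustrative placeholders, not the authors' configuration:

```python
# Minimal SS-GAN sketch over fixed sentence embeddings: the
# discriminator doubles as the K-class classifier, with an extra
# (K+1)-th class reserved for generated (fake) inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, N_CLASSES, NOISE, HIDDEN = 512, 6, 100, 256  # assumed sizes
FAKE = N_CLASSES  # index of the extra "fake" class

# Generator: noise -> fake "sentence embedding".
G = nn.Sequential(nn.Linear(NOISE, HIDDEN), nn.LeakyReLU(0.2),
                  nn.Linear(HIDDEN, EMB_DIM))
# Discriminator/classifier: embedding -> K + 1 logits.
D = nn.Sequential(nn.Linear(EMB_DIM, HIDDEN), nn.LeakyReLU(0.2),
                  nn.Linear(HIDDEN, N_CLASSES + 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def d_step(x_lab, y_lab, x_unl):
    """One discriminator update: supervised loss on the few labeled
    examples, unsupervised real-vs-fake loss on everything else."""
    opt_d.zero_grad()
    x_fake = G(torch.randn(x_unl.size(0), NOISE)).detach()
    loss_sup = F.cross_entropy(D(x_lab), y_lab)
    # Unlabeled data should be classified as "not fake" ...
    p_fake_unl = F.softmax(D(x_unl), dim=1)[:, FAKE]
    loss_unl = -torch.log(torch.clamp(1 - p_fake_unl, min=1e-8)).mean()
    # ... and generated data as the fake class.
    y_fake = torch.full((x_fake.size(0),), FAKE, dtype=torch.long)
    loss_fake = F.cross_entropy(D(x_fake), y_fake)
    (loss_sup + loss_unl + loss_fake).backward()
    opt_d.step()

def g_step(batch=64):
    """One generator update: push generated points away from the
    fake class so they resemble real embeddings."""
    opt_g.zero_grad()
    p_fake = F.softmax(D(G(torch.randn(batch, NOISE))), dim=1)[:, FAKE]
    loss_g = -torch.log(torch.clamp(1 - p_fake, min=1e-8)).mean()
    loss_g.backward()
    opt_g.step()
```

In this formulation the unlabeled data contributes only through the real-vs-fake term, which is what lets very small labeled subsets (e.g., 1% of the training material) still train a usable classifier.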


Author(s):  
Wei Jiang ◽  
Zhong Chen ◽  
Ning Hu ◽  
Yali Chen

Abstract: In recent years, the study of fractional differential equations has attracted considerable attention, and fractional differential equations with nonlocal boundary conditions are particularly difficult to solve. In this article, we propose a multiscale orthonormal bases collocation method for linear fractional-order nonlocal boundary value problems. In the construction of the algorithm, the solution is expanded in the multiscale orthonormal bases of a reproducing kernel space. The nonlocal boundary conditions are transformed into operator equations, which enter the computation of the collocation coefficients as constraint conditions. In theory, the convergence order and stability analysis of the proposed method are presented rigorously. Finally, numerical examples show the stability, accuracy and effectiveness of the method.
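To make the pipeline concrete, the sketch below applies the same recipe (expand in a basis, collocate the equation, append the nonlocal condition as an extra linear constraint) to a toy problem. The monomial basis, the model equation Caputo $D^{\alpha}u + u = f$, and all problem data are stand-ins; the paper uses multiscale orthonormal bases of a reproducing kernel space:

```python
# Illustrative collocation sketch (not the authors' algorithm): solve
# Caputo D^alpha u + u = f on [0, 1] with the nonlocal condition
# u(0) - gam * u(xi) = d, using a plain monomial basis.
import numpy as np
from math import gamma as Gamma

alpha, gam, xi, n = 0.5, 0.5, 0.3, 12       # assumed problem data

def phi(k, x):                               # basis function x^k
    return x ** k

def caputo_phi(k, x):                        # Caputo D^alpha of x^k, 0 < alpha < 1
    if k == 0:
        return np.zeros_like(np.asarray(x, dtype=float))
    return Gamma(k + 1) / Gamma(k + 1 - alpha) * x ** (k - alpha)

# Manufactured data so that u(x) = x^2 is the exact solution.
f = lambda x: Gamma(3) / Gamma(3 - alpha) * x ** (2 - alpha) + x ** 2
d = 0.0 - gam * xi ** 2                      # value of u(0) - gam*u(xi)

x = np.linspace(1e-6, 1.0, 2 * n)            # collocation nodes
A = np.stack([caputo_phi(k, x) + phi(k, x) for k in range(n)], axis=1)
b = f(x)

# The nonlocal boundary condition enters as one more linear equation.
A = np.vstack([A, [phi(k, 0.0) - gam * phi(k, xi) for k in range(n)]])
b = np.append(b, d)

c, *_ = np.linalg.lstsq(A, b, rcond=None)    # collocation coefficients
u = lambda t: sum(c[k] * t ** k for k in range(n))
print(abs(u(0.7) - 0.7 ** 2))                # residual vs. the exact x^2
```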


2021, Vol 2021 (1)
Author(s):  
Mohammed Al-Smadi ◽  
Nadir Djeddi ◽  
Shaher Momani ◽  
Shrideh Al-Omari ◽  
Serkan Araci

Abstract: Our aim in this paper is to present an attractive numerical approach that gives an accurate solution to the nonlinear fractional Abel differential equation, based on a reproducing kernel algorithm for a model endowed with a Caputo–Fabrizio fractional derivative. By means of this approach, we utilize the Gram–Schmidt orthogonalization process to create an orthonormal basis that leads to an appropriate solution in the Hilbert space $\mathcal{H}^{2}[a,b]$. We investigate and discuss the stability and convergence of the proposed method. The n-term series solution converges uniformly to the analytic solution. We present several numerical examples of potential interest to illustrate the reliability, efficacy, and performance of the method under the influence of the Caputo–Fabrizio derivative. The results obtained show the superiority of the reproducing kernel algorithm and its high accuracy, with little time and effort, in solving the fractional Abel-type model. In this direction, the proposed algorithm is therefore an alternative and systematic tool for analyzing the behavior of many nonlinear temporal fractional differential equations emerging in engineering, physics, and the sciences.
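The Gram–Schmidt step admits a compact linear-algebra formulation: kernel sections satisfy $\langle K(x_i,\cdot), K(x_j,\cdot)\rangle = K(x_i,x_j)$, so orthonormalizing them only requires the Gram matrix. A sketch under assumed data (a Gaussian kernel and uniform nodes as stand-ins for the reproducing kernel of $\mathcal{H}^{2}[a,b]$ and the paper's setup):

```python
# Minimal sketch: orthonormal basis from kernel sections via a
# Cholesky factor of the Gram matrix, then an n-term approximation.
import numpy as np

def K(x, y, ell=0.1):                       # assumed reproducing kernel
    return np.exp(-(x - y) ** 2 / (2 * ell ** 2))

a, b, n = 0.0, 1.0, 15
nodes = np.linspace(a, b, n)
G = K(nodes[:, None], nodes[None, :])       # Gram matrix K(x_i, x_j)

# psi_i = sum_j B[i, j] K(x_j, .) is orthonormal when B is the inverse
# lower Cholesky factor of G (Gram-Schmidt in matrix form).
B = np.linalg.inv(np.linalg.cholesky(G + 1e-10 * np.eye(n)))

def approx(f, x):
    """n-term approximation sum_i <f, psi_i> psi_i(x); the reproducing
    property <f, K(x_j, .)> = f(x_j) means only point values of f
    are needed."""
    coeffs = B @ f(nodes)                                # <f, psi_i>
    psi_x = B @ K(nodes[:, None], np.atleast_1d(x)[None, :])
    return coeffs @ psi_x

print(float(approx(np.sin, 0.5)[0]), np.sin(0.5))  # should be close
```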


Author(s):  
Nicolas Nagel ◽  
Martin Schäfer ◽  
Tino Ullrich

Abstract: We provide a new upper bound for sampling numbers $(g_n)_{n\in\mathbb{N}}$ associated with the compact embedding of a separable reproducing kernel Hilbert space into the space of square integrable functions. There are universal constants $C, c > 0$ (which are specified in the paper) such that
$$g_n^2 \le \frac{C\log(n)}{n}\sum_{k\ge\lfloor cn\rfloor}\sigma_k^2, \qquad n\ge 2,$$
where $(\sigma_k)_{k\in\mathbb{N}}$ is the sequence of singular numbers (approximation numbers) of the Hilbert–Schmidt embedding $\mathrm{Id}: H(K)\rightarrow L_2(D,\varrho_D)$. The algorithm which realizes the bound is a least squares algorithm based on a specific set of sampling nodes. These are constructed out of a random draw in combination with a down-sampling procedure coming from the celebrated proof of Weaver's conjecture, which was shown to be equivalent to the Kadison–Singer problem. Our result is non-constructive since we only show the existence of a linear sampling operator realizing the above bound. The general result can, for instance, be applied to the well-known situation of $H^s_{\mathrm{mix}}(\mathbb{T}^d)$ in $L_2(\mathbb{T}^d)$ with $s > 1/2$. We obtain the asymptotic bound
$$g_n \le C_{s,d}\, n^{-s}\log(n)^{(d-1)s+1/2},$$
which improves on very recent results by shortening the gap between upper and lower bound to $\sqrt{\log(n)}$. The result implies that for dimensions $d > 2$ no sparse grid sampling recovery method can perform asymptotically optimally.
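In stripped-down form, the estimator behind such bounds is ordinary least squares over the first m basis functions, evaluated at sampled nodes. The sketch below uses i.i.d. uniform nodes, a trigonometric basis, and a hypothetical target function as placeholders, and omits the Weaver-type down-sampling step that the actual proof requires:

```python
# Toy least squares recovery from random samples (illustration only;
# not the paper's node construction).
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 64                            # basis size, number of samples

def basis(x):                            # trig basis on [0, 1), a stand-in
    k = np.arange(1, m // 2 + 1)         # for the embedding's singular basis
    return np.hstack([np.ones((x.size, 1)),
                      np.cos(2 * np.pi * np.outer(x, k)),
                      np.sin(2 * np.pi * np.outer(x, k[:-1]))])

f = lambda x: np.exp(np.sin(2 * np.pi * x))    # assumed target function
x = rng.random(n)                              # random sampling nodes
A = basis(x)                                   # n x m design matrix
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)   # least squares fit

x_test = np.linspace(0, 1, 200, endpoint=False)
err = np.max(np.abs(basis(x_test) @ c - f(x_test)))
print(f"max error of the m-term least squares fit: {err:.2e}")
```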

