Harmonic analysis of network systems via kernels and their boundary realizations

2021 · Vol 0 (0) · pp. 0
Author(s): Palle Jorgensen, James Tian

With a view to applications to harmonic and stochastic analysis of infinite network/graph models, we introduce new tools for realizations and transforms of positive definite (p.d.) kernels \(K\) and their associated reproducing kernel Hilbert spaces. With these we establish two kinds of factorizations: (i) Probabilistic: starting with a p.d. kernel \(K\), we analyze the associated Gaussian processes \(V\); properties of the Gaussian processes are derived from certain factorizations of \(K\), which arises as the covariance kernel of \(V\). (ii) Geometric analysis: we discuss families of measure spaces arising as boundaries for \(K\). Our results entail an analysis of a partial order on families of p.d. kernels, a duality for operators and frames, optimization, Karhunen–Loève expansions, and factorizations. Applications include a new boundary analysis for the Drury–Arveson kernel and for certain fractals arising as iterated function systems, and an identification of optimal feature spaces in machine learning models.
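The probabilistic factorization (i) has a concrete finite-dimensional shadow. Below is a minimal numerical sketch, not the authors' construction: a stand-in p.d. kernel (an RBF kernel, chosen purely for illustration) is factored spectrally as \(K = WW^{T}\), which is a discrete Karhunen–Loève expansion of a Gaussian process \(V\) with covariance \(K\).

```python
# Minimal sketch (illustrative kernel and parameters, not from the paper):
# factor a p.d. kernel spectrally and realize a Gaussian process with that
# covariance via a discrete Karhunen-Loeve expansion.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

# Gaussian (RBF) kernel as a stand-in p.d. kernel.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)

# Spectral factorization K = W W^T with W = U * sqrt(lambda).
lam, U = np.linalg.eigh(K)
W = U * np.sqrt(np.clip(lam, 0.0, None))   # clip guards tiny negative round-off

# Karhunen-Loeve: V = sum_j sqrt(lam_j) xi_j u_j with i.i.d. N(0,1) xi_j.
xi = rng.standard_normal((len(x), 10_000))
V = W @ xi                                 # each column is one sample path of V

emp_cov = V @ V.T / xi.shape[1]
print(np.max(np.abs(emp_cov - K)))         # small: Cov(V) recovers K
```

Other factorizations of \(K\) (e.g., Cholesky) produce a different \(W\) but the same process law; the duality entry further below returns to this point.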

2021 · Vol 41 (3) · pp. 283–300
Author(s): Daniel Alpay, Palle E.T. Jorgensen

We give two new global and algorithmic constructions of the reproducing kernel Hilbert space associated with a positive definite kernel. We further present a general positive definite kernel setting using bilinear forms and provide new examples. Our results cover the case of measurable positive definite kernels, with applications to both stochastic analysis and metric geometry, illustrated by a number of examples.
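For orientation, the classical construction both of these papers build on can be stated computationally: the RKHS is the completion of finite sums \(f = \sum_i c_i K(\cdot, x_i)\) under the inner product determined by \(\langle K(\cdot,x), K(\cdot,y)\rangle = K(x,y)\). The sketch below instantiates this with a Gaussian kernel (the kernel, points, and coefficients are illustrative, not from the paper) and checks the reproducing property and the RKHS norm.

```python
# Minimal sketch of the classical RKHS construction (Moore-Aronszajn style),
# not the paper's new global/algorithmic constructions: functions are finite
# sums f = sum_i c_i K(., x_i) with <K(., x), K(., y)> = K(x, y).
import numpy as np

def k(x, y, sigma=0.5):
    """Gaussian p.d. kernel; any p.d. kernel would do here."""
    return np.exp(-0.5 * (x - y) ** 2 / sigma ** 2)

centers = np.array([0.0, 0.3, 1.0])      # illustrative points x_i
c = np.array([1.0, -2.0, 0.5])           # illustrative coefficients of f

def f(x):
    """Evaluate f = sum_i c_i K(., x_i) at a point x."""
    return float(c @ k(centers, x))

# Reproducing property: <f, K(., y)> = sum_i c_i K(x_i, y) = f(y).
y = 0.7
inner = c @ k(centers, y)                # inner product with the section K(., y)
print(np.isclose(inner, f(y)))           # True, by the defining identity

# Squared RKHS norm of f: c^T G c with Gram matrix G_ij = K(x_i, x_j).
G = k(centers[:, None], centers[None, :])
print(c @ G @ c)                         # nonnegative, since K is p.d.
```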


2021 · Vol 15 (5)
Author(s): Monika Drewnik, Tomasz Miller, Zbigniew Pasternak-Winiarski

The aim of the paper is to create a link between the theory of reproducing kernel Hilbert spaces (RKHS) and the notion of a unitary representation of a group or of a groupoid. More specifically, it is demonstrated on one hand how to construct a positive definite kernel and an RKHS for a given unitary representation of a group(oid), and on the other hand how to retrieve the unitary representation of a group or a groupoid from a positive definite kernel defined on that group(oid) with the help of the Moore–Aronszajn theorem. The kernel constructed from the group(oid) representation is inspired by the kernel defined in terms of the convolution of functions on a locally compact group. Several illustrative examples of reproducing kernels related to unitary representations of groupoids are discussed in detail. The paper concludes with a brief overview of possible applications of the proposed constructions.
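One direction of this correspondence is simple to make concrete. In the hedged finite example below (a plain group rather than a groupoid, with the regular representation of the cyclic group \(\mathbb{Z}_n\) as an assumed stand-in), the kernel \(K(g,h) = \langle \pi(g)v, \pi(h)v \rangle\) is built from a unitary representation \(\pi\) and verified to be positive definite and of convolution type.

```python
# Minimal sketch of one direction of the correspondence: a unitary
# representation pi yields the p.d. kernel K(g, h) = <pi(g) v, pi(h) v>.
# Here pi is the regular representation of Z_n acting by coordinate shifts;
# the group, n, and v are illustrative assumptions.
import numpy as np

n = 6
v = np.array([1.0, 2.0, 0.0, -1.0, 0.5, 3.0])

def pi(g):
    """Unitary shift operator: (pi(g) v)[i] = v[(i - g) mod n]."""
    return np.roll(np.eye(n), g, axis=0)

# Kernel matrix K[g, h] = <pi(g) v, pi(h) v>: a Gram matrix, hence p.d.
K = np.array([[(pi(g) @ v) @ (pi(h) @ v) for h in range(n)]
              for g in range(n)])

# Positive definiteness: all eigenvalues are nonnegative.
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))   # True

# Convolution-type invariance: K(g, h) depends only on h - g.
print(np.allclose(K[1, 3], K[0, 2]))             # True
```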


2019 · Vol 39 (4) · pp. 497–541
Author(s): Palle Jorgensen, Feng Tian

We establish a duality for two factorization questions, one for general positive definite (p.d.) kernels \(K\), and the other for Gaussian processes, say \(V\). The latter notion, for Gaussian processes, is stated via Itô integration. Our approach to factorization for p.d. kernels is intuitively motivated by matrix factorizations, but in infinite dimensions subtle measure-theoretic issues must be addressed. Consider a given p.d. kernel \(K\), presented as the covariance kernel of a Gaussian process \(V\). We then give an explicit duality for these two seemingly different notions of factorization, for the p.d. kernel \(K\) vs. for the Gaussian process \(V\). Our result is in the form of an explicit correspondence: the analytic data that determine the variety of factorizations for \(K\) are exactly those that yield factorizations for \(V\). Examples and applications are included: point processes, sampling schemes, constructive discretization, graph Laplacians, and boundary-value problems.
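The matrix intuition behind this duality can be previewed in finite dimensions; the paper's actual setting is infinite-dimensional, with the Gaussian side expressed through Itô integration. In the sketch below, a Brownian-motion-type kernel (an illustrative assumption) is factored in two different ways, and each factorization \(K = FF^{T}\) yields a realization \(V = FZ\) of the same Gaussian process.

```python
# Finite-dimensional caricature of the factorization duality: each
# factorization K = F F^T yields a realization V = F Z with Z ~ N(0, I)
# and Cov(V) = K. Kernel and grid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.025, 1.0, 40)
K = np.minimum(x[:, None], x[None, :])     # Brownian-motion covariance kernel

# Factorization 1: Cholesky, K = L L^T (lower triangular).
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))   # jitter: numerical guard

# Factorization 2: symmetric square root, K = S S^T with S = U sqrt(D) U^T.
lam, U = np.linalg.eigh(K)
S = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T

# Both factorizations realize the same Gaussian process law.
for F in (L, S):
    Z = rng.standard_normal((len(x), 20_000))
    V = F @ Z
    print(np.max(np.abs(V @ V.T / Z.shape[1] - K)))  # both small
```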


2015 · Vol 27 (6) · pp. 1294–1320
Author(s): Shao-Gao Lv

Gradient learning (GL), initially proposed by Mukherjee and Zhou (2006), has proved to be a powerful tool for conducting variable selection and dimension reduction simultaneously. This approach gives a nonparametric gradient estimator built from positive definite kernels, without estimating the true function itself, so it has wide applicability and allows for complex interactions between predictors. In terms of theory, however, existing generalization bounds for GL depend on capacity-independent techniques, and the capacity of kernel classes cannot be characterized completely. This letter therefore considers GL estimators that minimize the empirical convex risk. We prove generalization bounds for such estimators with rates that are faster than previous results. Moreover, we provide a novel upper bound for Rademacher chaos complexity of order two, which also plays an important role in general pairwise-type estimation problems, including ranking and scoring.
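A minimal form of the GL estimator can be sketched as follows. Assuming Gaussian kernels for both the RKHS and the pairwise locality weights (all sizes and parameters here are illustrative), the representer theorem writes each gradient component as \(f_l = \sum_k C_{kl} K(x_k, \cdot)\), so the empirical convex risk becomes a quadratic in the coefficient matrix \(C\), solved below from its stationarity condition.

```python
# Minimal sketch of gradient learning in the spirit of Mukherjee and Zhou
# (2006): estimate the gradient directly, without estimating the function,
# by minimizing the weighted pairwise least-squares risk
#   sum_ij w_ij (y_i - y_j + f(x_i).(x_j - x_i))^2 + lam * ||f||_K^2.
# Data, kernels, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, d = 30, 2
X = rng.uniform(-1, 1, size=(n, d))
y = X[:, 0] ** 2 + 3 * X[:, 1]          # true gradient: (2*x1, 3)

sigma_k, sigma_w, lam = 0.7, 0.5, 1e-3
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
G = np.exp(-0.5 * sq / sigma_k ** 2)    # RKHS kernel Gram matrix
W = np.exp(-0.5 * sq / sigma_w ** 2)    # pairwise locality weights w_ij

D = X[None, :, :] - X[:, None, :]       # D[i, j] = x_j - x_i
A = np.einsum('ij,ijp,ijq->ipq', W, D, D)                 # A_i = sum_j w_ij D D^T
B = np.einsum('ij,ij,ijp->ip', W, y[:, None] - y[None, :], D)

# Stationarity in the representer coefficients C (rows C[k] in R^d):
#   sum_k G[i, k] A_i C[k] + lam * C[i] = -B[i]   for each i.
M = np.einsum('ik,ipq->ipkq', G, A).reshape(n * d, n * d)
M += lam * np.eye(n * d)
C = np.linalg.solve(M, -B.ravel()).reshape(n, d)

grad_hat = G @ C                        # estimated gradients at the x_i
grad_true = np.column_stack([2 * X[:, 0], np.full(n, 3.0)])
print(np.mean(np.abs(grad_hat - grad_true)))   # mean absolute gradient error
```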


Author(s): David Kozak, Scott Holladay, Gregory Fasshauer

We provide a comprehensive framework for forecasting five-minute load using Gaussian processes with a positive definite kernel designed specifically for load forecasts. Gaussian processes are probabilistic, enabling us to draw samples from a posterior distribution and provide rigorous uncertainty estimates to complement the point forecast, an important benefit for forecast consumers. As part of the modeling process, we discuss various methods for dimension reduction and explore their use in effectively incorporating weather data into the load forecast. We provide guidance for every step of the modeling process, from model construction through optimization and model combination. We present results on data from the PJM ISO for various periods in 2018. The process is transparent, mathematically motivated, and reproducible. The resulting model provides a probability density of five-minute forecasts for the next 24 hours.
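The sketch below shows the generic shape of such a pipeline rather than the authors' model: their load-specific kernel is replaced by an assumed daily-periodic-plus-trend composite, and the standard GP posterior supplies the point forecast, sampled paths, and a 95% band.

```python
# Generic GP load-forecasting sketch (the authors' kernel is not reproduced
# here): a daily-periodic-times-local plus smooth-trend kernel, assumed for
# illustration, with synthetic load data standing in for PJM ISO data.
import numpy as np

rng = np.random.default_rng(3)

def kernel(t1, t2, per=288.0):           # 288 five-minute steps per day
    dt = t1[:, None] - t2[None, :]
    periodic = np.exp(-2 * np.sin(np.pi * dt / per) ** 2 / 0.5 ** 2)
    local = np.exp(-0.5 * dt ** 2 / 2000.0 ** 2)
    trend = np.exp(-0.5 * dt ** 2 / 600.0 ** 2)
    return periodic * local + 0.3 * trend

t_train = np.arange(0.0, 864.0)          # three days of five-minute history
load = 100 + 20 * np.sin(2 * np.pi * t_train / 288) + rng.normal(0, 2, t_train.size)
t_test = np.arange(864.0, 1152.0)        # next 24 hours (288 steps)

K = kernel(t_train, t_train) + 4.0 * np.eye(t_train.size)  # + noise variance
Ks = kernel(t_train, t_test)
Kss = kernel(t_test, t_test)

# Standard GP posterior: mean and covariance of the forecast.
alpha = np.linalg.solve(K, load)
mean = Ks.T @ alpha
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)

# Probabilistic output, as the abstract stresses: sample paths and a 95% band.
L = np.linalg.cholesky(cov + 1e-6 * np.eye(t_test.size))    # jitter for stability
paths = mean[:, None] + L @ rng.standard_normal((t_test.size, 5))
band = 1.96 * np.sqrt(np.clip(np.diag(cov), 0.0, None))
print(mean[:3], band[:3])
```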

