Kernel Regression Imputation in Manifolds via Bi-Linear Modeling: The Dynamic-MRI Case

Author(s):  
Konstantinos Slavakis ◽  
Gaurav Shetty ◽  
Loris Cannelli ◽  
Gesualdo Scutari ◽  
Ukash Nakarmi ◽  
...  

This paper introduces a non-parametric approximation framework for imputation-by-regression on data with missing entries. The proposed framework, coined kernel regression imputation in manifolds (KRIM), is built on the hypothesis that features, generated by the measured data, lie close to an unknown-to-the-user smooth manifold. The feature space, in which the smooth manifold is embedded, takes the form of a reproducing kernel Hilbert space (RKHS). Aiming at concise data descriptions, KRIM identifies a small number of "landmark points" to define approximating "linear patches" in the feature space which mimic tangent spaces to smooth manifolds. This geometric information is infused into the design through a novel bi-linear model that allows for multiple approximating RKHSs. To effect imputation-by-regression, a bi-linear inverse problem is solved by an iterative algorithm with guaranteed convergence to a stationary point of a non-convex loss function. To showcase KRIM's modularity, the application of KRIM to dynamic magnetic resonance imaging (dMRI) is detailed, where reconstruction of images from severely under-sampled dMRI data is desired. Extensive numerical tests on synthetic and real dMRI data demonstrate the superior performance of KRIM over state-of-the-art approaches under several metrics and with a small computational footprint.
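The flavor of landmark-based imputation-by-regression can be illustrated with a minimal sketch. This is not the authors' KRIM algorithm (which uses a bi-linear model over multiple RKHSs and an iterative non-convex solver); it only shows the basic idea under simplifying assumptions: a single Gaussian kernel, randomly chosen landmark columns, and a masked ridge regression of each data row onto the landmark kernel features. All function names and parameters here are illustrative.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian kernel matrix between the rows of X and the rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def landmark_kernel_impute(Y, M, n_landmarks=4, gamma=0.5, lam=1e-3, seed=0):
    """Fill missing entries of Y (M == 1 observed, M == 0 missing) by
    regressing each row onto kernel features of a few landmark columns."""
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    Y0 = np.where(M == 1, Y, 0.0)              # zero-fill before picking landmarks
    idx = rng.choice(T, size=n_landmarks, replace=False)
    K = rbf(Y0.T, Y0.T[idx], gamma=gamma)      # (T, m) landmark kernel features
    Yhat = Y0.copy()
    for i in range(n):                         # per-row masked ridge regression
        obs = M[i] == 1
        Ko = K[obs]
        a = np.linalg.solve(Ko.T @ Ko + lam * np.eye(n_landmarks),
                            Ko.T @ Y[i, obs])
        Yhat[i, ~obs] = K[~obs] @ a            # impute only the missing entries
    return Yhat
```

Observed entries pass through untouched; only the masked positions are predicted from the landmark features.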

2021 ◽  
Author(s):  
Konstantinos Slavakis ◽  
Gaurav Shetty ◽  
Loris Cannelli ◽  
Gesualdo Scutari ◽  
Ukash Nakarmi ◽  
...  

This paper introduces a non-parametric kernel-based modeling framework for imputation by regression on data that are assumed to lie close to an unknown-to-the-user smooth manifold in a Euclidean space. The proposed framework, coined kernel regression imputation in manifolds (KRIM), needs no training data to operate. Aiming at computationally efficient solutions, KRIM utilizes a small number of "landmark" data-points to extract geometric information from the measured data via parsimonious affine combinations ("linear patches"), which mimic the concept of tangent spaces to smooth manifolds and take place in functional approximation spaces, namely reproducing kernel Hilbert spaces (RKHSs). Multiple complex RKHSs are combined in a data-driven way to surmount the obstacle of pin-pointing the "optimal" parameters of a single kernel through cross-validation. The extracted geometric information is incorporated into the design via a novel bi-linear data-approximation model, and the imputation-by-regression task takes the form of an inverse problem which is solved by an iterative algorithm with guaranteed convergence to a stationary point of the non-convex loss function. To showcase the modular character and wide applicability of KRIM, this paper highlights the application of KRIM to dynamic magnetic resonance imaging (dMRI), where reconstruction of high-resolution images from severely under-sampled dMRI data is desired. Extensive numerical tests on synthetic and real dMRI data demonstrate the superior performance of KRIM over state-of-the-art approaches under several metrics and with a small computational footprint.




2019 ◽  
Vol 11 (03n04) ◽  
pp. 1950006
Author(s):  
Hedi Xia ◽  
Hector D. Ceniceros

A new method for hierarchical clustering of data points is presented. It combines treelets, a particular multiresolution decomposition of data, with a mapping on a reproducing kernel Hilbert space. The proposed approach, called kernel treelets (KT), uses this mapping to go from a hierarchical clustering over attributes (the natural output of treelets) to a hierarchical clustering over data. KT effectively substitutes the correlation coefficient matrix used in treelets with a symmetric and positive semi-definite matrix efficiently constructed from a symmetric and positive semi-definite kernel function. Unlike most clustering methods, which require data sets to be numeric, KT can be applied to more general data and yields a multiresolution sequence of orthonormal bases on the data directly in feature space. The effectiveness and potential of KT in clustering analysis are illustrated with some examples.
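The core substitution the abstract describes, replacing the correlation matrix with a positive semi-definite kernel Gram matrix before clustering, can be sketched as follows. This is a hedged toy illustration, not the treelets algorithm itself: it uses a Gaussian kernel, the kernel-induced distance d(i,j)² = k(i,i) + k(j,j) − 2k(i,j), and naive single-linkage agglomeration over data points; all names are illustrative.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    # Symmetric PSD kernel matrix, standing in for the correlation matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_single_linkage(X, n_clusters, gamma=1.0):
    """Toy agglomerative clustering of data points using the distance
    induced by the kernel in feature space."""
    K = rbf_gram(X, gamma)
    diag = K.diagonal()
    D = np.sqrt(np.maximum(diag[:, None] + diag[None, :] - 2 * K, 0.0))
    clusters = [{i} for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):          # find the closest pair of clusters
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] |= clusters[b]              # merge and continue
        del clusters[b]
    return clusters
```

Because the kernel matrix is symmetric PSD by construction, the same machinery applies to non-numeric data for which a valid kernel can be defined.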


2017 ◽  
Vol 17 (15&16) ◽  
pp. 1292-1306 ◽  
Author(s):  
Rupak Chatterjee ◽  
Ting Yu

The support vector machine (SVM) is a popular machine learning classification method which produces a nonlinear decision boundary in a feature space by constructing linear boundaries in a transformed Hilbert space. It is well known that these algorithms, when executed on a classical computer, do not scale well with the size of the feature space, both in terms of data points and dimensionality. One of the most significant limitations of classical algorithms using non-linear kernels is that the kernel function has to be evaluated for all pairs of input feature vectors, which themselves may be of substantially high dimension. This can lead to excessive computation time during training and during prediction for a new data point. Here, we propose using both canonical and generalized coherent states to calculate specific nonlinear kernel functions. The key link will be the reproducing kernel Hilbert space (RKHS) property for SVMs that naturally arises from canonical and generalized coherent states. Specifically, we discuss the evaluation of radial kernels through a positive operator valued measure (POVM) on a quantum optical system based on canonical coherent states. A similar procedure may also lead to calculations of kernels not usually used in classical algorithms, such as those arising from generalized coherent states.
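The classical bottleneck the abstract refers to is easy to see in code: the radial (Gaussian) kernel must be evaluated for every pair of the n input vectors, costing O(n²·d) time and O(n²) memory for the Gram matrix. A minimal sketch (names illustrative):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """All-pairs radial kernel k(x, y) = exp(-gamma * ||x - y||^2).
    Every one of the n*(n-1)/2 distinct pairs is evaluated, which is
    the O(n^2 * d) cost that classical kernel SVMs pay."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    return np.exp(-gamma * np.maximum(d2, 0.0))      # clamp tiny negatives
```

Predicting for a single new point likewise requires n fresh kernel evaluations against the support vectors, which is what motivates hardware (here, quantum-optical) shortcuts for evaluating the kernel.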


Author(s):  
Yi Wang ◽  
Nan Xue ◽  
Xin Fan ◽  
Jiebo Luo ◽  
Risheng Liu ◽  
...  

Data stream analysis aims at extracting discriminative information for classification from continuously incoming samples. It is extremely challenging to detect novel data while updating the model in an efficient and stable fashion, especially for chunk data. This paper proposes a fast factorization-free kernel learning method to unify novelty detection and incremental learning for unlabeled chunk data streams in one framework. The proposed method constructs a joint reproducing kernel Hilbert space from known class centers by solving a linear system in kernel space. Naturally, unlabeled data can be detected and classified among multiple classes by a single decision model. Projecting samples into the discriminative feature space reduces to the product of two small kernel matrices, without needing time-consuming factorizations such as QR decomposition or singular value decomposition. Moreover, the insertion of a novel class can be treated as the addition of a new orthogonal basis to the existing feature space, resulting in fast and stable updating schemes. Both theoretical analysis and experimental validation on real-world datasets demonstrate that the proposed methods learn chunk data streams with significantly lower computational costs than the state of the art, and with comparable or superior accuracy.
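A toy sketch of the general pattern, a single decision model built from class centers by solving a small linear system in kernel space, where adding a class only appends a center and re-solves a c × c system (no QR or SVD), might look like this. This is an illustrative simplification under assumed Gaussian kernels, not the paper's method; all names and parameters are hypothetical.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian kernel matrix between the rows of X and the rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelCenterClassifier:
    """Factorization-free toy scheme: one decision model over class centers.
    Inserting a class appends a center and re-solves a tiny c x c system."""
    def __init__(self, gamma=1.0, lam=1e-6):
        self.gamma, self.lam, self.centers = gamma, lam, []

    def add_class(self, samples):
        self.centers.append(samples.mean(axis=0))
        C = np.asarray(self.centers)
        Kcc = rbf(C, C, self.gamma) + self.lam * np.eye(len(C))
        self.W = np.linalg.solve(Kcc, np.eye(len(C)))   # small linear system

    def predict(self, X):
        # Product of two small kernel-derived matrices: (n x c) @ (c x c).
        scores = rbf(X, np.asarray(self.centers), self.gamma) @ self.W
        return scores.argmax(axis=1)
```

A sample whose maximum score falls below a threshold could additionally be flagged as novel, mirroring the novelty-detection half of the framework.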


AI Magazine ◽  
2019 ◽  
Vol 40 (3) ◽  
pp. 41-57
Author(s):  
Manisha Mishra ◽  
Pujitha Mannaru ◽  
David Sidoti ◽  
Adam Bienkowski ◽  
Lingyi Zhang ◽  
...  

A synergy between AI and the Internet of Things (IoT) will significantly improve sense-making, situational awareness, proactivity, and collaboration. However, the key challenge is to identify the underlying context within which humans interact with smart machines. Knowledge of the context facilitates proactive allocation among members of a human–smart machine (agent) collective that balances autonomy with human interaction, without displacing humans from their supervisory role of ensuring that the system goals are achievable. In this article, we address four research questions as a means of advancing toward proactive autonomy: how to represent the interdependencies among the key elements of a hybrid team; how to rapidly identify and characterize critical contextual elements that require adaptation over time; how to allocate system tasks among machines and agents for superior performance; and how to enhance the performance of machine counterparts to provide intelligent and proactive courses of action while considering the cognitive states of human operators. The answers to these four questions help us to illustrate the integration of AI and IoT applied to the maritime domain, where we define context as an evolving multidimensional feature space for heterogeneous search, routing, and resource allocation in uncertain environments via proactive decision support systems.


Author(s):  
Michael T Jury ◽  
Robert T W Martin

Abstract: We extend the Lebesgue decomposition of positive measures with respect to Lebesgue measure on the complex unit circle to the non-commutative (NC) multi-variable setting of (positive) NC measures. These are positive linear functionals on a certain self-adjoint subspace of the Cuntz–Toeplitz $C^{\ast}$-algebra, the $C^{\ast}$-algebra of the left creation operators on the full Fock space. This theory is fundamentally connected to the representation theory of the Cuntz and Cuntz–Toeplitz $C^{\ast}$-algebras; any $\ast$-representation of the Cuntz–Toeplitz $C^{\ast}$-algebra is obtained (up to unitary equivalence) by applying a Gelfand–Naimark–Segal construction to a positive NC measure. Our approach combines the theory of Lebesgue decomposition of sesquilinear forms in Hilbert space, Lebesgue decomposition of row isometries, free semigroup algebra theory, NC reproducing kernel Hilbert space theory, and NC Hardy space theory.


Author(s):  
Nicolas Nagel ◽  
Martin Schäfer ◽  
Tino Ullrich

Abstract: We provide a new upper bound for sampling numbers $(g_n)_{n\in \mathbb{N}}$ associated with the compact embedding of a separable reproducing kernel Hilbert space into the space of square integrable functions. There are universal constants $C, c > 0$ (which are specified in the paper) such that
$$g^2_n \le \frac{C\log (n)}{n}\sum_{k\ge \lfloor cn \rfloor} \sigma_k^2, \quad n\ge 2,$$
where $(\sigma_k)_{k\in \mathbb{N}}$ is the sequence of singular numbers (approximation numbers) of the Hilbert–Schmidt embedding $\mathrm{Id}: H(K) \rightarrow L_2(D,\varrho_D)$. The algorithm which realizes the bound is a least squares algorithm based on a specific set of sampling nodes. These are constructed out of a random draw in combination with a down-sampling procedure coming from the celebrated proof of Weaver's conjecture, which was shown to be equivalent to the Kadison–Singer problem. Our result is non-constructive since we only show the existence of a linear sampling operator realizing the above bound. The general result can for instance be applied to the well-known situation of $H^s_{\text{mix}}(\mathbb{T}^d)$ in $L_2(\mathbb{T}^d)$ with $s > 1/2$. We obtain the asymptotic bound
$$g_n \le C_{s,d}\, n^{-s}\log (n)^{(d-1)s+1/2},$$
which improves on very recent results by shortening the gap between upper and lower bound to $\sqrt{\log (n)}$. The result implies that for dimensions $d > 2$ no sparse grid sampling recovery method performs asymptotically optimally.

