Generalized notions of sparsity and restricted isometry property. Part I: a unified framework

2019 ◽  
Vol 9 (1) ◽  
pp. 157-193 ◽  
Author(s):  
Marius Junge ◽  
Kiryung Lee

Abstract: The restricted isometry property (RIP) is an integral tool in the analysis of various inverse problems with sparsity models. Motivated by the applications of compressed sensing and dimensionality reduction of low-rank tensors, we propose generalized notions of sparsity and provide a unified framework for the corresponding RIP, in particular when combined with isotropic group actions. Our results extend an approach by Rudelson and Vershynin to a much broader context including commutative and non-commutative function spaces. Moreover, our Banach space notion of sparsity applies to affine group actions. The generalized approach in particular applies to high-order tensor products.
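For context, the classical RIP that these notions generalize is the following standard two-sided norm bound, stated here for sparse vectors (the paper replaces plain sparsity by generalized, group-action-based notions):

```latex
% Classical RIP of order s with constant \delta_s \in (0,1),
% for a measurement matrix A \in \mathbb{R}^{m \times n}:
(1-\delta_s)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_s)\,\|x\|_2^2
\quad \text{for all } x \in \mathbb{R}^n \text{ with } \|x\|_0 \le s.
```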

Author(s):  
Mei Sun ◽  
Jinxu Tao ◽  
Zhongfu Ye ◽  
Bensheng Qiu ◽  
Jinzhang Xu ◽  
...  

Background: To overcome the limitation of long scanning times, compressive sensing (CS) exploits the sparsity of the image in some transform domain to reduce the amount of acquired data. CS has therefore been widely used in magnetic resonance imaging (MRI) reconstruction.
Discussion: Blind compressed sensing can recover the image from highly undersampled measurements because the unknown sparsifying transform is adapted to the data rather than fixed a priori. Moreover, analysis-based blind compressed sensing often yields more efficient signal reconstruction, in less time, than synthesis-based blind compressed sensing. Recent experiments have also shown that a nonlocal low-rank property helps preserve image details in MRI reconstruction.
Methods: Here, we focus on analysis-based blind compressed sensing and combine it with an additional nonlocal low-rank constraint to obtain better MR images from fewer measurements. Instead of the nuclear norm, we exploit non-convex Schatten p-functionals for rank approximation.
Results & Conclusion: Simulation results indicate that the proposed approach outperforms previous state-of-the-art algorithms.
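As a small illustration of the rank surrogate mentioned in the Methods section, the following sketch evaluates a non-convex Schatten p-functional on a matrix; the function name and the smoothing constant `eps` are ours, for illustration only:

```python
import numpy as np

def schatten_p(M: np.ndarray, p: float = 0.5, eps: float = 1e-12) -> float:
    """Non-convex Schatten p-functional: sum_i sigma_i(M)**p, 0 < p < 1.

    For p = 1 this reduces to the nuclear norm; as p -> 0 it approaches
    the rank, which is why it is used as a tighter rank surrogate.
    """
    sigma = np.linalg.svd(M, compute_uv=False)  # singular values of M
    return float(np.sum((sigma + eps) ** p))    # eps avoids 0**p issues in gradients
```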


Author(s):  
Bernd Carl

Synopsis: In this paper we determine the asymptotic behaviour of entropy numbers of embedding maps between Besov sequence spaces and Besov function spaces. The results extend those of M. Š. Birman, M. Z. Solomjak and H. Triebel, originally formulated in the language of ε-entropy. It turns out that the characterization of embedding maps between Besov spaces by entropy numbers can be reduced to the characterization of certain diagonal operators by their entropy numbers. Finally, the entropy numbers are applied to the study of eigenvalues of operators acting on a Banach space which admit a factorization through embedding maps between Besov spaces. The statements of this paper follow from results recently proved elsewhere by the author.
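For orientation, the (dyadic) entropy numbers used throughout this line of work are standardly defined as follows; this is the classical definition, not a result specific to this paper:

```latex
% k-th (dyadic) entropy number of a bounded operator T : X \to Y,
% where B_X denotes the closed unit ball of X:
e_k(T) \;=\; \inf\bigl\{\varepsilon > 0 \;:\;
  T(B_X) \text{ can be covered by } 2^{k-1} \text{ balls in } Y \text{ of radius } \varepsilon \bigr\}.
```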


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joshua T. Vogelstein ◽  
Eric W. Bridgeford ◽  
Minh Tang ◽  
Da Zheng ◽  
Christopher Douville ◽  
...  

Abstract: To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach to extending principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection. The simplest version, Linear Optimal Low-Rank Projection, incorporates the class-conditional means. We prove, and substantiate with both synthetic and real data benchmarks, that Linear Optimal Low-Rank Projection and its generalizations lead to improved data representations for subsequent classification, while maintaining computational efficiency and scalability. Using multiple brain imaging datasets consisting of more than 150 million features, and several genomics datasets with more than 500,000 features, Linear Optimal Low-Rank Projection outperforms other scalable linear dimensionality reduction techniques in terms of accuracy, while requiring only a few minutes on a standard desktop computer.
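As a rough illustration of the idea described above, the sketch below builds a projection from the class-conditional mean difference plus leading principal directions and orthonormalizes them. The function name, the two-class restriction, and the exact composition are assumptions made for illustration; this is not the authors' reference implementation:

```python
import numpy as np

def lol_projection(X: np.ndarray, y: np.ndarray, d: int) -> np.ndarray:
    """Two-class sketch of a mean-augmented PCA projection (LOL-style)."""
    classes = np.unique(y)                                # assumes exactly two classes
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    delta = (means[0] - means[1])[:, None]                # class-conditional mean difference
    Xc = X - means[(y == classes[1]).astype(int)]         # center each sample by its class mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # principal directions of centered data
    basis = np.hstack([delta, Vt[: d - 1].T])             # mean direction first, then top PCs
    Q, _ = np.linalg.qr(basis)                            # orthonormalize the combined basis
    return Q[:, :d]                                       # columns span the d-dim projection
```

A dataset is then embedded via `X @ lol_projection(X, y, d)`, after which any standard classifier can be trained in the d-dimensional space.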


2021 ◽  
Vol 47 (2) ◽  
pp. 1-34
Author(s):  
Umberto Villa ◽  
Noemi Petra ◽  
Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
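The low-rank construction referred to above takes, in the standard form used in this literature, the shape of a Laplace approximation with a truncated spectral decomposition of the prior-preconditioned data-misfit Hessian; the following is a summary of that standard form consistent with the abstract, not a transcription of the library's internals:

```latex
% Laplace approximation of the posterior covariance around the MAP point:
\Gamma_{\text{post}} \approx
  \bigl(H_{\text{misfit}}(m_{\text{MAP}}) + \Gamma_{\text{prior}}^{-1}\bigr)^{-1}.
% With (\lambda_i, v_i) the dominant eigenpairs of the prior-preconditioned
% data-misfit Hessian \Gamma_{\text{prior}}^{1/2} H_{\text{misfit}} \Gamma_{\text{prior}}^{1/2},
% truncation at rank r yields the low-rank update
\Gamma_{\text{post}} \approx \Gamma_{\text{prior}}
  - \sum_{i=1}^{r} \frac{\lambda_i}{\lambda_i + 1}\,\tilde{v}_i \tilde{v}_i^{\mathsf{T}},
\qquad \tilde{v}_i = \Gamma_{\text{prior}}^{1/2} v_i.
```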


Author(s):  
Mikhail Krechetov ◽  
Jakub Marecek ◽  
Yury Maximov ◽  
Martin Takac

Low-rank methods for semi-definite programming (SDP) have recently gained considerable interest, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to use in practice because of their high computational cost. In this paper, we propose Entropy-Penalized Semi-Definite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote a low-rank solution. We show that EP-SDP problems admit an efficient numerical algorithm whose gradient computation has (almost) linear time complexity; this makes the approach useful for many machine learning and optimization problems. We illustrate its practical efficiency on several combinatorial optimization and machine learning problems.
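As a hedged sketch of what an entropy-type low-rank penalty can look like, the function below computes the Shannon entropy of a PSD matrix's normalized spectrum; spectra concentrated on few eigenvalues (low rank) have low entropy. The function, its normalization, and sign conventions are illustrative assumptions and may differ from the paper's exact penalty:

```python
import numpy as np

def entropy_penalty(S: np.ndarray, eps: float = 1e-12) -> float:
    """Entropy of the normalized spectrum of a symmetric PSD matrix S.

    Low-rank matrices have spectra concentrated on few eigenvalues and
    hence low entropy, so penalizing this quantity steers an SDP
    solution toward low rank.
    """
    lam = np.linalg.eigvalsh(S)             # eigenvalues of the symmetric matrix
    lam = np.clip(lam, eps, None)           # guard against tiny negatives / log(0)
    p = lam / lam.sum()                     # normalize spectrum to a distribution
    return float(-(p * np.log(p)).sum())    # Shannon entropy of the spectrum
```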


2000 ◽  
Vol 87 (2) ◽  
pp. 200
Author(s):  
Frédérique Watbled

Let $X$ be a Banach space compatible with its antidual $\overline{X^*}$, where $\overline{X^*}$ stands for the vector space $X^*$ where the multiplication by a scalar is replaced by the multiplication $\lambda \odot x^* = \overline{\lambda} x^*$. Let $H$ be a Hilbert space intermediate between $X$ and $\overline{X^*}$ with a scalar product compatible with the duality $(X,X^*)$, and such that $X \cap \overline{X^*}$ is dense in $H$. Let $F$ denote the closure of $X \cap \overline{X^*}$ in $\overline{X^*}$ and suppose $X \cap \overline{X^*}$ is dense in $X$. Let $K$ denote the natural map which sends $H$ into the dual of $X \cap F$ and for every Banach space $A$ which contains $X \cap F$ densely let $A'$ be the realization of the dual space of $A$ inside the dual of $X \cap F$. We show that if $\vert \langle K^{-1}a, K^{-1}b \rangle_H \vert \leq \parallel a \parallel_{X'} \parallel b \parallel_{F'}$ whenever $a$ and $b$ are both in $X' \cap F'$ then $(X, \overline{X^*})_{\frac12} = H$ with equality of norms. In particular this equality holds true if $X$ embeds in $H$ or $H$ embeds densely in $X$. As other particular cases we mention spaces $X$ with a $1$-unconditional basis and Köthe function spaces on $\Omega$ intermediate between $L^1(\Omega)$ and $L^\infty(\Omega)$.
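A concrete instance of the last remark, assuming the standard identification of the antidual of $L^p$ with $L^{p'}$ (classical complex interpolation, stated here for illustration):

```latex
% Example: X = L^p(\Omega) with 1 < p < \infty, so that \overline{X^*} \cong L^{p'}(\Omega),
% 1/p + 1/p' = 1, and H = L^2(\Omega) is intermediate between them. Then
\bigl(L^p(\Omega),\, L^{p'}(\Omega)\bigr)_{\frac{1}{2}} = L^2(\Omega)
% with equality of norms, in accordance with the theorem's conclusion
% (X, \overline{X^*})_{1/2} = H.
```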

