Matrix manifolds in MDA

2021 ◽  
pp. 45-88
Author(s):  
Nickolay Trendafilov ◽  
Michele Gallo
2019 ◽  
Vol 40 (2) ◽  
pp. 774-799
Author(s):  
Bahar Arslan ◽  
Vanni Noferini ◽  
Françoise Tisseur

Author(s):  
P.-A. Absil ◽  
R. Mahony ◽  
Rodolphe Sepulchre

2016 ◽  
Vol 136 (2) ◽  
pp. 523-543 ◽  
Author(s):  
Wen Huang ◽  
P.-A. Absil ◽  
K. A. Gallivan

Author(s):  
HIROSHI HASEGAWA

We develop a non-parametric information geometry on finite-dimensional matrix manifolds using Fréchet differentiation. Taking the simplest prototype Riemannian metric form ⟨A, A⟩_ρ = Tr Dg(ρ)(A) Dg*(ρ)(A), with the Fréchet derivative D acting on a pair of smooth functions g(ρ) and g*(ρ) of the density matrix ρ, we prove the WYD identification theorem: this metric is identified with the normalized Wigner–Yanase–Dyson skew information (the WYD metric) if and only if the metric form satisfies monotonicity under every stochastic mapping T on ρ and A: ⟨TA, TA⟩_Tρ ≤ ⟨A, A⟩_ρ. On this basis, we establish (a) a fine structure of the partial order on the set ℱ of all monotone metrics, such that ℱ_power ∪ ℱ_WYD forms a linearly ordered subset of ℱ with the same minimax bound, where ℱ_power (the power-mean metrics) interpolates between the Bures and the WYD metrics; (b) a characterization of the quasi-entropy S(ρ, σ) = Tr F(Δ_{σ,ρ})ρ induced by the metric-characterizing function f_WYD(x) of the WYD metric; (c) a torsion-free affine connection on the above metric that yields the quantum version of the ±α-connection, provided α ∈ [-3, 3].
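For reference, the object targeted by the identification theorem can be written out with the standard definitions from the quantum-information-geometry literature (these are textbook forms, not taken from the abstract; the paper's normalization conventions may differ):

```latex
% Wigner–Yanase–Dyson skew information (Dyson parameter 0 < p < 1):
I_p(\rho, A) \;=\; -\tfrac{1}{2}\,
  \operatorname{Tr}\bigl[\rho^{p}, A\bigr]\bigl[\rho^{1-p}, A\bigr],
\qquad
% normalized version (divides out the p-dependent scale):
\tilde I_p(\rho, A) \;=\; \frac{I_p(\rho, A)}{p(1-p)} .

% Operator-monotone function characterizing the WYD metric:
f_{\mathrm{WYD}}(x) \;=\;
  p(1-p)\,\frac{(x-1)^{2}}{(x^{p}-1)\,\bigl(x^{1-p}-1\bigr)} .

% The \pm\alpha-connections correspond to p = (1-\alpha)/2, so the
% stated range \alpha \in [-3, 3] matches the extended Dyson range
% p \in [-1, 2].
```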


2019 ◽  
Vol 31 (1) ◽  
pp. 156-175
Author(s):  
Tianci Liu ◽  
Zelin Shi ◽  
Yunpeng Liu

Modeling videos and image sets by linear subspaces has achieved great success in various visual recognition tasks. However, subspaces constructed from visual data are typically embedded in a high-dimensional ambient space, which limits the applicability of existing techniques. This letter proposes a geometry-aware framework for constructing lower-dimensional subspaces with maximum discriminative power from high-dimensional subspaces in the supervised setting. In particular, we use Riemannian geometry and optimization techniques on matrix manifolds to learn an orthogonal projection, and we show that the learning process can be formulated as an unconstrained optimization problem on a Grassmann manifold. With this natural geometry, any metric on the Grassmann manifold can in principle be used in our model. Experimental evaluations on several data sets show that our approach achieves significantly higher accuracy than other state-of-the-art algorithms.
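The core geometric step the letter relies on — unconstrained optimization on a Grassmann manifold — can be sketched with a generic Riemannian gradient step: project the Euclidean gradient onto the horizontal space, move, then retract back with a QR factorization. This is a minimal illustrative sketch; the function names, toy cost, and step size are assumptions, not the authors' algorithm:

```python
import numpy as np

def grassmann_step(Y, egrad, step=0.01):
    """One Riemannian gradient-descent step on the Grassmann manifold
    Gr(D, d), with a point represented by a D x d matrix Y, Y.T @ Y = I.
    `egrad` is the Euclidean gradient of the cost at Y."""
    # Project the Euclidean gradient onto the horizontal (tangent)
    # space: rgrad = (I - Y Y^T) egrad.
    rgrad = egrad - Y @ (Y.T @ egrad)
    # Step along the negative gradient, then retract back onto the
    # manifold with a thin QR factorization.
    Q, _ = np.linalg.qr(Y - step * rgrad)
    return Q

# Toy usage (illustrative): maximize Tr(Y^T A Y), i.e. find a dominant
# 2-dimensional invariant subspace of a symmetric matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T
Y, _ = np.linalg.qr(rng.standard_normal((6, 2)))
f0 = np.trace(Y.T @ A @ Y)
for _ in range(300):
    Y = grassmann_step(Y, egrad=-2.0 * A @ Y)  # Euclidean grad of -Tr(Y^T A Y)
f1 = np.trace(Y.T @ A @ Y)
print(bool(np.allclose(Y.T @ Y, np.eye(2))), f1 > f0)
```

A manifold-optimization library such as Pymanopt supplies the same two ingredients (horizontal projection and retraction) as built-ins; the sketch only makes explicit the geometry that the letter's formulation depends on.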

