Riemannian Optimization: Recently Published Documents

TOTAL DOCUMENTS: 70 (last five years: 36)
H-INDEX: 11 (last five years: 3)

Author(s): Bin Gao, P.-A. Absil

Abstract: The low-rank matrix completion problem can be solved by Riemannian optimization on a fixed-rank manifold. However, a drawback of the known approaches is that the rank parameter must be fixed a priori. In this paper, we consider the optimization problem on the set of bounded-rank matrices. We propose a Riemannian rank-adaptive method consisting of fixed-rank optimization, a rank-increase step, and a rank-reduction step, and we evaluate its performance on the low-rank matrix completion problem. Numerical experiments on synthetic and real-world datasets show that the proposed rank-adaptive method compares favorably with state-of-the-art algorithms. In addition, the experiments show that each component of the rank-adaptive framework can be incorporated separately into existing algorithms to improve their performance.
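The fixed-rank/rank-increase loop described above can be sketched in a few lines. This is a hedged toy illustration, not the authors' algorithm: it uses plain gradient descent on a factorization U V^T in place of true fixed-rank Riemannian steps, and the function name, parameters, and rank-increase rule are invented for this sketch (the rank-reduction step is omitted).

```python
import numpy as np

def rank_adaptive_completion(M, mask, r0=1, r_max=5, inner_iters=300, lr=0.01, tol=1e-6):
    """Toy rank-adaptive loop: optimize at a fixed rank, then grow the rank
    until the masked residual is small or r_max is reached."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, r0))
    V = 0.1 * rng.standard_normal((n, r0))
    for r in range(r0, r_max + 1):
        for _ in range(inner_iters):                 # fixed-rank phase
            R = mask * (U @ V.T - M)                 # residual on observed entries
            U, V = U - lr * (R @ V), V - lr * (R.T @ U)
        err = np.linalg.norm(mask * (U @ V.T - M))
        if err < tol * np.linalg.norm(mask * M) or r == r_max:
            break
        # rank-increase step: append a small random direction to each factor
        U = np.hstack([U, 0.01 * rng.standard_normal((m, 1))])
        V = np.hstack([V, 0.01 * rng.standard_normal((n, 1))])
    return U, V
```

The point of the sketch is the outer structure: the inner solver never has to know the final rank in advance.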


2021
Author(s): Oualid Benkarim, Casey Paquola, Bo-yong Park, Jessica Royer, Raúl Rodríguez-Cruces, ...

Ongoing brain function is largely determined by the underlying wiring of the brain, but the specific rules governing this relationship remain unknown. Emerging literature has suggested that functional interactions between brain regions emerge from the structural connections through mono- as well as polysynaptic mechanisms. Here, we propose a novel approach based on diffusion maps and Riemannian optimization to emulate this dynamic mechanism in the form of random walks on the structural connectome and to predict functional interactions as a weighted combination of these random walks. Our proposed approach was evaluated in two different cohorts of healthy adults (Human Connectome Project, HCP; Microstructure-Informed Connectomics, MICs). Our approach outperformed existing approaches, and performance plateaued at approximately the third random walk. At the macroscale, we found that the largest number of walks was required in nodes of the default mode and frontoparietal networks, underscoring the increasing relevance of polysynaptic communication mechanisms in transmodal cortical networks compared to primary and unimodal systems.
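The "weighted combination of random walks" idea can be illustrated with a minimal sketch. This is not the authors' pipeline: it builds k-step transition matrices from a structural matrix S and fits the combination weights with ordinary least squares instead of Riemannian optimization; all names are hypothetical.

```python
import numpy as np

def predict_fc_from_walks(S, FC, K=3):
    """Model functional connectivity FC as a weighted sum of k-step
    random-walk matrices derived from the structural connectome S."""
    deg = S.sum(axis=1, keepdims=True)
    P = S / np.maximum(deg, 1e-12)             # row-stochastic transition matrix
    walks, Pk = [], np.eye(S.shape[0])
    for _ in range(K):
        Pk = Pk @ P                            # k-step transition probabilities
        walks.append(Pk.ravel())
    X = np.stack(walks, axis=1)                # one regressor column per walk length
    w, *_ = np.linalg.lstsq(X, FC.ravel(), rcond=None)
    pred = (X @ w).reshape(FC.shape)
    return w, pred
```

With K=3 the fit already includes the third walk at which the paper reports performance plateauing; the least-squares fit is a stand-in for whatever constrained estimator one actually prefers.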


Author(s): Vanni Noferini, Federico Poloni

Abstract: We study the problem of finding the nearest $\varOmega$-stable matrix to a given matrix A, i.e., the nearest matrix with all its eigenvalues in a prescribed closed set $\varOmega$. Distances are measured in the Frobenius norm. An important special case is finding the nearest Hurwitz- or Schur-stable matrix, which has applications in systems theory. We describe a reformulation of the task as an optimization problem on the Riemannian manifold of orthogonal (or unitary) matrices. The problem can then be solved using standard methods from the theory of Riemannian optimization. The resulting algorithm is remarkably fast on small- and medium-scale matrices, and it directly returns a Schur factorization of the minimizer, sidestepping the numerical difficulties associated with eigenvalues of high multiplicity.
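A simplified reading of the reformulation: fix an orthogonal Q, form Q^T A Q, and project it onto upper-triangular matrices whose eigenvalues lie in the stable set; optimizing the residual over Q then yields the nearest stable matrix. The helper below illustrates only this inner projection for the Hurwitz case, assumes real eigenvalues for simplicity, and is not the paper's implementation (all names are invented).

```python
import numpy as np

def nearest_hurwitz_objective(A, Q):
    """Given an orthogonal Q, build a Schur-like candidate: keep the upper
    triangle of Q.T @ A @ Q and clip the diagonal (the eigenvalues, assuming
    they are real) into the closed left half-plane. Returns the Frobenius
    distance and the candidate stable matrix."""
    B = Q.T @ A @ Q
    T = np.triu(B)                                    # discard the strictly lower part
    d = T.diagonal().copy()
    T[np.diag_indices_from(T)] = np.minimum(d, 0.0)   # project eigenvalues to Re <= 0
    X = Q @ T @ Q.T                                   # candidate nearest Hurwitz-stable matrix
    return np.linalg.norm(A - X), X
```

In the paper's setting one would minimize this distance over the orthogonal group; here Q is simply an input, so the function only exposes the objective such a Riemannian solver would see.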


2021, Vol 10 (3)
Author(s): Ilia Luchnikov, Alexander Ryzhov, Sergey Filippov, Henni Ouerdane

Many theoretical problems in quantum technology can be formulated and addressed as constrained optimization problems. The most common quantum-mechanical constraints, e.g., the orthogonality of isometric and unitary matrices, the CPTP property of quantum channels, and conditions on density matrices, can be viewed as quotient or embedded Riemannian manifolds. This allows Riemannian optimization techniques to be used for solving quantum-mechanical constrained optimization problems. In the present work, we introduce QGOpt, a library for constrained optimization in quantum technology. QGOpt relies on the underlying Riemannian structure of quantum-mechanical constraints and permits the application of standard gradient-based optimization methods while preserving those constraints. Moreover, QGOpt is written on top of TensorFlow, which provides automatic differentiation to compute the gradients needed for optimization. We show two application examples: quantum gate decomposition and quantum tomography.
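The approach rests on Riemannian steps that preserve constraints such as unitarity. Independent of QGOpt's actual API, one such step on the unitary group can be sketched in plain numpy (the function name and phase-fixing convention are this sketch's own, not the library's):

```python
import numpy as np

def unitary_riemannian_step(U, egrad, lr=0.1):
    """One Riemannian gradient-descent step on the unitary group U(n):
    project the Euclidean gradient onto the tangent space at U, take a
    step in the ambient space, and retract back with a QR decomposition."""
    M = U.conj().T @ egrad
    rgrad = egrad - U @ ((M + M.conj().T) / 2)   # tangent-space projection
    Y = U - lr * rgrad                           # ambient-space step
    Q, R = np.linalg.qr(Y)                       # QR-based retraction
    d = np.diag(R)
    return Q * (d / np.abs(d))                   # fix column phases so a zero step returns U
```

A library like QGOpt wraps this pattern (projection, step, retraction) behind optimizer classes and lets automatic differentiation supply `egrad`; the hand-rolled version just makes the geometry visible.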


2021, Vol 10 (2)
Author(s): Markus Hauru, Maarten Van Damme, Jutho Haegeman

Several tensor networks are built of isometric tensors, i.e. tensors W satisfying $W^\dagger W = \mathbb{1}$. Prominent examples include matrix product states (MPS) in canonical form, the multiscale entanglement renormalization ansatz (MERA), and quantum circuits in general, such as those needed in state preparation and quantum variational eigensolvers. We show how gradient-based optimization methods on Riemannian manifolds can be used to optimize tensor networks of isometries to represent e.g. ground states of 1D quantum Hamiltonians. We discuss the geometry of Grassmann and Stiefel manifolds, the Riemannian manifolds of isometric tensors, and review how state-of-the-art optimization methods like nonlinear conjugate gradient and quasi-Newton algorithms can be implemented in this context. We apply these methods in the context of infinite MPS and MERA, and show benchmark results in which they outperform the best previously-known optimization methods, which are tailor-made for those specific variational classes. We also provide open-source implementations of our algorithms.
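The project-and-retract pattern used for isometric tensors can be exercised on a toy problem. The sketch below is not the authors' MPS/MERA code: it runs Riemannian gradient ascent on the real Stiefel manifold to find a dominant eigenspace, which involves the same tangent-space projection and QR retraction that isometric-tensor optimizers use (all names are hypothetical).

```python
import numpy as np

def stiefel_gd(M, p, iters=500, lr=0.05, seed=0):
    """Riemannian gradient ascent on {W : W.T @ W = I} maximizing
    trace(W.T @ M @ W), i.e. finding the dominant p-dimensional
    eigenspace of a symmetric matrix M."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random starting isometry
    for _ in range(iters):
        G = 2 * M @ W                                  # Euclidean gradient
        WG = W.T @ G
        rgrad = G - W @ ((WG + WG.T) / 2)              # project onto the tangent space
        W, _ = np.linalg.qr(W + lr * rgrad)            # QR retraction back to the manifold
    return W
```

Swapping the Rayleigh-quotient objective for a tensor-network energy, and this plain gradient step for nonlinear conjugate gradient or quasi-Newton updates, gives the structure of the methods benchmarked in the paper.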


2021
Author(s): Mohamad Mahdi Mohades, Mohammad Hossein Kahaei

The max-cut problem asks for a cut of a graph that splits its vertices into two subsets so that the number of edges between the two subsets is as large as possible. This problem is NP-hard, so it is typically addressed with suboptimal algorithms. In this paper, we propose a fast and accurate Riemannian optimization algorithm for solving the max-cut problem. To do so, we develop a gradient descent algorithm and prove its convergence. Our simulation results show that the proposed method is extremely efficient on several previously studied graphs. Specifically, our method is on average 50 times faster than the best well-known techniques, at the cost of a slight loss in quality: it attains on average 0.9729 of the max-cut value achieved by the others.
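The paper's exact algorithm is not reproduced here, but the general shape of a Riemannian max-cut solver can be sketched: gradient ascent on the oblique manifold (unit-norm rows) for the Burer-Monteiro relaxation, followed by Goemans-Williamson-style hyperplane rounding. The rounding step and all names are this sketch's additions, not taken from the paper.

```python
import numpy as np

def maxcut_riemannian(W, k=3, iters=300, lr=0.1, seed=0):
    """Maximize (1/4) * sum_ij W_ij (1 - v_i . v_j) over unit vectors v_i,
    then round the vectors to a +/-1 cut with a random hyperplane."""
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(iters):
        G = -W @ V                                     # Euclidean ascent direction
        G -= (G * V).sum(axis=1, keepdims=True) * V    # project rows onto sphere tangents
        V += lr * G
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # retraction: renormalize rows
    r = rng.standard_normal(k)
    x = np.sign(V @ r)                                 # hyperplane rounding to a cut
    x[x == 0] = 1
    cut = 0.25 * np.sum(W * (1 - np.outer(x, x)))      # weight of edges across the cut
    return x, cut
```

Each row of V lives on a unit sphere, so the product manifold replaces the rank-one constraint of the combinatorial problem; this is the standard relaxation a Riemannian gradient method for max-cut operates on.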

