Differentiable Programming of Isometric Tensor Networks

Author(s): Chenhua Geng, Hong-Ye Hu, Yijian Zou

Abstract Differentiable programming is a programming paradigm that enables large-scale optimization through the automatic calculation of gradients, also known as auto-differentiation. The concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gradient-based optimization methods for isometric tensor networks and comparing them with the Evenbly-Vidal method, we show that auto-differentiation performs better in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model. We compute the ground-state energy of the quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators, and find that they all agree well with theory.
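To make the idea concrete, the sketch below uses JAX to auto-differentiate a toy cost over an isometric tensor. This is an illustration of the general technique, not the authors' MERA/TNR code: the isometric constraint is enforced by construction through a QR parametrization, and the quadratic cost is a hypothetical stand-in for a network energy functional.

```python
# Minimal sketch: auto-differentiated optimization of an isometry W with
# W^dagger W = 1, enforced by construction via a thin QR decomposition.
# The cost function is a hypothetical stand-in for a tensor network energy.
import jax
import jax.numpy as jnp

def isometry(x):
    # Map an unconstrained 4x2 matrix to an isometry via thin QR,
    # so q.conj().T @ q = I holds automatically.
    q, _ = jnp.linalg.qr(x)
    return q

def cost(x, h):
    w = isometry(x)
    # Toy quadratic cost; minimized when W spans the lowest eigenspace of h.
    return jnp.real(jnp.trace(w.conj().T @ h @ w))

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
h = jax.random.normal(k1, (4, 4))
h = (h + h.T) / 2                       # toy symmetric "Hamiltonian"
x = jax.random.normal(k2, (4, 2))

grad = jax.grad(cost)                   # auto-differentiation of the cost
for _ in range(200):
    x = x - 0.05 * grad(x, h)           # plain gradient descent

w = isometry(x)
print(jnp.allclose(w.T @ w, jnp.eye(2), atol=1e-6))  # isometry preserved
```

The QR parametrization is one simple way to keep the constraint satisfied during unconstrained gradient descent; the Riemannian approach discussed further below is an alternative that works directly on the manifold of isometries.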

2022, Vol 12 (1)
Author(s): Boris Ponsioen, Fakher Assaad, Philippe Corboz

The excitation ansatz for tensor networks is a powerful tool for simulating the low-lying quasiparticle excitations above the ground states of strongly correlated quantum many-body systems. Recently, infinite projected entangled-pair states, a two-dimensional tensor network class, gained new ground-state optimization methods based on automatic differentiation that are at once highly accurate and simple to implement. Naturally, the question arises whether these new ideas can also be used to optimize the excitation ansatz, which has recently been implemented in two dimensions as well. In this paper, we describe a straightforward way to reimplement the framework for excitations using automatic differentiation, and demonstrate its performance for the Hubbard model at half filling.
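The "simple to implement" claim rests on writing the variational energy as a differentiable function of the tensor parameters and letting the autodiff engine produce the exact gradient. The toy sketch below illustrates that pattern on a small dense state; it is an assumption-laden stand-in for an iPEPS energy evaluation, not the paper's implementation.

```python
# Minimal sketch of autodiff-based variational optimization: the energy
# expectation is a differentiable function of the parameters, and
# jax.value_and_grad supplies the exact gradient. A dense toy state stands
# in for a tensor network ansatz; all names here are illustrative.
import jax
import jax.numpy as jnp

def energy(theta, hamiltonian):
    psi = theta / jnp.linalg.norm(theta)   # normalized variational state
    return psi @ hamiltonian @ psi         # Rayleigh quotient <psi|H|psi>

key = jax.random.PRNGKey(1)
k1, k2 = jax.random.split(key)
ham = jax.random.normal(k1, (8, 8))
ham = (ham + ham.T) / 2                    # toy symmetric Hamiltonian
theta = jax.random.normal(k2, (8,))

e_and_g = jax.value_and_grad(energy)       # exact gradient by autodiff
for _ in range(500):
    e, g = e_and_g(theta, ham)
    theta = theta - 0.1 * g                # gradient-descent update

print(e, jnp.linalg.eigvalsh(ham)[0])      # approaches the lowest eigenvalue
```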


2017, Vol 9 (6), pp. 249-429
Author(s): Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao, ...

2021, Vol 10 (2)
Author(s): Markus Hauru, Maarten Van Damme, Jutho Haegeman

Several tensor networks are built of isometric tensors, i.e. tensors satisfying $W^\dagger W = \mathbb{1}$. Prominent examples include matrix product states (MPS) in canonical form, the multiscale entanglement renormalization ansatz (MERA), and quantum circuits in general, such as those needed in state preparation and quantum variational eigensolvers. We show how gradient-based optimization methods on Riemannian manifolds can be used to optimize tensor networks of isometries to represent e.g. ground states of 1D quantum Hamiltonians. We discuss the geometry of Grassmann and Stiefel manifolds, the Riemannian manifolds of isometric tensors, and review how state-of-the-art optimization methods like nonlinear conjugate gradient and quasi-Newton algorithms can be implemented in this context. We apply these methods in the context of infinite MPS and MERA, and show benchmark results in which they outperform the best previously known optimization methods, which are tailor-made for those specific variational classes. We also provide open-source implementations of our algorithms.
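The core step in such Riemannian methods, project the Euclidean gradient onto the tangent space of the Stiefel manifold and retract back onto the manifold, can be sketched as follows. This is an illustrative toy with a quadratic cost, not taken from the authors' open-source package, and it assumes the embedded metric with a QR retraction.

```python
# Minimal sketch of one Riemannian gradient step on the Stiefel manifold
# of isometries: project the Euclidean gradient (from autodiff) onto the
# tangent space at W, then retract back to the manifold via QR.
import jax
import jax.numpy as jnp

def project_tangent(w, g):
    # Tangent-space projection for the embedded metric: remove the
    # component of g that would violate W^dagger W = 1.
    sym = (w.conj().T @ g + g.conj().T @ w) / 2
    return g - w @ sym

def retract(w, xi, step):
    # QR retraction: step along the tangent vector, then re-isometrize.
    q, r = jnp.linalg.qr(w + step * xi)
    # Fix the sign ambiguity of QR (assumes nonzero diagonal of r).
    return q * jnp.sign(jnp.diag(r))

def cost(w, h):
    # Toy quadratic cost standing in for a variational energy.
    return jnp.real(jnp.trace(w.conj().T @ h @ w))

key = jax.random.PRNGKey(2)
k1, k2 = jax.random.split(key)
h = jax.random.normal(k1, (6, 6))
h = (h + h.T) / 2
w, _ = jnp.linalg.qr(jax.random.normal(k2, (6, 3)))  # random isometry

for _ in range(300):
    g = jax.grad(cost)(w, h)           # Euclidean gradient by autodiff
    xi = project_tangent(w, g)         # Riemannian gradient
    w = retract(w, xi, -0.05)          # descend while staying isometric
```

Nonlinear conjugate gradient and quasi-Newton variants build on exactly these two primitives, adding vector transport between tangent spaces.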


Author(s): Feruza A. Amirkulova, Andrew N. Norris

We derive formulas for the gradients of the total scattering cross section (TSCS) with respect to the positions of a set of cylindrical scatterers. Providing the analytic form of the gradients enhances modeling capability when combined with optimization algorithms and parallel computing: it reduces the number of function calls and the time needed to converge, and improves solution accuracy for large-scale optimization problems, especially at high frequencies and with large numbers of scatterers. As an application of the method, we design an acoustic metamaterial structure based on gradient-based minimization of the TSCS for a set of cylindrical obstacles, incrementally repositioning them so that they eventually act as an effective cloaking device. The method is illustrated through examples for clusters of hard cylinders in water. Computations are performed in MATLAB using parallel optimization algorithms and a multistart optimization solver, with the gradient of the TSCS supplied analytically.
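The workflow of handing an analytic gradient to a gradient-based optimizer, which is what cuts the function-call count, can be sketched in Python/SciPy as an analogue of the paper's MATLAB setup. The quadratic toy objective below is a hypothetical stand-in for the TSCS of a cylinder cluster; only the gradient-supplying pattern is the point.

```python
# Minimal sketch: supply the analytic gradient of the objective to a
# gradient-based optimizer so scatterer positions are updated with far
# fewer function evaluations. The objective is a toy stand-in for TSCS.
import numpy as np
from scipy.optimize import minimize

def objective_with_grad(positions):
    # positions: flattened (x, y) coordinates of the cylinders.
    p = positions.reshape(-1, 2)
    target = np.array([2.0, 0.0])        # toy "low-scattering" layout
    f = np.sum((p - target) ** 2)        # stand-in for the TSCS
    grad = 2.0 * (p - target)            # its analytic gradient
    return f, grad.ravel()

rng = np.random.default_rng(0)
x0 = rng.normal(size=8)                  # four cylinders, (x, y) each
# jac=True tells SciPy the callable returns (value, gradient) together.
res = minimize(objective_with_grad, x0, jac=True, method="L-BFGS-B")
print(res.nfev, res.fun)                 # few evaluations, converged
```

In the MATLAB setting described above, the analogous mechanism is an objective that returns the gradient as a second output, combined with a multistart solver to escape poor local minima.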


2019, Vol 142 (1)
Author(s): Di Wu, G. Gary Wang

Abstract Practicing design engineers often have considerable knowledge about a design problem. Over the past decades, however, the design optimization community has largely treated design functions as black boxes. This paper discusses whether and how knowledge can help with optimization, especially for large-scale optimization problems. Existing large-scale optimization methods based on black-box functions are first reviewed, and their drawbacks are briefly discussed. To understand what knowledge is and what kinds of knowledge can be obtained and applied in a design, the concepts of knowledge in both artificial intelligence (AI) and product design are reviewed. Existing applications of knowledge in optimization are then reviewed and categorized. Potential applications of knowledge for optimization are discussed in more detail, in the hope of identifying possible directions for future research in knowledge-assisted optimization (KAO).

