tensor network
Recently Published Documents

TOTAL DOCUMENTS: 393 (five years: 216)
H-INDEX: 34 (five years: 11)

2022 · Vol. 156 (2) · pp. 024101
Author(s): Amartya Bose, Peter L. Walters

2022 · Vol. 12 (1)
Author(s): Matthew Steinberg, Javier Prior

Abstract: Hyperinvariant tensor networks (hyMERA) were introduced as a way to combine the successes of perfect tensor networks (HaPPY) and the multiscale entanglement renormalization ansatz (MERA) in simulations of the AdS/CFT correspondence. Although this new class of tensor network shows much potential for simulating conformal field theories arising from hyperbolic bulk manifolds with quasiperiodic boundaries, many issues remain unresolved. In this manuscript we analyze the challenges related to optimizing tensors in a hyMERA with respect to a quasiperiodic critical spin chain, and compare with standard approaches in MERA. Additionally, we show two new sets of tensor decompositions which exhibit different properties from the original construction, implying that the multitensor constraints are neither unique nor difficult to find, and that a generalization of the analytical tensor forms used until now may exist. Lastly, we perform randomized trials using a descending superoperator with several of the investigated tensor decompositions, and find that the constraints imposed on the spectra of local descending superoperators in hyMERA are compatible with the operator spectra of several minimal-model CFTs.
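The randomized trials mentioned above revolve around descending superoperators built from isometric tensors. The following NumPy sketch shows only the generic objects involved (a random isometry, the map rho → w† rho w, and the constraints it preserves); it does not reproduce the hyMERA construction itself, and the dimensions are arbitrary choices:

```python
import numpy as np

def random_isometry(d_out, d_in, rng):
    """Random isometry w (satisfying w^dagger w = I) from a QR decomposition."""
    a = rng.normal(size=(d_out, d_in)) + 1j * rng.normal(size=(d_out, d_in))
    q, _ = np.linalg.qr(a)  # reduced QR: q has orthonormal columns
    return q

def descend(rho, w):
    """Toy descending superoperator: rho -> w^dagger rho w."""
    return w.conj().T @ rho @ w

rng = np.random.default_rng(0)
w = random_isometry(8, 4, rng)

# A random density matrix on the larger (8-dimensional) space.
m = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = m @ m.conj().T
rho /= np.trace(rho).real

rho_down = descend(rho, w)

# The isometric constraint holds, and Hermiticity survives the descent.
assert np.allclose(w.conj().T @ w, np.eye(4))
assert np.allclose(rho_down, rho_down.conj().T)
```

Trace is generally not preserved by this map; only the weight that rho carries on the image of w descends, which is why normalization conventions matter in MERA-style coarse-graining.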


2022 · Vol. 12 (1)
Author(s): Boris Ponsioen, Fakher Assaad, Philippe Corboz

The excitation ansatz for tensor networks is a powerful tool for simulating the low-lying quasiparticle excitations above ground states of strongly correlated quantum many-body systems. Recently, the two-dimensional tensor network class of infinite projected entangled-pair states gained new ground state optimization methods based on automatic differentiation, which are at the same time highly accurate and simple to implement. Naturally, the question arises whether these new ideas can also be used to optimize the excitation ansatz, which has recently been implemented in two dimensions as well. In this paper, we describe a straightforward way to reimplement the framework for excitations using automatic differentiation, and demonstrate its performance for the Hubbard model at half filling.
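The gradient-based optimization described above differentiates through full iPEPS contractions, which is far beyond a snippet. As a stand-in, here is the same idea in miniature: gradient descent on a variational energy (a Rayleigh quotient) for a two-site Hamiltonian, with the gradient written out by hand where a real code would obtain it by automatic differentiation. The Hamiltonian is an illustrative choice, not one from the paper:

```python
import numpy as np

# Two-site transverse-field Ising Hamiltonian (illustrative, field h = 1).
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
H = -np.kron(Z, Z) - np.kron(X, I2) - np.kron(I2, X)

# Gradient descent on E(v) = <v|H|v>, keeping <v|v> = 1 by renormalizing
# each step; on the unit sphere, grad E = 2 (H v - E v).
rng = np.random.default_rng(0)
v = rng.normal(size=4)
for _ in range(300):
    v /= np.linalg.norm(v)
    e = v @ H @ v
    v -= 0.1 * 2.0 * (H @ v - e * v)
v /= np.linalg.norm(v)

e_opt = v @ H @ v                       # converges to the ground-state energy
e_exact = np.linalg.eigvalsh(H).min()   # exact value: -sqrt(5)
```

The appeal of autodiff in the iPEPS setting is precisely that the hand-derived gradient line above is replaced by differentiating the energy evaluation itself, so the optimization stays this simple even when the contraction does not.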


Author(s): Chenhua Geng, Hong-Ye Hu, Yijian Zou

Abstract: Differentiable programming is a new programming paradigm which enables large-scale optimization through the automatic calculation of gradients, also known as auto-differentiation. This concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gradient-based optimization methods for isometric tensor networks and comparing with the Evenbly-Vidal method, we show that auto-differentiation performs better in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model, calculating the ground-state energy of the quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators; all agree well with theory.
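One standard way to maintain the isometric constraint during gradient descent is to take a Euclidean gradient step and then retract the tensor back onto the manifold of isometries using the polar decomposition (computed from an SVD). The sketch below uses a placeholder quadratic cost, not a MERA or TNR energy; only the retraction step reflects the technique named in the abstract:

```python
import numpy as np

def polar_retract(m):
    """Closest isometry to m in Frobenius norm: m = u s v^T  ->  u v^T."""
    u, _, vt = np.linalg.svd(m, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
w = polar_retract(rng.normal(size=(8, 4)))       # random starting isometry
target = polar_retract(rng.normal(size=(8, 4)))  # placeholder "optimum"

for _ in range(100):
    grad = w - target                    # gradient of 0.5 * ||w - target||^2
    w = polar_retract(w - 0.5 * grad)    # gradient step, then project back

# w remains an exact isometry after every update.
```

The Evenbly-Vidal update can be viewed as a closely related move (an SVD-based polar step on the linearized environment), which is why the two families of methods are directly comparable.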


2021 · Vol. 104 (23)
Author(s): Dominic J. Williamson, Clement Delcamp, Frank Verstraete, Norbert Schuch

Universe · 2021 · Vol. 8 (1) · pp. 1
Author(s): Chun-Jun Cao

In this note, I review a recent approach to quantum gravity that “gravitizes” quantum mechanics by having geometry and gravity emerge from complex quantum states. Drawing further insights from tensor network toy models in AdS/CFT, I propose that approximate quantum error correction codes, when re-adapted into the aforementioned framework, also show promise for emergent gravity in near-flat geometries.


2021 · Vol. 104 (11)
Author(s): Wei Tang, X. C. Xie, Lei Wang, Hong-Hao Tu

Author(s): Samuel Yen-Chi Chen, Chih-Min Huang, Chia-Wei Hsing, Hsi-Sheng Goan, Ying-Jer Kao

Abstract: Recent advances in classical reinforcement learning (RL) and quantum computation (QC) point to a promising direction for performing RL on a quantum computer. However, potential applications of quantum RL are limited by the number of qubits available in modern quantum devices. Here we present two frameworks for deep quantum RL tasks using gradient-free evolutionary optimization. First, we apply an amplitude encoding scheme to the Cart-Pole problem, where we demonstrate the quantum advantage of parameter saving using amplitude encoding. Second, we propose a hybrid framework in which the quantum RL agents are equipped with a hybrid tensor network-variational quantum circuit (TN-VQC) architecture to handle inputs whose dimension exceeds the number of qubits. This allows us to perform quantum RL on the MiniGrid environment with 147-dimensional inputs. The hybrid TN-VQC architecture provides a natural way to perform efficient compression of the input dimension, enabling further quantum RL applications on noisy intermediate-scale quantum devices.
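The parameter saving from amplitude encoding comes from packing a d-dimensional input into the amplitudes of only ceil(log2 d) qubits. A sketch of the classical pre-processing step this implies, i.e. padding to the next power of two and normalizing (the function name is ours, and no quantum library is involved):

```python
import numpy as np

def amplitude_encode(x):
    """Map a d-dimensional vector to a normalized 2^n amplitude vector,
    n = ceil(log2(d)), padding the tail with zeros."""
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector as a quantum state")
    return padded / norm, n_qubits

# A 147-dimensional MiniGrid-sized input fits in 8 qubits (2^8 = 256).
state, n_qubits = amplitude_encode(np.arange(1.0, 148.0))
```

Note the trade-off the abstract alludes to: the qubit count scales logarithmically in the input dimension, but preparing an arbitrary amplitude-encoded state on hardware is itself costly, which motivates the hybrid TN-VQC compression route for larger inputs.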


Author(s): Ian Convy, William Huggins, Haoran Liao, K. Birgitta Whaley

Abstract: Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansätze in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states, which might indicate the best network to use for a given dataset. We utilize mutual information as a measure of correlations in classical data, and show that it can serve as a lower bound on the entanglement needed for a probabilistic tensor network classifier. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter. This quantum-inspired classical analysis offers insight into the design of tensor networks that are best suited for specific learning tasks.
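Gaussians make a good testbed for MI estimators of the kind described because their mutual information is analytic: for a bivariate Gaussian with correlation rho, I(X;Y) = -(1/2) ln(1 - rho^2). The sketch below compares that closed form against a crude histogram plug-in estimate; the histogram stand-in is ours (the paper's estimator is a logistic-regression model):

```python
import numpy as np

def gaussian_mi(rho):
    """Exact mutual information (in nats) of a bivariate Gaussian."""
    return -0.5 * np.log(1.0 - rho ** 2)

def histogram_mi(rho, n=200_000, bins=40, seed=0):
    """Crude plug-in MI estimate from samples via a 2D histogram."""
    rng = np.random.default_rng(seed)
    xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    pxy, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins)
    pxy /= n                                  # joint probabilities per cell
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    mask = pxy > 0                            # skip empty cells (0 log 0 = 0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

The plug-in estimate carries binning and finite-sample biases, which is part of the paper's motivation for a learned (logistic-regression) estimator on high-dimensional feature bipartitions where histograms are infeasible.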

