Mutual Information Scaling for Tensor Network Machine Learning

Author(s): Ian Convy, William Huggins, Haoran Liao, K. Birgitta Whaley

Abstract: Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansätze in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states, which might indicate the best network to use for a given dataset. We utilize mutual information as a measure of correlations in classical data, and show that it can serve as a lower bound on the entanglement needed for a probabilistic tensor network classifier. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter. This quantum-inspired classical analysis offers insight into the design of tensor networks best suited for specific learning tasks.
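The density-ratio identity that such classifier-based mutual information estimators rely on can be checked exactly on a toy discrete distribution. The numpy sketch below (values hypothetical, not from the paper) computes the mutual information of two correlated bits both directly and as the expected log-odds of joint versus product-of-marginals samples, which is what a logistic regression classifier would estimate in the continuous case.

```python
import numpy as np

# Hypothetical joint distribution of two correlated binary features.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)  # marginal of X
p_y = p_xy.sum(axis=0)  # marginal of Y

# Mutual information: expected log-ratio of joint to product of marginals.
ratio = p_xy / np.outer(p_x, p_y)
mi = np.sum(p_xy * np.log(ratio))

# The classifier-based view: a logistic regression trained to separate joint
# samples from product-of-marginals samples learns exactly this log-odds
# function, and averaging it under the joint recovers the mutual information.
log_odds = np.log(ratio)
mi_classifier_view = np.sum(p_xy * log_odds)

print(mi)  # strictly positive, since the two features are correlated
```

For continuous, high-dimensional feature bipartitions the exact ratio is unavailable, which is why an estimator such as the paper's logistic regression approach is needed.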

2021, Vol 8
Author(s): Andrey Kardashin, Alexey Uvarov, Jacob Biamonte

Tensor network algorithms seek to minimize correlations to compress the classical data representing quantum states. Tensor network algorithms and similar tools—called tensor network methods—form the backbone of modern numerical methods used to simulate many-body physics and have a further range of applications in machine learning. Finding and contracting tensor network states is a computational task, which may be accelerated by quantum computing. We present a quantum algorithm that returns a classical description of a rank-r tensor network state satisfying an area law and approximating an eigenvector given black-box access to a unitary matrix. Our work creates a bridge between several contemporary approaches, including tensor networks, the variational quantum eigensolver (VQE), quantum approximate optimization algorithm (QAOA), and quantum computation.


Quantum, 2021, Vol 5, pp. 410
Author(s): Johnnie Gray, Stefanos Kourtis

Tensor networks represent the state-of-the-art in computational methods across many disciplines, including the classical simulation of quantum many-body systems and quantum circuits. Several applications of current interest give rise to tensor networks with irregular geometries. Finding the best possible contraction path for such networks is a central problem, with an exponential effect on computation time and memory footprint. In this work, we implement new randomized protocols that find very high quality contraction paths for arbitrary and large tensor networks. We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips. We find that the paths obtained can be very close to optimal, and often many orders of magnitude better than the most established approaches. As different underlying geometries suit different methods, we also introduce a hyper-optimization approach, where both the method applied and its algorithmic parameters are tuned during the path finding. The increase in quality of contraction schemes found has significant practical implications for the simulation of quantum many-body systems and particularly for the benchmarking of new quantum chips. Concretely, we estimate a speed-up of over 10,000x compared to the original expectation for the classical simulation of the Sycamore 'supremacy' circuits.
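Generic contraction-path optimization is exposed even in plain NumPy; the sketch below (tensor shapes hypothetical) uses `np.einsum_path` with a greedy strategy, a far simpler relative of the randomized and hyper-optimized protocols described above, and verifies that the optimized contraction matches a direct evaluation.

```python
import numpy as np

# Hypothetical small tensor network: a chain of four matrices.
rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((50, 50)) for _ in range(4))

# Ask NumPy for an optimized pairwise contraction order; for larger,
# irregular networks the chosen path can change the cost (and memory
# footprint) by many orders of magnitude.
path, report = np.einsum_path('ij,jk,kl,lm->im', A, B, C, D,
                              optimize='greedy')

# Contract along the precomputed path and check against direct matmuls.
result = np.einsum('ij,jk,kl,lm->im', A, B, C, D, optimize=path)
reference = A @ B @ C @ D
print(np.allclose(result, reference))
```

The printable `report` string summarizes the naive versus optimized FLOP estimates, which is a convenient way to see the exponential stakes of path choice on small examples.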


Quantum, 2021, Vol 5, pp. 541
Author(s): Samuel O. Scalet, Álvaro M. Alhambra, Georgios Styliaris, J. Ignacio Cirac

The mutual information is a measure of classical and quantum correlations of great interest in quantum information. It is also relevant in quantum many-body physics, by virtue of satisfying an area law for thermal states and bounding all correlation functions. However, calculating it exactly or approximately is often challenging in practice. Here, we consider alternative definitions based on Rényi divergences. Their main advantage over their von Neumann counterpart is that they can be expressed as a variational problem whose cost function can be efficiently evaluated for families of states like matrix product operators while preserving all desirable properties of a measure of correlations. In particular, we show that they obey a thermal area law in great generality, and that they upper bound all correlation functions. We also investigate their behavior on certain tensor network states and on classical thermal distributions.
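For a small classical distribution, the variational definition (minimizing a Rényi divergence over product distributions) can be evaluated by brute force. The numpy sketch below does this for a correlated pair of bits; the joint distribution, the order α = 2, and the grid resolution are all arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

ALPHA = 2.0  # Rényi order (hypothetical choice)

# Toy classical joint distribution of two correlated bits, flattened.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]]).ravel()

def renyi_div(p, q, alpha=ALPHA):
    """Classical Rényi divergence D_alpha(p || q)."""
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

# Variational Rényi mutual information: minimize the divergence to the
# joint over all product distributions q_X x q_Y, here by grid search.
thetas = np.linspace(0.01, 0.99, 99)
best = np.inf
for tx in thetas:          # q_X = (tx, 1 - tx)
    for ty in thetas:      # q_Y = (ty, 1 - ty)
        q = np.outer([tx, 1 - tx], [ty, 1 - ty]).ravel()
        best = min(best, renyi_div(p, q))

print(best)  # nonnegative; zero only if p itself were a product
```

The paper's contribution is precisely that, for quantum states represented as matrix product operators, this variational cost function remains efficiently evaluable, which a brute-force grid obviously is not.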


2022, Vol 12 (1)
Author(s): Boris Ponsioen, Fakher Assaad, Philippe Corboz

The excitation ansatz for tensor networks is a powerful tool for simulating the low-lying quasiparticle excitations above ground states of strongly correlated quantum many-body systems. Recently, the two-dimensional tensor network class of infinite projected entangled-pair states gained new ground state optimization methods based on automatic differentiation, which are at the same time highly accurate and simple to implement. Naturally, the question arises whether these new ideas can also be used to optimize the excitation ansatz, which has recently been implemented in two dimensions as well. In this paper, we describe a straightforward way to reimplement the framework for excitations using automatic differentiation, and demonstrate its performance for the Hubbard model at half filling.


2020, Vol 9 (3)
Author(s): Matthias Christandl, Angelo Lucia, Peter Vrana, Albert H. Werner

Tensor networks provide descriptions of strongly correlated quantum systems based on an underlying entanglement structure: a graph of entangled states along the edges, which identify the indices of the local tensors to be contracted. Considering a more general setting, where entangled states on edges are replaced by multipartite entangled states on faces, allows us to employ the geometric properties of multipartite entanglement to obtain representations in terms of superpositions of tensor network states with smaller effective dimension, leading to computational savings.


2009, Vol 1 (2), pp. 197-217
Author(s): Eliana Colunga, Linda B. Smith, Michael Gasser

Abstract: The ontological distinction between discrete individuated objects and continuous substances, and the way this distinction is expressed in different languages, has been a fertile area for examining the relation between language and thought. In this paper we combine simulations and a cross-linguistic word learning task as a way to gain insight into the nature of the learning mechanisms involved in word learning. First, we look at the effect of the different correlational structures on novel generalizations with two kinds of learning tasks implemented in neural networks—prediction and correlation. Second, we look at English- and Spanish-speaking 2-3-year-olds' novel noun generalizations, and find that count/mass syntax has a stronger effect on Spanish- than on English-speaking children's novel noun generalizations, consistent with the predicting networks. The results suggest that it is not just the correlational structure of different linguistic cues that will determine how they are learned, but the specific learning mechanism and task in which they are involved.


2021, Vol 7 (7)
Author(s): Qian Wang, Jun Ye, Teng Xu, Ning Zhou, Zhongqiu Lu, et al.

Identification of prokaryotic transposases (Tnps) gives insight not only into the spread of antibiotic resistance and virulence but also into the process of DNA movement. This study aimed to develop a classifier for predicting Tnps in bacteria and archaea using machine learning (ML) approaches. We extracted a total of 2751 protein features from the training dataset, which included 14852 Tnps and 14852 controls, and selected 75 features as predictive signatures using the combined mutual information and least absolute shrinkage and selection operator algorithms. By aggregating these signatures, an ensemble classifier that integrated a collection of individual ML-based classifiers was developed to identify Tnps. Further validation revealed that this classifier achieved good performance with an average AUC of 0.955, and met or exceeded other common methods. Based on this ensemble classifier, a stand-alone command-line tool designated TnpDiscovery was established to maximize the convenience for bioinformaticians and experimental researchers toward Tnp prediction. This study demonstrates the effectiveness of ML approaches in identifying Tnps, facilitating the discovery of novel Tnps in the future.
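The mutual-information half of such a feature-selection pipeline can be sketched in plain numpy. The example below is entirely synthetic (data, agreement rate, and feature count are hypothetical): it ranks binary features by a plug-in mutual information estimate against binary labels and keeps the top scorer.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical binary labels and three binary features: one informative
# (agrees with the label 85% of the time), two that are pure noise.
y = rng.integers(0, 2, n)
informative = np.where(rng.random(n) < 0.85, y, 1 - y)
noise1 = rng.integers(0, 2, n)
noise2 = rng.integers(0, 2, n)
X = np.stack([noise1, informative, noise2], axis=1)

def mutual_info(x, y):
    """Plug-in mutual information estimate for two binary arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

scores = np.array([mutual_info(X[:, j], y) for j in range(X.shape[1])])
selected = np.argsort(scores)[::-1][:1]  # keep the top-scoring feature
print(selected)
```

In the study itself, an MI filter of this kind is combined with LASSO, which additionally penalizes redundant features that a univariate score cannot distinguish.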


Author(s): Johannes Hauschild, Frank Pollmann

Tensor product state (TPS) based methods are powerful tools to efficiently simulate quantum many-body systems in and out of equilibrium. In particular, the one-dimensional matrix-product state (MPS) formalism is by now an established tool in condensed matter theory and quantum chemistry. In these lecture notes, we combine a compact review of basic TPS concepts with the introduction of a versatile tensor library for Python (TeNPy) [1]. As concrete examples, we consider the MPS based time-evolving block decimation and the density matrix renormalization group algorithm. Moreover, we provide a practical guide on how to implement abelian symmetries (e.g., a particle number conservation) to accelerate tensor operations.
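The elementary operation underlying MPS algorithms such as TEBD is a Schmidt (singular value) decomposition across a bond. The numpy sketch below, deliberately independent of the TeNPy API, extracts the Schmidt spectrum and entanglement entropy of a Bell pair, the simplest state where truncating singular values would discard real entanglement.

```python
import numpy as np

# A two-site state written as a 2x2 coefficient matrix: the Bell state
# (|00> + |11>)/sqrt(2), which is maximally entangled.
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

# SVD across the bipartition: singular values give the Schmidt spectrum,
# and truncating small ones is how MPS methods compress a state.
u, s, vt = np.linalg.svd(psi)
schmidt = s**2                                  # Schmidt probabilities
entropy = -np.sum(schmidt * np.log(schmidt))    # entanglement entropy

print(entropy)  # ln 2 for a maximally entangled pair
```

For a product state the spectrum would collapse to a single nonzero value and the entropy to zero, which is why weakly entangled states admit strongly truncated (small bond dimension) MPS representations.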


Author(s): Pietro Silvi, Ferdinand Tschirsich, Matthias Gerster, Johannes Jünemann, Daniel Jaschke, et al.

We present a compendium of numerical simulation techniques, based on tensor network methods, aiming to address problems of many-body quantum mechanics on a classical computer. The core setting of this anthology is lattice problems in low spatial dimension at finite size, a physical scenario where tensor network methods, both Density Matrix Renormalization Group and beyond, have long proven to be winning strategies. Here we explore in detail the numerical frameworks and methods employed to deal with low-dimensional physical setups, from a computational physics perspective. We focus on symmetries and closed-system simulations in arbitrary boundary conditions, while discussing the numerical data structures and linear algebra manipulation routines involved, which form the core libraries of any tensor network code. At a higher level, we put the spotlight on loop-free network geometries, discussing their advantages, and presenting in detail algorithms to simulate low-energy equilibrium states. Accompanied by discussions of data structures, numerical techniques and performance, this anthology serves as a programmer's companion, as well as a self-contained introduction and review of the basic and selected advanced concepts in tensor networks, including examples of their applications.


2021, Vol 12 (1)
Author(s): Cole Miles, Annabelle Bohrdt, Ruihan Wu, Christie Chiu, Muqing Xu, et al.

Abstract: Image-like data from quantum systems promises to offer greater insight into the physics of correlated quantum matter. However, the traditional framework of condensed matter physics lacks principled approaches for analyzing such data. Machine learning models are a powerful theoretical tool for analyzing image-like data, including many-body snapshots from quantum simulators. Recently, they have successfully distinguished between simulated snapshots that are indistinguishable by their one- and two-point correlation functions. Thus far, the complexity of these models has inhibited new physical insights from such approaches. Here, we develop a set of nonlinearities for use in a neural network architecture that discovers features in the data which are directly interpretable in terms of physical observables. Applied to simulated snapshots produced by two candidate theories approximating the doped Fermi-Hubbard model, we uncover that the key distinguishing features are fourth-order spin-charge correlators. Our approach lends itself well to the construction of simple, versatile, end-to-end interpretable architectures, thus paving the way for new physical insights from machine learning studies of experimental and numerical data.

