Tensor Networks
Recently Published Documents


TOTAL DOCUMENTS: 241 (FIVE YEARS: 129)
H-INDEX: 27 (FIVE YEARS: 8)

2022 · Vol 303 · pp. 103649
Author(s): Samy Badreddine, Artur d'Avila Garcez, Luciano Serafini, Michael Spranger

2022 · Vol 12 (1)
Author(s): Matthew Steinberg, Javier Prior

Abstract: Hyperinvariant tensor networks (hyMERA) were introduced as a way to combine the successes of perfect tensor networks (HaPPY) and the multiscale entanglement renormalization ansatz (MERA) in simulations of the AdS/CFT correspondence. Although this new class of tensor network shows much potential for simulating conformal field theories arising from hyperbolic bulk manifolds with quasiperiodic boundaries, many issues remain unresolved. In this manuscript we analyze the challenges involved in optimizing the tensors of a hyMERA with respect to a quasiperiodic critical spin chain, and compare with standard approaches in MERA. Additionally, we present two new sets of tensor decompositions which exhibit properties different from the original construction, implying that the multitensor constraints are neither unique nor difficult to find, and that a generalization of the analytical tensor forms used so far may exist. Lastly, we perform randomized trials using a descending superoperator with several of the investigated tensor decompositions, and find that the constraints imposed on the spectra of local descending superoperators in hyMERA are compatible with the operator spectra of several minimal-model CFTs.
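To make the spectral analysis concrete, the sketch below builds a one-site descending superoperator from a random isometry and diagonalizes it as a χ² × χ² matrix. The bond dimension, the random isometry, and the single-channel form are illustrative placeholders, not the hyMERA multitensor construction analyzed in the paper.

```python
import numpy as np

chi = 4  # bond dimension (illustrative)
rng = np.random.default_rng(0)

# Random isometry w: C^chi -> C^chi (x) C^chi with w†w = 1, via QR.
w, _ = np.linalg.qr(rng.standard_normal((chi**2, chi))
                    + 1j * rng.standard_normal((chi**2, chi)))
w = w.reshape(chi, chi, chi)  # legs: (out_left, out_right, in)

def descend(rho):
    """One-site descending superoperator D(rho) = Tr_right[w rho w†]."""
    wr = np.tensordot(w, rho, axes=[[2], [0]])                # (l, r, in')
    return np.tensordot(wr, w.conj(), axes=[[1, 2], [1, 2]])  # trace right leg

# Represent D as a chi^2 x chi^2 matrix and inspect its eigenvalue magnitudes.
E = np.zeros((chi**2, chi**2), dtype=complex)
for i in range(chi**2):
    e = np.zeros(chi**2, dtype=complex)
    e[i] = 1.0
    E[:, i] = descend(e.reshape(chi, chi)).ravel()

eigs = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
print(eigs)  # leading magnitude is 1 (the map is trace preserving)
```

Since this channel is trace preserving, its leading eigenvalue has magnitude 1; in a (hy)MERA it is the subleading spectrum that gets matched against CFT operator content.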


2022 · Vol 12 (1)
Author(s): Boris Ponsioen, Fakher Assaad, Philippe Corboz

The excitation ansatz for tensor networks is a powerful tool for simulating the low-lying quasiparticle excitations above the ground states of strongly correlated quantum many-body systems. Recently, the two-dimensional tensor network class of infinite projected entangled-pair states gained new ground-state optimization methods based on automatic differentiation, which are both highly accurate and simple to implement. Naturally, the question arises whether these new ideas can also be used to optimize the excitation ansatz, which has recently been implemented in two dimensions as well. In this paper, we describe a straightforward way to reimplement the framework for excitations using automatic differentiation, and demonstrate its performance for the Hubbard model at half filling.
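As a minimal illustration of the automatic-differentiation approach to ground-state optimization, the sketch below differentiates the variational energy of a small finite MPS with JAX; the finite chain, plain gradient descent, and all sizes are assumed stand-ins for the infinite-PEPS machinery the paper actually uses.

```python
import jax
import jax.numpy as jnp

N, d, chi = 6, 2, 4  # sites, physical dim, bond dim (all assumed for the demo)

def state(tensors):
    """Contract an open-boundary MPS into the full 2**N state vector."""
    psi = tensors[0]                                         # shape (d, chi)
    for A in tensors[1:-1]:                                  # each (chi, d, chi)
        psi = jnp.tensordot(psi, A, axes=[[-1], [0]])
    psi = jnp.tensordot(psi, tensors[-1], axes=[[-1], [0]])  # last: (chi, d)
    return psi.ravel()

def energy(tensors, H):
    """Variational energy <psi|H|psi> / <psi|psi>."""
    v = state(tensors)
    return (v @ H @ v) / (v @ v)

# Dense transverse-field Ising Hamiltonian (fine at N = 6).
X = jnp.array([[0., 1.], [1., 0.]])
Z = jnp.array([[1., 0.], [0., -1.]])

def site_op(op, i):
    out = jnp.eye(1)
    for j in range(N):
        out = jnp.kron(out, op if j == i else jnp.eye(2))
    return out

H = -sum(site_op(Z, i) @ site_op(Z, i + 1) for i in range(N - 1)) \
    - sum(site_op(X, i) for i in range(N))

keys = jax.random.split(jax.random.PRNGKey(0), N)
tensors = [jax.random.normal(keys[0], (d, chi))] \
        + [jax.random.normal(k, (chi, d, chi)) for k in keys[1:-1]] \
        + [jax.random.normal(keys[-1], (chi, d))]

grad = jax.grad(energy)              # gradient w.r.t. every MPS tensor at once
for _ in range(500):                 # plain gradient descent, step size assumed
    tensors = [t - 0.05 * g for t, g in zip(tensors, grad(tensors, H))]
print(energy(tensors, H))            # variational energy after optimization
```

The pattern, energy as a pure function of the tensors plus `jax.grad`, is what makes these optimizers simple to implement; extending it from ground states to the excitation ansatz is the subject of the paper.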


Author(s): Chenhua Geng, Hong-Ye Hu, Yijian Zou

Abstract: Differentiable programming is a programming paradigm that enables large-scale optimization through the automatic calculation of gradients, also known as auto-differentiation. The concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). We introduce several gradient-based optimization methods for isometric tensor networks and, comparing them with the Evenbly-Vidal method, show that auto-differentiation performs better in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model, calculating the ground-state energy of the quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators, all of which agree well with theory.
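One standard way to respect an isometric constraint during gradient-based optimization, sketched below with an invented toy objective rather than a MERA or TNR cost function, is Riemannian gradient descent on the Stiefel manifold: project the Euclidean gradient onto the tangent space at the current isometry and retract back with an SVD.

```python
import numpy as np

def retract(W):
    """Map W to the nearest isometry (its polar factor) via the SVD."""
    U, _, Vh = np.linalg.svd(W, full_matrices=False)
    return U @ Vh

def stiefel_step(W, grad, eta):
    """Project the Euclidean gradient onto the tangent space at W, step, retract."""
    sym = (W.T @ grad + grad.T @ W) / 2
    return retract(W - eta * (grad - W @ sym))

# Toy objective (assumed for the demo): maximize Tr(W^T T) over isometries W,
# i.e. minimize f(W) = -Tr(W^T T), whose Euclidean gradient is simply -T.
rng = np.random.default_rng(0)
m, n = 8, 3
T = rng.standard_normal((m, n))
W = retract(rng.standard_normal((m, n)))

for _ in range(200):
    W = stiefel_step(W, -T, eta=0.1)

print(np.trace(W.T @ T))                 # maximized overlap
print(np.allclose(W.T @ W, np.eye(n)))   # isometry preserved: True
```

By contrast, the Evenbly-Vidal method updates each tensor from the SVD of its linearized environment; gradient-based schemes like the one above can additionally leverage momentum or quasi-Newton steps, which is one source of the stability and accuracy gains the abstract reports.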


2022 · Vol 4 (1)
Author(s): Samuel Mugel, Carlos Kuchkovsky, Escolástico Sánchez, Samuel Fernández-Lorenzo, Jorge Luis-Hita, ...

2021
Author(s): Luciano Serafini, Artur d’Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, ...

The recent availability of large-scale data combining multiple modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area, mostly by adopting sub-symbolic distributed representations. It is now generally accepted that such purely sub-symbolic approaches can be data-inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations, ideally expressed in human-readable symbols. Despite being more explainable and successful at reasoning, symbolic AI usually struggles when faced with incomplete knowledge, inaccurate or large data sets, and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches, combining reasoning over complex representations of knowledge with efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge in efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions of such learning systems.

Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning, and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics, such that every symbolic expression has an interpretation grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization.

This chapter presents the LTN framework and illustrates its use on knowledge-completion tasks, grounding relational predicates (symbols) into concrete interpretations (vectors and tensors). It then investigates the use of LTN for semi-supervised learning, learning of embeddings, and reasoning. LTN has recently been applied to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning, and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
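The grounding idea is easy to see in a few lines: predicates become differentiable maps into [0, 1], connectives become fuzzy-logic operators, and quantifiers become smooth aggregators, so a formula's truth degree is an end-to-end differentiable quantity. The sketch below is written against plain JAX rather than the LTN library's actual API; the formula ∀x. Smokes(x) → Cancer(x), the linear predicate models, and all shapes are invented for illustration.

```python
import jax
import jax.numpy as jnp

def predicate(params, x):
    """Ground a predicate as sigmoid(x @ w + b): individuals -> truth in [0, 1]."""
    w, b = params
    return jax.nn.sigmoid(x @ w + b)

def implies(a, b):
    """Reichenbach fuzzy implication: 1 - a + a * b."""
    return 1.0 - a + a * b

def forall(truths, p=2.0):
    """Smooth universal quantifier (p-mean-error aggregator)."""
    return 1.0 - jnp.mean((1.0 - truths) ** p) ** (1.0 / p)

def satisfaction(params, X):
    """Truth degree of: forall x. Smokes(x) -> Cancer(x)."""
    return forall(implies(predicate(params["smokes"], X),
                          predicate(params["cancer"], X)))

X = jax.random.normal(jax.random.PRNGKey(0), (32, 5))   # toy individuals
params = {"smokes": (jnp.zeros(5), jnp.array(0.0)),
          "cancer": (jnp.zeros(5), jnp.array(0.0))}

# The truth degree is differentiable end to end, so maximizing satisfaction
# is ordinary gradient descent on 1 - satisfaction.
grads = jax.grad(lambda p: 1.0 - satisfaction(p, X))(params)
```

Learning, querying, and reasoning then all reduce to evaluating or optimizing such satisfaction values over the knowledge base.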


Author(s): Ian Convy, William Huggins, Haoran Liao, K. Birgitta Whaley

Abstract: Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansätze in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states, which might indicate the best network to use for a given dataset. We utilize mutual information as a measure of correlations in classical data, and show that it can serve as a lower bound on the entanglement needed by a probabilistic tensor network classifier. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter. This quantum-inspired classical analysis offers insight into the design of tensor networks best suited to specific learning tasks.
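A logistic-regression mutual information estimator can be sketched with the standard density-ratio trick: train a classifier to separate joint samples (x, y) from samples in which y has been shuffled (i.e., drawn from the product of marginals), then read off the mean log-odds on the joint samples as an estimate of I(X; Y) in nats. The feature map and the Gaussian sanity check below are illustrative choices, not necessarily the exact estimator developed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(X, Y):
    # Linear plus quadratic terms, so the logit can represent Gaussian log-ratios.
    cross = np.einsum('ni,nj->nij', X, Y).reshape(len(X), -1)
    return np.hstack([X, Y, X**2, Y**2, cross])

def mi_estimate(X, Y, seed=0):
    rng = np.random.default_rng(seed)
    Y_shuffled = Y[rng.permutation(len(Y))]        # samples from p(x)p(y)
    joint, prod = features(X, Y), features(X, Y_shuffled)
    clf = LogisticRegression(max_iter=1000).fit(
        np.vstack([joint, prod]),
        np.r_[np.ones(len(joint)), np.zeros(len(prod))])
    return clf.decision_function(joint).mean()     # mean log-odds, in nats

# Sanity check on correlated Gaussians, where I(X;Y) = -0.5 * log(1 - rho**2).
rho = 0.8
z = np.random.default_rng(1).standard_normal((20000, 2))
x, y = z[:, :1], rho * z[:, :1] + np.sqrt(1 - rho**2) * z[:, 1:]
print(mi_estimate(x, y), -0.5 * np.log(1 - rho**2))   # both close to 0.51
```

With balanced classes, the fitted log-odds approximates log p(x, y) / (p(x) p(y)), whose expectation under the joint distribution is exactly I(X; Y); quadratic features suffice for the Gaussian check because the true log-ratio is quadratic there.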


Author(s): Simone Montangero, Enrique Rico, Pietro Silvi

This brief review introduces the reader to tensor network methods, a powerful theoretical and numerical paradigm originating in condensed matter physics and quantum information science and increasingly exploited in fields ranging from artificial intelligence to quantum chemistry. Here, we focus on the application of loop-free tensor network methods to high-energy physics problems and, in particular, to lattice gauge theories, where tensor networks can be applied in regimes in which Monte Carlo methods are hindered by the sign problem. This article is part of the theme issue ‘Quantum technologies in particle physics’.


Author(s): Mohammad Maminur Islam, Somdeb Sarkhel, Deepak Venugopal
