von Neumann
Recently Published Documents

Total documents: 4737 (787 in the last five years)
H-index: 75 (10 in the last five years)

2022 · Vol 18 (2) · pp. 1-22
Author(s): João Paulo Cardoso de Lima, Marcelo Brandalero, Michael Hübner, Luigi Carro

Accelerating finite-state automata benefits several emerging application domains that are built on pattern matching. In-memory architectures such as the Automata Processor (AP) can speed them up significantly, outperforming traditional von Neumann architectures. Despite the AP's massive parallelism, current APs suffer from poor memory density, inefficient routing architectures, and limited capabilities. Although emerging memory technologies can lessen these limitations, the AP's architecture remains the major source of its huge communication demands and lack of scalability. To address these issues, we present STAP, a Scalable TCAM-based architecture for Automata Processing. STAP adopts a reconfigurable array of processing elements, based on memristive ternary CAMs (TCAMs), to efficiently implement non-deterministic finite automata (NFAs) through proper encoding and mapping methods. The CAD tool for STAP integrates the design flow of automata applications, a specific mapping algorithm, and place-and-route tools for connecting processing elements through RRAM-based programmable interconnects. Results showed 1.47× higher throughput when processing 16-bit input symbols, and improvements of 3.9× and 25× in state and routing densities over the state-of-the-art AP, while preserving 10^4 programming cycles.
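
The core operation such architectures accelerate is evaluating many NFA states against each input symbol in parallel. A minimal software sketch of that idea, using bit-parallel NFA simulation in Python (the toy automaton and its transition table are illustrative assumptions, not STAP's actual TCAM encoding):

```python
# Bit-parallel NFA simulation: each bit of `active` is one NFA state,
# so a whole front of states advances per input symbol, mirroring how
# an automata processor matches all states against a symbol at once.

def make_nfa():
    # Toy NFA over {a, b} accepting strings that contain "ab".
    # States: 0 (start, loops on a/b), 1 (saw 'a'), 2 (accept, loops).
    n_states = 3
    trans = {  # symbol -> per-state successor bitmasks
        'a': [0b011, 0b010, 0b100],  # 0 -a-> {0,1}, 1 -a-> {1}, 2 -a-> {2}
        'b': [0b001, 0b100, 0b100],  # 0 -b-> {0},   1 -b-> {2}, 2 -b-> {2}
    }
    return n_states, trans, 0b001, 0b100  # start set, accept set

def matches(text: str) -> bool:
    n_states, trans, active, accept = make_nfa()
    for c in text:
        nxt = 0
        for s in range(n_states):
            if active & (1 << s):      # state s is active:
                nxt |= trans[c][s]     # OR in all of its successors
        active = nxt
        if not active:                 # dead: no state survives
            return False
    return bool(active & accept)

assert matches("aab") and not matches("ba")
```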


2022 · Vol 6 (POPL) · pp. 1-31
Author(s): Xiaodong Jia, Andre Kornell, Bert Lindenhovius, Michael Mislove, Vladimir Zamdzhiev

We consider a programming language that can manipulate both classical and quantum information. Our language is type-safe and designed for variational quantum programming, which is a hybrid classical-quantum computational paradigm. The classical subsystem of the language is the Probabilistic FixPoint Calculus (PFPC), which is a lambda calculus with mixed-variance recursive types, term recursion and probabilistic choice. The quantum subsystem is a first-order linear type system that can manipulate quantum information. The two subsystems are related by mixed classical/quantum terms that specify how classical probabilistic effects are induced by quantum measurements, and conversely, how classical (probabilistic) programs can influence the quantum dynamics. We also describe a sound and computationally adequate denotational semantics for the language. Classical probabilistic effects are interpreted using a recently described commutative probabilistic monad on DCPO. Quantum effects and resources are interpreted in a category of von Neumann algebras that we show is enriched over (continuous) domains. This strong sense of enrichment allows us to develop novel semantic methods that we use to interpret the relationship between the quantum and classical probabilistic effects. By doing so we provide a very detailed denotational analysis that relates domain-theoretic models of classical probabilistic programming to models of quantum programming.
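
The classical/quantum interaction the language models can be pictured with a tiny variational loop: a classical program sets a circuit parameter, a quantum measurement induces a probabilistic classical outcome, and that outcome feeds back into the classical control flow. A minimal numpy sketch of this pattern (the single-qubit ansatz, cost, and update rule are illustrative assumptions, not the paper's calculus):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(theta: float) -> int:
    """Prepare R_y(theta)|0> and measure in the computational basis.
    P(1) = sin^2(theta / 2): the quantum side induces a classical
    probabilistic effect, as in the paper's mixed terms."""
    p1 = np.sin(theta / 2) ** 2
    return int(rng.random() < p1)

# Classical side: estimate P(1) by sampling, then nudge theta to
# drive the qubit toward |1> (maximise the measured frequency).
theta = 0.3
for step in range(200):
    shots = [measure(theta) for _ in range(64)]
    freq = sum(shots) / len(shots)
    theta += 0.1 * (1.0 - freq)    # probabilistic classical feedback

print(f"theta = {theta:.2f}, P(1) = {np.sin(theta/2)**2:.3f}")
# theta drifts toward pi, where P(1) approaches 1
```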


2022 · Vol 4 (1) · pp. 22-35
Author(s): Abhinash Kumar Roy, Sourabh Magare, Varun Srivastava, Prasanta K. Panigrahi

We investigate the dynamical evolution of genuine multipartite correlations for N qubits in a common reservoir, considering a non-dissipative qubits-reservoir model. We derive an exact expression for the time-evolved density matrix by modeling the reservoir as an infinite set of harmonic oscillators with a bilinear interaction Hamiltonian. Interestingly, we find that the choice of two-level systems corresponding to an initially correlated multipartite state plays a significant role in potential robustness against environmental decoherence. In particular, the generalized W-class Werner state shows robustness against decoherence for an equivalent set of qubits, whereas a certain generalized GHZ-class Werner state shows robustness for inequivalent sets of qubits. It is shown that the genuine multipartite concurrence (GMC), a measure of multipartite entanglement of an initially correlated multipartite state, experiences an irreversible decay of correlations in the presence of a thermal reservoir. For the GHZ-class Werner state, the region of mixing parameters for which GMC exists shrinks with time and with an increase in the temperature of the thermal reservoir. Furthermore, we study the dynamical evolution of the relative entropy of coherence and the von Neumann entropy for the W-class Werner state.
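
For reference, the central quantities here have standard definitions; schematically (the exact generalized-class parametrizations used in the paper may differ):

```latex
% von Neumann entropy of a density matrix rho
S(\rho) = -\operatorname{Tr}\!\left(\rho \log \rho\right)

% GHZ-class Werner state for N qubits: a GHZ projector mixed with
% white noise, weighted by the mixing parameter p
\rho_W(p) = p\, |\mathrm{GHZ}_N\rangle\langle \mathrm{GHZ}_N|
            + \frac{1-p}{2^N}\, \mathbb{1},
\qquad
|\mathrm{GHZ}_N\rangle = \tfrac{1}{\sqrt{2}}
    \left(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}\right)
```

The W-class analogue mixes the N-qubit W state with white noise in the same way.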


2022 · Vol 12 (1)
Author(s): Giacomo De Palma, Lucas Hackl

We prove that the entanglement entropy of any pure initial state of a bipartite bosonic quantum system grows linearly in time with respect to the dynamics induced by any unstable quadratic Hamiltonian. The growth rate does not depend on the initial state and is equal to the sum of certain Lyapunov exponents of the corresponding classical dynamics. This paper generalizes the findings of [Bianchi et al., JHEP 2018, 25 (2018)], which proves the same result in the special case of Gaussian initial states. Our proof is based on a recent generalization of the strong subadditivity of the von Neumann entropy for bosonic quantum systems [De Palma et al., arXiv:2105.05627]. This technique allows us to extend our result to generic mixed initial states, with the squashed entanglement providing the right generalization of the entanglement entropy. We discuss several applications of our results to physical systems with (weakly) interacting Hamiltonians and periodically driven quantum systems, including certain quantum field theory models.
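
Schematically, the result says that the entanglement entropy of a subsystem A grows linearly at a rate set by classical instability exponents (the notation below is illustrative, following the structure described in the abstract):

```latex
% Long-time growth of the entanglement entropy of subsystem A under an
% unstable quadratic Hamiltonian: linear in t, with a slope given by a
% sum of Lyapunov exponents of the corresponding classical dynamics
S_A(t) \;\sim\; \Lambda_A\, t, \qquad
\Lambda_A = \sum_{k} \lambda_k, \qquad t \to \infty
```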


2022 · Vol 5 (1)
Author(s): Kirill P. Kalinin, Natalia G. Berloff

Abstract A promising approach to achieve computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem, yet not all problem instances are equally hard to optimise. Given that the operational principles of Ising machines are suited to the structure of some problems but not others, we propose to identify computationally simple instances with an 'optimisation simplicity criterion'. Neuromorphic architectures based on optical, photonic, and electronic systems can naturally operate to optimise instances satisfying this criterion, which are therefore often chosen to illustrate the computational advantages of new Ising machines. As an example, we show that the Ising model on the Möbius ladder graph is 'easy' for Ising machines. By rewiring the Möbius ladder graph into random 3-regular graphs, we probe an intermediate computational complexity between the P and NP-hard classes with several numerical methods. Significant fractions of polynomially simple instances are further found for a wide range of small-size models, from spin glasses to maximum cut problems. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.
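
To make the 'easy instance' concrete, here is a brute-force check one can run on a small Möbius ladder: build the graph, enumerate all spin configurations, and minimise the Ising energy (uniform antiferromagnetic couplings J = 1 are an illustrative assumption; the paper's instances may be weighted differently):

```python
from itertools import product

def mobius_ladder_edges(n: int):
    """Mobius ladder on 2n vertices: a cycle plus 'rungs' joining
    each vertex i to its antipode i + n."""
    cycle = [(i, (i + 1) % (2 * n)) for i in range(2 * n)]
    rungs = [(i, i + n) for i in range(n)]
    return cycle + rungs

def ising_energy(spins, edges, J=1.0):
    # H = sum over edges (i, j) of J * s_i * s_j
    # (antiferromagnetic for J > 0)
    return sum(J * spins[i] * spins[j] for i, j in edges)

n = 4                                   # 8 spins: small enough to enumerate
edges = mobius_ladder_edges(n)
best = min(product((-1, +1), repeat=2 * n),
           key=lambda s: ising_energy(s, edges))
print("ground energy:", ising_energy(best, edges), "spins:", best)
```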


Author(s): Dennis Valbjørn Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, ...

Abstract Modern computation based on the von Neumann architecture is today a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that interchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this Roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The Roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view of the current state and the future challenges in each research area. We hope that this Roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
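
The data-transfer cost the Roadmap highlights can be illustrated with a back-of-envelope comparison of arithmetic energy versus off-chip memory-access energy (the constants below are rough, widely quoted orders of magnitude, e.g. from Horowitz's ISSCC 2014 keynote, not figures from this Roadmap):

```python
# Rough per-operation energies (order of magnitude, 45 nm-era CMOS):
E_ALU_OP   = 1e-12   # ~1 pJ for a 32-bit integer add (assumed)
E_DRAM_ACC = 2e-9    # ~2 nJ for a 32-bit off-chip DRAM access (assumed)

# A von Neumann-style workload that fetches both operands from DRAM
# for every add spends almost all of its energy moving data:
ops = 1e9
compute_energy = ops * E_ALU_OP
transfer_energy = ops * 2 * E_DRAM_ACC
print(f"compute:  {compute_energy:.3e} J")
print(f"transfer: {transfer_energy:.3e} J "
      f"({transfer_energy / compute_energy:.0f}x the compute energy)")
```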


2022
Author(s): Harikrishnan Ravichandran, Yikai Zheng, Thomas Schranghamer, Nicholas Trainor, Joan Redwing, ...

Abstract As the energy and hardware investments necessary for conventional high-precision digital computing continue to explode in the emerging era of artificial intelligence, deep learning, and big data [1-4], a change of paradigm that can trade precision for energy and resource efficiency is being sought for many computing applications. Stochastic computing (SC) is an attractive alternative since, unlike digital computers, which require many logic gates and a high transistor volume to perform basic arithmetic operations such as addition, subtraction, multiplication, and sorting, SC can implement the same operations using simple logic gates [5, 6]. While it is possible to accelerate SC using traditional silicon complementary metal oxide semiconductor (CMOS) technology [7, 8], the need for extensive hardware investment to generate stochastic bits (s-bits), the fundamental computing primitive for SC, makes it less attractive. Memristor [9-11] and spin-based devices [12-15] offer natural randomness but depend on hybrid designs involving CMOS peripherals to accelerate SC, which increases the area and energy burden. Here we overcome the limitations of existing and emerging technologies and experimentally demonstrate a standalone SC architecture embedded in memory and based on two-dimensional (2D) memtransistors. Our monolithic, non-von Neumann SC architecture consumes a minuscule amount of energy (< 1 nJ) for s-bit generation and arithmetic operations, and occupies a small hardware footprint, highlighting the benefits of SC.
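
The 'simple logic gates' claim is easiest to see for multiplication: if two independent stochastic bitstreams carry probabilities p and q, a single AND gate yields a stream with probability p*q. A minimal simulation of that primitive (a software stand-in for the hardware s-bit sources described in the paper):

```python
import random

random.seed(42)

def s_bits(p: float, n: int):
    """Generate an n-bit stochastic stream encoding probability p."""
    return [random.random() < p for _ in range(n)]

def sc_multiply(p: float, q: float, n: int = 100_000) -> float:
    # One AND gate per bit pair: P(a AND b) = p * q for
    # independent streams, so the output stream encodes the product.
    a, b = s_bits(p, n), s_bits(q, n)
    return sum(x and y for x, y in zip(a, b)) / n

print(sc_multiply(0.8, 0.5))   # ~0.4, converging as the stream lengthens
```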

