Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states

2015, Vol. 1 (6), pp. e1500031
Author(s): Fabio Lorenzo Traversa, Chiara Ramella, Fabrizio Bonani, Massimiliano Di Ventra

Memcomputing is a novel non-Turing paradigm of computation that uses interacting memory cells (memprocessors for short) to store and process information on the same physical platform. It was recently proven mathematically that memcomputing machines have the same computational power as nondeterministic Turing machines. Therefore, they can solve NP-complete problems in polynomial time and, using the appropriate architecture, with resources that grow only polynomially with the input size. This computational power stems from properties inspired by the brain and shared by any universal memcomputing machine, in particular intrinsic parallelism and information overhead, namely, the capability of compressing information in the collective state of the memprocessor network. We present an experimental demonstration of an actual memcomputing architecture that solves the NP-complete version of the subset sum problem in only one step and is composed of a number of memprocessors that scales linearly with the size of the problem. We have fabricated this architecture using standard microelectronic technology so that it can be easily realized in any laboratory setting. Although the particular machine presented here is eventually limited by noise, and will thus require error-correcting codes to scale to an arbitrary number of memprocessors, it represents the first proof of concept of a machine capable of working with the collective state of interacting memory cells, unlike the present-day single-state machines built using the von Neumann architecture.
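
To make the benchmark problem concrete, here is a minimal conventional sketch of the subset sum decision problem. The function, the instance values, and the target below are hypothetical, and this dynamic-programming baseline is of course not the single-step memcomputing approach described above; it simply shows what the machine is asked to decide.

```python
# Conventional baseline for the subset sum decision problem: given a set of
# integers and a target, decide whether some subset sums exactly to the target.
# Illustration only; unrelated to the analog, single-step architecture above.
from typing import Iterable


def subset_sum(values: Iterable[int], target: int) -> bool:
    """Return True if some subset of `values` sums exactly to `target`."""
    reachable = {0}  # sums attainable using the items seen so far
    for v in values:
        reachable |= {s + v for s in reachable}
    return target in reachable


if __name__ == "__main__":
    # Hypothetical instance: does any subset of {3, 34, 4, 12, 5, 2} sum to 9?
    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```

On a classical machine the set of reachable sums can grow exponentially with the number of items in the worst case; the claim of the memcomputing architecture is that the collective state of the memprocessor network encodes all subset sums at once, so the decision is read out in a single step.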

Author(s): Zsolt Gazdag, Károly Hajagos, Szabolcs Iván

It is known that polarizationless P systems with active membranes can solve $\mathrm{PSPACE}$-complete problems in polynomial time without using in-communication rules but using the classical (also called strong) non-elementary membrane division rules. In this paper, we show that this also holds when in-communication rules are allowed but strong non-elementary division rules are replaced with weak non-elementary division rules, a type of rule that extends elementary membrane division to non-elementary membranes. Since it is known that, without in-communication rules, these P systems can solve in polynomial time only problems in $\mathrm{P}^{\mathrm{NP}}$, our result shows that these rules mark a borderline between $\mathrm{P}^{\mathrm{NP}}$ and $\mathrm{PSPACE}$ with respect to the computational power of these P systems.


Author(s): Giacomo Pedretti

Machine learning requires processing large amounts of irregular data and extracting meaningful information. The von Neumann architecture is challenged by such computation: the physical separation between memory and processing unit limits the speed at which large volumes of data can be analyzed, and the majority of time and energy is spent moving information from memory to the processor and back. In-memory computing executes operations directly within the memory, without any information traveling. In particular, thanks to emerging memory technologies such as memristors, it is possible to program arbitrary real numbers directly into a single memory device in an analog fashion and, at the array level, execute algebraic operations in-memory in one step. This chapter presents the latest results in accelerating inverse operations, such as the solution of linear systems, in-memory and in a single computational cycle.
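
A minimal numerical sketch of the underlying algebra, assuming the usual crossbar abstraction in which programmed conductances form a matrix G and Ohm's and Kirchhoff's laws yield the current vector i = G v: the forward multiply corresponds to a single read of the array, while the inverse operation, solving G x = i, is what the feedback circuits discussed in the chapter compute in one cycle. The matrix and vector values below are made up, and NumPy's digital solver stands in for the analog circuit.

```python
# Sketch of the algebra an analog memristor crossbar implements.
# Conductances G (siemens) are programmed into the array; applying voltages v
# and reading the column currents performs i = G @ v in one step (a "read").
# The inverse operation, solving G x = i for x, is emulated here with a digital
# solver; in the chapter it is performed by a feedback circuit around the
# crossbar in a single computational cycle.
import numpy as np

# Hypothetical 3x3 conductance matrix (values chosen only for illustration).
G = np.array([[1.0, 0.2, 0.1],
              [0.2, 0.8, 0.3],
              [0.1, 0.3, 0.9]])

v = np.array([0.5, -0.2, 0.1])   # applied voltages (V)
i = G @ v                        # forward operation: matrix-vector multiply

x = np.linalg.solve(G, i)        # inverse operation: solve the linear system
print(np.allclose(x, v))         # True: recovering v from i solves G x = i
```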


Author(s): Maryam Gholami Doborjeh, Zohreh Gholami Doborjeh, Akshay Raj Gollahalli, Kaushalya Kumarasinghe, Vivienne Breen, ...

Author(s): Giuseppe Primiero

This chapter starts with an analysis of the engineering foundation of computing which, proceeding in parallel with the mathematical foundation, led to the design and creation of physical computing machines. It illustrates the historical evolution of the first generation of computing machines and their technical foundation, known as the von Neumann architecture. From the conceptual point of view, the chapter clarifies the relation between the universal model of computation and the construction of an all-purpose machine.


Author(s): Marko Samer, Stefan Szeider

Parameterized complexity is a new theoretical framework that considers, in addition to the overall input size, the effects on computational complexity of a secondary measurement, the parameter. This two-dimensional viewpoint allows a fine-grained complexity analysis that takes structural properties of problem instances into account. The central notion is “fixed-parameter tractability”, which refers to solvability in polynomial time for each fixed value of the parameter, such that the order of the polynomial time bound is independent of the parameter. This chapter presents the main concepts and recent results on the parameterized complexity of the satisfiability problem and outlines fundamental algorithmic ideas that arise in this context. Among the parameters considered are the size of backdoor sets with respect to various tractable base classes and the treewidth of graph representations of satisfiability instances.
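
To make fixed-parameter tractability and backdoor sets concrete, here is a minimal Python sketch, not taken from the chapter: the formula, the helper names (simplify, horn_sat, backdoor_sat), and the choice of Horn formulas as the tractable base class are illustrative assumptions. Given a strong backdoor set of size k into the Horn class, satisfiability is decided by trying all 2^k assignments to the backdoor variables and solving each residual Horn formula in polynomial time, so the exponential blow-up depends only on the parameter k and not on the overall input size.

```python
# Illustrative fixed-parameter algorithm: decide SAT by branching on a (given)
# strong backdoor set into the class of Horn formulas.  The exponential cost
# 2^k is confined to the backdoor size k; each residual Horn formula is decided
# in polynomial time.  Clauses use DIMACS-style integers (3 = x3, -3 = NOT x3).
from itertools import product


def simplify(clauses, assignment):
    """Apply a partial assignment {var: bool}: drop satisfied clauses, delete
    falsified literals.  Return None if some clause becomes empty (falsified)."""
    out = []
    for clause in clauses:
        new, satisfied = [], False
        for lit in clause:
            if abs(lit) in assignment:
                if (lit > 0) == assignment[abs(lit)]:
                    satisfied = True
                    break
            else:
                new.append(lit)
        if satisfied:
            continue
        if not new:
            return None
        out.append(new)
    return out


def horn_sat(clauses):
    """Polynomial-time SAT test for Horn formulas (<= 1 positive literal per
    clause): start with all variables False and propagate forced-True variables."""
    forced = set()
    while True:
        changed = False
        for clause in clauses:
            neg_vars = {-lit for lit in clause if lit < 0}
            pos = [lit for lit in clause if lit > 0]
            if neg_vars <= forced and not (set(pos) & forced):
                if not pos:
                    return False      # every literal falsified: contradiction
                forced.add(pos[0])    # the lone positive literal must be True
                changed = True
        if not changed:
            return True               # remaining variables stay False


def backdoor_sat(clauses, backdoor):
    """Decide SAT in O(2^k * poly) time, given a strong Horn backdoor of size k."""
    for bits in product([False, True], repeat=len(backdoor)):
        reduced = simplify(clauses, dict(zip(backdoor, bits)))
        if reduced is None:
            continue                  # this branch already falsifies a clause
        # Strong backdoor property: every residual formula lies in the base class.
        assert all(sum(1 for lit in c if lit > 0) <= 1 for c in reduced)
        if horn_sat(reduced):
            return True
    return False


if __name__ == "__main__":
    # Hypothetical formula with strong Horn backdoor {x1}:
    # (x1 v x2) & (~x1 v x3) & (~x2 v ~x3) & (x1 v ~x3)
    cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    print(backdoor_sat(cnf, backdoor=[1]))  # True, e.g. x1=False, x2=True, x3=False
```

Finding a small backdoor set in the first place is itself a computational problem; the sketch assumes one is already given.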


Science, 2011, Vol. 334 (6052), pp. 61-65
Author(s): M. Mariantoni, H. Wang, T. Yamamoto, M. Neeley, R. C. Bialczak, ...

2011, Vol. 13 (8), pp. 1228-1244
Author(s): Robert W. Gehl

In Web 2.0, there is a social dichotomy at work based upon and reflecting the underlying von Neumann architecture of computers. In the hegemonic Web 2.0 business model, users are encouraged to process digital ephemera by sharing content, making connections, ranking cultural artifacts, and producing digital content, a mode of computing I call ‘affective processing.’ The Web 2.0 business model imagines users to be a potential superprocessor. In contrast, the memory possibilities of computers are typically commanded by Web 2.0 site owners. They seek to surveil every user action, store the resulting data, protect that data via intellectual property, and mine it for profit. Users are less likely to wield control over these archives. These archives are composed of the products of affective processing; they are archives of affect, sites of decontextualized data that can be rearranged by the site owners to construct knowledge about Web 2.0 users.

