Implementing the Quantum von Neumann Architecture with Superconducting Circuits

Science, 2011, Vol 334 (6052), pp. 61-65
Author(s):
M. Mariantoni
H. Wang
T. Yamamoto
M. Neeley
R. C. Bialczak
...

Author(s):  
Giuseppe Primiero

This chapter starts with the analysis of the engineering foundation of computing which, proceeding in parallel with the mathematical foundation, led to the design and creation of physical computing machines. It illustrates the historical evolution of the first generation of computing machines and their technical foundation, known as the von Neumann architecture. From the conceptual point of view, the chapter clarifies the relation between the universal model of computation and the construction of an all-purpose machine.
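
To make the stored-program idea concrete, here is a minimal sketch, not taken from the chapter: a toy machine in Python (an illustrative choice) whose instructions and data share one memory, which is the essential link between the universal model of computation and an all-purpose physical machine. The instruction set and the helper `run` are hypothetical.

```python
# Minimal sketch (not from the chapter): a toy stored-program machine in which
# instructions and data occupy the same memory, illustrating the von Neumann
# idea that one all-purpose machine can run any program placed in its memory.

def run(memory, pc=0):
    """Fetch-decode-execute loop over a shared instruction/data memory."""
    acc = 0  # single accumulator register
    while True:
        op, arg = memory[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# The program occupies cells 0-3; its data occupies cells 4-6 of the same memory.
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])  # prints 5
```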


2011, Vol 13 (8), pp. 1228-1244
Author(s):  
Robert W. Gehl

In Web 2.0, there is a social dichotomy at work based upon, and reflecting, the underlying von Neumann architecture of computers. In the hegemonic Web 2.0 business model, users are encouraged to process digital ephemera by sharing content, making connections, ranking cultural artifacts, and producing digital content, a mode of computing I call 'affective processing.' The Web 2.0 business model imagines its users as a potential superprocessor. In contrast, the memory capabilities of computers are typically commanded by Web 2.0 site owners, who seek to surveil every user action, store the resulting data, protect that data via intellectual property, and mine it for profit. Users are less likely to wield control over these archives, which are composed of the products of affective processing; they are archives of affect, sites of decontextualized data that the site owners can rearrange to construct knowledge about Web 2.0 users.


Author(s):  
Michael Leventhal
Eric Lemoine

The XML chip is now more than six years old. The diffusion of this technology has been very limited, due, on the one hand, to the long period of evolutionary development needed to build hardware capable of accelerating a significant portion of the XML computing workload and, on the other hand, to the fact that the chip was invented by the start-up Tarari in a commercial context which required, for business reasons, minimal public disclosure of its design features. It remains, nevertheless, a significant landmark that the XML chip has been sold and continuously improved for the last six years. From the perspective of general computing history, the XML chip is an uncommon example of a successful workload-specific symbolic computing device. With respect to the specific interests of the XML community, the XML chip is a remarkable validation of one of its core founding principles: normalizing on a data format, whatever its imperfections, would eventually enable developers to create tools to process it efficiently. This paper was prepared for the International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth, a day of discussion predominantly among software developers working in the area of efficient XML processing. The Symposium is being held as a workshop within Balisage, a conference of specialists in markup theory. Given the interests of the audience, this paper does not delve into the design features and principles of the chip itself; rather, it presents a dialectic on the motivation for developing an XML chip in view of related and potentially competing developments: scaling as commonly characterized by Moore's Law, parallelization through increasing the number of computing cores on general-purpose processors (the multicore von Neumann architecture), and optimization of software.
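
For contrast with a workload-specific chip, the sketch below illustrates the multicore software alternative named in the abstract: spreading XML parsing across general-purpose cores using Python's standard library. It is an assumption-laden illustration only; the per-document workload `count_elements` and the corpus are invented here, and nothing about the Tarari chip's actual design is reflected.

```python
# Hedged sketch: the "multicore von Neumann" alternative, i.e. parallelizing a
# software XML-parsing workload across general-purpose cores rather than
# offloading it to dedicated hardware.
import multiprocessing as mp
import xml.etree.ElementTree as ET

def count_elements(doc: str) -> int:
    """Software-only parse of one document; this is the per-core workload."""
    return sum(1 for _ in ET.fromstring(doc).iter())

if __name__ == "__main__":
    docs = ["<r><a/><b/></r>"] * 1000          # stand-in corpus of small documents
    with mp.Pool() as pool:                    # one worker per core by default
        totals = pool.map(count_elements, docs)
    print(sum(totals))                         # 3 elements per document -> 3000
```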


1993, Vol 2 (3), pp. 23-35
Author(s):  
Allan R. Larrabee

The first digital computers consisted of a single processor acting on a single stream of data. In this so-called "von Neumann" architecture, computation speed is limited mainly by the time required to transfer data between the processor and memory. This limiting factor has been referred to as the "von Neumann bottleneck". The concern that the miniaturization of silicon-based integrated circuits will soon reach theoretical limits of size and gate times has led to increased interest in parallel architectures and has also spurred research into alternatives to silicon-based implementations of processors. Meanwhile, sequential processors continue to be produced with higher clock rates, more memory locally available to each processor, and higher rates of data transfer to and from memories, networks, and remote storage. The efficiency of compilers and operating systems is also improving over time. Although such hardware characteristics set the ceiling on performance, a large improvement in the speed of scientific computations can often be achieved by using more efficient algorithms, particularly those that support parallel computation. This work discusses experiences with two tools for large-grain (or "macro task") parallelism.
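
As a concrete illustration of the "macro task" idea, the sketch below splits one computation into a few coarse-grained tasks using Python's multiprocessing. It stands in for, and does not reproduce, the two tools the paper actually discusses; the task function `macro_task` and its workload are assumptions made for illustration.

```python
# Illustrative sketch only: large-grain ("macro task") parallelism, where each
# task is big enough that compute time dominates the cost of shipping data
# to and from the worker processes.
import multiprocessing as mp

def macro_task(bounds):
    """One coarse-grained task: integrate f(x) = x*x over [lo, hi) by summation."""
    lo, hi = bounds
    step = 1e-6
    x, total = lo, 0.0
    while x < hi:
        total += x * x * step
        x += step
    return total

if __name__ == "__main__":
    # Four macro tasks covering [0, 1); each runs independently on its own core.
    chunks = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
    with mp.Pool(processes=4) as pool:
        print(sum(pool.map(macro_task, chunks)))   # approximately 1/3
```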


2008, Vol 6 (2)
Author(s):
I. I. Arikpo
F. U. Ogban
I. E. Eteng
