Some considerations on quantum computing at sub-atomic scales and its impact in the future of Moore's law

2020 ◽  
Vol 20 (1&2) ◽  
pp. 1-13
Author(s):  
Nicolas F. Lori ◽  
José Neves ◽  
Alex H. Blin ◽  
Victor Alves

The contemporary development of quantum computers has opened new possibilities for computational improvements, but the limits of Moore's law validity are starting to show. We analyze here the possibility that miniaturization will continue to be the source of Moore's law validity in the near future, and our conclusion is that miniaturization is no longer a reliable answer for the future development of computer science; instead, we suggest that lateralization is the correct approach. By lateralization, we mean the use of biology as the correct format for the implementation of ubiquitous computerized systems, a format that might in many circumstances eschew miniaturization as an overly expensive and useless advantage, whereas in other cases miniaturization might play a key role. Thus, the future of computer science lies not in a miniaturization that goes from the atom scale (its present application scale) towards the nucleus scale, but rather in developing more integrated circuits at the micrometer to nanometer scale, so as to better mimic and interact with biological systems. We analyze some "almost sci-fi" approaches to the development of better computer systems near the Bekenstein bound limit, and unsurprisingly they fail to have any realistic feasibility. Then, we use the difference between the classical and quantum versions of the Hammersley-Clifford theorem to explain why biological systems eschewed quantum computation to represent the world and chose classical computation instead. Finally, we analyze examples of recent work which indicate future possibilities of integration between computers and biological systems. As a corollary of that choice by biological systems, we propose that the predicted lateralization-driven evolution in computer science will be based not on quantum computers but on classical computers.
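
As a rough illustration of the Bekenstein bound mentioned in this abstract, the sketch below computes the maximum number of bits any physical system of a given mass and radius could store; the 1 kg mass and 6 cm radius are illustrative assumptions, not figures taken from the paper.

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2), with E = m*c^2.
# The mass and radius below are illustrative assumptions, not values from the paper.
hbar = 1.054571817e-34   # reduced Planck constant [J*s]
c = 2.99792458e8         # speed of light [m/s]
m = 1.0                  # mass of the hypothetical device [kg]
R = 0.06                 # radius of a sphere enclosing it [m]

E = m * c**2
I_max_bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound: ~{I_max_bits:.2e} bits")   # on the order of 10^42 bits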

Author(s):  
David Segal

Chapter 3 highlights the critical role materials have played in the development of digital computers. It traces developments from the cat’s whisker to valves and on through relays and transistors. Accounts are given of transistors and of the manufacture of integrated circuits (silicon chips) by photolithography. Potential future computing techniques, namely quantum computing and the DNA computer, are covered. The history of computability and Moore’s Law are discussed.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Prasanna Date ◽  
Davis Arthur ◽  
Lauren Pusey-Nazzaro

Training machine learning models on classical computers is usually a time- and compute-intensive process. With Moore’s law nearing its inevitable end and an ever-increasing demand for large-scale data analysis using machine learning, we must leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently. Adiabatic quantum computers can approximately solve NP-hard problems, such as quadratic unconstrained binary optimization (QUBO), faster than classical computers. Since many machine learning problems are also NP-hard, we believe adiabatic quantum computers might be instrumental in training machine learning models efficiently in the post-Moore’s-law era. To be solved on adiabatic quantum computers, however, problems must first be formulated as QUBO problems, which is itself challenging. In this paper, we formulate the training problems of three machine learning models, namely linear regression, support vector machine (SVM) and balanced k-means clustering, as QUBO problems, making them amenable to training on adiabatic quantum computers. We also analyze the computational complexities of our formulations and compare them to the corresponding state-of-the-art classical approaches. We show that the time and space complexities of our formulations are better than (in the case of SVM and balanced k-means clustering) or equivalent to (in the case of linear regression) those of their classical counterparts.
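
To make the QUBO connection concrete, here is a minimal sketch of how a linear-regression training problem can be cast as a QUBO by encoding each real-valued weight with a small fixed-point binary expansion. The precision values and the brute-force solver (standing in for an adiabatic quantum annealer) are illustrative assumptions, not the paper's exact formulation.

```python
import itertools
import numpy as np

# Toy data: y is generated by weights [1.0, 0.5] that we hope to recover.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
w_true = np.array([1.0, 0.5])
y = X @ w_true

# Fixed-point binary encoding of each weight: w_j = sum_k p_k * b_{j,k}.
# The precision values below are an illustrative choice.
precisions = np.array([1.0, 0.5, 0.25, -0.25])       # K values per weight
d, K = X.shape[1], len(precisions)
P = np.kron(np.eye(d), precisions.reshape(1, -1))    # maps binary vector b to weights w

# ||X P b - y||^2 = b^T (P^T X^T X P) b - 2 (P^T X^T y)^T b + const
A = P.T @ X.T @ X @ P                                # quadratic coefficients
lin = -2.0 * P.T @ X.T @ y                           # linear coefficients
Q = A + np.diag(lin)                                 # QUBO matrix: minimize b^T Q b (b_i^2 = b_i)

# Exhaustive search stands in for an adiabatic quantum annealer here.
best_b, best_val = None, np.inf
for bits in itertools.product([0, 1], repeat=d * K):
    b = np.array(bits)
    val = b @ Q @ b
    if val < best_val:
        best_b, best_val = b, val

print("recovered weights:", P @ best_b)              # approximately w_true
```

On an actual adiabatic machine, the matrix Q would be submitted to the annealer instead of being minimized by exhaustive search.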


2010 ◽  
Vol 143-144 ◽  
pp. 67-71 ◽  
Author(s):  
Dong Ping Li ◽  
Zhi Ming Qu

The networking approach to the World Wide Web is defined not only by the exploration of architecture, but also by the confirmed need for interrupts. Given the current status of authenticated archetypes, steganographers dubiously desire the analysis of scatter/gather I/O. The focus in this position paper is not on whether Moore's Law can be made concurrent, distributed, and pervasive, but rather on proposing an analysis of 32-bit architectures (Grange). It is concluded that, using probabilistic and interactive information and based on relational modality, the machine system and kernels are verified, which will be widely used in the future.


Author(s):  
Jan van Schoot ◽  
Kars Troost ◽  
Frank Bornebroek ◽  
Rob van Ballegoij ◽  
Sjoerd Lok ◽  
...  

Physics Today ◽  
2000 ◽  
Vol 53 (10) ◽  
pp. 106-108 ◽  
Author(s):  
Igor Fodor ◽  
Stan Williams

2010 ◽  
Vol 98 (2) ◽  
pp. 253-266 ◽  
Author(s):  
Ronald G. Dreslinski ◽  
Michael Wieckowski ◽  
David Blaauw ◽  
Dennis Sylvester ◽  
Trevor Mudge

Author(s):  
Robert-H. Munnig Schmidt

The developments in lithographic tools for the production of an integrated circuit (IC) are ruled by ‘Moore’s Law’: the density of components on an IC doubles about every two years. The corresponding size reduction of the smallest detail in an IC entails several technological breakthroughs. The wafer scanner, the exposure system that defines those details, is the determining factor in these developments. This review deals with those aspects of the positioning systems inside these wafer scanners that enable the extension of Moore’s Law into the future. The design of these systems is increasingly difficult because accuracy levels in the sub-nanometre range must be coupled with motion velocities of several metres per second. In addition to the use of feedback control for the reduction of errors, high-precision model-based feed-forward control is required, together with an almost ideally reproducible motion-system behaviour and a strict limitation of random disturbing events. The full mastering of this behaviour includes even material drift on an atomic scale and is decisive for the future success of these machines.
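
As a toy illustration of the combined feedback plus model-based feed-forward scheme described above, the sketch below simulates a single-axis stage modelled as a point mass: the feed-forward term supplies the force predicted by a (slightly mismatched) model, while a PD feedback loop suppresses the residual tracking error. All numerical values are assumptions chosen for illustration; real wafer-stage controllers are vastly more sophisticated.

```python
import numpy as np

# Single-axis stage modelled as a point mass (illustrative numbers only).
m_model = 10.0          # mass assumed by the feed-forward model [kg]
m_real = 10.2           # actual stage mass: small model mismatch
kp, kd = 4.0e5, 2.0e3   # PD feedback gains (assumed values)
dt, T = 1e-4, 0.05      # time step and simulated duration [s]

steps = int(T / dt)
t = np.arange(steps) * dt
a_ref = np.where(t < T / 2, 40.0, -40.0)   # bang-bang acceleration profile
v_ref = np.cumsum(a_ref) * dt              # reference velocity
x_ref = np.cumsum(v_ref) * dt              # reference position

x, v = 0.0, 0.0
errors = []
for k in range(steps):
    e = x_ref[k] - x
    e_dot = v_ref[k] - v
    f_ff = m_model * a_ref[k]              # model-based feed-forward force
    f_fb = kp * e + kd * e_dot             # feedback corrects the residual error
    a = (f_ff + f_fb) / m_real
    v += a * dt
    x += v * dt
    errors.append(e)

print(f"peak tracking error: {np.max(np.abs(errors)):.2e} m")
```

With a perfect model the feed-forward term alone would track the reference exactly; the feedback loop only has to absorb the few-percent model mismatch and any unmodelled disturbances, which is the division of labour the review describes.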

