14. Operating systems and basic software for high-performance parallel architecture

1986 ◽  
pp. 157-178
Author(s):  
J. C. Browne
MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have led scientists and the general public to worry about a crisis, or a lack of leadership, in the field. That concern is understandable given the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s (Thinking Machines and Kendall Square Research) were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same period, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten to run on the next-generation, highly parallel architectures. Scientists not yet involved in high-performance computing are understandably hesitant to commit their time and energy to such an apparently unstable enterprise.

Beneath the commercial chaos of the last several years, however, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


1991 ◽  
Author(s):  
Eric A. Brewer ◽  
Chrysanthos N. Dellarocas ◽  
Adrian Colbrook ◽  
William E. Weihl

1994 ◽  
Vol 6 (2) ◽  
pp. 131-136
Author(s):  
Yoshifumi Sasaki ◽  
Michitaka Kameyama

For intelligent robots, a vision system is usually required to perform three-dimensional (3-D) position estimation as well as object recognition at high speed. In this paper, we propose an algorithm for 3-D object recognition and position estimation suited to implementation on a VLSI processor. The algorithm is based on model matching between an input image and models stored in memory. Because the computation time is enormous, the development of a high-performance VLSI processor is essential. A highly parallel architecture is introduced in the VLSI processor to reduce latency. As a result of this highly parallel computing, the computation is 10,000 times faster than on a 28.5-MIPS workstation.
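
The abstract does not detail the matching criterion, but the following is a minimal sketch of the model-matching idea it names, assuming normalized cross-correlation against stored 2-D templates keyed by object and pose (the object names, pose angles, and template sizes below are illustrative, not from the paper). The VLSI processor evaluates candidate models in parallel; this reference model evaluates them sequentially.

```python
import numpy as np

def match_score(image, template):
    # Normalized correlation between image and template; 1.0 is a perfect match.
    a = (image - image.mean()).ravel()
    b = (template - template.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognize(image, models):
    # models: dict mapping (object_id, pose) -> stored template array.
    # The VLSI architecture would score all candidates concurrently;
    # here they are scored one after another.
    return max(models, key=lambda key: match_score(image, models[key]))

rng = np.random.default_rng(0)
models = {("cube", 0): rng.random((16, 16)),
          ("cube", 45): rng.random((16, 16)),
          ("sphere", 0): rng.random((16, 16))}
query = models[("cube", 45)] + 0.05 * rng.random((16, 16))  # noisy input view
print(recognize(query, models))  # -> ('cube', 45)
```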


2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
W. Mansour ◽  
R. Ayoubi ◽  
H. Ziade ◽  
R. Velazco ◽  
W. EL Falou

The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents an implementation of a parallel Hopfield Neural Network (HNN) architecture on an SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most other implementations require O(N) multiplications and O(N) additions.
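
For readers unfamiliar with the HNN recall the paper parallelizes, here is a minimal software sketch assuming the standard Hebbian storage rule and synchronous sign updates. It is a reference model only; the paper's contribution is the FPGA architecture that restructures this update, not the algorithm itself.

```python
import numpy as np

def train(patterns):
    # Hebbian outer-product rule; patterns are rows of +/-1 values of length N.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=20):
    # Synchronous updates until the state stops changing (a fixed point).
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
w = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]     # corrupt one bit
print(recall(w, noisy))  # converges back to patterns[0]
```

Each recall step is a matrix-vector product followed by a sign threshold, which is why a naive implementation costs O(N) multiplications and additions per neuron; the paper's architecture reduces that to O(1) multiplications and O(log N) additions.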


Author(s):  
A. De Gloria ◽  
P. Faraboschi ◽  
M. Olivieri ◽  
E. Guidetti

2019 ◽  
Vol 18 (4) ◽  
pp. 31-42 ◽  
Author(s):  
Carlos Arango ◽  
Rémy Dernat ◽  
John Sanabria

Virtualization technologies have evolved along with computational environments. Virtualization offered features needed at the time, such as isolation, accountability, resource allocation, and fair resource sharing. Novel processor technologies bring to commodity computers the ability to emulate diverse environments in which a wide range of computational scenarios can be run. Along with this processor evolution, developers have implemented virtualization mechanisms with better performance than earlier virtualized environments. Recently, operating-system-based virtualization technologies have attracted broad attention because of their significant performance improvements. In this paper, the features of three container-based operating-system virtualization tools (LXC, Docker, and Singularity) are presented. LXC, Docker, Singularity, and bare metal are put under test through a customized single-node HPL benchmark and an MPI-based application on a multi-node testbed. Disk I/O performance, memory (RAM) performance, network bandwidth, and GPU performance are also tested for the container technologies versus bare metal. Preliminary results and conclusions are presented and discussed.
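
The paper's benchmarking harness is not published in the abstract, but the comparison it describes can be sketched as follows, assuming a locally built HPL binary (xhpl is the conventional executable name) and a hypothetical Docker image called hpl-image. Real benchmarks would average many runs and repeat the exercise for LXC, Singularity, and the I/O, RAM, network, and GPU tests.

```python
import os
import subprocess
import time

def time_cmd(cmd):
    # Wall-clock time of a single run of the given command.
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - t0

bare = time_cmd(["./xhpl"])  # bare-metal run
containerized = time_cmd([   # same binary inside a container
    "docker", "run", "--rm",
    "-v", f"{os.getcwd()}:/work", "-w", "/work",
    "hpl-image", "./xhpl",
])
print(f"bare metal: {bare:.1f}s  docker: {containerized:.1f}s  "
      f"overhead: {100 * (containerized / bare - 1):.1f}%")
```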

