High performance computing and communications technology solutions for future smart distribution network operation

Author(s):  
L. De Alvaro Garcia ◽  
K. Diwold ◽  
G. Gershinsky ◽  
G.A. Taylor ◽  
A. Yunta Huete ◽  
...  
2020 ◽  
Vol 14 ◽  

Constant change in computing and communications technology has created a need for on-demand network access to shared computing resources that reduces cost and time. This model is known as cloud computing: it delivers computing services to users on a pay-as-you-go basis by combining several distributed and high-performance computing concepts. The cloud makes information and resources reachable from anywhere and eliminates setup and installation steps, so the user and the hardware can reside in different places. This benefits individual users and small companies that cannot afford the hardware, storage, or resources available to large companies. Many studies on cloud computing have been dedicated to the performance and efficiency of task scheduling. Scheduling is a broad concept and one of the most important issues in the field; it concerns mapping tasks to appropriate resources efficiently and effectively using one or more strategies. This paper reviews and classifies the most recent scheduling algorithms in cloud computing and gives examples of each.
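
As an illustration of the task-to-resource mapping problem the abstract refers to (not taken from the paper itself), the following is a minimal Python sketch of one possible strategy: a greedy heuristic that processes the longest tasks first and places each on the virtual machine with the earliest estimated completion time. The task lengths, VM speeds, and function names are hypothetical.

```python
# Minimal sketch of a greedy "earliest completion time" scheduling heuristic,
# illustrating the task-to-resource mapping problem discussed in the abstract.
# Task lengths and VM speeds are hypothetical values for demonstration only.

def schedule(tasks, vm_speeds):
    """Assign each task (length in million instructions) to the VM that would
    finish it earliest, given each VM's speed (MIPS) and its current load."""
    finish_time = [0.0] * len(vm_speeds)   # current completion time per VM
    assignment = {}                        # task index -> VM index
    # Process the longest tasks first (a common greedy refinement).
    for task_id, length in sorted(enumerate(tasks), key=lambda t: -t[1]):
        # Estimated completion time of this task on each VM.
        ect = [finish_time[vm] + length / vm_speeds[vm]
               for vm in range(len(vm_speeds))]
        best_vm = min(range(len(vm_speeds)), key=lambda vm: ect[vm])
        assignment[task_id] = best_vm
        finish_time[best_vm] = ect[best_vm]
    return assignment, max(finish_time)    # mapping and overall makespan


if __name__ == "__main__":
    tasks = [400, 150, 900, 300, 650]      # task lengths (million instructions)
    vm_speeds = [1000, 500, 750]           # VM capacities (MIPS)
    mapping, makespan = schedule(tasks, vm_speeds)
    print(mapping, round(makespan, 2))
```

Real cloud schedulers weigh further objectives (cost, energy, deadlines, fairness), which is where the strategies surveyed in the paper differ from this single-objective sketch.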


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

