TIDeFlow: A Parallel Execution Model for High Performance Computing Programs

Author(s):  
Daniel Orozco

Author(s):  
Nikolaos Triantafyllis ◽  
Ioannis E. Venetis ◽  
Ioannis Fountoulakis ◽  
Erion-Vasilis Pikoulis ◽  
Efthimios Sokos ◽  
...  

Automatic moment tensor (MT) determination is essential for real-time seismological applications. In this article, Gisola, a highly evolved software for MT determination, oriented toward high-performance computing, is presented. The program employs enhanced algorithms for waveform data selection via quality metrics, such as signal-to-noise ratio, waveform clipping, data and metadata inconsistency, long-period disturbances, and station evaluation based on power spectral density measurements, in parallel execution. The inversion code, derived from ISOLated Asperities (an extensively used manual MT retrieval utility), has been improved by exploiting the performance efficiency of multiprocessing on the CPU and GPU. Gisola offers a 4D spatiotemporal adjustable MT grid search and interconnection to multiple data resources via the International Federation of Digital Seismograph Networks Web Services (FDSNWS), the SeedLink protocol, and the SeisComP Data Structure standard. The new software publishes its results in various formats such as QuakeML and SC3ML, and includes a website suite for MT solution review, an e-mail notification system, and an integrated FDSNWS-event service for MT solution distribution. Moreover, it supports user-defined scripts, such as dispatching the MT solution to SeisComP. The operator has full control of all calculation aspects through an extensive and adjustable configuration. The quality of the automatic MT solutions was evaluated against 531 manual MT solutions in Greece between 2012 and 2021 and proved to be highly efficient.
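The waveform screening by quality metrics in parallel that the abstract describes can be illustrated with a minimal sketch: compute a signal-to-noise ratio per station and filter stations with a process pool. The function names, window lengths, and threshold below are illustrative assumptions and do not reflect Gisola's actual interface.

```python
# Minimal sketch of parallel waveform screening by signal-to-noise ratio.
# The 10 s noise/signal windows and the SNR threshold are assumptions.
from multiprocessing import Pool
import numpy as np

def snr(trace, fs=100.0, pick=60.0, window=10.0):
    """Ratio of RMS amplitude after vs. before the arrival pick (seconds)."""
    i = int(pick * fs)
    w = int(window * fs)
    noise = trace[i - w:i]
    signal = trace[i:i + w]
    return np.sqrt(np.mean(signal ** 2)) / np.sqrt(np.mean(noise ** 2))

def keep_station(item, threshold=3.0):
    name, trace = item
    return name if snr(trace) >= threshold else None

if __name__ == "__main__":
    # Synthetic example: noise everywhere, one station with a clear "event".
    rng = np.random.default_rng(0)
    traces = {f"ST{k:02d}": rng.normal(0, 1, 12000) for k in range(4)}
    traces["ST00"][6000:7000] += 8.0  # strong signal after the pick
    with Pool() as pool:
        selected = [s for s in pool.map(keep_station, traces.items()) if s]
    print("stations passing SNR screening:", selected)
```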



Author(s):  
Azra Nasreen ◽  
Shobha G

Video retrieval is an important technology that underpins video search engines, allowing users to browse and retrieve videos of interest from huge databases. Although many existing techniques search and retrieve videos based on spatial and temporal features, they perform poorly, ranking irrelevant videos highly and leading to poor user satisfaction. In this paper, an efficient multi-featured method for matching and extraction is proposed in a parallel paradigm to retrieve videos accurately and quickly from the collection. The proposed system is tested on datasets containing various categories of videos of varying length, such as traffic, sports, and nature. Experimental results show that around 80% accuracy is achieved in searching and retrieving videos. Through the use of high-performance computing, the parallel execution locates and retrieves videos of interest 5 times faster than the sequential execution.
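To make the parallel multi-feature matching idea concrete, the sketch below scores each video in a collection against a query by combining several feature similarities and scans the collection with a process pool. The feature names, weights, and cosine-similarity scoring are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of parallel multi-feature video matching.
# Feature names, weights, and similarity measure are assumptions.
from multiprocessing import Pool
import numpy as np

WEIGHTS = {"color": 0.4, "texture": 0.3, "motion": 0.3}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def score(args):
    video_id, features, query = args
    s = sum(WEIGHTS[k] * cosine(features[k], query[k]) for k in WEIGHTS)
    return video_id, s

def retrieve(database, query, top_k=5):
    jobs = [(vid, feats, query) for vid, feats in database.items()]
    with Pool() as pool:
        ranked = sorted(pool.map(score, jobs), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    make = lambda: {k: rng.random(64) for k in WEIGHTS}
    database = {f"video_{i}": make() for i in range(100)}
    query = database["video_7"]       # use a known video as the query
    print(retrieve(database, query))  # video_7 should rank first
```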



2011 ◽  
Vol 328-330 ◽  
pp. 2337-2342 ◽  
Author(s):  
Goldi Misra ◽  
Sandeep Agrawal ◽  
Nisha Kurkure ◽  
Shweta Das ◽  
Kapil Mathur ◽  
...  

The growth of serial and High Performance Computing (HPC) applications presents the challenge of porting scientific and engineering applications. A number of key issues and trends in High Performance Computing will impact the delivery of breakthrough science and engineering in the future. ONAMA was developed to cope with the increasing demand for HPC. ONAMA, which means a new beginning, is a desktop-based graphical user interface developed using C and GTK. It aims to satisfy the research needs of academic institutions. ONAMA is a comprehensive package comprising applications covering many engineering branches. It provides tools that have a close affinity with practical simulation, thus making the learning process for students more applied. Most of the software tools and libraries are open source and supported on Linux, thereby promoting the use of open source software. ONAMA also provides tools for researchers to solve their day-to-day as well as long-term problems accurately in less time. The execution model of ONAMA serves to execute engineering and scientific applications either sequentially or in parallel on Linux computing clusters.
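ONAMA itself is a C/GTK desktop application, but the dispatch idea behind its execution model (run an application serially, or in parallel on a Linux cluster) can be sketched briefly. In the hypothetical Python sketch below, the solver binary, its arguments, and the process count are assumptions, not part of ONAMA.

```python
# Minimal sketch of serial-or-parallel dispatch of a scientific application.
# The solver name and process count are hypothetical.
import shutil
import subprocess

def launch(executable, args=(), processes=1):
    """Run an application serially, or via mpirun when processes > 1."""
    if processes > 1:
        if shutil.which("mpirun") is None:
            raise RuntimeError("mpirun not found on this system")
        cmd = ["mpirun", "-np", str(processes), executable, *args]
    else:
        cmd = [executable, *args]
    return subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical solver binary; replace with an application installed on the cluster.
    launch("./heat_solver", args=["input.dat"], processes=8)
```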



2013 ◽  
Vol 378 ◽  
pp. 534-538
Author(s):  
Fan Zhang ◽  
Xing Guo Luo ◽  
Xing Ming Zhang

This paper presents a design based on a Reconfigurable Multi-Processor Architecture (RCMPA). Through multi-processor parallel execution and a flexible configuration system, the system can adapt to a variety of applications. Each computing component in the system consists of a general-purpose microprocessor, reconfigurable FPGA logic, and SRAMs. The general-purpose microprocessor handles task control, scheduling, and some computing functions. The FPGA offers sufficient flexibility, extensibility, and high-speed interconnect features. The SRAMs provide various storage structures with high read/write speed and high-density storage cells.



2020 ◽  
Vol 2020 ◽  
pp. 1-19 ◽  
Author(s):  
Paweł Czarnul ◽  
Jerzy Proficz ◽  
Krzysztof Drypczewski

This paper provides a review of contemporary methodologies and APIs for parallel programming, with representative technologies selected in terms of target system type (shared memory, distributed, and hybrid), communication patterns (one-sided and two-sided), and programming abstraction level. We analyze representatives in terms of many aspects including programming model, languages, supported platforms, license, optimization goals, ease of programming, debugging, deployment, portability, level of parallelism, constructs enabling parallelism and synchronization, features introduced in recent versions indicating trends, support for hybridity in parallel execution, and disadvantages. Such detailed analysis has led us to the identification of trends in high-performance computing and of the challenges to be addressed in the near future. It can help to shape future versions of programming standards, select technologies best matching programmers’ needs, and avoid potential difficulties while using high-performance computing systems.
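As a concrete illustration of the one-sided and two-sided communication patterns the review distinguishes, the sketch below uses mpi4py as one representative MPI binding; the choice of mpi4py and the example itself are ours, not taken from the article.

```python
# Two-sided vs. one-sided communication, illustrated with mpi4py.
# Run with: mpiexec -n 2 python patterns.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Two-sided: sender and receiver both participate explicitly.
if rank == 0:
    comm.send({"step": 1, "value": 3.14}, dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print("two-sided received:", msg)

# One-sided: rank 0 writes directly into a memory window exposed by rank 1.
buf = np.zeros(1, dtype="i")
win = MPI.Win.Create(buf, comm=comm)
win.Fence()
if rank == 0:
    win.Put(np.array([42], dtype="i"), 1)  # target rank 1
win.Fence()
if rank == 1:
    print("one-sided window now holds:", buf[0])
win.Free()
```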





MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


