super computer
Recently Published Documents

TOTAL DOCUMENTS: 75 (five years: 14)
H-INDEX: 4 (five years: 1)

2021 ◽  
Author(s):  
Martin Schreiber

Running simulations on high-performance computers faces new challenges, e.g., due to stagnating or even decreasing per-core speed. This imposes new restrictions, and therefore challenges, on solving PDEs within a particular time frame in the strong-scaling case. Here, disruptive mathematical reformulations, which, e.g., exploit additional degrees of parallelism along the time dimension, have gained increasing interest over the last two decades.

This talk will cover various examples of our current research on (parallel-in-)time integration methods in the context of weather and climate simulations, such as rational approximation of exponential integrators, multi-level time integration with spectral deferred corrections (PFASST), as well as other methods.

These methods are realized and studied with numerics similar to the ones used by the European Centre for Medium-Range Weather Forecasts (ECMWF). Our results motivate further investigation for operational weather/climate systems in order to cope with the hardware-imposed restrictions of future supercomputer architectures.

I gratefully acknowledge contributions and more from Jed Brown, Francois Hamon, Terry S. Haut, Richard Loft, Michael L. Minion, Pedro S. Peixoto, Nathanaël Schaeffer, and Raphael Schilling.
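As a brief illustration of the rational-approximation idea mentioned above (the form below is generic for REXI-type schemes; the coefficients are not taken from the talk), the exponential of a linear operator L over a time step \tau is replaced by a sum of mutually independent linear solves:

e^{\tau L} u_0 \approx \sum_{n=1}^{N} \mathrm{Re}\!\left[ \beta_n \, (\tau L + \alpha_n I)^{-1} \right] u_0,

where the complex coefficients \alpha_n, \beta_n depend on the particular approximation. Because each of the N shifted systems can be solved independently of the others, this exposes an additional degree of parallelism beyond the spatial one, which is precisely the property exploited for time-parallel integration.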


2021 ◽  
Vol 4 (3-4) ◽  
pp. 0
Author(s):  
Dmitry Evdokimov

The rapid development of modern technologies and the emergence of large-scale world projects involving scientific, political and commercial conglomerates contribute to the comprehensive development of digitalization, including the IT industry, thereby advancing developments related to electronic computers. The article is devoted to the study of the latest developments in the field of supercomputer technologies in Russia and abroad. An overview is given of operating supercomputers occupying the top positions in the world TOP500 ranking. The main Russian successes in this direction are formulated. Possible future tasks are described that would allow Russia to strengthen its position in the international arena.


Author(s):  
Jeffrey Melby ◽  
Abigail Stehno ◽  
Thomas C. Massey ◽  
Shubhra Misra ◽  
Norberto Nadal-Caraballo ◽  
...  

Large-scale flood risk computation has undergone a metamorphosis since Hurricane Katrina. Improved characterization of risk is the result of improved computational capabilities: supercomputer capacity combined with coupled regional hydrodynamic models, improved local hydrodynamic models, improved joint probability models, inclusion of the most important uncertainties, metamodels, and increased computational capacity for stochastic simulation. Improvements in our understanding of, and ability to model, the coupled hydrodynamics of surge and waves have been well documented, as have improvements in the joint probability method with optimal sampling (JPM-OS) for synthesizing synthetic tropical cyclones (TCs) that correctly span the practical hazard probability space. However, maintaining the coupled physics and multivariate probability integrity through the entire flood risk computation while incorporating epistemic uncertainty has received relatively little attention. This paper addresses this latter topic within the context of the Sabine Pass to Galveston Bay, TX Pre-Construction, Engineering and Design, Hurricane Coastal Storm Surge and Wave Hazard Assessment.

Recorded presentation from the vICCE (YouTube link): https://youtu.be/qYFTO6l7UME
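For context, the hazard integral that JPM-OS approximates can be stated schematically (generic notation, not the paper's): the annual rate at which a surge response exceeds a level \eta_0 at a site is

\lambda(\eta > \eta_0) = \lambda_{TC} \int P[\eta(x) > \eta_0 \mid x] \, f_X(x) \, dx,

where x collects the tropical cyclone parameters (e.g., central pressure deficit, radius to maximum winds, forward speed, heading, landfall location), f_X is their joint probability density, and \lambda_{TC} is the storm recurrence rate. JPM-OS replaces the integral with an optimally chosen discrete set of synthetic storms, each simulated with the coupled surge and wave models.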


Author(s):  
Muhammad Abdullah Hanif ◽  
Faiq Khalid ◽  
Rachmad Vidya Wicaksana Putra ◽  
Mohammad Taghi Teimoori ◽  
Florian Kriebel ◽  
...  

The drive for automation and constant monitoring has led to rapid development in the field of Machine Learning (ML). The high accuracy offered by state-of-the-art ML algorithms like Deep Neural Networks (DNNs) has paved the way for these algorithms to be used even in emerging safety-critical applications, e.g., autonomous driving and smart healthcare. However, these applications require assurance about the functionality of the underlying systems/algorithms. Therefore, the robustness of these ML algorithms to different reliability and security threats has to be thoroughly studied, and mechanisms/methodologies have to be designed that increase the inherent resilience of these ML algorithms. Since traditional reliability measures like spatial and temporal redundancy are costly, they may not be feasible for DNN-based ML systems, which are already super compute- and memory-intensive. Hence, new robustness methods for ML systems are required. Towards this, in this chapter, we present our analyses illustrating the impact of different reliability and security vulnerabilities on the accuracy of DNNs. We also discuss techniques that can be employed to design ML algorithms such that they are inherently resilient to reliability and security threats. Towards the end, the chapter provides open research challenges and further research opportunities.
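One common way to study the reliability vulnerabilities discussed above is software fault injection into model parameters. The following is a minimal sketch, assuming NumPy float32 weights and a single-bit-flip soft-error model; it is an illustration of the general technique, not the chapter's actual methodology:

```python
# Sketch: inject a single-bit fault into a float32 weight tensor.
import numpy as np

def flip_random_bit(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a copy of weights with one bit flipped in one randomly
    chosen float32 element, emulating a single-event upset in memory."""
    faulty = np.ascontiguousarray(weights, dtype=np.float32).copy()
    bits = faulty.view(np.uint32).reshape(-1)   # reinterpret the raw bits
    idx = rng.integers(bits.size)               # which weight to corrupt
    bit = rng.integers(32)                      # which bit position to flip
    bits[idx] ^= np.uint32(1 << int(bit))       # single-bit XOR mask
    return faulty

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
w_faulty = flip_random_bit(w, rng)
print("max weight perturbation:", np.abs(w_faulty - w).max())
```

Sweeping over many injected faults and re-evaluating model accuracy after each flip yields the kind of sensitivity analysis such studies rely on; exponent bits of the float32 format typically cause far larger accuracy drops than mantissa bits.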


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4808
Author(s):  
Juan-Antonio Fernández-Madrigal ◽  
Angeles Navarro ◽  
Rafael Asenjo ◽  
Ana Cruz-Martín

Time synchronization among sensor devices connected through non-deterministic media is a fundamental requirement for sensor fusion and other distributed tasks that need a common time reference. In many of the time synchronization methods existing in the literature, the estimation of the relation between pairs of clocks is a core concept; moreover, in applications that do not have general connectivity among their devices but a simple pairwise topology, such as embedded systems, mobile robots or home automation, two-clock synchronization is actually the basic form of the time estimation problem. In these kinds of applications, especially critical ones, not only the quality of the estimation of the relation between two clocks is important, but also the bounds the methods provide for the estimated values, and their computational effort (since many are small systems). In this paper, we characterize, with a thorough parameterization, the possible scenarios where two-clock synchronization is to be solved, and then conduct a rigorous statistical study of both scenarios and methods. The study is based on exhaustive simulations run on a supercomputer. Our aim is to provide a sound basis for selecting the best clock synchronization algorithm depending on the application requirements and characteristics, and also to deduce which of these characteristics are most relevant, in general, when solving the problem. For our comparisons we have considered several representative methods for clock synchronization according to a novel taxonomy that we also propose in the paper, in particular a few geometrical ones that have especially desirable characteristics for the two-clock problem. We illustrate the method selection procedure with practical use cases of sensory systems where two-clock synchronization is essential.
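As a simple point of reference for the two-clock estimation problem (this is plain least squares on the usual linear clock model, not one of the geometrical methods the paper studies), the relation between a local and a remote clock is often modeled as remote = skew * local + offset and fitted from timestamp pairs:

```python
# Sketch: estimate the skew/offset relation between two clocks.
import numpy as np

def estimate_clock_relation(local_ts: np.ndarray, remote_ts: np.ndarray):
    """Fit remote = skew * local + offset in the least-squares sense."""
    A = np.column_stack([local_ts, np.ones_like(local_ts)])
    (skew, offset), *_ = np.linalg.lstsq(A, remote_ts, rcond=None)
    return skew, offset

rng = np.random.default_rng(1)
local = np.sort(rng.uniform(0.0, 10.0, 50))        # local timestamps (s)
true_skew, true_offset = 1.0 + 50e-6, 0.123        # 50 ppm drift, 123 ms offset
remote = true_skew * local + true_offset + rng.normal(0.0, 1e-4, local.size)
print(estimate_clock_relation(local, remote))
```

Unlike this least-squares baseline, the geometrical methods highlighted in the paper can additionally provide hard bounds on the estimated skew and offset, which is one of the selection criteria the study evaluates.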


2020 ◽  
Vol 220 ◽  
pp. 01082
Author(s):  
Yuri Kozhukhov ◽  
Serafima Tatchenkova ◽  
Sergey Kartashov ◽  
Vyacheslav Ivanov ◽  
Evgeniy Nikitin

This paper provides the results of a study of the spatial flow in a low-flow stage of an SVD-22 centrifugal compressor using computational fluid dynamics methods in the Ansys CFX 14.0 software package. Low-flow stages are used as the last stages of multistage centrifugal compressors. Such multistage compressors are widely used in booster compressor stations for natural gas and in chemical industries. The flow features in low-flow stages require independent research, because the established techniques for designing centrifugal compressor stages were created for medium-flow and high-flow stages and do not apply to low-flow stages. Generally, when manufacturing new centrifugal compressors, it is impossible to make control measurements of the parameters of the working process inside the flow path elements. Computational fluid dynamics methods are widely used to overcome this difficulty. However, verification and validation of CFD methods are necessary for accurate modeling of the workflow. All calculations were conducted on one of the SPbPU clusters. Parameters of one cluster node: AMD Opteron 280, 2 cores, 8 GB RAM. The calculations were conducted using 4 nodes (HP MPI Distributed Parallel startup type), fully loading each node by parallelizing processes across its cores.


2019 ◽  
Vol 8 (4) ◽  
pp. 12112-12120

People have used technology to improve themselves throughout human history. From ancient times, human beings have tried to get their work done by human slaves or inanimate machines. Each new technology has been exploited to build intelligent agents. Clockwork, hydraulics, telephone switching systems, holograms, analog computers and digital computers have all been suggested both as mechanisms for intelligent agents and as technological metaphors for intelligence. Artificial Intelligence refers to computer systems that can perform tasks normally requiring human intelligence. It is associated with computer systems exhibiting various types of intelligence: systems that understand new concepts and tasks, systems that are able to reason and draw useful conclusions about the world around us, and systems that can learn a natural language and comprehend a visual scene. Artificial Intelligence means intelligence that is demonstrated by machines: a device that perceives its environment and takes actions that increase the chances of achieving its goal. The research goal of Artificial Intelligence is to create technology that helps computers and machines perform various tasks in an intelligent manner. Artificial Intelligence analyzes the intelligent acts of computational agents. A computational agent is one whose decisions about its actions can be explained in terms of computation: its actions can be broken down into primitive operations that can be implemented in a physical device. Computation takes many forms, for example, "wetware" in humans and hardware in computers. Some of the greatest advances have occurred in the field of game playing: the supercomputer Deep Blue defeated world chess champion Garry Kasparov in May 1997. This research article explains the history, features and goals of artificial intelligence. It also explains various types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness. The article focuses on applications of artificial intelligence in many fields, such as literacy, finance, heavy industries, hospitals, news publishing, transportation, telecommunication maintenance, and telephone and online customer services.


2019 ◽  
Vol 28 (12) ◽  
pp. 1950199 ◽  
Author(s):  
Zhilei Chai ◽  
Wei Liu ◽  
Qin Wu ◽  
Qunfang He ◽  
Wenjie Chen

FPGAs (Field Programmable Gate Arrays) have the advantages of parallelism and reconfigurability and are therefore widely used in areas such as image processing, robotics and artificial intelligence. However, FPGA development currently involves too many hardware details, so it lacks extensibility across different platforms and flexibility for system-level management and scheduling. In this paper, we propose an FPGA Virtualization Mechanism (FVM), which divides physical resources into pages (virtual resources). We use the technology of Partial Reconfiguration (PR) and an intermediate-form method to improve extensibility and performance. We implement FVM on our platform VSC (Vary Super Computer System). Experimental results show that FVM can solve the problems of extensibility and flexibility with high performance.
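To make the paging idea concrete, the sketch below shows a toy page table mapping virtual resource pages to physical partially reconfigurable regions, in the spirit of OS-style virtual memory applied to FPGA fabric. All names here are hypothetical; the paper's actual FVM interfaces are not reproduced:

```python
# Sketch: a toy page table for virtualized FPGA resources.
class FpgaPageTable:
    def __init__(self, num_physical_pages: int):
        self.free = set(range(num_physical_pages))  # unused PR regions
        self.mapping = {}                           # virtual page -> physical region

    def map_page(self, virtual_page: int) -> int:
        """Bind a virtual page to any currently free physical region."""
        physical = self.free.pop()                  # raises KeyError if exhausted
        self.mapping[virtual_page] = physical
        return physical

    def unmap_page(self, virtual_page: int) -> None:
        """Release the physical region backing a virtual page."""
        self.free.add(self.mapping.pop(virtual_page))

pt = FpgaPageTable(num_physical_pages=4)
print(pt.map_page(0), pt.map_page(1))  # two virtual pages placed on fabric
```

The indirection is what buys flexibility: a scheduler can relocate or swap designs between physical PR regions without the design itself knowing where it is placed.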


2019 ◽  
pp. 51-61
Author(s):  
Владимир Никитович Маслей

The subject of the work is the process of creating Earth remote sensing spacecraft (ERS SC) within the framework of solving the problem of obtaining a high-resolution optical system by achieving high temperature dimensional stability (TDS) while retaining the strength of load-bearing structures (LBSs) during SC operation in orbit. The purpose of the work is to substantiate the block diagram of a package of SC strength measures that has been developed and is being implemented at Yuzhnoye SDO in the process of creating ERS SC whose load-bearing structures contain laminar composite materials, in particular laminar polymer composite materials (PCMs) with special patterns of reinforcing carbon fibers. Objectives of the work: to analyze the use of layered structures with a resulting coefficient of thermal linear expansion close to zero, or even negative, toward developing long-service-life ERS satellites; to consider options for compensating for the weakening of the stiffness and strength characteristics of layered structures intended for TDS, taking into account the properties of PCMs, which differ significantly from those of metals and alloys, so as to increase the reliability of spacecraft at various stages of development; and to evaluate how the high cost of ERS spacecraft is offset by the duration of the period of active work in orbit, based on modern information technologies. The results consist in the fact that the article formulates the features of a package of measures directly linked with ensuring the strength, stiffness, durability and operating life of LBSs in ERS SC. This package of measures, which Yuzhnoye SDO has developed and has been implementing in recent years, has as its first priority attaining the TDS of LBSs and covers the whole life cycle of the SC: design, manufacture (technology), testing and service. Service includes storage, transportation, prelaunch tests, injection into orbit and operation in orbit. The scientific novelty consists in the fact that the package takes into account all possible technological processes and tests on several versions of mockups and models with maximally full imitation of standard service conditions according to the scheme: onboard equipment → onboard systems → SC as a whole. Problems are solved using CALS-class information technologies, for the complete realization of which a supercomputer with a peak speed of 300 teraflops has been put into operation. This makes it possible to continuously improve ERS SC using PCMs for the TDS of LBSs.


2019 ◽  
Vol 8 (3) ◽  
pp. 3873-3877

This article examines the load-sharing performance of converters in supercomputers. A new control method is proposed for a DC-DC switch-controlled capacitor (SCC) LLC converter. The switching frequency is used to regulate the output voltage, giving a better frequency variation range and peak gain range than conventional converters. To attain load sharing, a half-wave switch-controlled capacitor is used to control the resonant frequency of each LLC stage. The simulation results are compared with experimental results, and a 600 W prototype is developed to prove the feasibility of the approach.
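The control handle used here can be seen from the standard LLC series-resonance relation (generic notation, not the paper's):

f_r = \frac{1}{2\pi \sqrt{L_r C_{eq}}},

where L_r is the resonant inductance and C_{eq} is the equivalent capacitance of the fixed resonant capacitor combined with the switch-controlled capacitor. Varying the SCC conduction angle varies C_{eq}, and hence the resonant frequency of each stage; trimming each stage's f_r in this way is what equalizes the current shared among the paralleled LLC stages.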

