Advances in Systems Analysis, Software Engineering, and High Performance Computing - Handbook of Research on Methodologies and Applications of Supercomputing
Latest Publications


TOTAL DOCUMENTS: 21 (five years: 21)
H-INDEX: 0 (five years: 0)

Published by IGI Global
ISBN: 9781799871569, 9781799871583

Author(s):  
Laura Dipietro ◽  
Seth Elkin-Frankston ◽  
Ciro Ramos-Estebanez ◽  
Timothy Wagner

The history of neuroscience has tracked the evolution of science and technology. Today, neuroscience's trajectory depends heavily on computational systems and the availability of high-performance computing (HPC), which are becoming indispensable for building simulations of the brain, coping with the high computational demands of analyzing brain imaging data sets, and developing treatments for neurological diseases. This chapter briefly reviews the current and potential future use of supercomputers in neuroscience.


Author(s):  
Namik Delilovic

Searching for content in today's digital libraries is still primitive; most websites provide a search field where users can enter information such as book title, author name, or terms they expect to find in the book. Some platforms provide advanced search options, which allow users to narrow the results by parameters such as year, author name, or publisher. Currently, when users find a book that might interest them, the search process ends; only a full-text search or the references at the end of the book may provide additional pointers. In this chapter, the author gives an example of how a user could continuously receive recommendations for additional content even while reading, using current machine learning and artificial intelligence techniques.
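
The abstract does not spell out the chapter's techniques; as a minimal sketch of one plausible building block, the snippet below (Python with scikit-learn; the `recommend` helper and its corpus are hypothetical, not the author's code) ranks library documents against the passage currently being read using TF-IDF cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(current_passage: str, library: list[str], k: int = 3) -> list[int]:
    """Return indices of the k library documents most similar to the
    passage the user is currently reading (content-based filtering)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the library plus the passage so both share one vocabulary.
    matrix = vectorizer.fit_transform(library + [current_passage])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return scores.argsort()[::-1][:k].tolist()

# Re-invoked as the reader scrolls, so recommendations track the text in view.
```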


Author(s):  
Veljko Milutinović ◽  
Miloš Kotlar ◽  
Ivan Ratković ◽  
Nenad Korolija ◽  
Miljan Djordjevic ◽  
...  

This chapter starts from the assumption that near-future 100B-transistor SuperComputers-on-a-Chip will include N big multi-core processors, 1000N small many-core processors, a TPU-like fixed-structure systolic-array accelerator for the most frequently used machine learning algorithms needed in bandwidth-bound applications, and a flexible-structure reprogrammable accelerator for less frequently used machine learning algorithms needed in latency-critical applications. Future SuperComputers-on-a-Chip should also include effective interfaces to specific external accelerators based on quantum, optical, molecular, and biological paradigms, but those issues are outside the scope of this chapter.
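
As a back-of-the-envelope illustration (not the chapter's design) of what such a fixed-structure systolic-array accelerator computes, the sketch below simulates an output-stationary systolic matrix multiply in plain Python, with one multiply-accumulate cell per output element:

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.
    Cell (i, j) holds the running sum for C[i][j]; at time step t it
    consumes A[i][t] arriving from the left and B[t][j] from above,
    as a TPU-like MAC grid would (wavefront timing skew omitted)."""
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for t in range(k):                         # one systolic "beat" per step
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]   # each cell's multiply-accumulate
    return C

assert systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

The appeal of the fixed structure is that operands stream through the grid with no instruction fetch or address computation per MAC, which is why it suits the bandwidth-bound workloads named above.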


Author(s):  
Akira Tsuda ◽  
Frank S. Henry

In this review, the authors outline the evidence that emerged some 30 years ago that the mechanisms thought responsible for the deposition of submicron particles in the respiratory region of the lung were inadequate to explain the measured rate of deposition. They then discuss the background and theory of what is believed to be the missing mechanism, namely chaotic mixing. Specifically, they outline how the recirculating flow in the alveoli has a range of oscillation frequencies, some of which resonate with the breathing frequency. If the system is perturbed, the resonating frequencies break into chaos, and the authors discuss a number of practical ways in which the system can be disturbed. The perturbation of fluid-particle trajectories results in Hamiltonian chaos, which produces qualitative changes in those trajectories. The review ends with a discussion of the effects of chaotic mixing on the deposition of inhaled particles in the respiratory region of the lung.
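
The alveolar-flow equations are beyond the scope of this listing; as a generic illustration of the underlying idea, that perturbing a Hamiltonian system drives nearby trajectories into chaos, the sketch below iterates Chirikov's standard map (a textbook periodically perturbed Hamiltonian system, not the authors' flow model) at weak and strong perturbation strengths:

```python
import math

def standard_map(theta, p, K, steps):
    """Iterate Chirikov's standard map, a canonical model of a
    periodically perturbed Hamiltonian system."""
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
    return theta, p

for K in (0.1, 2.5):                             # weak vs. strong perturbation
    a = standard_map(1.0, 0.5, K, 100)
    b = standard_map(1.0 + 1e-9, 0.5, K, 100)    # initially nearby tracer
    print(f"K={K}: tracer separation after 100 steps = {abs(a[0] - b[0]):.2e}")
# Weak K keeps the two tracers together; strong K drives them apart,
# the exponential sensitivity that makes chaotic mixing so effective.
```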


Author(s):  
Severin Staudinger

In this chapter, a heuristic forest-fire model based on cellular automata is presented and, for efficiency, implemented with the DataFlow programming approach. Real-world satellite images are analyzed and used as the basis for simulations. The model accounts for natural influences such as wind strength and direction, burning behavior, and different levels of flammability. The DataFlow implementation on an FPGA-based Maxeler MAX3 Vectis card was compared to a sequential C version executed on an Intel Xeon E5-2650 2.0 GHz CPU. The author obtained speedups of up to 70 for a strong-wind scenario and 46 for a random-wind setting, while also reducing energy consumption.
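
The FPGA kernel itself is not reproduced in the abstract; a minimal CPU sketch of the cellular-automaton update, with an assumed wind bias and illustrative probabilities (not the chapter's calibrated values), might look like this:

```python
import random

EMPTY, TREE, BURNING, BURNT = range(4)

def step(grid, wind=(0, 1), flammability=0.6, wind_boost=0.3):
    """One cellular-automaton update: each burning cell may ignite its
    four neighbours, with higher probability downwind, then burns out."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != BURNING:
                continue
            new[r][c] = BURNT
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == TREE:
                    # Ignition is more likely in the wind direction.
                    p = flammability + (wind_boost if (dr, dc) == wind else 0.0)
                    if random.random() < p:
                        new[nr][nc] = BURNING
    return new
```

Because the update is local and identical across the whole grid, it maps naturally onto a streaming dataflow pipeline, which is what yields the speedups reported above.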


Author(s):  
Jurij Mihelič ◽  
Uroš Čibej ◽  
Luka Fürst

The subgraph isomorphism problem asks whether a given graph is a subgraph of another graph. It is one of the most general NP-complete problems, since many other problems (e.g., Hamiltonian cycle, clique, independent set) reduce naturally to it. Furthermore, there is a variety of practical applications in which graph pattern matching is the core problem, so efficient algorithms and solvers for subgraph isomorphism translate into good solutions for many different practical problems. In this chapter, the authors present and experimentally explore various algorithmic refinements and code optimizations for improving the performance of subgraph isomorphism solvers. In particular, they focus on algorithms based on the backtracking approach and constraint satisfaction programming. They draw on experience with many state-of-the-art algorithms as well as on their own engagement in this field. Lessons learned from engineering such a solver can be utilized in many other fields where backtracking is a prominent approach to solving a particular problem.
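
For orientation, here is a deliberately naive backtracking baseline (pure Python, non-induced matching); none of the chapter's refinements, such as candidate filtering or careful variable ordering, are included:

```python
def subgraph_isomorphism(pattern, target):
    """Find an injective mapping of pattern vertices onto target vertices
    that preserves pattern edges (non-induced subgraph isomorphism).
    Graphs are adjacency dicts: vertex -> set of neighbours."""
    p_vertices = list(pattern)

    def backtrack(mapping):
        if len(mapping) == len(p_vertices):
            return dict(mapping)
        u = p_vertices[len(mapping)]              # next pattern vertex to map
        for v in target:
            if v in mapping.values():
                continue                          # enforce injectivity
            # Consistency: every already-mapped neighbour of u must map
            # to a neighbour of v in the target graph.
            if all(mapping[w] in target[v] for w in pattern[u] if w in mapping):
                mapping[u] = v
                result = backtrack(mapping)
                if result is not None:
                    return result
                del mapping[u]                    # undo and try next candidate
        return None

    return backtrack({})

# Example: find a triangle inside a slightly larger graph.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
host = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(subgraph_isomorphism(triangle, host))       # e.g. {0: 'a', 1: 'b', 2: 'c'}
```

State-of-the-art solvers improve on this skeleton chiefly through stronger pruning (degree and label constraints), smarter vertex ordering, and the constraint-propagation machinery the abstract alludes to.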


Author(s):  
Mehmet Dalkilic

This chapter is an abridged "vision statement" on what supercomputing will be in the future. The main thrust of the argument is that most of the problem lies in the movement of data, not the computation. There needs to be a worldwide effort to put in place a means to move data efficiently and effectively. Further, there likely needs to be a fundamental shift in our model of computation: from computation that is stationary while data moves to it, to computation that moves to the data, or even operates on the data while it is in motion.


Author(s):  
Victor Potapenko ◽  
Malek Adjouadi ◽  
Naphtali Rishe

Modeling time-series data with asynchronous, multi-cardinal, and uneven patterns presents unique challenges that can impede the convergence of supervised machine learning algorithms or significantly increase resource requirements, rendering modeling efforts infeasible in resource-constrained environments. The authors propose two approaches to multi-class classification of asynchronous time-series data. In the first, they create a baseline by reducing the time-series data with a statistical approach and training a model based on gradient-boosted trees. In the second, they implement a fully convolutional network (FCN) and train it on the asynchronous data without any special feature engineering. Evaluation shows that the FCN matches the gradient-boosted trees on mean F1-score without computationally complex time-series feature engineering. This work has been applied to predicting customer attrition at a large retail automotive finance company.
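
The chapter's exact architecture and hyperparameters are not given in the abstract; the sketch below (PyTorch; the layer sizes follow the common FCN-for-time-series baseline and are assumptions, not values taken from the chapter) shows the general shape of such a fully convolutional classifier:

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    """Fully convolutional network for multi-class time-series
    classification: stacked conv blocks followed by global average
    pooling, so no fixed sequence length is baked into the model."""
    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=8, padding=4),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2),
            nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1),
            nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.features(x)
        h = h.mean(dim=-1)                 # global average pooling over time
        return self.classifier(h)          # logits, one per class

# Example: batch of 32 series, 4 channels, 200 time steps, 5 classes.
logits = FCN(in_channels=4, n_classes=5)(torch.randn(32, 4, 200))
```

Global average pooling makes the network indifferent to sequence length, which is convenient for uneven, asynchronous series once they are padded into batches.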


Author(s):  
Ivan Ratković ◽  
Miljan Djordjevic

Modern supercomputer designs fall into two distinct categories: the data-flow and control-flow design paradigms. Control flow is the go-to design philosophy in the von Neumann machines that have dominated the market thus far; new types of problems demand a different mindset, which is where data-flow machines come into play. This chapter introduces the control-flow concept along with its state-of-the-art examples. The introduction defines the terms used in the succeeding sections, gives brief explanations of them, and offers a short overview of supercomputing as a whole. It is followed by explanations of the data-flow and control-flow design philosophies, with real-world examples of both (multi-core, many-core, vector processors, and GPUs) in the section on the control-flow paradigm. The third section covers real-world processing-unit examples, with a rundown of the best standard and low-power representatives of the commercial and supercomputing markets.


Author(s):  
Marija Ilic ◽  
Rupamathi Jaddivada ◽  
Assefaw Gebremedhin

Large-scale computing, including machine learning (ML) and AI, offers great promise for enabling the sustainability and resiliency of electric energy systems. At present, however, there is no standardized framework for systematically modeling and simulating system response over time to different continuous- and discrete-time events and/or changes in equipment status. As a result, the effects of candidate technologies on the quality and cost of electric energy services are generally poorly understood. In this chapter, the authors discuss a unified, physically intuitive, multi-layered modeling of system components and their mutual dynamic interactions. The fundamental concept underlying this modeling is the notion of interaction variables, whose definition directly lends itself to capturing the modular structure needed to manage complexity. As a direct result, the same modeling approach defines an information-exchange structure between different system layers and hence can be used to establish the structure for the design of a dedicated computational architecture, including AI methods.

