Computing Gamma Calculus on Computer Cluster

Author(s):  
Hong Lin ◽  
Jeremy Kemp ◽  
Padraic Gilbert

Gamma Calculus is an inherently parallel, high-level programming model in which simple program "molecules" interact to build a complex system with a minimum of coding. Gamma-modeled programs were written on top of IBM's TSpaces middleware, which is Java-based and uses a tuple-space model for communication similar to that of Gamma. A parser was written in C++ to translate the Gamma syntax. This was implemented on UHD's grid cluster (grid.uhd.edu), and, in an effort to increase performance and scalability, existing Gamma programs are being ported to Nvidia's CUDA architecture. General-purpose GPU computing is well suited to running Gamma programs, as GPUs excel at applying the same operation to a large data set, potentially offering a large speedup.
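
As a concrete illustration of the Gamma model (an editor's sketch in Python, not the TSpaces/C++ toolchain described above), the snippet below simulates Gamma's chemical-reaction metaphor: a multiset of "molecules" is repeatedly rewritten by a reaction rule until no pair of elements satisfies the reaction condition, here computing the maximum of a multiset. The function name gamma_reduce is illustrative only.

```python
import random

def gamma_reduce(multiset, reaction, condition):
    """Repeatedly pick pairs of 'molecules' and let them react until
    no pair satisfies the reaction condition (a stable solution)."""
    pool = list(multiset)
    changed = True
    while changed:
        changed = False
        random.shuffle(pool)  # reactions are unordered, like molecules in a solution
        for i in range(len(pool)):
            for j in range(len(pool)):
                if i != j and condition(pool[i], pool[j]):
                    a, b = pool[i], pool[j]
                    # remove the reacting pair, then add the reaction products
                    pool = [x for k, x in enumerate(pool) if k not in (i, j)]
                    pool.extend(reaction(a, b))
                    changed = True
                    break
            if changed:
                break
    return pool

# The classic Gamma "maximum" program:
# two molecules x, y with x <= y react and leave only y behind.
values = [7, 3, 9, 1, 4]
result = gamma_reduce(values,
                      reaction=lambda x, y: [y],
                      condition=lambda x, y: x <= y)
print(result)  # -> [9]
```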


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, with an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive volumes of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
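
As a minimal, hedged illustration of the pattern the survey describes (a clean high-level API delegating numerics to low-level compiled libraries), the snippet below uses scikit-learn, one of the widely used libraries such a survey covers; the example itself is not taken from the paper.

```python
# A clean high-level fit/predict interface; the heavy lifting happens in
# compiled NumPy/SciPy code (and, in GPU-accelerated counterparts such as
# RAPIDS cuML, in CUDA kernels behind a near-identical API).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```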


2017 ◽  
Vol 73 (6) ◽  
pp. 478-487 ◽  
Author(s):  
Daniel Castaño-Díez

Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.
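
Dynamo's own interface is MATLAB-based and is not reproduced here; the sketch below merely illustrates, in Python with NumPy standing in for the precompiled C++ kernels, the architectural pattern the abstract describes: readable high-level logic on top, with the numerically intensive step (an FFT-based cross-correlation, as used in translation alignment) delegated to compiled routines.

```python
# Illustrative only: the high-level layer stays readable and scriptable,
# while the numerically intensive inner step runs in precompiled code
# (NumPy/FFT here; precompiled C++ functions in Dynamo itself).
import numpy as np

def cross_correlate(volume_a, volume_b):
    """Numerically intensive core step, executed by compiled FFT routines."""
    fa = np.fft.fftn(volume_a)
    fb = np.fft.fftn(volume_b)
    return np.real(np.fft.ifftn(fa * np.conj(fb)))

def best_shift(reference, particle):
    """High-level logic: find the translation that best aligns a particle."""
    cc = cross_correlate(particle, reference)
    return np.unravel_index(np.argmax(cc), cc.shape)

ref = np.random.rand(32, 32, 32)
part = np.roll(ref, shift=(3, 0, 0), axis=(0, 1, 2))  # shifted copy of the reference
print("estimated shift:", best_shift(ref, part))      # -> (3, 0, 0)
```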


Author(s):  
Hannah S. Walsh ◽  
Andy Dong ◽  
Irem Y. Tumer ◽  
Guillaume Brat

When designing engineered systems, the potential for unintended consequences of design policies exists despite best intentions. The effects of risk factors for unintended consequences are often known only in hindsight. However, since historical knowledge is generally associated with a single event, it is difficult to uncover general trends in the formation and types of unintended consequences. In this research, archetypes of unintended consequences are learned from historical data. This research contributes toward the understanding of archetypes of unintended consequences by using machine learning over a large data set of lessons learned from adverse events at NASA. Sixty-six archetypes are identified by grouping lessons that share similar sets of risk factors, such as complexity and human-machine interaction. To validate the learned archetypes, system dynamics representations of the archetypes are compared to known high-level archetypes of unintended consequences. The main contribution of the paper is a set of archetypes that apply to many engineered systems, together with a pattern of leading indicators that opens a new path to managing unintended consequences and mitigating the magnitude of potentially adverse outcomes.
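
As a rough, hypothetical illustration of grouping lessons learned by shared risk factors (not the paper's actual pipeline), the sketch below clusters invented lesson records, encoded as binary risk-factor vectors, with k-means from scikit-learn.

```python
# Illustrative sketch only: cluster "lessons learned" by shared risk factors.
# The risk-factor names and records are invented; the paper's actual analysis
# of the NASA lessons-learned corpus is more involved.
import numpy as np
from sklearn.cluster import KMeans

risk_factors = ["complexity", "human_machine_interaction",
                "schedule_pressure", "requirements_change"]

# Each row: one lesson, encoded as presence/absence of each risk factor.
lessons = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(lessons)
for cluster_id in range(2):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"archetype {cluster_id}: lessons {members.tolist()}")
```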


Author(s):  
Nikitas Papangelopoulos ◽  
Dimitrios Vlachakis ◽  
Arianna Filntisi ◽  
Paraskevas Fakourelis ◽  
Louis Papageorgiou ◽  
...  

The exponential growth of available biological data in recent years, coupled with their increasing complexity, has made their analysis a computationally challenging process. Traditional central processing units (CPUs) are reaching their limit in processing power and are not designed primarily for multithreaded applications. Graphics processing units (GPUs), on the other hand, are affordable, scalable computing powerhouses that, thanks to the ever-increasing demand for higher-quality graphics, have yet to reach their limit. Typically, high-end CPUs have 8-16 cores, whereas GPUs can have more than 2,500 cores. GPUs are also, by design, highly parallel, multicore and multithreaded, capable of handling thousands of threads doing the same calculation on different subsets of a large data set. This ability is what makes them perfectly suited to biological analysis tasks. Lately this potential has been realized by many bioinformatics researchers, and a huge variety of tools and algorithms have been ported to GPUs or designed from the ground up to maximize the usage of available cores. Here, we present a comprehensive review of available bioinformatics tools, ranging from sequence and image analysis to protein structure prediction and systems biology, that use the NVIDIA Compute Unified Device Architecture (CUDA) framework for general-purpose computing on graphics processing units (GPGPU).
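
The data-parallel execution model described above can be sketched in a few lines with Numba's CUDA bindings; the example below is an editor's assumption (it requires an NVIDIA GPU and the numba package, and is not drawn from the review). It launches one thread per nucleotide to flag G/C bases, i.e., many threads applying the same operation to different elements of a large array.

```python
import numpy as np
from numba import cuda

G, C = ord('G'), ord('C')  # captured as compile-time constants by the kernel

@cuda.jit
def gc_flag(bases, flags):
    i = cuda.grid(1)              # unique global index of this thread
    if i < bases.size:
        if bases[i] == G or bases[i] == C:
            flags[i] = 1
        else:
            flags[i] = 0

sequence = np.frombuffer(b"ATGCGGCTA" * 1000, dtype=np.uint8)
flags = np.zeros(sequence.size, dtype=np.uint8)

threads_per_block = 256
blocks = (sequence.size + threads_per_block - 1) // threads_per_block
gc_flag[blocks, threads_per_block](sequence, flags)   # one thread per base
print("GC content:", flags.sum() / flags.size)
```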


2020 ◽  
pp. 0887302X2093119 ◽  
Author(s):  
Rachel Rose Getman ◽  
Denise Nicole Green ◽  
Kavita Bala ◽  
Utkarsh Mall ◽  
Nehal Rawat ◽  
...  

With the proliferation of digital photographs and the increasing digitization of historical imagery, fashion studies scholars must consider new methods for interpreting large data sets. Computational methods to analyze visual forms of big data have been underway in the field of computer science through computer vision, where computers are trained to “read” images through a process called machine learning. In this study, fashion historians and computer scientists collaborated to explore the practical potential of this emergent method by examining a trend related to one particular fashion item—the baseball cap—across two big data sets—the Vogue Runway database (2000–2018) and the Matzen et al. Streetstyle-27K data set (2013–2016). We illustrate one implementation of high-level concept recognition to map a fashion trend. Tracking trend frequency helps visualize larger patterns and cultural shifts while creating sociohistorical records of aesthetics, which benefits fashion scholars and industry alike.
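
A toy sketch of the trend-frequency idea follows; the per-image concept predictions are invented for illustration and are not drawn from the Vogue Runway or Streetstyle-27K data.

```python
# Given per-image concept predictions with a year attached, compute how often
# a concept of interest ("baseball cap") appears per year.
from collections import defaultdict

predictions = [
    {"year": 2014, "concepts": ["baseball cap", "t-shirt"]},
    {"year": 2014, "concepts": ["blazer"]},
    {"year": 2015, "concepts": ["baseball cap"]},
    {"year": 2015, "concepts": ["baseball cap", "sneakers"]},
    {"year": 2015, "concepts": ["dress"]},
]

totals, hits = defaultdict(int), defaultdict(int)
for image in predictions:
    totals[image["year"]] += 1
    hits[image["year"]] += "baseball cap" in image["concepts"]

for year in sorted(totals):
    print(year, f"{hits[year] / totals[year]:.0%} of images show a baseball cap")
```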


2018 ◽  
Vol 2018 ◽  
pp. 1-11
Author(s):  
Haoduo Yang ◽  
Huayou Su ◽  
Qiang Lan ◽  
Mei Wen ◽  
Chunyuan Zhang

The growing use of graphs in many fields has sparked broad interest in developing high-level graph analytics programs. Existing GPU implementations either have limited performance or compromise on productivity. HPGraph, our high-performance bulk-synchronous graph analytics framework for the GPU, provides an abstraction that maps vertex programs to generalized sparse matrix operations executed on the GPU backend. HPGraph strikes a balance between performance and productivity by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that lets users implement various graph algorithms with relatively little effort. We evaluate the performance of HPGraph on four graph primitives (BFS, SSSP, PageRank, and TC). Our experiments show that HPGraph matches or even exceeds the performance of high-performance GPU graph libraries such as MapGraph, nvGraph, and Gunrock. HPGraph also runs significantly faster than advanced CPU graph libraries.
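
The "vertex programs as generalized sparse matrix operations" abstraction can be illustrated with a CPU-side sketch in Python using SciPy (not HPGraph's actual API): one BFS level expansion is a sparse matrix-vector product over the graph's adjacency matrix, which a GPU backend would execute with high-performance primitives.

```python
# One BFS level expansion as a sparse matrix-vector product (CPU sketch).
import numpy as np
import scipy.sparse as sp

# Adjacency matrix of a small directed graph: edge u -> v stored as A[v, u] = 1.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows = [v for u, v in edges]
cols = [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

frontier = np.zeros(n); frontier[0] = 1          # start BFS from vertex 0
visited = frontier.copy()
level = {0: 0}

depth = 0
while frontier.any():
    depth += 1
    frontier = A.dot(frontier)                   # expand one BFS level
    frontier = np.where((frontier > 0) & (visited == 0), 1.0, 0.0)
    for v in np.nonzero(frontier)[0]:
        level[v] = depth
    visited += frontier

print(level)   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```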

