Fortran: a modern standard programming language for parallel scalable high performance technical computing

Author(s):  
Loveman


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Stergios Papadimitriou ◽  
Kirsten Schwark ◽  
Seferina Mavroudi ◽  
Kostas Theofilatos ◽  
Spiridon Likothanasis

ScalaLab and GroovyLab are both MATLAB-like environments for the Java Virtual Machine. ScalaLab is based on the Scala programming language and GroovyLab is based on the Groovy programming language. They present similar user interfaces and functionality to the user. They also share the same set of Java scientific libraries and native-code libraries. From the programmer's point of view, though, they have significant differences. This paper compares some aspects of the two environments and highlights some of the strengths and weaknesses of Scala versus Groovy for scientific computing. The discussion also examines some aspects of the dilemma of using dynamic typing versus static typing for scientific programming. The performance of the Java platform continues to improve at a fast pace. Today Java can effectively support demanding high-performance computing and scales well on multicore platforms. Thus, both systems can challenge the performance of traditional C/C++/Fortran scientific code with an easier-to-use and more productive programming environment.
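The dynamic-versus-static-typing dilemma the paper examines can be illustrated outside Scala and Groovy as well. The following Python sketch (our own illustration, not from the paper) shows how dynamic typing defers errors to run time, while optional annotations plus an external checker recover some of the guarantees static typing provides up front:

```python
# Illustration (not from the paper): dynamic typing defers errors to run
# time; a statically typed language rejects the mismatch at compile time.

def dot(u, v):
    # Dynamically typed: accepts any sequences; a length mismatch only
    # surfaces if this code path is actually executed.
    if len(u) != len(v):
        raise ValueError("length mismatch")
    return sum(x * y for x, y in zip(u, v))

# Optional type hints approximate static checking; they are enforced by
# external tools such as mypy, not by the interpreter itself.
def dot_typed(u: list, v: list) -> float:
    if len(u) != len(v):
        raise ValueError("length mismatch")
    return sum(x * y for x, y in zip(u, v))

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

The tradeoff mirrors the Scala/Groovy comparison: the dynamic version is terser and more flexible for interactive work, while the annotated version lets tooling catch shape and type errors before a long-running computation starts.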


2002 ◽  
Vol 12 (02) ◽  
pp. 193-210 ◽  
Author(s):  
CHRISTOPH A. HERRMANN ◽  
CHRISTIAN LENGAUER

Metaprogramming is a paradigm for enhancing a general-purpose programming language with features catering to a special-purpose application domain, without the need to reimplement the language. In a staged compilation, the special-purpose features are translated and optimised by a domain-specific preprocessor, which hands over to the general-purpose compiler for translation of the domain-independent part of the program. The domain we work in is high-performance parallel computing. We use metaprogramming to enhance the functional language Haskell with features for the efficient, parallel implementation of certain computational patterns, called skeletons.
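The paper's skeletons are Haskell-specific, but the underlying idea, a reusable computational pattern whose parallel implementation is hidden behind a simple interface, can be sketched in any language. A minimal Python illustration (our own naming, not the paper's API):

```python
from concurrent.futures import ThreadPoolExecutor

# A "map" skeleton: the caller supplies only the element-wise function;
# the decomposition and scheduling are encapsulated in the skeleton.
# (A thread pool is used here for portability; for CPU-bound Python work
# a process pool or a native backend would give true parallelism.)
def par_map(f, xs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))

def square(x):
    return x * x

print(par_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In the paper's staged-compilation setting, a domain-specific preprocessor would recognize such a pattern and replace it with an optimised parallel implementation before handing the rest of the program to the general-purpose compiler.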


Background/Objectives: In the field of software development, the diversity of programming languages increases dramatically along with their complexity. This leads both programmers and researchers to develop and investigate automated tools to distinguish between these programming languages. Previous efforts have addressed this task using keywords from the source code of these languages. Instead of relying on keyword classification, this work investigates the ability to detect the characteristic pattern of a programming language using NeMo (a high-performance spiking neural network simulator) and tests the ability of this toolkit to provide detailed, analyzable results. Methods/Statistical analysis: These objectives are pursued using a backpropagation neural network, via NeMo, based on a pattern recognition methodology. Findings: The results show that the NeMo pattern recognition network can identify and recognize the pattern of the Python programming language with high accuracy. They also show the ability of the NeMo toolkit to present analyzable results as a percentage of certainty. Improvements/Applications: The results indicate that the NeMo simulator provides a beneficial platform for studying and analyzing the complexity of the backpropagation neural network model.
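The pattern-recognition idea can be sketched generically (this is our own illustration, not the paper's NeMo pipeline): train a classifier by gradient descent on hypothetical source-code features, such as the relative frequency of `def` versus `;`, and report its output as a percentage of certainty. A single-layer logistic unit, the degenerate case of backpropagation, is enough to show the mechanics:

```python
import math

# Illustration only (not the paper's NeMo pipeline): a tiny logistic
# classifier trained by gradient descent on hypothetical features,
# e.g. [frequency of "def", frequency of ";"] per source file.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y                 # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Hypothetical data: Python-like files use "def" often and ";" rarely.
X = [[0.9, 0.0], [0.8, 0.1], [0.1, 0.9], [0.0, 0.8]]
y = [1, 1, 0, 0]
w, b = train(X, y)
# The sigmoid output plays the role of the "percentage of certainty".
certainty = sigmoid(w[0] * 0.85 + w[1] * 0.05 + b)
print(round(certainty, 3))
```

The paper's actual network is a multi-layer backpropagation model run on NeMo's spiking-network simulator; the sketch only conveys how a trained network yields a graded certainty rather than a hard keyword match.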


2019 ◽  
Vol 52 (4) ◽  
pp. 882-897 ◽  
Author(s):  
A. Boulle ◽  
J. Kieffer

The Python programming language, combined with the numerical computing library NumPy and the scientific computing library SciPy, has become the de facto standard for scientific computing in a variety of fields. This popularity is mainly due to the ease with which a Python program can be written and executed (easy syntax, dynamic typing, no compilation, etc.), coupled with the existence of a large number of specialized third-party libraries that aim to lift all the limitations of the raw Python language. NumPy introduces vector programming, improving execution speeds, whereas SciPy brings a wealth of highly optimized and reliable scientific functions. There are cases, however, where vector programming alone is not sufficient to reach optimal performance. This issue is addressed with dedicated compilers that aim to translate Python code into native and statically typed code with support for the multi-core architectures of modern processors. In the present article it is shown how these approaches can be efficiently used to tackle different problems, with increasing complexity, that are relevant to crystallography: the 2D Laue function, scattering from a strained 2D crystal, scattering from 3D nanocrystals and, finally, diffraction from films and multilayers. For each case, detailed implementations and explanations of the functioning of the algorithms are provided. Different Python compilers (namely NumExpr, Numba, Pythran and Cython) are used to improve performance and are benchmarked against state-of-the-art NumPy implementations. All examples are also provided as commented and didactic Python (Jupyter) notebooks that can be used as starting points for crystallographers curious to enter the Python ecosystem or wishing to accelerate their existing codes.
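The simplest of the listed problems, the Laue function, reduces along each axis to the familiar N-slit interference function sin²(Nπh)/sin²(πh), which peaks at N² for integer h. A minimal vectorized NumPy sketch (our own illustration, not the paper's benchmark code) shows the style of vector programming being benchmarked:

```python
import numpy as np

# 1D Laue (N-slit interference) function: sin^2(N*pi*h) / sin^2(pi*h).
# At integer h the expression is 0/0; the analytic limit there is N**2.
# The 2D Laue function of the paper is a product of two such factors.
def laue(h, N):
    h = np.asarray(h, dtype=float)
    num = np.sin(np.pi * N * h) ** 2
    den = np.sin(np.pi * h) ** 2
    # Vectorized branch: replace singular points with the limit N**2.
    return np.where(den > 1e-12, num / np.maximum(den, 1e-300), float(N) ** 2)

h = np.linspace(-0.5, 0.5, 501)
intensity = laue(h, N=10)
```

Compilers such as Numba or Pythran would accept essentially the same function body and translate the explicit loop version of it to native code; the paper benchmarks exactly this kind of trade-off.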


Author(s):  
Masahiro Nakao ◽  
Tetsuya Odajima ◽  
Hitoshi Murai ◽  
Akihiro Tabuchi ◽  
Norihisa Fujita ◽  
...  

Accelerated clusters, which are cluster systems equipped with accelerators, are among the most common systems in parallel computing. In order to exploit the performance of such systems, it is important to reduce communication latency between accelerator memories. In addition, there is also a need for a programming language that facilitates the maintenance of high performance on such systems. The goal of the present article is to evaluate XcalableACC (XACC), a parallel programming language, with tightly coupled accelerators/InfiniBand (TCA/IB) hybrid communication on an accelerated cluster. TCA/IB hybrid communication combines the low latency of TCA with the high bandwidth of IB. The XACC language, which is a directive-based language for accelerated clusters, enables programmers to use TCA/IB hybrid communication with ease. In order to evaluate the performance of XACC with TCA/IB hybrid communication, we implemented the lattice quantum chromodynamics (LQCD) mini-application and evaluated it on our accelerated cluster using up to 64 compute nodes. We also implemented the LQCD mini-application using a combination of CUDA and MPI (CUDA + MPI) and a combination of OpenACC and MPI (OpenACC + MPI) for comparison with XACC. Performance evaluation revealed that the performance of XACC with TCA/IB hybrid communication is 9% better than that of CUDA + MPI and 18% better than that of OpenACC + MPI. Furthermore, the performance of XACC was found to increase by a further 7% through a new extension to XACC. Productivity evaluation revealed that XACC requires far fewer changes to the serial LQCD code to implement the parallel LQCD code than CUDA + MPI and OpenACC + MPI. Moreover, since XACC can perform parallelization while maintaining the sequential code image, XACC is highly readable and shows excellent portability due to its directive-based approach.
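The core idea of TCA/IB hybrid communication is to route each transfer over whichever interconnect suits it: the low-latency path (TCA) for small messages such as stencil halo exchanges, and the high-bandwidth path (IB) for bulk transfers. A schematic Python sketch (the names and crossover threshold are hypothetical, not XACC's actual runtime policy):

```python
# Illustration only: a size-based channel-selection rule of the kind a
# hybrid-communication runtime might apply. The 32 KiB crossover point
# is a made-up value for demonstration, not a figure from the paper.
TCA_MAX_BYTES = 32 * 1024

def choose_channel(message_bytes):
    # Small messages: latency dominates -> low-latency TCA path.
    # Large messages: bandwidth dominates -> high-bandwidth IB path.
    return "TCA" if message_bytes <= TCA_MAX_BYTES else "IB"

print(choose_channel(4 * 1024))      # small halo exchange
print(choose_channel(8 * 1024**2))   # bulk field transfer
```

In XACC this choice is hidden behind directives, which is why the paper's productivity evaluation finds far fewer changes to the serial code than with explicit CUDA + MPI or OpenACC + MPI programming.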


2002 ◽  
Vol 12 (4-5) ◽  
pp. 359-374 ◽  
Author(s):  
SIMON MARLOW

Server applications, and in particular network-based server applications, place a unique combination of demands on a programming language: lightweight concurrency, high I/O throughput, and fault tolerance are all important. This paper describes a prototype web server written in Concurrent Haskell (with extensions) and presents two useful results: firstly, a conforming server could be written with minimal effort, leading to an implementation in fewer than 1500 lines of code; and secondly, the naive implementation produced reasonable performance. Furthermore, making minor modifications to a few time-critical components improved performance to a level acceptable for anything but the most heavily loaded web servers.
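Concurrent Haskell's lightweight threads have a rough analogue in Python's asyncio tasks. The sketch below (our own illustration, not Marlow's server) spawns thousands of cheap tasks, the one-lightweight-thread-per-connection pattern such a server relies on:

```python
import asyncio

# Each client connection gets its own cheap task, mirroring the
# one-lightweight-thread-per-connection design of the Haskell server.
async def handle(client_id):
    await asyncio.sleep(0)          # stand-in for request/response I/O
    return f"response-{client_id}"

async def serve(n_clients):
    tasks = [asyncio.create_task(handle(i)) for i in range(n_clients)]
    return await asyncio.gather(*tasks)

responses = asyncio.run(serve(5000))
print(len(responses))  # 5000
```

The point is the cost model: because each task is a few kilobytes of state rather than an OS thread, a server can afford one per connection, which keeps the per-request logic straightforwardly sequential.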


2018 ◽  
Vol 34 (19) ◽  
pp. 3387-3389 ◽  
Author(s):  
Brent S Pedersen ◽  
Aaron R Quinlan

Abstract
Motivation: Extracting biological insight from genomic data inevitably requires custom software. In many cases, this is accomplished with scripting languages, owing to their accessibility and brevity. Unfortunately, the ease of scripting languages typically comes at a substantial performance cost that is especially acute at the scale of modern genomics datasets.
Results: We present hts-nim, a high-performance library written in the Nim programming language that provides a simple, scripting-like syntax without sacrificing performance.
Availability and implementation: hts-nim is available at https://github.com/brentp/hts-nim and the example tools are at https://github.com/brentp/hts-nim-tools, both under the MIT license.
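To give a flavour of the "scripting-like syntax" being weighed against performance, here is the kind of terse record handling genomics scripts rely on, written as a minimal Python FASTA parser (our own example; it is not hts-nim's API, which wraps htslib formats such as BAM and VCF):

```python
# Illustration only (not hts-nim's API): terse, scripting-style sequence
# record parsing of the kind hts-nim aims to keep while staying fast.
def parse_fasta(text):
    records = {}
    name = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            # Header line: ">name description" -> keep the name only.
            name = line[1:].split()[0]
            records[name] = []
        elif name is not None:
            records[name].append(line)
    # Join the per-record sequence fragments into single strings.
    return {n: "".join(parts) for n, parts in records.items()}

example = ">chr1 test\nACGT\nTTGA\n>chr2\nGGCC\n"
print(parse_fasta(example))  # {'chr1': 'ACGTTTGA', 'chr2': 'GGCC'}
```

The performance cost the abstract mentions comes from exactly this style of per-line interpreted work; hts-nim's pitch is keeping the brevity while compiling to native code.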


2019 ◽  
Author(s):  
Hannah L. Weeks ◽  
Cole Beck ◽  
Elizabeth McNeer ◽  
Cosmin A. Bejan ◽  
Joshua C. Denny ◽  
...  

ABSTRACT
Objective: We developed medExtractR, a natural language processing system to extract medication dose and timing information from clinical notes. Our system facilitates the creation of medication-specific research datasets from electronic health records.
Materials and Methods: Written in the R programming language, medExtractR combines lexicon dictionaries and regular expression patterns to identify relevant medication information ('drug entities'). The system is designed to extract particular medications of interest, rather than all possible medications mentioned in a clinical note. MedExtractR was developed on notes from Vanderbilt University's Synthetic Derivative, using two medications (tacrolimus and lamotrigine) prescribed with varying complexity, and a third drug (allopurinol) used to test the generalizability of results. We evaluated medExtractR and compared it to three existing systems: MedEx, MedXN, and CLAMP.
Results: On 50 test notes for each development drug and 110 test notes for the additional drug, medExtractR achieved high overall performance (F-measures > 0.95). This exceeded the performance of the three existing systems across all drugs, with the exception of a few specific entity-level evaluations, including dose amount for lamotrigine and allopurinol.
Discussion: MedExtractR successfully extracted medication entities for medications of interest. High performance in entity-level extraction tasks provides a strong foundation for developing robust research datasets for pharmacological research. However, its targeted approach has a narrower scope than existing systems.
Conclusion: MedExtractR (available as an R package) achieved high performance in extracting specific medications from clinical text, leading to higher-quality research datasets for drug-related studies than some existing general-purpose medication extraction tools.
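medExtractR itself is an R package with its own rule set; as a language-agnostic illustration of the lexicon-plus-regular-expression approach it describes, the Python sketch below uses a tiny drug lexicon and a hypothetical dose/frequency pattern (none of this is medExtractR's actual grammar):

```python
import re

# Illustration of the lexicon + regex approach (hypothetical patterns;
# medExtractR's actual rules are more elaborate and written in R).
DRUG_LEXICON = ["tacrolimus", "lamotrigine", "allopurinol"]

# Capture "<drug> <dose> <unit> [frequency]" style mentions.
PATTERN = re.compile(
    r"\b(?P<drug>%s)\b\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg)"
    r"(?:\s+(?P<freq>once daily|twice daily|bid|qd))?" % "|".join(DRUG_LEXICON),
    re.IGNORECASE,
)

def extract(note):
    # One dict per drug mention, with named entity fields.
    return [m.groupdict() for m in PATTERN.finditer(note)]

note = "Continue tacrolimus 2 mg twice daily; started allopurinol 100 mg qd."
print(extract(note))
```

Restricting the lexicon to medications of interest is what gives the targeted approach its precision, and also the narrower scope noted in the Discussion: drugs outside the lexicon are simply never matched.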

