Modern Fortran Explained

Author(s):  
Michael Metcalf ◽  
John Reid ◽  
Malcolm Cohen

Fortran marches on, remaining one of the principal programming languages used in high-performance scientific, numerical, and engineering computing. A series of significant revisions to the standard versions of the language has progressively enhanced its capabilities, and the latest standard, Fortran 2018, includes many additions and improvements. This second edition of Modern Fortran Explained expands on the first. Given the release of updated versions of Fortran compilers, the separate descriptions of Fortran 2003 and Fortran 2008 have been incorporated into the main text, which thereby becomes a unified description of the full Fortran 2008 version of the language. The result is much cleaner, since many deficiencies and irregularities of the earlier language versions have been resolved. It includes object orientation and parallel processing with coarrays. Four completely new chapters describe the additional features of Fortran 2018, with its enhancements to coarrays for parallel programming, interoperability with C, IEEE arithmetic, and various other improvements. Written by leading experts in the field, two of whom have actively contributed to Fortran 2018, this is a complete and authoritative description of Fortran in its latest form. It is intended for new and existing users of the language, and for all those involved in scientific and numerical computing. It is suitable as a textbook for teaching and, with its index, as a handy reference for practitioners.

Author(s):  
Инна Николаевна Заризенко ◽  
Артём Евгеньевич Перепелицын

This article analyzes the most effective integrated development environments from leading programmable logic device (PLD) manufacturers. Heterogeneous computing and the applicability of a general approach to the description of hardware accelerator designs are considered. An analytical review of the use of the OpenCL language in the construction of high-performance FPGA-based solutions is performed, and the features of using OpenCL for heterogeneous computing on FPGA-based accelerators are discussed. The experience of a unified description of projects for solutions based on CPUs, GPUs, signal processors, and FPGAs is analyzed, and the advantages of such a description for tasks that perform parallel data processing are shown. Differences in productivity and labor costs when developing FPGA systems with parallel data processing in hardware description languages versus OpenCL are shown. The results of comparing commercially available solutions for building services with FPGA accelerators are presented, and the advantages of the Xilinx platform and tools for building an FPGA service are discussed. The stages of creating solutions based on FPGA as a service (FaaS) are proposed, some FaaS-related tasks are listed, and development trends are discussed. The SDAccel platform of the Xilinx SDx family is considered, as well as the possible role of these tools in creating an FPGA computing platform as a service. An example of using SDAccel to develop FPGA-based parallel processing is given, and the advantages and disadvantages of using hardware description languages with such design automation tools are discussed. The results of comparing the simulation speed of a system described in a programming language with that of a system described in a hardware description language are presented. The advantages of modeling complex systems are discussed, especially for testing solutions that process tens of gigabytes of data, where reduced test sets cannot be created. Based on practical experience, the characteristics of the development environments, including undocumented ones, are formulated.
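
The host-side flow that OpenCL keeps uniform across CPU, GPU, and FPGA targets can be conveyed with a minimal sketch. The snippet below uses the pyopencl Python bindings purely as an assumed stand-in for the host code the article describes (the article itself works with Xilinx SDAccel, where the kernel would normally be compiled offline into an FPGA bitstream rather than built at run time); the vadd kernel and the buffer names are hypothetical.

# Minimal sketch of the portable OpenCL host-side pattern (pyopencl assumed);
# with SDAccel/FPGA targets the kernel is compiled offline, not built at run time.
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];   /* one work-item per element */
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
out = np.empty_like(a)

ctx = cl.create_some_context()        # picks an available device: CPU, GPU or FPGA
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, kernel_src).build()
prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)   # enqueue the kernel
cl.enqueue_copy(queue, out, c_buf)                    # copy the result back

assert np.allclose(out, a + b)

The point of the sketch is that this host-side pattern (context, queue, buffers, kernel launch) stays the same regardless of which accelerator the kernel ultimately runs on, which is the unified-description advantage the article emphasizes.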


2013 ◽  
Vol 756-759 ◽  
pp. 2825-2828
Author(s):  
Xue Chun Wang ◽  
Quan Lu Zheng

Parallel computing is the parallel processing of data and information on a parallel computer system, and is often also known as high-performance computing or supercomputing. The concepts of parallel computing are introduced, and the realization of parallel computing with MPI parallel programming under a Linux environment is described. A parallel algorithm based on the divide-and-conquer method for solving the rectangle placement problem is designed and implemented on two processors. Finally, through performance testing and comparison, we verify the efficiency of the parallel approach.
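
As a loose illustration of a divide-and-conquer scheme split over two processors, the fragment below uses the mpi4py bindings (an assumption; the paper describes MPI programming under Linux without naming a binding). Rank 0 divides the candidate data in half, each rank solves its half locally, and rank 0 combines the partial results; evaluate_placement is a hypothetical stand-in for the paper's rectangle placement evaluation.

# Hypothetical divide-and-conquer sketch over two MPI ranks (mpi4py assumed);
# run with:  mpiexec -n 2 python placement_dc.py
from mpi4py import MPI

def evaluate_placement(candidate):
    # Hypothetical stand-in for scoring one placement candidate; smaller is better.
    return sum(candidate)

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    candidates = [[3, 1, 4], [1, 5, 9], [2, 6, 5], [3, 5, 8]]
    halves = [candidates[:2], candidates[2:]]     # divide: one half per processor
else:
    halves = None

my_half = comm.scatter(halves, root=0)            # distribute the subproblems
local_best = min(my_half, key=evaluate_placement) # conquer: solve locally

partial = comm.gather(local_best, root=0)         # combine on rank 0
if rank == 0:
    print("best candidate:", min(partial, key=evaluate_placement))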


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Zina A. Aziz ◽  
Diler Naseradeen Abdulqader ◽  
Amira Bibo Sallow ◽  
Herman Khalid Omer

Parallel and multiprocessing algorithms break significant numerical problems down into smaller subtasks, reducing the total computing time on multiprocessor and multicore computers. Parallel programming is well supported in proven programming languages such as C and Python, which are well suited to “heavy-duty” computational tasks. Historically, Python has not been regarded as lending itself well to parallel programming because of the global interpreter lock (GIL). However, times have changed: parallel programming in Python is now supported by a diverse set of libraries and packages. This review focuses on Python libraries that support parallel processing and multiprocessing, with the aim of accelerating computation in various fields, including multimedia, attack detection, supercomputers, and genetic algorithms. Furthermore, we discuss other Python libraries that can be used for this purpose.
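
As a minimal illustration of the standard-library route (the review surveys a range of third-party packages, of which multiprocessing is only one option), the sketch below farms a CPU-bound stand-in function out to a pool of worker processes, which sidesteps the GIL because each worker runs in its own interpreter process.

# Minimal multiprocessing sketch: a process pool bypasses the GIL by using
# separate interpreter processes rather than threads.
from multiprocessing import Pool, cpu_count

def simulate(n):
    # Hypothetical CPU-bound task standing in for real numerical work.
    return sum(i * i for i in range(n))

if __name__ == "__main__":        # guard required so child processes import cleanly
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate, range(10_000, 10_008))
    print(results)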


2012 ◽  
Vol 182-183 ◽  
pp. 639-643 ◽  
Author(s):  
Xiang Li ◽  
Fei Li ◽  
Chang Hao Wang

In this paper, five typical multi-core processors are compared in terms of threading, cache, inter-core interconnect, and other aspects. Two kinds of multi-core programming environments and some new programming languages are introduced. Thread-level speculation (TLS) and transactional memory (TM) are introduced to address the problem of parallelizing sequential programs. TLS automatically analyzes a sequential program, speculates on the parts that can be executed in parallel, and then automatically generates parallel code. TM systems provide an efficient and easy-to-use mechanism for parallel programming on multi-core processors. Typical TM systems such as TCC, UTM, LogTM, LogTM-SE, and SigTM are introduced. Combining TLS and TM can more effectively improve the execution of sequential programs on multi-core processors. Typical TM systems extended to support TLS, such as TCC, TTM, PTT, and STMlite, are also introduced.
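
The commit-or-retry idea underlying TM can be conveyed with a small toy sketch. The Python code below is not any of the systems named above (TCC, UTM, LogTM, and so on); it merely mimics the optimistic pattern those systems rely on: read a value together with a version, compute speculatively outside any lock, then commit only if no other "transaction" has committed in the meantime, otherwise retry. All names in it are hypothetical.

# Toy illustration of optimistic, TM-style execution: speculate, validate, commit or retry.
import threading

class VersionedCell:
    """A single shared value with a version counter (a toy stand-in for TM metadata)."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()   # guards only validate-and-commit

    def read(self):
        return self.value, self.version

    def try_commit(self, expected_version, new_value):
        # Commit succeeds only if no other transaction committed since the read.
        with self._commit_lock:
            if self.version != expected_version:
                return False                   # conflict detected: caller retries
            self.value = new_value
            self.version += 1
            return True

def atomic_increment(cell, amount):
    """Speculatively compute outside any lock, then validate and commit; retry on conflict."""
    while True:
        value, version = cell.read()           # transactional read
        new_value = value + amount             # speculative work
        if cell.try_commit(version, new_value):
            return

cell = VersionedCell()
threads = [threading.Thread(target=atomic_increment, args=(cell, 1)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)   # expected: 8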

