compiler techniques
Recently Published Documents

TOTAL DOCUMENTS: 63 (five years: 3)
H-INDEX: 11 (five years: 0)

2021, Vol. 2021, pp. 1-15
Author(s): Jing Ge Feng, Ye Ping He, Qiu Ming Tao

Automatic vectorization is an important technique that compilers use to improve the parallelism of programs. With the widespread use of SIMD (Single Instruction Multiple Data) extensions in modern processors, automatic vectorization has become a hot topic in compiler research. Accurately evaluating the effectiveness of automatic vectorization in typical compilers is valuable for compiler optimization and design. This paper evaluates the effectiveness of automatic vectorization, analyzes its limitations and their main causes, and suggests improvements to automatic vectorization technology. The paper first classifies the test programs by two main factors: program characteristics and transformation methods. It then evaluates the effectiveness of automatic vectorization in three well-known compilers (GCC, LLVM, and ICC, including multiple versions released over the last five years) using the TSVC (Test Suite for Vectorizing Compilers) benchmark. Furthermore, the paper analyzes the limitations of automatic vectorization through source-code analysis and discusses the differences between academic research and engineering practice in automatic vectorization, together with their main causes. Finally, it offers suggestions for improving automatic vectorization capability.
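As a hedged illustration (not drawn from TSVC or the paper itself), the C fragment below contrasts a loop that mainstream compilers typically auto-vectorize with one they usually reject because of a loop-carried dependence; the GCC option in the comment is one way to see the vectorizer's own report.

```c
/* Hedged illustration (not part of TSVC or the paper): the first loop has
 * independent iterations and is typically auto-vectorized; the second has a
 * loop-carried dependence and is usually rejected by the vectorizer.
 * With GCC, a report can be requested via: gcc -O3 -fopt-info-vec-missed */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* independent iterations: vectorizable */
}

void prefix_sum(float *a, size_t n) {
    for (size_t i = 1; i < n; i++)
        a[i] += a[i - 1];         /* a[i] depends on a[i-1]: usually not vectorized */
}
```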


2020, Vol. 43 (2-3), pp. 89-108
Author(s): Angelo Fraietta, Oliver Bown, Sam Ferguson, Sam Gillespie, Liam Bray

This article introduces an open-source Java-based programming environment for creative coding of agglomerative systems using Internet-of-Things (IoT) technologies. Our software originally focused on digital signal processing of audio (including synthesis, sampling, granular sample playback, and a suite of basic effects), but composers now use it to interface with sensors and peripherals through general-purpose input/output and external networked systems. This article examines and addresses the strategies required to integrate novel embedded musical interfaces and creative coding paradigms through an IoT infrastructure. These include: the use of the advanced tooling features of a professional integrated development environment as a composition or performance interface rather than just as a compiler; techniques to create media works using features such as autodetection of sensors; seamless and serverless communication among devices on the network; and uploading, updating, and running of new compositions on the device without interruption. Furthermore, we examined the difficulties many novice programmers experience when learning to write code, and we developed strategies to address these difficulties without restricting the potential available in the coding environment. We also examined and developed methods to monitor and debug devices over the network, allowing artists and programmers to set and retrieve current variable values to or from these devices during the performance and composition stages. Finally, we describe three types of artwork that demonstrate how the software, called HappyBrackets, is being used in live-coding and dance performances, in interactive sound installations, and as an advanced composition and performance tool for multimedia works.
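The "seamless and serverless communication among devices" mentioned above can be pictured with a minimal sketch like the one below. This is not HappyBrackets code (HappyBrackets itself is Java); it is a generic POSIX C illustration of a device announcing itself over UDP broadcast so peers on the same network can discover it without a central server. The port number and message format are hypothetical.

```c
/* Generic POSIX C sketch, NOT the HappyBrackets API (which is Java):
 * a device announces itself over UDP broadcast so peers on the same
 * network can discover it without any central server.  The port number
 * and message format are hypothetical. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);     /* datagram socket */
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2222);                   /* hypothetical discovery port */
    addr.sin_addr.s_addr = inet_addr("255.255.255.255");

    const char *hello = "device=sensor-01;status=alive";  /* hypothetical payload */
    for (int i = 0; i < 10; i++) {                 /* announce periodically */
        sendto(sock, hello, strlen(hello), 0,
               (struct sockaddr *)&addr, sizeof addr);
        sleep(1);
    }
    close(sock);
    return 0;
}
```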


2017, Vol. 59 (3)
Author(s): David Broneske, Gunter Saake

In main-memory database systems, the time needed to process data has become a limiting factor, since the disk-access gap that traditionally dominated is absent. Because processing capabilities (e.g., branch prediction, pipelining) change with every new CPU architecture, code that was once optimal will probably not remain the best code forever. In this article, we analyze the processing capabilities of the classical CPU and describe code optimizations that exploit them. Furthermore, we present state-of-the-art compiler techniques that already implement such code optimizations, while also showing gaps where further code optimizations could be integrated.
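One example of the kind of CPU-aware code optimization the article discusses (this sketch is my own, not taken from the article) is rewriting a selection scan so that it is branch-free: the predicated variant trades a hard-to-predict branch for a cheap data dependency, which usually pays off when selectivity is unpredictable.

```c
/* Sketch of my own (not taken from the article): a selection scan written
 * with and without a data-dependent branch.  The predicated variant avoids
 * branch-misprediction penalties when selectivity is hard to predict. */
#include <stddef.h>
#include <stdint.h>

/* Branching variant: the if statement is mispredicted often at ~50% selectivity. */
size_t scan_branching(const int32_t *col, size_t n, int32_t key, size_t *out) {
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (col[i] < key)
            out[k++] = i;
    return k;
}

/* Predicated variant: the result index is written unconditionally and the
 * comparison outcome (0 or 1) advances the output cursor, so the loop body
 * contains no data-dependent branch. */
size_t scan_predicated(const int32_t *col, size_t n, int32_t key, size_t *out) {
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        out[k] = i;
        k += (size_t)(col[i] < key);
    }
    return k;
}
```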


2012, Vol. 21 (02), pp. 1240002
Author(s): Santhosh Verma, David M. Koppelman

A major performance limiter in modern processors is the long latency caused by data cache misses. Both compiler-based and hardware-based prefetching schemes help hide these latencies and so improve performance. Compiler techniques infer memory access patterns through code analysis and insert appropriate prefetch instructions. Hardware prefetching techniques work independently of the compiler by monitoring an access stream, detecting patterns in this stream, and issuing prefetches based on these patterns. This paper looks at the interplay between compiler-based and hardware-based prefetching techniques: does either technique make the other unnecessary? First, the compiler's ability to achieve good results without extreme expertise is evaluated by preparing binaries with no prefetch, one-flag prefetch (no tuning), and expertly tuned prefetch. From runs of SPECcpu2006 binaries, we find that expertise avoids minor slowdowns in a few benchmarks and provides substantial speedups in others. We then compare software schemes to hardware prefetching schemes; our simulations show that software alone substantially outperforms hardware alone on about half of a selection of benchmarks. While hardware matches or exceeds software in a few cases, software is better on average. Analysis reveals that in many cases hardware is not prefetching access patterns that it is capable of recognizing, due to irregularities in the observed miss sequence. Hardware outperforms software on address sequences that the compiler would not guess. In general, while software is better at prefetching individual loads, hardware partly compensates by identifying more loads to prefetch. Using the two schemes together provides further benefits, but less than the sum of the contributions of each alone.
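To make the software side of this comparison concrete, here is a hedged sketch (not the paper's code) of the kind of prefetch a compiler or programmer can insert but a stride-based hardware prefetcher would likely miss: an indexed (gather) access prefetched a fixed distance ahead using GCC's __builtin_prefetch. The distance of 16 is an arbitrary choice, standing in for the tuning that the paper says requires expertise.

```c
/* Hedged sketch (not the paper's code): software prefetching for an indexed
 * (gather) access pattern that a stride-based hardware prefetcher would
 * likely miss.  PREFETCH_DIST is an arbitrary tuning knob; choosing it well
 * is exactly the kind of expertise the paper evaluates. */
#include <stddef.h>

#define PREFETCH_DIST 16

double indexed_sum(const double *data, const int *idx, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DIST < n)
            /* GCC built-in: address, rw (0 = read), temporal locality hint */
            __builtin_prefetch(&data[idx[i + PREFETCH_DIST]], 0, 1);
        sum += data[idx[i]];
    }
    return sum;
}
```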

