code transformations
Recently Published Documents

TOTAL DOCUMENTS: 98 (five years: 20)
H-INDEX: 13 (five years: 0)

2021, Vol. 18 (4), pp. 1-23
Author(s): Tobias Gysi, Christoph Müller, Oleksandr Zinenko, Stephan Herhut, Eddie Davis, ...

Most compilers have a single core intermediate representation (IR), e.g., LLVM IR, sometimes complemented with vaguely defined IR-like data structures. This IR is commonly low-level and close to machine instructions. As a result, optimizations relying on domain-specific information are either not possible or require complex analysis to recover the missing information. In contrast, multi-level rewriting instantiates a hierarchy of dialects (IRs), lowers programs level by level, and performs code transformations at the most suitable level. We demonstrate the effectiveness of this approach for the weather and climate domain. In particular, we develop a prototype compiler and design stencil- and GPU-specific dialects based on a set of newly introduced design principles. We find that two domain-specific optimizations (500 lines of code) realized on top of LLVM’s extensible MLIR compiler infrastructure suffice to outperform state-of-the-art solutions. In essence, multi-level rewriting promises to herald the age of specialized compilers composed from domain- and target-specific dialects implemented on top of a shared infrastructure.
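
To make the idea concrete, here is a minimal, purely illustrative Python sketch of multi-level rewriting over a toy IR; the op names ("stencil.apply", "loop.nest") and the fusion rule are hypothetical stand-ins, not MLIR's actual dialects or its C++/TableGen infrastructure.

    # A toy "stencil dialect" program: a flat list of ops.
    high_level = [("stencil.apply", "f"), ("stencil.apply", "g")]

    def fuse_stencils(ops):
        """Domain-specific rewrite applied at the most suitable (stencil) level:
        two adjacent applies are merged into one fused apply."""
        out, i = [], 0
        while i < len(ops):
            if i + 1 < len(ops) and ops[i][0] == ops[i + 1][0] == "stencil.apply":
                out.append(("stencil.apply", f"fuse({ops[i][1]},{ops[i + 1][1]})"))
                i += 2
            else:
                out.append(ops[i])
                i += 1
        return out

    def lower_to_loops(ops):
        """Level-by-level lowering: each stencil op becomes a loop-dialect op."""
        return [("loop.nest", body) for op, body in ops if op == "stencil.apply"]

    print(lower_to_loops(fuse_stencils(high_level)))
    # [('loop.nest', 'fuse(f,g)')]

The point of the sketch is only the ordering: the fusion rewrite runs while the program is still expressed in stencil terms, and lowering happens afterwards, level by level.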


Author(s): Christopher J, Jinwoo Yom, Changwoo Min, Yeongjin Jang

Address Space Layout Randomization (ASLR) set an early example: a lightweight defense technique that could prevent early return-oriented programming attacks. Simple yet effective, ASLR was quickly and widely adopted. By contrast, today only a trickle of defense techniques are integrated or adopted in mainstream systems. As code-reuse attacks have evolved, defenses have strived to keep up. To do so, many have had to accept unfavorable trade-offs, such as using background threads or protecting only a subset of sensitive code. In reality, these trade-offs were unavoidable steps necessary to strengthen the state of the art. We present Goose, an on-demand, system-wide runtime re-randomization technique capable of scalable protection of application code as well as the shared-library code that most defenses have forgone. We achieve code sharing together with diversification by making diversification reactive and scalable, rather than continuous or one-time. Enabling code sharing further removes redundant computation, such as tracking and patching, along with the memory overheads required by prior randomization techniques. In its baseline state, the code transformations needed for Goose's security hardening incur a reasonable performance overhead of 5.5% on SPEC and a minimal degradation of 4.4% in NGINX, demonstrating its applicability to both compute-intensive and scalable real-world applications. Even when under attack, Goose adds only between less than 1% and 15% overhead, depending on application complexity.
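
As a rough conceptual illustration of reactive re-randomization (not Goose's actual mechanism, which rewrites machine-code layouts), the Python sketch below routes every call through an indirection table whose randomized internal identifiers can be rebuilt on demand without changing program behavior; all names and structures here are hypothetical.

    import random

    functions = {"login": lambda: "ok", "logout": lambda: "bye"}
    layout = {}  # name -> (randomized internal id, function)

    def rerandomize():
        """Reactive step: rebuild the randomized layout so that any previously
        leaked internal identifiers become stale."""
        ids = list(range(1000))
        random.shuffle(ids)
        layout.clear()
        for new_id, (name, fn) in zip(ids, functions.items()):
            layout[name] = (new_id, fn)

    def call(name):
        _, fn = layout[name]  # indirection keeps semantics intact across re-randomizations
        return fn()

    rerandomize()         # initial randomization
    print(call("login"))  # "ok"
    rerandomize()         # triggered re-randomization; behavior is unchanged
    print(call("login"))  # still "ok", but the internal ids have moved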


2021, Vol. 11 (3), p. 28
Author(s): Julie Dumas, Henri-Pierre Charles, Kévin Mambu, Maha Kooli

This article describes a software environment called HybroGen, which supports experimenting with binary code generation at run time. As computing architectures grow more complex, application performance is becoming increasingly data-dependent. The proposed experimental platform helps in programming applications that can be reconfigured at run time in order to adapt to a new data environment. The HybroGen platform is suited to heterogeneous architectures and can generate instructions for different targets. It goes farther than classical JIT (Just-In-Time) compilation in several directions: the code generator is smaller by three orders of magnitude and faster by three orders of magnitude compared to JIT platforms, and it enables code transformations that are impossible in traditional compilation schemes, such as code generation for non-von Neumann accelerators or dynamic code transformations for transprecision. The latter is illustrated in a code example: the square root with Newton's algorithm. We also illustrate the proposed HybroGen platform with two other examples: a multiplication specialized on a value determined at run time, and a conversion from degrees Celsius to degrees Fahrenheit. This article presents a proof of concept of the proposed HybroGen platform in terms of its functionality and demonstrates its working status.
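
The run-time value specialization mentioned above can be sketched in Python as follows; this is only a conceptual analogue (HybroGen generates binary code for heterogeneous targets, not Python bytecode), and the function names are made up for the example.

    def specialize_multiply(k):
        """Generate, at run time, a multiply function specialized to the
        now-known constant k, mimicking run-time code specialization."""
        src = f"def mul_by_const(x):\n    return x * {k}\n"
        namespace = {}
        exec(compile(src, "<specialized>", "exec"), namespace)
        return namespace["mul_by_const"]

    mul_by_7 = specialize_multiply(7)  # 7 only becomes known at run time
    print(mul_by_7(6))                 # 42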


Author(s): Aziz Srai, Fatima Guerouate, Hilal Drissi Lahsini

Today, with the growth of the internet, social networks, mobile telephony, and connected, communicating objects, data volumes have become enormous, and the need to exploit that data has become essential. In practice, a very large number of companies in the health sector, the banking and financial sector, insurance, manufacturing, and so on rely on traditional databases that are often well organized around customer data, machine data, and the like; but in most cases the very large volumes of data in these databases, and the speed with which they must be analyzed to meet the business needs of the company, are real challenges. This article addresses the problem of generating NoSQL MongoDB databases by applying an approach based on model-driven engineering (the Model Driven Architecture approach). We provide Model-to-Model transformations (using the QVT model transformation language) and Model-to-Code transformations (using the Acceleo code generator). We also propose vertical and horizontal transformations to demonstrate the validity of our approach on NoSQL MongoDB databases. In this article we study the transformations from PSM to implementation; PIM-to-PSM transformations are the subject of other work.
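
As a rough illustration of a Model-to-Code step, the Python sketch below turns a hypothetical platform-specific model (PSM) of one MongoDB collection into a PyMongo-style creation snippet; it is a toy stand-in for an Acceleo text template, and the model structure shown is assumed, not taken from the article.

    # Hypothetical PSM describing one MongoDB collection and its fields.
    psm = {
        "collection": "customers",
        "fields": [("name", "string"), ("email", "string"), ("age", "int")],
    }

    def model_to_code(model):
        """Emit a PyMongo-style snippet that creates the collection with a
        JSON-schema validator derived from the model."""
        lines = [f'db.create_collection("{model["collection"]}", '
                 'validator={"$jsonSchema": {"properties": {']
        for name, bson in model["fields"]:
            lines.append(f'    "{name}": {{"bsonType": "{bson}"}},')
        lines.append("}}})")
        return "\n".join(lines)

    print(model_to_code(psm))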


2021
Author(s): Peter Podlovics, Csaba Hruska, Andor Pénzes

GRIN is short for Graph Reduction Intermediate Notation, a modern back end for lazy functional languages. Most of the currently available compilers for such languages share a common flaw: they can only optimize programs on a per-module basis. The GRIN framework allows for interprocedural whole-program analysis, enabling optimizing code transformations across functions and modules as well. Some implementations of GRIN already exist, but most of them were developed only for experimentation purposes. Thus, they either compromise on low-level efficiency or contain ad hoc modifications compared to the original specification. Our goal is to provide a full-fledged implementation of GRIN by combining the best currently available technologies, such as LLVM, and to evaluate the framework's effectiveness by measuring how the optimizer improves the performance of certain programs. We also present some improvements to the already existing components of the framework, including a typed representation for the intermediate language and an interprocedural program optimization, dead data elimination.
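
As an informal illustration of whole-program dead data elimination (the structures below are invented for the example; the real optimization works on GRIN's typed intermediate language, not on Python lists), a single pass over all build and use sites in the program can drop constructor fields that no code ever reads:

    from collections import defaultdict

    # Toy whole program: constructor build sites and field projections,
    # collected across all functions and modules.
    builds = [("CPair", ["x", "y"]), ("CPair", ["a", "b"])]  # CPair built with 2 fields
    uses = [("CPair", 0)]                                    # only field 0 is ever read

    def dead_data_elimination(builds, uses):
        live = defaultdict(set)
        for tag, idx in uses:        # interprocedural liveness of fields per constructor tag
            live[tag].add(idx)
        rewritten = []
        for tag, fields in builds:   # drop fields that no use site ever projects
            kept = [f for i, f in enumerate(fields) if i in live[tag]]
            rewritten.append((tag, kept))
        return rewritten

    print(dead_data_elimination(builds, uses))
    # [('CPair', ['x']), ('CPair', ['a'])]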


2021
Author(s): Hannah Badier, Christian Pilato, Jean-Christophe Le Lann, Philippe Coussy, Guy Gogniat

Author(s): Matthijs Vákár

We show how to define forward- and reverse-mode automatic differentiation source-code transformations on a standard higher-order functional language. The transformations generate purely functional code, and they are principled in the sense that their definition arises from a categorical universal property. We give a semantic proof of correctness of the transformations. In their most elegant formulation, the transformations generate code with linear types. However, we demonstrate how the transformations can be implemented in a standard functional language without sacrificing correctness. To do so, we make use of abstract data types to represent the required linear types, e.g. through the use of a basic module system.
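
For readers unfamiliar with the technique, the Python sketch below shows forward-mode automatic differentiation via dual numbers; it is only a conceptual illustration of what an AD code transformation computes, not the paper's categorical, linearly typed source-code transformation.

    class Dual:
        """A dual number carrying a value and its derivative (tangent)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def derivative(f, x):
        return f(Dual(x, 1.0)).der  # seed the input tangent with 1

    # d/dx (x*x + 3*x) at x = 2.0 is 2*2 + 3 = 7
    print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0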


2020, pp. 368-374
Author(s): P.A. Ivanenko

This article presents an approach to validating the correctness of autotuning optimization transformations. The autotuner is treated as a dynamic discrete system, and validation is reduced to verifying that the initial and optimized program versions, represented in the autotuning formal model, are equivalent by result. In particular cases this validation can be performed automatically from the source code using a rewriting-rules technique.
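
A minimal sketch of "equivalence by result" checking in Python (illustrative only; the article works within an autotuning formal model rather than on concrete functions, and all names below are made up):

    def original(xs):
        total = 0
        for x in xs:
            total += x * 2
        return total

    def optimized(xs):
        # Candidate optimizing transformation of the same computation.
        return 2 * sum(xs)

    def equivalent_by_result(f, g, test_inputs):
        """Validate the transformation by comparing results on sample inputs."""
        return all(f(i) == g(i) for i in test_inputs)

    print(equivalent_by_result(original, optimized, [[1, 2, 3], [], [5, -7]]))  # True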

