Using machine learning to predict the code size impact of duplication heuristics in a dynamic compiler

2021 · Author(s): Raphael Mosaner, David Leopoldseder, Lukas Stadler, Hanspeter Mössenböck

2014 · Vol 22 (4) · pp. 309-329 · Author(s): Grigori Fursin, Renato Miceli, Anton Lokhmotov, Michael Gerndt, Marc Baboulin, ...

Empirical auto-tuning and machine learning techniques have shown high potential to improve execution time, power consumption, code size, reliability, and other important metrics of various applications for more than two decades. However, they are still far from widespread production use due to the lack of native support for auto-tuning in an ever-changing and complex software and hardware stack, large and multi-dimensional optimization spaces, excessively long exploration times, and the lack of unified mechanisms for preserving and sharing optimization knowledge and research material. We present a possible collaborative approach to solving the above problems using the Collective Mind knowledge management system. In contrast with the previous cTuning framework, this modular infrastructure makes it possible to preserve and share over the Internet whole auto-tuning setups with all related artifacts and their software and hardware dependencies, rather than just performance data. It also allows researchers to gradually structure, systematize, and describe all available research material, including tools, benchmarks, data sets, search strategies, and machine learning models. Researchers can take advantage of shared components and data with extensible meta-descriptions to quickly and collaboratively validate and improve existing auto-tuning and benchmarking techniques or prototype new ones. The community can now gradually learn and improve the complex behavior of existing computer systems while exposing behavioral anomalies and model mispredictions to an interdisciplinary community in a reproducible way for further analysis. We present several practical, collaborative, and model-driven auto-tuning scenarios. We also release all material at c-mind.org/repo to set an example of collaborative and reproducible research, as well as of our new publication model in computer engineering, in which experimental results are continuously shared and validated by the community.
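
The paper describes infrastructure rather than a single algorithm, but the loop it builds on is easy to sketch. Below is a minimal, hypothetical Python example of empirical auto-tuning of compiler flags for code size; it assumes a gcc-compatible compiler on PATH and a local benchmark.c, and the flag list and file names are illustrative rather than part of the Collective Mind API.

import os
import random
import subprocess

# Candidate gcc flags to explore (illustrative; any flag set could be plugged in).
FLAGS = ["-O2", "-O3", "-Os", "-funroll-loops", "-fno-inline",
         "-ftree-vectorize", "-fomit-frame-pointer"]

def measure_code_size(flags, source="benchmark.c"):
    """Compile `source` with `flags` and return the object file size in bytes."""
    subprocess.run(["gcc", *flags, "-c", source, "-o", "out.o"], check=True)
    return os.path.getsize("out.o")

def random_search(trials=50):
    """Sample random flag subsets and keep the configuration with the smallest code size."""
    best = None
    for _ in range(trials):
        flags = [f for f in FLAGS if random.random() < 0.5] or ["-O2"]
        size = measure_code_size(flags)
        if best is None or size < best[0]:
            best = (size, flags)
    return best

if __name__ == "__main__":
    size, flags = random_search()
    print(f"best code size: {size} bytes with flags {flags}")

Swapping random_search for a model-driven strategy, and preserving each (flags, size) result together with its software and hardware dependencies, is precisely the kind of reusable component the Collective Mind repository is meant to share.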


2020 · Vol 30 (11n12) · pp. 1641-1665 · Author(s): Jiang Wu, Jianjun Xu, Xiankai Meng, Haoyu Zhang, Zhuo Zhang

Modern compilers provide a huge number of optional compilation optimizations, and the appropriate options must be selected for each program or application. Machine learning is widely used as an efficient technique to mitigate this problem, and ensuring the integrity and effectiveness of the extracted program information is key to doing so. Moreover, when selecting the best compilation options, the optimization goals are usually execution speed, code size, and CPU consumption; program reliability has received little attention. This paper proposes a compilation optimization option selection model based on a Gate Graph Attention Neural Network (GGANN). Data-flow and function-call information are integrated into the abstract syntax tree to form graph-based program features. We extend a deep neural network based on GGANN and build a learning model that learns heuristics for program reliability. Experiments are performed within the Clang compiler framework. Compared with traditional machine learning methods, our model improves average accuracy by 5–11% when selecting optimization options for program reliability. Experiments also show that our model scales well.
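
To make the idea concrete, here is a minimal Python/PyTorch sketch of the general technique the paper builds on: a program is encoded as a graph with AST, data-flow, and call edges, and a gated graph network propagates information over it to score one optimization option. This is not the authors' GGANN implementation; the layer sizes, number of edge types, node features, and the toy graph are assumptions made for illustration.

import torch
import torch.nn as nn

class GatedGraphLayer(nn.Module):
    """One GGNN-style propagation step: per-edge-type linear messages + GRU update."""
    def __init__(self, hidden, num_edge_types):
        super().__init__()
        self.msg = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_edge_types))
        self.gru = nn.GRUCell(hidden, hidden)

    def forward(self, h, adjs):
        # adjs: one (N, N) adjacency matrix per edge type.
        m = sum(a @ f(h) for a, f in zip(adjs, self.msg))
        return self.gru(m, h)

class OptionSelector(nn.Module):
    """Scores a single compilation option for a program graph (binary logit)."""
    def __init__(self, feat, hidden=64, steps=4, num_edge_types=3):
        super().__init__()
        self.embed = nn.Linear(feat, hidden)
        self.layer = GatedGraphLayer(hidden, num_edge_types)
        self.steps = steps
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x, adjs):
        h = torch.relu(self.embed(x))
        for _ in range(self.steps):
            h = self.layer(h, adjs)
        return self.readout(h.mean(dim=0))  # graph-level logit

# Toy program graph: 4 AST nodes with 8-dim features and three edge types
# (AST parent-child, data flow, function call).
x = torch.randn(4, 8)
ast = torch.tensor([[0, 1, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=torch.float)
dflow = torch.zeros(4, 4); dflow[1, 2] = 1.0
call = torch.zeros(4, 4); call[0, 3] = 1.0
logit = OptionSelector(feat=8)(x, [ast, dflow, call])
print(torch.sigmoid(logit))  # probability that enabling the option helps reliability

An attention variant, as the name GGANN suggests, would weight each incoming message instead of summing them uniformly; the GRU update shown here is the standard gated-graph mechanism.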


2020 · Vol 43 · Author(s): Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020 · Author(s): Mohammed J. Zaki, Wagner Meira, Jr

2020 · Author(s): Marc Peter Deisenroth, A. Aldo Faisal, Cheng Soon Ong

Author(s): Lorenza Saitta, Attilio Giordana, Antoine Cornuejols

Author(s): Shai Shalev-Shwartz, Shai Ben-David
