HAO: Hardware-aware Neural Architecture Optimization for Efficient Inference

Author(s):
Zhen Dong
Yizhao Gao
Qijing Huang
John Wawrzynek
Hayden K.H. So
...

Author(s):
Wei Niu
Zhenglun Kong
Geng Yuan
Weiwen Jiang
Jiexiong Guan
...

Transformer-based deep learning models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks. In this paper, we propose a compression-compilation co-design framework that guarantees the identified model meets both the resource and real-time specifications of mobile devices. Our framework applies a compiler-aware neural architecture optimization method (CANAO), which generates an optimal compressed model that balances accuracy and latency. We achieve up to a 7.8x speedup over TensorFlow-Lite with only minor accuracy loss. We present two types of BERT applications on mobile devices: Question Answering (QA) and Text Generation. Both run in real time with latency as low as 45 ms. Videos demonstrating the framework can be found at https://www.youtube.com/watch?v=_WIRvK_2PZI
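The abstract does not spell out how CANAO trades accuracy against latency, so the sketch below illustrates the general idea of a latency-constrained architecture search with a hypothetical random search, a placeholder latency cost model, and a placeholder accuracy proxy. All names, search-space values, and constants are assumptions for illustration, not the authors' implementation; a compiler-aware search would instead measure or predict the latency of each candidate after compilation for the target mobile device.

```python
# Minimal sketch of a latency-constrained architecture search objective.
# Everything here (Candidate fields, cost models, search space) is a
# hypothetical stand-in, not the CANAO framework itself.
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    num_layers: int
    hidden_size: int
    num_heads: int


def estimated_latency_ms(c: Candidate) -> float:
    # Placeholder cost model; a compiler-aware search would use measured or
    # predicted latency of the compiled model on the target device.
    return 0.9 * c.num_layers * (c.hidden_size / 128) * (c.num_heads / 4)


def estimated_accuracy(c: Candidate) -> float:
    # Placeholder proxy: larger models score higher, with diminishing returns.
    return 1.0 - 1.0 / (1.0 + 0.05 * c.num_layers * c.hidden_size / 64)


def reward(c: Candidate, latency_target_ms: float = 45.0, penalty: float = 0.02) -> float:
    # Accuracy minus a penalty for exceeding the real-time latency target.
    overshoot = max(0.0, estimated_latency_ms(c) - latency_target_ms)
    return estimated_accuracy(c) - penalty * overshoot


def random_search(num_trials: int = 200, seed: int = 0) -> Candidate:
    # Sample candidates from a small search space and keep the best reward.
    rng = random.Random(seed)
    space = {
        "num_layers": [4, 6, 8, 12],
        "hidden_size": [256, 384, 512, 768],
        "num_heads": [4, 8, 12],
    }
    best, best_score = None, float("-inf")
    for _ in range(num_trials):
        c = Candidate(
            num_layers=rng.choice(space["num_layers"]),
            hidden_size=rng.choice(space["hidden_size"]),
            num_heads=rng.choice(space["num_heads"]),
        )
        score = reward(c)
        if score > best_score:
            best, best_score = c, score
    return best


if __name__ == "__main__":
    best = random_search()
    print(best, f"estimated latency {estimated_latency_ms(best):.1f} ms")
```

In this toy setup the penalty term simply discourages candidates that exceed the 45 ms real-time target quoted in the abstract; the actual framework co-designs the compression scheme and compiler optimizations rather than scoring a fixed cost model.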


