Instruction fetch energy reduction using loop caches for embedded applications with small tight loops

Author(s): Lea Hwang Lee, B. Moyer, J. Arends
2018, Vol. 90 (11), pp. 1519-1532

Author(s): Joonas Multanen, Timo Viitanen, Pekka Jääskeläinen, Jarmo Takala

Author(s): Alexandru-Lucian Georgescu, Alessandro Pappalardo, Horia Cucu, Michaela Blott

Abstract
The last decade brought significant advances in automatic speech recognition (ASR) thanks to the evolution of deep learning methods. ASR systems evolved from pipeline-based systems, which modeled hand-crafted speech features with probabilistic frameworks and generated phone posteriors, to end-to-end (E2E) systems, which translate the raw waveform directly into words using a single deep neural network (DNN). Transcription accuracy increased greatly, leading to ASR technology being integrated into many commercial applications. However, few of the existing ASR technologies are suitable for integration into embedded applications, due to their hard constraints on computing power and memory usage. This overview paper serves as a guided tour through the recent literature on speech recognition and compares the most popular ASR implementations. The comparison emphasizes the trade-off between ASR performance and hardware requirements, to help decision makers choose the system that best fits their embedded application. To the best of our knowledge, this is the first study to provide this kind of trade-off analysis for state-of-the-art ASR systems.
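
The abstract's distinction between pipeline-based and end-to-end recognizers is the core of the comparison. As a concrete illustration of the end-to-end idea (raw waveform in, words out, one network), the minimal Python sketch below runs a pretrained wav2vec 2.0 CTC model from torchaudio and decodes its output greedily; the library choice, the model bundle, and the input file "speech.wav" are assumptions for illustration only, not one of the systems evaluated in the paper.

    # Minimal end-to-end ASR sketch (assumptions: torchaudio installed,
    # pretrained wav2vec 2.0 CTC bundle available, "speech.wav" is a mono recording).
    import torch
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
    model = bundle.get_model()      # one DNN: raw waveform -> per-frame character logits
    labels = bundle.get_labels()    # CTC label set; index 0 is the blank symbol

    waveform, sample_rate = torchaudio.load("speech.wav")
    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

    with torch.inference_mode():
        emissions, _ = model(waveform)          # shape: (batch, frames, num_labels)

    # Greedy CTC decoding: best label per frame, collapse repeats, drop blanks.
    blank = 0
    tokens, prev = [], None
    for idx in emissions[0].argmax(dim=-1).tolist():
        if idx != blank and idx != prev:
            tokens.append(labels[idx])
        prev = idx
    print("".join(tokens).replace("|", " ").strip())  # "|" marks word boundaries in this label set

By contrast, a pipeline-based recognizer would extract hand-crafted features (e.g., MFCCs), score them with a separate acoustic model, and search a decoding graph built from a lexicon and a language model; the E2E sketch above replaces those stages with a single network plus a trivial decoder.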

