modulo scheduling
Recently Published Documents

TOTAL DOCUMENTS: 102 (five years: 14)
H-INDEX: 19 (five years: 2)

Electronics (2021), Vol. 10 (18), p. 2210
Author(s): Zhongyuan Zhao, Weiguang Sheng, Jinchao Li, Pengfei Ye, Qin Wang, et al.

Modulo-scheduled coarse-grained reconfigurable array (CGRA) processors have shown their potential for exploiting loop-level parallelism at high energy efficiency. However, these CGRAs need frequent reconfiguration during execution, which incurs large area and power overhead for context memory and context fetching. To tackle this challenge, this paper uses an architecture/compiler co-designed method for context reduction. From the architecture perspective, we carefully partition the context into several subsections and, when fetching a new context, fetch only the subsections that differ from the previous context word. We package each differing subsection with an opcode and an index value to form a context-fetching primitive (CFP), and we explore the hardware design space by providing centralized and distributed CFP-fetching CGRAs to support this CFP-based context-fetching scheme. From the software side, we develop a similarity-aware tuning algorithm and integrate it into state-of-the-art modulo scheduling and memory-access-conflict optimization algorithms. The whole compilation flow efficiently improves the similarity between successive contexts in each PE, reducing both context-fetching latency and context footprint. Experimental results show that our HW/SW co-designed framework improves area efficiency and energy efficiency by up to 34% and 21%, respectively, with only 2% performance overhead.
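For readers unfamiliar with the idea, the following sketch illustrates a possible CFP-style encoding in Python: a new context word is expressed as the set of subsections that differ from the previous one, each paired with its subsection index. The subsection names and field layout are illustrative assumptions, not the paper's actual hardware format.

```python
# Minimal sketch of the CFP idea (not the paper's implementation):
# a new context word is encoded as only those subsections that differ
# from the previous one, each packaged with its subsection index.
from typing import Dict, List, Tuple

# Hypothetical partition of a context word into named subsections.
SUBSECTIONS = ["alu_op", "src_mux", "dst_reg", "route_cfg"]

def encode_cfps(prev: Dict[str, int], new: Dict[str, int]) -> List[Tuple[int, int]]:
    """Return (subsection_index, new_value) pairs for fields that changed."""
    return [(idx, new[name])
            for idx, name in enumerate(SUBSECTIONS)
            if prev[name] != new[name]]

def apply_cfps(prev: Dict[str, int], cfps: List[Tuple[int, int]]) -> Dict[str, int]:
    """Reconstruct the full context word from the previous one plus the CFPs."""
    ctx = dict(prev)
    for idx, value in cfps:
        ctx[SUBSECTIONS[idx]] = value
    return ctx

# Example: only the field that changes is fetched.
c0 = {"alu_op": 3, "src_mux": 1, "dst_reg": 4, "route_cfg": 0}
c1 = {"alu_op": 3, "src_mux": 2, "dst_reg": 4, "route_cfg": 0}
assert apply_cfps(c0, encode_cfps(c0, c1)) == c1  # one CFP instead of a full word
```

The more similar consecutive contexts are, the fewer CFPs need to be fetched, which is exactly what the similarity-aware tuning in the compilation flow optimizes for.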


2021, pp. 104334
Author(s): Leandro de Souza Rosa, Christos-Savvas Bouganis, Vanderlei Bonato

2021, Vol. 20 (5), pp. 1-31
Author(s): Michael Witterauf, Dominik Walter, Frank Hannig, Jürgen Teich

Tightly Coupled Processor Arrays (TCPAs), a class of massively parallel loop accelerators, allow applications to offload computationally expensive loops for improved performance and energy efficiency. To achieve these two goals, executing a loop on a TCPA requires an efficient generation of specific programs as well as other configuration data for each distinct combination of loop bounds and number of available processing elements (PEs). Since both these parameters are generally unknown at compile time—the number of available PEs due to dynamic resource management, and the loop bounds, because they depend on the problem size—both the programs and configuration data must be generated at runtime. However, pure just-in-time compilation is impractical, because mapping a loop program onto a TCPA entails solving multiple NP-complete problems. As a solution, this article proposes a unique mixed static/dynamic approach called symbolic loop compilation. It is shown that at compile time, the NP-complete problems (modulo scheduling, register allocation, and routing) can still be solved to optimality in a symbolic way, resulting in a so-called symbolic configuration, a space-efficient intermediate representation parameterized in the loop bounds and number of PEs. This phase is called symbolic mapping. At runtime, for each requested accelerated execution of a loop program with given loop bounds and known number of available PEs, a concrete configuration, including PE programs and configuration data for all other components, is generated from the symbolic configuration according to these parameter values. This phase is called instantiation. We describe both phases in detail and show that instantiation runs in polynomial time with its most complex step, program instantiation, not directly depending on the number of PEs and thus scaling to arbitrary sizes of TCPAs. To validate the efficiency of this mixed static/dynamic compilation approach, we apply symbolic loop compilation to a set of real-world loop programs from several domains, measuring both compilation time and space requirements. Our experiments confirm that a symbolic configuration is a space-efficient representation suited for systems with little memory—in many cases, a symbolic configuration is smaller than even a single concrete configuration instantiated from it—and that the times for the runtime phase of program instantiation and configuration loading are negligible and moreover independent of the size of the available processor array. To give an example, instantiating a configuration for a matrix-matrix multiplication benchmark takes equally long for 4×4 and 32×32 PEs.
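As a rough illustration of the split between symbolic mapping and instantiation, the Python sketch below stores configuration fields as closed-form expressions in a loop bound N and PE count P and evaluates them at runtime. The field names and expressions are hypothetical and do not reflect the TCPA toolchain's actual data structures.

```python
# Minimal sketch, not the TCPA toolchain's API: a "symbolic configuration"
# keeps fields parameterized in the loop bound N and the PE count P;
# instantiation substitutes concrete values at runtime.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SymbolicConfiguration:
    # Each entry maps a field name to a closed-form expression in (N, P).
    fields: Dict[str, Callable[[int, int], int]]

    def instantiate(self, n: int, p: int) -> Dict[str, int]:
        """Evaluate every parameterized field for concrete N and P.

        Cost is proportional to the number of fields, not to the array
        size, mirroring the claim that instantiation does not directly
        depend on the number of PEs."""
        return {name: expr(n, p) for name, expr in self.fields.items()}

# Example with illustrative (hypothetical) fields for a loop tiled over P PEs.
sym = SymbolicConfiguration(fields={
    "iterations_per_pe":   lambda n, p: (n + p - 1) // p,  # ceil(N / P)
    "epilogue_length":     lambda n, p: n % p,
    "initiation_interval": lambda n, p: 2,  # fixed by the static modulo schedule
})

print(sym.instantiate(n=1024, p=16))    # same instantiation cost for any P
print(sym.instantiate(n=1024, p=1024))
```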


Author(s): Siva Sankara Phani T., et al.

Coarse-grained reconfigurable architectures (CGRAs) are an effective solution for speeding up compute-intensive workloads, offering high energy efficiency at a modest cost in flexibility. Compiling loops onto CGRAs in acceptable time remains one of the hardest problems in this area. Modulo scheduling (MS) has proven effective for implementing loops on CGRAs. The problem with current MS algorithms is mapping large and irregular loops onto CGRAs within a reasonable compilation time, given limited computation and routing resources. This is mainly due to a lack of awareness of the major mapping constraints and to time-consuming approaches that solve the temporal and spatial mapping problems jointly using the CGRA's buffer resources. This work aims to improve both the performance and the robustness of CGRA modulo scheduling. We decouple the CGRA MS problem into temporal and spatial subproblems and reorganize the mechanisms that connect them. We present a detailed, systematic mapping flow that addresses the temporal mapping problem with a buffer-aware algorithm and efficient connectivity and computation constraints, and we develop a fast, stable spatial mapping algorithm with a retransmission and rearrangement mechanism. Our MS algorithm maps loops onto CGRAs with higher performance and shorter compilation time. The results show that, given the same compilation budget, our mapping algorithm achieves a higher compilation success rate and improves performance by 5% to 14% over standard CGRA mapping algorithms.
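As background on the modulo-scheduling terminology used above (not this paper's algorithm), the sketch below computes the standard minimum initiation interval MII = max(ResMII, RecMII) that any modulo schedule must respect; schedulers then try II = MII, MII+1, ... until a mapping succeeds.

```python
# Background sketch: the minimum initiation interval (MII) bound used in
# classic modulo scheduling, MII = max(ResMII, RecMII).
import math
from typing import Dict, List, Tuple

def res_mii(op_counts: Dict[str, int], resources: Dict[str, int]) -> int:
    """Resource bound: for each resource kind, ops of that kind / available units."""
    return max(math.ceil(op_counts[k] / resources[k]) for k in op_counts)

def rec_mii(cycles: List[Tuple[int, int]]) -> int:
    """Recurrence bound: for each dependence cycle given as
    (total latency, total loop-carried distance), ceil(latency / distance)."""
    return max((math.ceil(lat / dist) for lat, dist in cycles), default=1)

# Example: 6 ALU ops on 4 ALUs, 2 loads on 1 load/store unit, and one
# recurrence with latency 3 carried over 1 iteration.
mii = max(res_mii({"alu": 6, "mem": 2}, {"alu": 4, "mem": 1}),
          rec_mii([(3, 1)]))
print(mii)  # 3
```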


2020, Vol. 18 (12), pp. 2166-2173
Author(s): Lucas Fernandes Ribeiro, Francisco Carlos Silva, Ivan Saraiva Silva

2020, Vol. 31 (9), pp. 2201-2219
Author(s): Zhongyuan Zhao, Weiguang Sheng, Qin Wang, Wenzhi Yin, Pengfei Ye, et al.
