Fixed Point Data Type Modeling for High Level Synthesis

2010 ◽  
Vol E93-C (3) ◽  
pp. 361-368
Author(s):  
Benjamin CARRION SCHAFER ◽  
Yusuke IGUCHI ◽  
Wataru TAKAHASHI ◽  
Shingo NAGATANI ◽  
Kazutoshi WAKABAYASHI

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2859
Author(s):  
Mannhee Cho ◽  
Youngmin Kim

Convolutional neural networks (CNNs) are widely used in modern applications for their versatility and high classification accuracy. Field-programmable gate arrays (FPGAs) are considered suitable platforms for CNNs owing to their high performance, rapid development cycles, and reconfigurability. Although many studies have proposed methods for implementing high-performance CNN accelerators on FPGAs using optimized data types and algorithm transformations, accelerators can be optimized further by using FPGA resources more efficiently. In this paper, we propose an FPGA-based CNN accelerator that uses multiple approximate accumulation units based on a fixed-point data type. We implemented the LeNet-5 CNN architecture, which classifies handwritten digits from the MNIST dataset. The proposed accelerator was implemented using a high-level synthesis tool on a Xilinx FPGA. It applies an optimized fixed-point data type and loop parallelization to improve performance. The approximate operation units are implemented with FPGA logic resources instead of high-precision digital signal processing (DSP) blocks, which are inefficient for low-precision data. Our accelerator achieves 66% less memory usage and approximately 50% lower network latency than a floating-point design, and its resource utilization is optimized to use 78% fewer DSP blocks than general fixed-point designs.
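To illustrate the general idea behind approximate fixed-point accumulation, the sketch below shows an exact and a truncation-based approximate multiply-accumulate in plain C++. It is not the paper's actual design; the Q-format choices (Q1.7 operands, Q8.8 accumulator) and the number of dropped bits are assumptions for illustration only.

```cpp
// Minimal sketch (assumed formats, not the paper's design) of fixed-point
// MAC with a truncation-based approximation, mimicking a reduced-precision
// accumulation unit built from LUT logic instead of a DSP block.
#include <cstdint>
#include <cstdio>
#include <vector>

using q7_t  = int8_t;   // Q1.7: value = raw / 128.0
using acc_t = int32_t;  // Q8.8 accumulator: value = raw / 256.0

// Exact fixed-point MAC: (Q1.7 * Q1.7) -> Q2.14, rescaled to Q8.8.
acc_t mac_exact(acc_t acc, q7_t a, q7_t w) {
    int32_t prod = static_cast<int32_t>(a) * static_cast<int32_t>(w); // Q2.14
    return acc + (prod >> 6);                                         // -> Q8.8
}

// Approximate MAC: zero out low-order product bits before accumulating.
acc_t mac_approx(acc_t acc, q7_t a, q7_t w, int dropped_bits = 4) {
    int32_t prod = static_cast<int32_t>(a) * static_cast<int32_t>(w);
    prod &= ~((1 << dropped_bits) - 1);  // drop cheap-to-ignore precision
    return acc + (prod >> 6);
}

int main() {
    std::vector<q7_t> act = {64, -32, 90, 12}, wgt = {50, 25, -70, 100};
    acc_t exact = 0, approx = 0;
    for (size_t i = 0; i < act.size(); ++i) {
        exact  = mac_exact(exact,  act[i], wgt[i]);
        approx = mac_approx(approx, act[i], wgt[i]);
    }
    std::printf("exact  dot product: %f\n", exact  / 256.0);
    std::printf("approx dot product: %f\n", approx / 256.0);
}
```

In a real HLS flow the same trade-off would be expressed through the tool's fixed-point types and resource directives; the point here is only how dropping low-order bits shrinks the accumulation logic at a small accuracy cost.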


2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Daniel Menard ◽  
Nicolas Herve ◽  
Olivier Sentieys ◽  
Hai-Nam Nguyen

Implementing signal processing applications in embedded systems generally requires the use of fixed-point arithmetic. The main problem slowing down the hardware implementation flow is the lack of high-level development tools that can target these architectures from an algorithmic specification written with floating-point data types. In this paper, a new method to automatically implement a floating-point algorithm on an FPGA or an ASIC using fixed-point arithmetic is proposed. An iterative process over high-level synthesis and data word-length optimization is used to improve both of these interdependent steps. Indeed, high-level synthesis requires operator word-length knowledge to correctly execute its allocation, scheduling, and resource binding steps, while word-length optimization requires resource binding and scheduling information to correctly group operations. To dramatically reduce the optimization time compared to fixed-point simulation-based methods, the accuracy evaluation is carried out with an analytical method. Experiments on several signal processing algorithms show the efficiency of the proposed method: compared to classical methods, the average architecture area reduction is between 10% and 28%.
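The sketch below is a small, self-contained illustration (assumptions, not the authors' tool) of why analytical accuracy evaluation can replace fixed-point simulation: it quantizes floating-point samples to a candidate word length and compares the measured quantization-noise power against the classical closed-form estimate q^2/12.

```cpp
// Illustrative float-to-fixed quantization and accuracy estimate.
// The signal, word length, and rounding mode are assumed for the example.
#include <cmath>
#include <cstdio>
#include <vector>

// Quantize to a fixed-point grid with f fractional bits (round to nearest).
double quantize(double x, int f) {
    const double step = std::ldexp(1.0, -f);  // q = 2^-f
    return std::round(x / step) * step;
}

int main() {
    const int f = 8;                          // candidate fractional word length
    std::vector<double> samples;
    for (int n = 0; n < 10000; ++n)           // pseudo-signal in [-0.9, 0.9]
        samples.push_back(std::sin(0.01 * n) * 0.9);

    double measured = 0.0;                    // simulation-style error measure
    for (double x : samples) {
        const double e = x - quantize(x, f);
        measured += e * e;
    }
    measured /= samples.size();

    const double q = std::ldexp(1.0, -f);
    const double analytical = q * q / 12.0;   // uniform quantization-noise model

    std::printf("measured   noise power: %.3e\n", measured);
    std::printf("analytical noise power: %.3e\n", analytical);
}
```

A word-length optimizer can evaluate the analytical expression for each candidate format in constant time, whereas the simulation-style loop must be rerun over the whole data set for every candidate, which is what makes the analytical route dramatically faster.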


2021 ◽  
Vol 44 ◽  
Author(s):  
Charles R. Gallistel

Numbers are symbols manipulated in accord with the axioms of arithmetic. They sometimes represent discrete and continuous quantities (e.g., numerosities, durations, rates, distances, directions, and probabilities), but they are often simply names. Brains, including insect brains, represent the rational numbers with a fixed-point data type, consisting of a significand and an exponent, thereby conveying both magnitude and precision.
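As a purely illustrative sketch of the kind of data type the abstract describes, the struct below pairs a significand with an exponent so that a single symbol carries both a magnitude and an explicit precision (the exponent fixes the grain of the significand). The field widths and names are assumptions for illustration, not a claim about neural implementation.

```cpp
// Hypothetical significand/exponent pair conveying magnitude and precision.
#include <cmath>
#include <cstdint>
#include <cstdio>

struct ScaledNumber {
    int32_t significand;  // counted units
    int8_t  exponent;     // scale: value = significand * 2^exponent

    double value() const { return significand * std::ldexp(1.0, exponent); }
    double precision() const { return std::ldexp(1.0, exponent); }  // grain size
};

int main() {
    ScaledNumber duration{3125, -7};  // ~24.41 with grain 1/128
    std::printf("value = %f, precision = %f\n",
                duration.value(), duration.precision());
}
```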


2015 ◽  
Vol 51 (3) ◽  
pp. 244-246 ◽  
Author(s):  
L.S. Rosa ◽  
C.F.M. Toledo ◽  
V. Bonato
