Automated Conversion of SystemC Fixed-Point Data Types

Author(s):  
Axel G. Braun ◽  
Djones V. Lettnin ◽  
Joachim Gerlach ◽  
Wolfgang Rosenstiel
Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2859
Author(s):  
Mannhee Cho ◽  
Youngmin Kim

Convolutional neural networks (CNNs) are widely used in modern applications for their versatility and high classification accuracy. Field-programmable gate arrays (FPGAs) are considered suitable platforms for CNNs because of their high performance, rapid development cycles, and reconfigurability. Although many studies have proposed methods for implementing high-performance CNN accelerators on FPGAs using optimized data types and algorithm transformations, accelerators can be optimized further by investigating more efficient uses of FPGA resources. In this paper, we propose an FPGA-based CNN accelerator using multiple approximate accumulation units based on a fixed-point data type. We implemented the LeNet-5 CNN architecture, which performs classification of handwritten digits using the MNIST handwritten digit dataset. The proposed accelerator was implemented using a high-level synthesis tool on a Xilinx FPGA. It applies an optimized fixed-point data type and loop parallelization to improve performance. Approximate operation units are implemented using FPGA logic resources instead of high-precision digital signal processing (DSP) blocks, which are inefficient for low-precision data. Our accelerator model achieves 66% lower memory usage and approximately 50% lower network latency than a floating-point design, and its resource utilization is optimized to use 78% fewer DSP blocks than typical fixed-point designs.
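The abstract above does not specify the word lengths or the exact approximation scheme, but the general idea of a fixed-point multiply-accumulate unit with approximate (LSB-truncating) accumulation can be sketched as follows; the Q4.4 operand format and the 4 truncated product bits are illustrative assumptions, not values from the paper.

```python
# Sketch: fixed-point MAC with approximate accumulation.
# Assumed parameters (not from the paper): Q4.4 operands, 4 truncated LSBs.
FRAC_BITS = 4    # fractional bits of the Q4.4 operand format
TRUNC_BITS = 4   # low-order product bits dropped before accumulation

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantize a real number to an integer fixed-point representation."""
    return int(round(x * (1 << frac_bits)))

def to_float(q, frac_bits):
    """Convert a fixed-point integer back to a real number."""
    return q / (1 << frac_bits)

def approx_mac(weights, activations):
    """Accumulate products while discarding TRUNC_BITS LSBs of each product,
    mimicking a narrow logic-only accumulator instead of a full-width DSP add."""
    acc = 0
    for w, a in zip(weights, activations):
        prod = to_fixed(w) * to_fixed(a)   # Q4.4 * Q4.4 -> 2*FRAC_BITS frac bits
        acc += prod >> TRUNC_BITS          # approximate: drop low-order bits
    # the sum now carries 2*FRAC_BITS - TRUNC_BITS fractional bits
    return to_float(acc, 2 * FRAC_BITS - TRUNC_BITS)

exact = sum(w * a for w, a in zip([0.5, -0.25, 0.75], [1.0, 0.5, -0.5]))
approx = approx_mac([0.5, -0.25, 0.75], [1.0, 0.5, -0.5])
```

Truncating product LSBs before accumulation is one common way to keep the adder narrow enough to map onto FPGA logic fabric rather than DSP blocks; the accuracy cost depends on how many bits are dropped.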


2010 ◽  
Vol E93-C (3) ◽  
pp. 361-368
Author(s):  
Benjamin CARRION SCHAFER ◽  
Yusuke IGUCHI ◽  
Wataru TAKAHASHI ◽  
Shingo NAGATANI ◽  
Kazutoshi WAKABAYASHI

1984 ◽  
Vol 36 (3) ◽  
pp. 495-519
Author(s):  
Jiří Adámek ◽  
Wolfgang Merzenich

In the literature on the definition of data types, many approaches use some concept of fixed point. Wand [13] and Lehmann and Smyth [9], for example, constructed data types as least fixed points of functors F: K → K. Arbib and Manes [3] showed that some data types turn out to be the greatest fixed points of such endofunctors. In this paper we consider least and greatest fixed points that have a given property.
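A standard illustration of the distinction (not taken from the paper itself) is the list functor over an alphabet A: its least fixed point is the type of finite lists, while its greatest fixed point also admits infinite lists.

```latex
% Fixed points of the endofunctor F(X) = 1 + A \times X on Set:
%   \mu F  -- least fixed point (initial F-algebra):  finite lists over A
%   \nu F  -- greatest fixed point (final F-coalgebra): finite and infinite lists
\[
  F(X) = 1 + A \times X, \qquad
  \mu F \;\cong\; \mathrm{List}(A), \qquad
  \nu F \;\cong\; \mathrm{List}(A) \,\cup\, A^{\omega}
\]
```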


Author(s):  
Toshiyuki Dobashi ◽  
Atsushi Tashiro ◽  
Masahiro Iwahashi ◽  
Hitoshi Kiya

A tone mapping operation (TMO) for HDR images using fixed-point arithmetic is proposed. A TMO generates a low dynamic range (LDR) image from a high dynamic range (HDR) image by compressing its dynamic range. Since HDR images are generally expressed in a floating-point data format, a TMO also deals with floating-point data even though the resulting LDR images contain integer data. As a result, conventional TMOs require substantial resources in terms of computational and memory cost. To reduce these resources, an integer TMO that treats a floating-point number as two 8-bit integers was previously proposed. However, that method is limited in the input HDR image formats it supports. The proposed method introduces an intermediate format to relax this limitation and extends the integer TMO to that format. The proposed integer TMO can therefore be applied to multiple formats, such as RGBE and OpenEXR. Moreover, the method performs all calculations in the TMO with fixed-point arithmetic. Using both integer data and fixed-point arithmetic, the method reduces not only the memory cost but also the computational cost. The experimental and evaluation results show that the proposed method reduces the computational and memory cost and produces LDR images of almost the same quality as the conventional method with floating-point arithmetic.
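The paper's intermediate format and TMO are not reproduced here, but the two underlying ideas, splitting a floating-point value into two 8-bit integers (mantissa and exponent, RGBE-style) and evaluating a tone curve entirely in fixed point, can be sketched as below. The Reinhard-style curve L/(1+L) and the Q.8 precision are illustrative assumptions.

```python
# Sketch: RGBE-style two-byte encoding of an HDR luminance, plus a
# simple global tone curve L/(1+L) evaluated in integer fixed point.
# Assumptions (not from the paper): Reinhard-style curve, Q.8 precision.
import math

def encode_rgbe_like(l):
    """Split a positive float into an 8-bit mantissa and 8-bit exponent,
    so that l ~= (mantissa / 256) * 2**(exponent - 128), as in RGBE."""
    if l <= 0.0:
        return 0, 0
    m, e = math.frexp(l)          # l = m * 2**e with m in [0.5, 1)
    return int(m * 256), e + 128

def decode_rgbe_like(mantissa, exponent):
    """Inverse of encode_rgbe_like (up to 8-bit mantissa precision)."""
    return (mantissa / 256.0) * 2.0 ** (exponent - 128)

def tonemap_fixed(mantissa, exponent, frac_bits=8):
    """Evaluate L/(1+L) using only integer shifts, adds, and one divide."""
    # Align the mantissa (8 fractional bits) to frac_bits fractional bits.
    shift = exponent - 128 + frac_bits - 8
    l_fx = mantissa << shift if shift >= 0 else mantissa >> -shift
    one = 1 << frac_bits
    ldr_fx = (l_fx * one) // (l_fx + one)   # L/(1+L) in fixed point
    return ldr_fx / one                     # scale back to [0, 1) for display

m, e = encode_rgbe_like(4.0)
ldr = tonemap_fixed(m, e)   # close to 4/(1+4) = 0.8
```

The encode/decode pair shows why the two-byte representation keeps memory cost low, while the tone curve itself never touches a floating-point operation until the final display scaling.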

