floating point number
Recently Published Documents


TOTAL DOCUMENTS

64
(FIVE YEARS 8)

H-INDEX

6
(FIVE YEARS 1)

2021 ◽  
Vol 16 (2) ◽  
pp. 1-12
Author(s):  
Fabio Benevenuti ◽  
Fernanda Lima Kastensmidt ◽  
Ádria Barros de Oliveira ◽  
Nemitala Added ◽  
Vitor Ângelo Paulino de Aguiar ◽  
...  

This work discusses the main aspects of vulnerability and accuracy degradation of an image classification engine implemented on SRAM-based FPGAs under faults. The image classification engine is an all-convolutional neural network (CNN) trained on a traffic sign recognition benchmark dataset. The Caffe and Ristretto frameworks were used for CNN training and fine-tuning, while the ZynqNet inference engine was adopted as the hardware implementation on a Xilinx 28 nm SRAM-based FPGA. The CNN under test was generated using an evolutionary approach based on a genetic algorithm. The methodology for qualifying this CNN under faults is presented, and both heavy-ion accelerated irradiation and emulated fault injection were performed. To cross-validate the results from radiation and fault injection, different implementations of the same CNN were tested using reduced arithmetic precision and protection of user data by Hamming codes, in combination with configuration memory healing by the scrubbing mechanism available in Xilinx FPGAs. Some of these alternative implementations significantly increased the mission time of the CNN compared to the original ZynqNet operating on 32-bit floating-point numbers, and the experiment suggests areas for further improvement in the fault injection methodology in use.
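The emulated fault injection mentioned above amounts to flipping individual bits of stored values and observing the effect on the computation. As a minimal illustration (not the authors' FPGA configuration-memory setup), the sketch below flips one bit of an IEEE 754 32-bit value, showing why faults in exponent bits are far more damaging than faults in low mantissa bits:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Emulate a single-event upset: flip one bit of x's float32 encoding.

    Bit 31 is the sign, bits 23-30 the exponent, bits 0-22 the mantissa.
    """
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    raw ^= 1 << bit
    (y,) = struct.unpack("<f", struct.pack("<I", raw))
    return y

# A low mantissa bit barely perturbs the value; the top exponent bit of
# 1.0 turns it into infinity.
print(flip_bit(1.0, 0))    # tiny perturbation
print(flip_bit(1.0, 30))   # catastrophic upset
```

This asymmetry is one reason reduced-precision implementations can change a design's fault sensitivity: fewer stored bits means a different mix of critical and benign upset targets.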


2020 ◽  
Vol 3 (1) ◽  
pp. 291-300
Author(s):  
Serkan Dereli ◽  
Mahmut Uç

Digital systems consist of thousands of digital circuit blocks operating in the background, performing tasks in their simplest form such as addition, subtraction, multiplication, and division. Exponential expressions such as square roots and cube roots are likewise found in many digital systems and, like these circuits, carry out essential tasks. Although these operations seem to be used only in circuits performing mathematical computations, they actually play an active role in solving many engineering problems. In this study, a digital circuit that computes both integer and floating-point exponents of a 32-bit floating-point number was designed. This digital circuit, coded in the VHDL language, can be used from beginner to advanced level in FPGA-based systems. In addition, three floating-point IP cores (logarithm, multiplication, and exponent) were used in this digital circuit, and results were obtained with a total of five finite state machines in sixty-six clock cycles.
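The three IP cores named in the abstract (logarithm, multiplication, exponent) suggest the standard identity x^y = exp(y · ln x) for arbitrary floating-point exponents. A minimal software sketch of that pipeline (my reading of the datapath, not the authors' VHDL):

```python
import math

def float_pow(base: float, exponent: float) -> float:
    """Compute base**exponent via the log -> multiply -> exp pipeline.

    x^y = exp(y * ln(x)); valid for base > 0, which is the same
    restriction a hardware logarithm core imposes.
    """
    return math.exp(exponent * math.log(base))

print(float_pow(2.0, 10.0))       # ~1024
print(float_pow(8.0, 1.0 / 3.0))  # ~2, a cube root via a fractional exponent
```

In hardware, each of the three stages maps naturally to one IP core, with finite state machines sequencing operands through them over successive clock cycles.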


In the present scenario, every method needs to be fast, efficient and simple. The Fast Fourier Transform (FFT) is an efficient algorithm to compute the N-point Discrete Fourier Transform (DFT). It has wide applications in communication systems, signal processing, image processing and instrumentation. However, computing the FFT requires a huge number of complex multiplications, so to make this process fast and simple, the multiplier must be fast and power-efficient. To address this problem, the combination of the Urdhva Tiryagbhyam and Karatsuba algorithms offers an efficient technique of multiplication [1]. Vedic mathematics is the ancient Indian system of mathematics, which has a distinctive technique of calculation based on sixteen Sutras. Using these techniques in the calculation algorithms of the coprocessor can reduce complexity, execution time, area, power, etc. The novelty of this project is an FFT design methodology using a combination of the Urdhva Tiryagbhyam and Karatsuba algorithms for floating-point multiplication. By combining these two approaches, the proposed design methodology is time-, area- and power-efficient [1] [2]. The code is written in Verilog, and FPGA synthesis on a Virtex-5 is performed using Xilinx ISE 14.5.
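Karatsuba's contribution is reducing one n-digit multiplication to three multiplications of roughly half the size instead of four, which in hardware translates to smaller partial-product trees. A minimal recursive sketch on nonnegative integers (illustrating the algorithm only, not the paper's Verilog floating-point multiplier):

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication of nonnegative integers.

    Splits each operand at n bits and uses three recursive products:
    x*y = a*2^(2n) + c*2^n + b, where
    a = hi_x*hi_y, b = lo_x*lo_y, c = (hi_x+lo_x)*(hi_y+lo_y) - a - b.
    """
    if x < 16 or y < 16:           # small operands: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * n)) + (c << n) + b
```

Urdhva Tiryagbhyam handles the small base-case multiplications efficiently in hardware, while Karatsuba's splitting keeps the count of such multiplications low for wide mantissas.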


Author(s):  
Yuxuan Wang ◽  
Yuanyong Luo ◽  
Zhongfeng Wang ◽  
Qinghong Shen ◽  
Hongbing Pan

2020 ◽  
Vol 2020 ◽  
pp. 1-18 ◽  
Author(s):  
Subin Moon ◽  
Younho Lee

As a method of privacy-preserving data analysis (PPDA), fully homomorphic encryption (FHE) has recently been in the spotlight. Unfortunately, because many data analysis methods assume real-valued data, FHE-based PPDA methods could not provide a sufficient level of accuracy, since FHE natively supports only fixed-point real-number representation. In this paper, we propose a new method to represent encrypted floating-point real numbers on top of FHE. The proposed method is designed to have a range and accuracy analogous to a 32-bit floating-point number in the IEEE 754 representation. We propose methods to perform arithmetic operations and size-comparison operations. The proposed method is implemented using two different FHE schemes, HEAAN and TFHE. As a result, HEAAN proves to be very efficient for arithmetic operations, while TFHE is efficient for size comparison. This study is expected to contribute to the practical use of FHE-based PPDA.
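One way to build floating point on top of a fixed-point-only scheme is to keep a fixed-point significand and an exponent as separate values and maintain normalization explicitly. The plaintext sketch below shows that idea only; it is my illustrative analogue, not the paper's construction, and in the actual scheme both fields would be HEAAN/TFHE ciphertexts with the renormalization done homomorphically:

```python
from dataclasses import dataclass

SCALE = 2 ** 23  # fixed-point scale for the significand, as in binary32

@dataclass
class EncFloat:
    # Plaintext stand-in for an encrypted pair (significand, exponent).
    mantissa: int   # fixed-point significand, SCALE <= mantissa < 2*SCALE
    exponent: int

def mul(a: EncFloat, b: EncFloat) -> EncFloat:
    """Multiply: multiply significands, add exponents, renormalize."""
    m = (a.mantissa * b.mantissa) // SCALE
    e = a.exponent + b.exponent
    if m >= 2 * SCALE:   # product significand in [2, 4): shift back down
        m //= 2
        e += 1
    return EncFloat(m, e)

def to_float(x: EncFloat) -> float:
    return (x.mantissa / SCALE) * (2.0 ** x.exponent)

a = EncFloat(int(1.5 * SCALE), 0)   # 1.5
b = EncFloat(int(1.5 * SCALE), 1)   # 3.0
print(to_float(mul(a, b)))          # 4.5
```

The design tension the abstract reports follows from this split: multiplication is pure arithmetic on the fields (HEAAN's strength), while renormalization and comparison need data-dependent branching, which favors a boolean-circuit scheme like TFHE.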


2019 ◽  
pp. 461-470
Author(s):  
Oleh Horyachyy ◽  
Leonid Moroz ◽  
Viktor Otenko

The purpose of this paper is to introduce a modification of the Fast Inverse Square Root (FISR) approximation algorithm with reduced relative errors. The original algorithm uses a magic-constant trick on the input floating-point number to obtain a clever initial approximation and then applies the classical iterative Newton-Raphson formula. It was first used in the computer game Quake III Arena, causing widespread discussion among scientists and programmers, and it can now frequently be found in many scientific applications, although it has some drawbacks. The proposed algorithm chooses the parameters of the modified inverse square root algorithm to minimize the relative error, and it uses two magic constants in order to avoid one floating-point multiplication. In addition, we use the fused multiply-add function and higher-order iterative methods in the second iteration to improve the accuracy. Such algorithms do not require storage of large tables for the initial approximation and can be used effectively on field-programmable gate arrays (FPGAs) and other platforms without hardware support for this function.
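For reference, the original single-constant FISR that the paper modifies looks as follows (this is the well-known Quake III form with one Newton-Raphson step, not the paper's improved two-constant/FMA variant):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Classic Fast Inverse Square Root, emulating float32 bit tricks.

    Reinterprets the float's bits as an integer, applies the magic
    constant to get an initial guess, then refines with one
    Newton-Raphson iteration for 1/sqrt(x).
    """
    (i,) = struct.unpack("<I", struct.pack("<f", x))
    i = 0x5F3759DF - (i >> 1)              # magic-constant initial guess
    (y,) = struct.unpack("<f", struct.pack("<I", i))
    y = y * (1.5 - 0.5 * x * y * y)        # one Newton-Raphson iteration
    return y

print(fast_inv_sqrt(4.0))   # ~0.499, versus the exact 0.5
```

After one iteration the classic version's relative error stays below roughly 0.2%; the paper's modification retunes the constants and adds an FMA-based second iteration to push this further down.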


2019 ◽  
Vol 1 (1) ◽  
pp. 26-32
Author(s):  
Bahadır ÖZKILBAÇ

FPGAs offer capabilities such as low power consumption, multiple I/O pins, and parallel processing. Because of these capabilities, FPGAs are commonly used in numerous areas that require mathematical computing, such as signal processing, artificial neural network design, image processing and filter applications. From the simplest to the most complex, all mathematical applications are based on multiplication, division, subtraction and addition. When calculating, it is often necessary to deal with numbers that are fractional, large or negative. In this study, an Arithmetic Logic Unit (ALU) performing multiplication, division, addition and subtraction on IEEE 754 32-bit floating-point numbers, which are used to represent fractional and large numbers, is designed using the FPGA part of the Xilinx Zynq-7000 integrated circuit. The programming language used is VHDL. The designed ALU is then controlled by commands sent from the ARM processor part of the same integrated circuit.
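Any such ALU begins by splitting operands into the three IEEE 754 binary32 fields, since the datapaths operate on sign, exponent and mantissa separately. A small sketch of that decomposition (an illustration of the format, not the study's VHDL):

```python
import struct

def fields(x: float) -> tuple[int, int, int]:
    """Split a float32 into its IEEE 754 fields.

    Layout: 1 sign bit | 8 biased-exponent bits | 23 mantissa bits.
    """
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    sign = raw >> 31
    exponent = (raw >> 23) & 0xFF        # biased by 127
    mantissa = raw & 0x7FFFFF            # implicit leading 1 not stored
    return sign, exponent, mantissa

print(fields(1.0))    # (0, 127, 0): +1.0 x 2^0
print(fields(-2.5))   # (1, 128, 2097152): -1.25 x 2^1
```

Multiplication and division then work on the fields directly (add or subtract exponents, multiply or divide mantissas), while addition and subtraction first align the exponents.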


2019 ◽  
Vol 6 (1) ◽  
pp. 73-81
Author(s):  
Parasian D. P Silitonga ◽  
Irene Sri Morina

Audio file sizes are relatively large compared to files in text format. Large files can cause various problems, including high storage space requirements and long transmission times. File compression is one solution to the problem of large file sizes. Arithmetic coding is one algorithm that can be used to compress audio files. The arithmetic coding algorithm encodes the audio file by replacing a sequence of input symbols with a single floating-point number, producing as output a value greater than 0 and smaller than 1. The compression and decompression of audio files in this study were performed on several wave files. Wave files are a standard audio file format developed by Microsoft and IBM, stored using PCM (Pulse Code Modulation) coding. The wave file compression ratio obtained in this study was 16.12 percent, with an average compression time of 45.89 seconds and an average decompression time of 0.32 seconds.
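The core of arithmetic coding is interval narrowing: each symbol shrinks the current interval in [0, 1) in proportion to its probability, and any number inside the final interval encodes the whole sequence. A minimal encoder sketch using exact rationals (a generic illustration of the algorithm, not the study's wave-file implementation):

```python
from fractions import Fraction

def arithmetic_encode(symbols, probs):
    """Narrow [0, 1) symbol by symbol; return the final (low, high) interval.

    probs maps each symbol to its probability (values must sum to 1);
    symbols are ordered by their sort order to fix sub-interval positions.
    """
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        # cumulative probability of all symbols ordered before s
        cum = sum((p for t, p in sorted(probs.items()) if t < s), Fraction(0))
        low += width * cum       # move to the start of s's sub-interval
        width *= probs[s]        # shrink to s's share of the interval
    return low, low + width      # any number in [low, low+width) decodes back

probs = {"a": Fraction(1, 2), "b": Fraction(1, 2)}
print(arithmetic_encode("ab", probs))   # the interval [1/4, 1/2)
```

A decoder reverses the process: given a number in the final interval and the same probability table, it recovers the symbols one by one, which is why decompression in the study runs so much faster than the encoding pass.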

