Modified Distributed Arithmetic Architecture for Adiabatic DSP Systems

Author(s):  
Dusan Suvakovic ◽  
C. Andre T. Salama
2020 ◽  
Vol 23 (2) ◽  
pp. 259-264 ◽  
Author(s):  
Grande Naga Jyothi ◽  
Kishore Sanapala ◽  
A. Vijayalakshmi

In current technology, latency, power, and area are the crucial parameters when mapping any algorithm onto an FPGA. The Fast Fourier Transform (FFT) is a fundamental tool for DSP applications and plays a vital role in extracting signal characteristics with minimal implementation resources. The adder is of utmost importance in such designs, and various works have been proposed to optimize adders with respect to delay and area. In the proposed system, a hybrid adder that combines sub-adders such as the carry look-ahead adder (CLA), ripple carry adder (RCA), and carry save adder (CSA) is introduced. This not only reduces delay and area but also increases speed. The hybrid adder replaces the conventional adders in the FFT architecture and acts as a complex adder. High-speed multipliers are likewise fundamental parts of DSP systems, but multiplication is a complex and time-consuming operation. To lower the complexity of multiplication, various multiplier-less methods have been introduced; here, an efficient distributed arithmetic (DA) based complex multiplier is proposed in place of the regular multiplier. The pipelining technique is applied only to the hybrid adder. Radix-2 FFT designs for 8-point and 1024-point FFTs are implemented in Verilog, and simulation is carried out using the Xilinx 14.5i tool with a Spartan-6 kit.
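The DA-based complex multiplier itself is not detailed in the abstract; purely as a minimal sketch of the general distributed-arithmetic idea (the word length, coefficient values, and function names below are illustrative assumptions, not the paper's design), an inner product with fixed coefficients can be computed without multipliers by bit-serial lookups into a precomputed table of coefficient partial sums:

```python
# Distributed-arithmetic inner product: y = sum_k c[k] * x[k]
# The coefficients are fixed, so all 2**K partial sums can be precomputed once.

def build_da_lut(coeffs):
    """Precompute the sum of coefficients for every input bit pattern."""
    k = len(coeffs)
    return [sum(c for c, bit in zip(coeffs, format(pattern, f"0{k}b")[::-1]) if bit == "1")
            for pattern in range(1 << k)]

def da_inner_product(coeffs, xs, width=8):
    """Bit-serial DA evaluation of sum(c*x) for unsigned `width`-bit inputs."""
    lut = build_da_lut(coeffs)
    acc = 0
    for b in range(width):                      # one bit plane per cycle
        pattern = 0
        for i, x in enumerate(xs):
            pattern |= ((x >> b) & 1) << i      # gather bit b of every input
        acc += lut[pattern] << b                # shift-accumulate the LUT entry
    return acc

coeffs = [3, -1, 4, 2]
xs = [10, 7, 200, 33]
assert da_inner_product(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))
```

In hardware the lookup table is a small ROM of 2^K words and the shift-accumulate loop becomes a single adder, which is why DA trades multipliers for memory.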


2020 ◽  
Author(s):  
Priyadharsini Sarath ◽  
Gnanamurugan Selvan

2021 ◽  
Vol 11 (12) ◽  
pp. 5523
Author(s):  
Qian Ye ◽  
Minyan Lu

The main purpose of our provenance research for DSP (distributed stream processing) systems is to analyze abnormal results. Provenance for these systems is nontrivial because of the ephemerality of stream data and the instant data processing mode of modern DSP systems. Challenges include, but are not limited to, avoiding excessive runtime overhead, reducing provenance-related data storage, and providing provenance in an easy-to-use fashion. Without any prior knowledge of which kinds of data may eventually lead to the abnormal results, we have to track all transformations in detail, which potentially places a heavy burden on the system. This paper proposes s2p (Stream Process Provenance), which mainly consists of online provenance and offline provenance, to provide fine- and coarse-grained provenance at different levels of precision. We base the design of s2p on the fact that, for a mature online DSP system, abnormal results are rare, and results that require a detailed analysis are even rarer. We also consider state transition in our provenance explanation. We implement s2p on Apache Flink, named s2p-flink, and conduct three experiments to evaluate its scalability, efficiency, and overhead in terms of end-to-end cost, throughput, and space overhead. Our evaluation shows that s2p-flink incurs a 13% to 32% cost overhead, an 11% to 24% decline in throughput, and little additional space cost in the online provenance phase. Experiments also demonstrate that s2p-flink scales well. A case study is presented to demonstrate the feasibility of the whole s2p solution.
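The abstract does not specify how individual records are traced; purely as a conceptual sketch of fine-grained provenance in a stream pipeline (the class and operator names below are invented for illustration and are not s2p or Flink APIs), each output record can carry the identifiers of the source records that contributed to it, so an abnormal result can be traced back through the operators:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    value: float
    lineage: set = field(default_factory=set)    # IDs of contributing source records

def source(values):
    """Assign a unique ID to every ingested record."""
    return [Record(v, {i}) for i, v in enumerate(values)]

def map_op(records, fn):
    """One-to-one operator: each output inherits its input's lineage."""
    return [Record(fn(r.value), set(r.lineage)) for r in records]

def window_sum(records, size):
    """Many-to-one operator: output lineage is the union of its inputs' lineages."""
    out = []
    for i in range(0, len(records), size):
        chunk = records[i:i + size]
        out.append(Record(sum(r.value for r in chunk),
                          set().union(*(r.lineage for r in chunk))))
    return out

stream = source([1.0, 2.0, 3.0, 400.0])           # the last value is "abnormal"
result = window_sum(map_op(stream, lambda v: v * 2), size=2)
abnormal = max(result, key=lambda r: r.value)
print(abnormal.value, abnormal.lineage)            # traces back to source records {2, 3}
```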


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 983
Author(s):  
Jingjian Li ◽  
Wei Wang ◽  
Hong Mo ◽  
Mengting Zhao ◽  
Jianhua Chen

A distributed arithmetic coding algorithm based on source symbol purging and using the context model is proposed to solve the asymmetric Slepian–Wolf problem. The proposed scheme makes better use of both the correlation between adjacent symbols in the source sequence and the correlation between the corresponding symbols of the source and side-information sequences to improve the coding performance of the source. Since the encoder purges a part of the symbols from the source sequence, a shorter codeword length can be obtained. The purged symbols are still used as the context of the subsequent symbols to be encoded. An improved calculation method for the posterior probability is also proposed based on the purging feature, such that the decoder can exploit the correlation within the source sequence to improve the decoding performance. In addition, the scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. The simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits better decoding performance than other DSC schemes at the same code rate.
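The forbidden-symbol mechanism mentioned above is a standard error-detection device in arithmetic coding; the following simplified sketch (a floating-point binary coder with an assumed symbol probability p0 and forbidden-symbol probability eps, not the paper's integer implementation) shows how a slice of every coding interval is reserved so that a decoder landing in it can flag corruption:

```python
# Binary arithmetic coding with a "forbidden symbol": a fraction `eps` of every
# coding interval is reserved and never selected by the encoder, so a decoder
# that ever lands in the reserved slice knows the bitstream is corrupted.

def encode(bits, p0=0.6, eps=0.05):
    low, high = 0.0, 1.0
    for b in bits:
        span = (high - low) * (1.0 - eps)       # usable part of the interval
        split = low + span * p0
        if b == 0:
            high = split
        else:
            low, high = split, low + span       # top eps slice stays unused
    return (low + high) / 2                     # any value inside the final interval

def decode(code, n, p0=0.6, eps=0.05):
    low, high, out = 0.0, 1.0, []
    for _ in range(n):
        span = (high - low) * (1.0 - eps)
        if code >= low + span:                  # fell into the forbidden slice
            raise ValueError("forbidden symbol reached: corrupted stream")
        split = low + span * p0
        if code < split:
            out.append(0)
            high = split
        else:
            out.append(1)
            low, high = split, low + span
    return out

msg = [0, 1, 1, 0, 0, 0, 1]
assert decode(encode(msg), len(msg)) == msg
```

Reserving the slice costs a small amount of rate (roughly -log2(1 - eps) bits per symbol) in exchange for the ability to detect decoding errors early.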


2014 ◽  
Vol 626 ◽  
pp. 127-135 ◽  
Author(s):  
D. Jessintha ◽  
M. Kannan ◽  
P.L. Srinivasan

The Discrete Cosine Transform (DCT) is commonly used in image compression. In the history of DCT hardware, a milestone was the Distributed Arithmetic (DA) technique: because of the technology constraints of the time, a multiplier-less computation was built with the DA-based approach, which occupied less area but delivered lower throughput. Later, with technology scaling, multiplier-based architectures could be readily adapted for low-power, high-performance designs. Fixed-width multipliers [1]-[7] reduce hardware and time complexity. In this work, a radix-4 fixed-width multiplier is adopted in the DCT architecture because of its low power consumption, saving 30% power. To reduce the truncation errors caused by fixed-width multiplication, an estimation circuit is designed based on conditional probability theory.
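As a rough numeric illustration of the truncation error that such an estimation circuit compensates (the operand width and the constant-bias compensation below are assumptions for illustration, not the paper's radix-4, conditional-probability design), a fixed-width multiplier sums only the high-order partial-product columns and adds a correction for the discarded ones:

```python
# Fixed-width multiplication: an N-bit x N-bit product needs 2N bits, but a
# fixed-width multiplier keeps only the high-order partial-product columns and
# discards the rest, introducing a truncation error that a compensation
# (estimation) circuit tries to cancel.
import random

N = 8  # operand width in bits (illustrative)

# Crude constant compensation: the expected value of the discarded columns,
# assuming uniformly random operand bits (E[a_i * b_j] = 1/4). The paper's
# estimator is conditional-probability based and input-dependent instead.
BIAS = sum(2 ** (i + j) for i in range(N) for j in range(N) if i + j < N) // 4

def fixed_width_product(a, b):
    """Sum only partial-product bits whose column weight is >= 2**N, plus BIAS."""
    acc = 0
    for i in range(N):
        for j in range(N):
            if i + j >= N:                       # keep only high-order columns
                acc += (((a >> i) & 1) & ((b >> j) & 1)) << (i + j)
    return acc + BIAS

pairs = [(random.randrange(1 << N), random.randrange(1 << N)) for _ in range(5000)]
errs = [abs(a * b - fixed_width_product(a, b)) for a, b in pairs]
print("mean |truncation error| with constant compensation:", sum(errs) / len(errs))
```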

