length code
Recently Published Documents


TOTAL DOCUMENTS

68
(FIVE YEARS 5)

H-INDEX

9
(FIVE YEARS 1)

2021 ◽  
Vol 20 (2) ◽  
pp. 187
Author(s):  
I Gusti Ayu Garnita Darma Putri ◽  
Nyoman Putra Sastra ◽  
I Made Oka Widyantara ◽  
Dewa Made Wiharta

This paper designs a medical image compression scheme using the DWT with Coiflet and Symlet mother wavelets. Thresholding and quantization are the key steps that make the scheme lossy, and the output data are then encoded with Huffman or Arithmetic coding. Four codec combinations are evaluated, namely Coiflet-Huffman, Coiflet-Arithmetic, Symlet-Huffman and Symlet-Arithmetic, and the compression performance of each is analyzed in terms of PSNR and compression ratio. The tests use three grayscale medical images of 160x160 pixels. The results show that the codec yielding the most optimal PSNR and rate is Symlet-Arithmetic, with a recommended threshold value of less than 12. Threshold values above 12 cause the PSNR of the reconstructed image to fall below 30 dB, the minimum PSNR standard for digital images.
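The pipeline described above can be sketched roughly as follows. This is not the authors' code: the wavelet name, threshold and quantization step are illustrative values, and PyWavelets is assumed for the DWT; only the entropy-coding stage (Huffman or arithmetic) is left out.

import numpy as np
import pywt

def dwt_threshold_quantize(img, wavelet="sym4", level=2, T=12, q_step=4):
    """Decompose, zero small coefficients, and uniformly quantize the rest."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < T] = 0.0                   # thresholding (lossy step 1)
    q = np.round(arr / q_step).astype(np.int32)  # quantization (lossy step 2)
    return q, slices

def reconstruct(q, slices, wavelet="sym4", q_step=4):
    arr = q.astype(np.float64) * q_step
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

def psnr(orig, rec):
    mse = np.mean((orig.astype(np.float64) - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# The quantized integer array `q` would then be entropy-coded with a Huffman
# or arithmetic coder to obtain the final bitstream.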


Author(s):  
B. S. Yesmagambetov ◽  

In telemetry systems that use irreversible data compression, several message generation methods can be applied. The packet at the channel output may contain several code words that define its composition; these can be combined and arranged in a strictly defined sequence. Such a data packet is a code combination of constant or variable length: a constant-length packet is generated when the amount of information over the data output interval is predetermined and unchanged, and a variable-length packet is generated otherwise. The channel data packet can then be treated as a single whole: it can be supplied with address information about the source of the message, with information about the time interval in which the packet was formed (to bind significant samples to time), and with additional check symbols and codes that increase the interference immunity of the transmission, or a packet structure can be formed in the same way. In the literature, address, time and synchronization information is called overhead. The need to transmit overhead information reduces the efficiency of transceiver systems, so the problem of reducing the volume of this service information is extremely urgent.
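As a hedged illustration of such a packet structure (the field names, field sizes and sync pattern below are assumptions, not taken from the paper), a variable-length channel packet with overhead fields might look like this:

import struct
import zlib

SYNC_WORD = 0xEB90  # hypothetical frame-sync pattern (overhead)

def build_packet(source_addr: int, time_tag: int, payload: bytes) -> bytes:
    # Variable-length packet: the length field tells the receiver how many
    # payload bytes follow, so packets formed over different output intervals
    # can differ in size.
    header = struct.pack(">HBIH", SYNC_WORD, source_addr, time_tag, len(payload))
    crc = struct.pack(">I", zlib.crc32(header + payload))  # extra check symbols
    return header + payload + crc

# Everything except `payload` is overhead; shrinking it raises link efficiency.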


2021 ◽  
Author(s):  
Fei Wang ◽  
Jian Jiao ◽  
Ke Zhang ◽  
Shaohua Wu ◽  
Qinyu Zhang


2020 ◽  
Vol 10 (20) ◽  
pp. 7340
Author(s):  
Ching-Nung Yang ◽  
Yung-Chien Chou ◽  
Tao-Ku Chang ◽  
Cheonshik Kim

Recently, image compression using adaptive block truncation coding based on edge quantization (ABTC-EQ) was proposed by Mathews and Nair. Their approach divides an image into two types of blocks, edge blocks and non-edge blocks. Unlike previous block truncation coding (BTC)-like schemes, which apply bi-clustering to all blocks, ABTC-EQ adopts tri-clustering to handle edge blocks. The compression ratio of ABTC-EQ is reduced, but the visual quality of the reconstructed image is significantly improved. However, we observe that ABTC-EQ uses 2 bits to represent the index of the three clusters in a block, whereas a variable-length code needs only 5/3 bits on average to represent the index of each cluster. In addition, there are two observations about the quantization levels in a block. The first is that the difference between the two quantization values is often smaller than the quantization values themselves. The second is that more clusters may enhance the visual quality of the reconstructed image. Based on variable-length coding and the above observations, we design variants of ABTC-EQ that enhance both the visual quality of the reconstructed image and the compression ratio.
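The variable-length index coding idea can be sketched as follows; the codeword assignment {0, 10, 11} is one illustrative prefix code, not necessarily the exact scheme of the paper. With equally likely clusters it averages (1 + 2 + 2) / 3 = 5/3 bits per index instead of 2.

INDEX_CODE = {0: "0", 1: "10", 2: "11"}   # prefix-free code for three clusters
DECODE = {v: k for k, v in INDEX_CODE.items()}

def encode_indices(indices):
    return "".join(INDEX_CODE[i] for i in indices)

def decode_indices(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in DECODE:          # no codeword is a prefix of another,
            out.append(DECODE[cur])  # so decoding is unambiguous
            cur = ""
    return out

indices = [0, 2, 1, 0, 0, 2]
assert decode_indices(encode_indices(indices)) == indices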


Algorithms ◽  
2020 ◽  
Vol 13 (4) ◽  
pp. 99 ◽  
Author(s):  
Deloula Mansouri ◽  
Xiaohui Yuan ◽  
Abdeldjalil Saidani

With the emergent evolution in DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Unfortunately, managing, analyzing and especially storing these large amounts of data has become a major scientific challenge for bioinformatics. Compression has therefore become necessary to overcome these challenges. In this paper, we describe a new reference-free DNA compressor, abbreviated DNAC-SBE. DNAC-SBE is a lossless hybrid compressor that consists of three phases. First, starting from the most frequent base (Bi), the positions of each Bi are replaced with ones and the positions of the other bases with smaller frequencies than Bi are replaced with zeros. Second, to encode the generated streams, we propose a new single-block encoding scheme (SBE) based on exploiting the positions of neighboring bits within the block, using two different techniques. Finally, the proposed algorithm dynamically assigns the shorter-length code to each block. Results show that DNAC-SBE outperforms state-of-the-art compressors and proves its efficiency in terms of the special conditions imposed on compressed data, storage space and data transfer rate, regardless of the file format or the size of the data.
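A rough sketch of the first phase as described above is given below; details such as how ties in base frequency are broken are assumptions, and the later SBE and code-assignment phases are not reproduced.

from collections import Counter

def base_position_streams(seq: str):
    # Bases in decreasing order of frequency; for each base emit a binary
    # stream marking its positions with 1 and the positions of all less
    # frequent bases with 0 (already-processed bases are dropped).
    order = [b for b, _ in Counter(seq).most_common()]
    streams = {}
    remaining = seq
    for base in order[:-1]:                      # the last stream would be all ones
        streams[base] = "".join("1" if c == base else "0" for c in remaining)
        remaining = remaining.replace(base, "")  # keep only lower-frequency bases
    return streams

print(base_position_streams("ACGTACGGAA"))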


2018 ◽  
Author(s):  
Jamaluddin Jamaluddin

In computer science, data compression is a way of condensing data so that it takes up less space, which makes it more efficient to store and quicker to exchange. In this paper, the author compares the effectiveness of three algorithms for compressing data in the form of text. The three algorithms are Fixed Length Binary Encoding, Variable Length Binary Encoding and the Huffman algorithm.
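As a toy illustration of how the three approaches can differ in encoded size (this is not the paper's implementation, and the rank-based variable-length code shown is only one possible construction):

import heapq
import math
from collections import Counter

def fixed_length_bits(text):
    # Every symbol gets the same ceil(log2(alphabet size)) bits.
    return len(text) * math.ceil(math.log2(len(set(text))))

def rank_variable_length_bits(text):
    # Simple prefix-free assignment 0, 10, 110, 1110, ... by frequency rank
    # (shorter codewords for more frequent symbols; the paper's scheme may differ).
    freqs = [f for _, f in Counter(text).most_common()]
    return sum(f * (rank + 1) for rank, f in enumerate(freqs))

def huffman_bits(text):
    # Total Huffman-coded length via the classic merge trick: each merge of
    # the two smallest weights adds their sum to the total code length.
    freqs = list(Counter(text).values())
    if len(freqs) == 1:
        return len(text)              # degenerate single-symbol alphabet
    heapq.heapify(freqs)
    total = 0
    while len(freqs) > 1:
        a, b = heapq.heappop(freqs), heapq.heappop(freqs)
        total += a + b
        heapq.heappush(freqs, a + b)
    return total

text = "MISSISSIPPI"
print(fixed_length_bits(text), rank_variable_length_bits(text), huffman_bits(text))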


Author(s):  
Restu Maulunida ◽  
Achmad Solichin

At present, much of the data people need to access has been transformed into digital form, and its use has been growing very rapidly. This transformation is driven by the rapid growth of the Internet and the massive spread of mobile devices. People tend to store many files and to transfer files from one medium to another, and as a storage medium approaches its limit, fewer files can be stored. A compression technique is required to reduce the size of a file. Dictionary coding is one of the lossless compression techniques, and LZW is an algorithm that applies it. In the LZW algorithm, the dictionary is formed using a future-based dictionary and the encoding process uses a fixed-length code, which allows the encoding process to produce a sequence that is still quite long. This study modifies the dictionary formation process and uses a variable-length code to optimize the compression ratio. Based on tests with the data used in this study, the average compression ratio of the LZW algorithm is 42.85%, while that of our proposed algorithm is 38.35%. This shows that the proposed modification of the dictionary formation has not been able to improve the compression ratio of the LZW algorithm.
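For reference, a generic LZW encoder that widens its output codes as the dictionary grows is sketched below. This is one common way to apply a variable-length code to LZW output, not the dictionary modification proposed in the paper.

def lzw_encode_bits(data: bytes) -> str:
    # Standard LZW dictionary initialized with all single bytes.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code, width = 256, 9          # 9 bits cover codes 0..511
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(format(dictionary[w], f"0{width}b"))  # emit code for w
            dictionary[wc] = next_code
            next_code += 1
            if next_code >= (1 << width):   # widen codes when the dictionary outgrows them
                width += 1
            w = bytes([byte])
    if w:
        out.append(format(dictionary[w], f"0{width}b"))
    return "".join(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
bits = lzw_encode_bits(sample)
print(len(bits), "bits vs", 8 * len(sample), "bits uncompressed")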


2017 ◽  
Vol 15 (3) ◽  
pp. 657-672 ◽  
Author(s):  
Bingjie Li ◽  
Cunguang Zhang ◽  
Bo Li ◽  
Hongxu Jiang ◽  
Qizhi Xu

2017 ◽  
Vol 2 (3) ◽  
pp. 252 ◽  
Author(s):  
Maurizio Martina ◽  
Andrea Molino ◽  
Fabrizio Vacca ◽  
Guido Masera ◽  
Guido Montorsi

The complete design of a new high-throughput adaptive turbo decoder is described. The developed system is programmable in terms of block length, code rate and modulation scheme, which can be dynamically changed from frame to frame according to varying channel conditions or user requirements. A parallel architecture with 16 concurrent SISOs has been adopted to achieve a decoding throughput as high as 35 Mbit/s with 10 iterations, while the error-correcting performance is within 1 dB of the capacity limit. The whole system, including the iterative decoder itself, the de-mapping and de-puncturing units, as well as the input double buffer, has been mapped onto a single FPGA device running at 80 MHz, with a percentage occupation of 54%.
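Purely as an illustration of the per-frame programmability described above (the field names and example values below are assumed, not taken from the paper):

from dataclasses import dataclass

@dataclass
class FrameConfig:
    block_length: int          # information bits in this frame
    code_rate: str             # e.g. "1/3" or "3/4" after puncturing
    modulation: str            # e.g. "QPSK" or "16QAM"
    iterations: int = 10       # decoding iterations per frame

# Each incoming frame can carry a different configuration.
configs = [
    FrameConfig(block_length=4096, code_rate="1/3", modulation="QPSK"),
    FrameConfig(block_length=8192, code_rate="3/4", modulation="16QAM"),
]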

