Analysis of DICOM Image Compression Alternative Using Huffman Coding

2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Romi Fadillah Rahmat ◽  
T. S. M. Andreas ◽  
Fahmi Fahmi ◽  
Muhammad Fermi Pasha ◽  
Mohammed Yahya Alzahrani ◽  
...  

Compression, in general, aims to reduce file size, with or without degrading the quality of the original data. Digital Imaging and Communications in Medicine (DICOM) is a medical imaging file standard that stores multiple kinds of information, such as patient data, imaging procedures, and the image itself. With the rising use of medical imaging in clinical diagnosis, there is a need for a fast and secure way to share large numbers of medical images between healthcare practitioners, and compression has always been an option. This work analyses Huffman coding, one of the lossless compression techniques, as an alternative method for compressing a DICOM file in open PACS settings. The idea of Huffman coding is to assign shorter codewords to the symbols with higher byte frequencies. Experiments using different types of DICOM images are conducted, and an analysis of performance in terms of compression ratio and compression/decompression time, as well as security, is provided. The experimental results show that Huffman coding can compress a DICOM file at up to a 1:3.7010 ratio, corresponding to up to 72.98% space savings.
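As a hedged illustration of the codeword-assignment idea described above (shorter codes for more frequent byte values), the following minimal Python sketch builds a Huffman code table from byte frequencies. It is a generic textbook implementation, not the authors' DICOM-specific pipeline, and the sample input is assumed.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table: more frequent bytes get shorter codewords."""
    freq = Counter(data)
    # Each heap entry: (frequency, tie-breaker, {symbol: codeword-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical usage on a small byte string standing in for DICOM pixel data.
sample = b"aaaaabbbcc"
table = huffman_code(sample)
encoded = "".join(table[b] for b in sample)
print(table, len(encoded), "bits vs", 8 * len(sample), "bits uncompressed")
```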

Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 196
Author(s):  
Shmuel T. Klein ◽  
Dana Shapira

It seems reasonable to expect from a good compression method that its output should not be further compressible, because it should behave essentially like random data. We investigate this premise for a variety of known lossless compression techniques, and find that, surprisingly, there is much variability in the randomness, depending on the chosen method. Arithmetic coding seems to produce perfectly random output, whereas that of Huffman or Ziv-Lempel coding still contains many dependencies. In particular, the output of Huffman coding has already been proven to be random under certain conditions, and we present evidence here that arithmetic coding may produce an output that is identical to that of Huffman.
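As a rough illustration of the premise being tested (a good compressor's output should look essentially random and resist further compression), the following Python sketch compresses data once and then tries to compress the result again. The codecs and the sample input are assumptions for demonstration, not the experimental setup of the paper.

```python
import zlib, bz2, lzma, os

# Highly redundant sample input; real experiments would use text or image corpora.
data = b"the quick brown fox jumps over the lazy dog " * 2000

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    once = compress(data)
    twice = compress(once)  # try to compress the already-compressed output
    verdict = "shrinks" if len(twice) < len(once) else "does not shrink"
    print(f"{name}: {len(data)} -> {len(once)} -> {len(twice)} bytes (second pass {verdict})")

# For comparison, truly random bytes should be essentially incompressible.
rand = os.urandom(len(data))
print("random:", len(rand), "->", len(zlib.compress(rand)), "bytes")
```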


Author(s):  
Hendra Mesra ◽  
Handayani Tjandrasa ◽  
Chastine Fatichah

In general, compression methods are developed to reduce the redundancy of data. This study takes a different approach: it embeds some bits of one datum of the image data into another datum using a Reversible Low Contrast Mapping (RLCM) transformation. Besides using RLCM for embedding, the method also applies the properties of RLCM to compress each datum before it is embedded. The algorithm employs a queue and recursive indexing, and it encodes the data in a cyclic manner. In contrast to plain RLCM, the proposed method is a coding method, like Huffman coding. Publicly available image data are used to evaluate the proposed method. For all test images, the proposed method achieves a higher compression ratio than Huffman coding.


2018 ◽  
Vol 7 (4.36) ◽  
pp. 419
Author(s):  
V. Beslin Geo ◽  
K. Sakthidasan @ Sankaran ◽  
P. Archana ◽  
M. Umarani

Extraction of hidden text in web images, computer screen images, news, games, and e-learning content is an important task in image processing. Compression of digital images can degrade the visual quality of both the background and the text. In this work, digital images are segmented into text and background blocks using the DWT. Huffman coding performs lossless compression of the text pixels, while the SPIHT algorithm is employed to compress the background pixels. DWT segmentation alone leaves fringes around the segmented text, so the proposed method uses a connected-region and edge-detection approach to extract text from digital video stills. The segmented text is converted to a binary image using luminance thresholding, which yields fine-quality extracted text.
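The luminance-thresholding step mentioned above can be sketched in a few lines of Python. This is a generic illustration using OpenCV's Otsu threshold; the library choice, file names, and threshold method are assumptions, not the authors' exact implementation.

```python
import cv2

# Load a video still (hypothetical file name) and convert it to grayscale luminance.
frame = cv2.imread("video_still.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Binarize the segmented text region with a luminance threshold.
# Otsu's method picks the threshold automatically from the histogram.
_, binary_text = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("extracted_text_binary.png", binary_text)
```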


2020 ◽  
Vol 4 (1) ◽  
pp. 155-162
Author(s):  
I Dewa Gede Hardi Rastama ◽  
I Made Oka Widyantara ◽  
Linawati

Medical imaging is a visual representation of human organs. Medical images are traditionally stored on film and therefore require substantial storage space. Compression is a process of removing redundancy from information without reducing its quality. This study proposes compressing medical images with the Discrete Wavelet Transform (DWT) combined with an adaptive threshold, followed by entropy coding with Run Length Encoding (RLE). Several parameters are compared, such as compression ratio, compressed file size, and PSNR (Peak Signal to Noise Ratio), to analyze the quality of the reconstructed image. The study shows that the rate, compression ratio, and PSNR of the Haar and Daubechies wavelets do not differ significantly. Comparing hard and soft thresholding, the rate of the soft threshold is lower than that of the hard threshold, and the optimal configuration found in this study uses a soft threshold.
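To make the DWT-plus-threshold idea concrete, here is a minimal Python sketch using PyWavelets. The wavelet choice, fixed threshold value, and input image are assumptions for illustration and do not reproduce the paper's adaptive threshold or its RLE stage.

```python
import numpy as np
import pywt

# Hypothetical 8-bit grayscale medical image.
image = np.random.randint(0, 256, (256, 256)).astype(np.float64)

# 2-level 2D DWT with a Haar wavelet (a Daubechies wavelet, e.g. 'db2', works the same way).
coeffs = pywt.wavedec2(image, wavelet="haar", level=2)

# Soft-threshold the detail coefficients; small coefficients become zero,
# which is what later makes run-length/entropy coding effective.
threshold = 10.0  # assumed fixed value; the paper uses an adaptive threshold
coeffs_thr = [coeffs[0]] + [
    tuple(pywt.threshold(band, threshold, mode="soft") for band in level)
    for level in coeffs[1:]
]

reconstructed = pywt.waverec2(coeffs_thr, wavelet="haar")
psnr = 10 * np.log10(255.0 ** 2 / np.mean((image - reconstructed) ** 2))
print(f"PSNR after thresholding: {psnr:.2f} dB")
```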


Author(s):  
Emy Setyaningsih ◽  
Agus Harjoko

Compression reduces the size of data while maintaining the quality of the information contained therein. This paper surveys research on improvements to various hybrid compression techniques during the last decade. A hybrid compression technique combines the strengths of each group of methods, as is done in the JPEG compression method: it couples lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression guarantees high-quality reconstruction, since the data can later be decompressed to exactly the original. The discussion of current knowledge and open issues in hybrid compression indicates opportunities for further research to improve the performance of image compression methods.
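As a hedged illustration of the hybrid (lossy followed by lossless) idea described above, this Python sketch quantizes the 2D DCT coefficients of an image block (lossy stage) and then entropy-codes them with zlib (lossless stage). The block size, quantization step, and codec are assumptions for demonstration and do not reproduce the JPEG standard itself.

```python
import zlib
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 image block with 8-bit pixel values.
block = np.random.randint(0, 256, (8, 8)).astype(np.float64)

# Lossy stage: 2D DCT followed by uniform quantization (information is discarded here).
q_step = 16.0
coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / q_step).astype(np.int16)

# Lossless stage: entropy-code the quantized coefficients (zlib stands in for Huffman coding).
compressed = zlib.compress(quantized.tobytes())
print("compressed size:", len(compressed), "bytes for a 64-pixel block")

# Reconstruction: dequantize and inverse DCT; quality loss comes only from quantization.
reconstructed = idctn(quantized.astype(np.float64) * q_step, norm="ortho")
```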


Author(s):  
Andreas Soegandi

The purpose of this study was to perform lossless compression on uncompressed audio files to minimize file size without reducing quality. The application is developed using an entropy-encoding compression method based on the Rice coding technique. The resulting compression ratio is reasonably good, and the approach is easy to implement because the algorithm is quite simple.
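For reference, here is a minimal Python sketch of Rice coding for non-negative integers (for example, audio prediction residuals after sign mapping). The parameter k and the sample values are assumptions, and the decoder and sign handling are omitted for brevity.

```python
def rice_encode(value: int, k: int) -> str:
    """Rice-encode a non-negative integer with parameter k.

    The quotient value >> k is written in unary (q ones followed by a zero),
    and the remainder is written in k binary bits.
    """
    q = value >> k
    r = value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k > 0 else "")

# Hypothetical residual samples; small values dominate, which Rice coding exploits.
samples = [0, 3, 5, 12, 2, 7]
k = 2
bits = "".join(rice_encode(s, k) for s in samples)
print(bits, f"({len(bits)} bits vs {8 * len(samples)} bits uncompressed)")
```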


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1817
Author(s):  
Jiawen Xue ◽  
Li Yin ◽  
Zehua Lan ◽  
Mingzhu Long ◽  
Guolin Li ◽  
...  

This paper proposes a novel 3D discrete cosine transform (DCT) based image compression method for medical endoscopic applications. Because of the high correlation among the color components of wireless capsule endoscopy (WCE) images, the original 2D Bayer data pattern is reorganized into a new 3D data pattern, and a 3D DCT is adopted to compress the 3D data for a high compression ratio and high quality. To keep the computational complexity of the 3D DCT low, an optimized 4-point DCT butterfly structure without multiplication operations is proposed. Owing to the unique characteristics of the 3D data pattern, the quantization and zigzag scan are adapted accordingly. To further improve the visual quality of decompressed images, a frequency-domain filter is proposed to eliminate blocking artifacts adaptively. Experiments show that the method attains an average compression ratio (CR) of 22.94:1 with a peak signal to noise ratio (PSNR) of 40.73 dB, outperforming state-of-the-art methods.
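The multiplication-free butterfly idea can be illustrated with the well-known 4-point integer DCT approximation used in video coding, whose coefficients are only 1 and 2 and are therefore realizable with shifts and additions. This is a generic sketch, not the specific optimized structure proposed in the paper.

```python
def int_dct4(x):
    """Multiplier-free 4-point integer DCT approximation (H.264-style).

    All products involve only +/-1 and +/-2, so in hardware they reduce to
    additions and one-bit shifts.
    """
    x0, x1, x2, x3 = x
    s0, s1 = x0 + x3, x1 + x2   # butterfly: sums
    d0, d1 = x0 - x3, x1 - x2   # butterfly: differences
    y0 = s0 + s1                # DC term
    y2 = s0 - s1
    y1 = (d0 << 1) + d1         # 2*d0 + d1: one shift plus one add
    y3 = d0 - (d1 << 1)         # d0 - 2*d1
    return [y0, y1, y2, y3]

print(int_dct4([10, 20, 30, 40]))  # hypothetical 4-sample input
```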


Author(s):  
N. Karthika Devi ◽  
G. Mahendran ◽  
S. Murugeswari ◽  
S. Praveen Samuel Washburn ◽  
D. Archana Devi ◽  
...  
