Design and Development of a Hardware Efficient Image Compression Improvement Framework

2020 ◽  
Vol 12 (3) ◽  
pp. 217-225
Author(s):  
Hasanujjaman ◽  
Arnab Banerjee ◽  
Utpal Biswas ◽  
Mrinal K. Naskar

Background: In the field of image processing, many methods have adopted ideas from data science optimization, with researchers worldwide working on reducing the compression ratio and increasing the PSNR. These efforts span both hardware and processing aspects, which helps produce more promising research outcomes. In this paper, a novel concept for image segmentation is developed that splits the image into two halves, each termed an atomic image. The separation is performed on the basis of the even and odd pixel positions of the original image in the spatial domain. Splitting the original image into atomic images yields efficient results in the experimental data. The compression and decompression times of the original image with both Quadtree and Huffman coding are also measured, and the improvements are reported in the results section. The superiority of the proposed scheme is further demonstrated by comparing its performance with the conventional Quadtree decomposition process. Objective: The objective of this work is to determine the minimum resources required to reconstruct the image after compression. Method: The popular method of Quadtree decomposition with Huffman encoding is used for image compression. Results: The proposed algorithm was applied to six types of images and achieved a maximum PSNR of 30.12 dB for the Lena image and a maximum compression ratio of 25.96 for the MRI image. Conclusion: Different types of images were tested and a high compression ratio with acceptable PSNR was obtained.
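
The following Python sketch illustrates two ideas from this abstract: splitting an image into two "atomic" sub-images by even/odd pixel positions, and computing the PSNR and compression-ratio metrics used in the evaluation. The column-wise splitting rule and the sizes shown are illustrative assumptions, not the authors' exact Quadtree/Huffman pipeline.

```python
# Illustrative sketch (not the authors' code): even/odd pixel split and the
# PSNR / compression-ratio metrics. The splitting axis is an assumption.
import numpy as np

def split_even_odd(img: np.ndarray):
    """Split a grayscale image into even- and odd-column atomic images."""
    even = img[:, 0::2]   # pixels at even column indices
    odd = img[:, 1::2]    # pixels at odd column indices
    return even, odd

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between original and reconstructed images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """Compression ratio = uncompressed size / compressed size."""
    return original_bits / compressed_bits

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    even, odd = split_even_odd(img)
    print(even.shape, odd.shape)                      # (256, 128) (256, 128)
    print(psnr(img, img))                             # inf for identical images
    print(compression_ratio(256 * 256 * 8, 20_000))   # hypothetical compressed size
```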

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1817
Author(s):  
Jiawen Xue ◽  
Li Yin ◽  
Zehua Lan ◽  
Mingzhu Long ◽  
Guolin Li ◽  
...  

This paper proposes a novel 3D discrete cosine transform (DCT) based image compression method for medical endoscopic applications. Owing to the high correlation among the color components of wireless capsule endoscopy (WCE) images, the original 2D Bayer data pattern is reorganized into a new 3D data pattern, and a 3D DCT is applied to compress the 3D data for a high compression ratio and high quality. To keep the computational complexity of the 3D DCT low, an optimized 4-point DCT butterfly structure without multiplication operations is proposed. Because of the unique characteristics of the 3D data pattern, the quantization and zigzag scan are adapted accordingly. To further improve the visual quality of decompressed images, a frequency-domain filter is proposed to eliminate blocking artifacts adaptively. Experiments show that our method attains an average compression ratio (CR) of 22.94:1 with a peak signal-to-noise ratio (PSNR) of 40.73 dB, which outperforms state-of-the-art methods.
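
To give a concrete sense of a multiplication-free 4-point DCT butterfly, the sketch below shows the well-known H.264-style integer approximation, which uses only additions and shifts. This is a stand-in for the general idea; the optimized butterfly actually proposed in the paper may differ.

```python
# Illustrative sketch: a multiplication-free 4-point DCT-like butterfly built from
# additions and shifts only (H.264-style integer approximation of the 4-point DCT).
# Not necessarily the structure proposed in the paper above.
def dct4_butterfly(x0: int, x1: int, x2: int, x3: int):
    s0, s1 = x0 + x3, x1 + x2       # sum terms
    d0, d1 = x0 - x3, x1 - x2       # difference terms
    y0 = s0 + s1                    # DC coefficient
    y2 = s0 - s1
    y1 = (d0 << 1) + d1             # 2*d0 + d1 via shift-add
    y3 = d0 - (d1 << 1)             # d0 - 2*d1 via shift-add
    return y0, y1, y2, y3

if __name__ == "__main__":
    print(dct4_butterfly(10, 20, 30, 40))   # (100, -70, 0, -10)
```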


In the domain of image signal processing, image compression is a significant technique, developed mainly to reduce the redundancy of image data so that image pixels can be transmitted at high quality and resolution. The standard lossless and lossy compression techniques generate high-compression-ratio images with efficient storage and transmission requirements, respectively. Many image compression techniques are available, for example JPEG-, DWT-, and DCT-based compression algorithms, which provide effective results in terms of high compression ratio and clear image quality. However, they carry considerable computational complexity in terms of processing, encoding, energy consumption, and hardware design. Addressing these challenges, this paper surveys the most prominent research papers and discusses FPGA architecture design and the future scope of the state of the art in image compression. The primary aim is to investigate the research challenges in VLSI design and image compression. The core of the study is organized in three parts: standard architecture designs, related work, and open research challenges in the domain of image compression.


2011 ◽  
Vol 65 ◽  
pp. 415-418
Author(s):  
Guang Ming Li ◽  
Zhen Qi He

At present, embedded image compression solutions are limited, and many compression methods have not been ported to embedded equipment. In this paper, a BP neural network based image compression method is proposed. The neural network is trained iteratively to obtain a useful set of weights and thresholds. The method is then implemented on an FPGA using the soft-core Nios II framework, and finally the compression program is written in Verilog and downloaded to the FPGA. Experiments show that the system achieves a high compression ratio and small size, and operates stably.
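
The sketch below shows the classic form of BP neural-network image compression: an autoencoder whose narrow hidden layer serves as the compressed code for an 8x8 image block. The layer sizes, learning rate, and training loop are illustrative assumptions; the paper's FPGA/Nios II implementation details are not reproduced here.

```python
# Illustrative sketch: BP (backpropagation) autoencoder for block-based image
# compression. 64 inputs -> 16 hidden units (the compressed code) -> 64 outputs.
# All hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 16                  # 64 -> 16 gives a nominal 4:1 compression
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)             # hidden code = compressed representation
    y = sigmoid(h @ W2 + b2)             # reconstructed block
    return h, y

def train_step(x, lr=0.5):
    """One stochastic gradient step on the reconstruction error of one block."""
    global W1, b1, W2, b2
    h, y = forward(x)
    err = y - x                          # reconstruction error
    dy = err * y * (1 - y)               # sigmoid derivative at the output layer
    dh = (dy @ W2.T) * h * (1 - h)       # backpropagate to the hidden layer
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh
    return float(np.mean(err ** 2))

if __name__ == "__main__":
    blocks = rng.random((1000, n_in))    # stand-in for normalized 8x8 image blocks
    for epoch in range(5):
        mse = np.mean([train_step(blk) for blk in blocks])
        print(f"epoch {epoch}: mse {mse:.4f}")
```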


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Kamil Dimililer

Medical images require compression before transmission or storage, due to constrained bandwidth and storage capacity. An ideal image compression system must yield a high-quality compressed image with a high compression ratio. In this paper, the Haar wavelet transform and the discrete cosine transform are considered, and a neural network is trained to relate X-ray image contents to their ideal compression method and optimum compression ratio.
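
For reference, the sketch below computes one level of the 2D Haar wavelet decomposition, one of the two candidate transforms mentioned in the abstract. Image dimensions are assumed even; the neural-network selector described in the paper is not reproduced.

```python
# Illustrative sketch: one level of the 2D Haar wavelet transform (averages and
# differences of adjacent pixel pairs, applied along rows and then columns).
import numpy as np

def haar2d_level(img: np.ndarray):
    """Return the LL, LH, HL, HH sub-bands of one Haar decomposition level."""
    x = img.astype(np.float64)
    # 1D Haar along rows: averages and differences of adjacent column pairs
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # 1D Haar along columns of each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

if __name__ == "__main__":
    img = np.random.randint(0, 256, (8, 8))
    for band in haar2d_level(img):
        print(band.shape)   # each sub-band is 4x4
```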


2015 ◽  
Vol 16 (1) ◽  
pp. 83
Author(s):  
Ansam Ennaciri ◽  
Mohammed Erritali ◽  
Mustapha Mabrouki ◽  
Jamaa Bengourram

The objective of this paper is to study the main characteristics of wavelets that affect image compression using the discrete wavelet transform, so as to achieve image data compression while preserving the essential quality of the original image. This implies a good compromise between the image compression ratio and the PSNR (Peak Signal-to-Noise Ratio).
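
The trade-off described here can be explored numerically by keeping only the largest-magnitude DWT coefficients, reconstructing, and measuring the PSNR. The sketch below uses the PyWavelets package (assumed available); the wavelet name, decomposition level, and retention fractions are illustrative choices, not those studied in the paper.

```python
# Illustrative sketch: DWT coefficient thresholding and the resulting PSNR,
# to show the compression-ratio vs. quality compromise the abstract discusses.
import numpy as np
import pywt

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def dwt_compress(img, wavelet="db2", level=3, keep=0.05):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)      # keep the largest `keep` fraction
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)    # zero out the small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 128)
    img = 127.5 * (1.0 + np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)))
    for keep in (0.20, 0.10, 0.05):
        rec = dwt_compress(img, keep=keep)[:128, :128]
        print(f"keep {keep:.0%}: PSNR {psnr(img, rec):.2f} dB")
```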


2017 ◽  
Vol 2 (3) ◽  
pp. 98-102
Author(s):  
Mohammed Salih Mahdi ◽  
Nidaa Falih Hassan

Recently, the Internet has grown along various trends; in particular, the use of images has increased through daily use in several domains such as social media (Facebook, Twitter, WhatsApp, etc.), connected devices (sensors, IP cameras, the Internet of Things (IoT), the Internet of Everything (IoE), etc.), and smartphones, with the number of images exchanged by users estimated in the billions. Image-related issues on the Internet can therefore be summarized by two criteria: the first concerns the size of the transmitted image, and the second concerns the low bandwidth available during transmission. This paper presents a methodology for image compression based on the idea of a multiplication table. The suggested algorithm helps achieve better performance by providing a high compression ratio, preserving image quality with a high PSNR, incurring only small losses in the original image, and running efficiently.


Author(s):  
X. Cheng ◽  
Z. Li

Abstract. Images with large volumes are generated daily with the advent of advanced sensors and platforms (e.g., satellite, unmanned autonomous vehicle) for data acquisition. This raises issues in the storage, processing, and transmission of images. To address such issues, image compression is essential and can be achieved by lossy and/or lossless approaches. With lossy compression, a high compression ratio can usually be achieved, but the original data can never be completely recovered. On the other hand, with lossless compression, the original information is fully preserved. Lossless compression is highly desirable in many applications such as remote sensing and geological surveying. Shannon's source coding theorem defines the theoretical limits of the compression ratio. However, some researchers have found that certain compression techniques achieve a compression ratio higher than these theoretical limits. Two questions then naturally arise: "When does this happen?" and "Why does this happen?". This study is dedicated to answering these two questions. Six algorithms are used to compress 1650 images of different complexities. The experimental results show that the generally acknowledged Shannon source coding theorem remains adequate for predicting the compression ratio of algorithms that consider statistical information only, but it cannot predict the compression ratio of algorithms that also exploit the configurational information of pixels. Overall, this study indicates that new empirical (or theoretical) models for predicting the lossless compression ratio can be built with metrics capturing configurational information.
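
The theoretical limit referred to here is the first-order entropy bound. The sketch below computes the Shannon entropy of an image's pixel-value histogram and the compression-ratio limit it implies for a memoryless (statistics-only) source model; algorithms that exploit the spatial configuration of pixels can exceed this first-order bound, which is the effect the study investigates. The example images are synthetic placeholders.

```python
# Illustrative sketch: first-order Shannon entropy of an image and the lossless
# compression-ratio limit it implies under a memoryless source model.
import numpy as np

def shannon_entropy_bits_per_pixel(img: np.ndarray) -> float:
    """First-order entropy H = -sum p_i log2 p_i over the pixel-value histogram."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def shannon_cr_limit(img: np.ndarray, bits_per_pixel: int = 8) -> float:
    """Upper bound on the lossless CR for a memoryless (statistics-only) model."""
    h = shannon_entropy_bits_per_pixel(img)
    return float("inf") if h == 0 else bits_per_pixel / h

if __name__ == "__main__":
    flat = np.full((64, 64), 128, dtype=np.uint8)                 # zero entropy
    noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # near 8 bits/pixel
    print(shannon_cr_limit(flat), shannon_cr_limit(noisy))
```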

