Frei-Chen bases based lossy digital image compression technique

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahmood Al-khassaweneh ◽  
Omar AlShorman

In the big data era, image compression is of significant importance: compression of large images is required for everyday tasks, including electronic data communications and internet transactions. Two measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average-subspace projection is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value representing the average of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while improving the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, yields high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with existing methods.
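As a rough illustration of the two-stage pipeline described above, the following Python/NumPy sketch projects each 3 × 3 block onto the Frei-Chen average basis, replaces blocks dominated by that subspace with their mean, and then run-length encodes the result. This is a minimal sketch, not the authors' implementation; the 0.95 energy threshold and the plain (unmodified) RLE are illustrative assumptions.

```python
# Minimal sketch of the two-stage idea: Frei-Chen average-subspace test per
# 3x3 block, then run-length encoding. Threshold and RLE details are assumed.
import numpy as np

FREI_CHEN_AVG = np.ones((3, 3)) / 3.0  # normalized average-subspace basis (unit norm)

def frei_chen_stage(img, energy_threshold=0.95):
    """Replace 3x3 blocks dominated by the average subspace with their mean value.
    Image borders that do not fill a full 3x3 block are left untouched."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(0, h - h % 3, 3):
        for c in range(0, w - w % 3, 3):
            block = img[r:r+3, c:c+3].astype(float)
            total_energy = np.sum(block ** 2) + 1e-12
            avg_coeff = np.sum(block * FREI_CHEN_AVG)   # projection coefficient
            avg_energy = avg_coeff ** 2                 # energy in the average subspace
            if avg_energy / total_energy >= energy_threshold:
                out[r:r+3, c:c+3] = block.mean()        # lossy replacement
    return out

def run_length_encode(flat):
    """Plain RLE over a 1-D sequence: list of (value, run_length) pairs."""
    runs, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Usage: runs = run_length_encode(frei_chen_stage(image).round().astype(int).ravel().tolist())
```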

Author(s):  
Magy El Banhawy ◽  
Walaa Saber ◽  
Fathy Amer



Author(s):  
Emy Setyaningsih ◽  
Agus Harjoko

Compression reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers from the last decade on improving various hybrid compression techniques. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method: it combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression yields high-quality data reconstruction, as the data can later be decompressed to exactly the same form as before compression. The discussion of the knowledge of, and open issues in, ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.
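To make the lossy/lossless split concrete, the sketch below outlines a minimal JPEG-style hybrid pipeline under assumed parameters (an 8 × 8 block, a uniform quantization step of 16, and plain run-length coding); it illustrates the idea surveyed here and is not any specific method from the reviewed papers.

```python
# Hybrid pipeline sketch: a lossy transform/quantization stage followed by a
# lossless run-length stage. The uniform quantization step size is an assumption.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def lossy_stage(block8x8, q=16):
    """Quantized DCT coefficients: this is where information is discarded."""
    return np.round(dct2(block8x8.astype(float)) / q).astype(int)

def lossless_stage(coeffs):
    """Run-length encode the (mostly zero) quantized coefficients, losslessly."""
    flat, runs, prev, n = coeffs.ravel().tolist(), [], None, 0
    for v in flat:
        if v == prev:
            n += 1
        else:
            if prev is not None:
                runs.append((prev, n))
            prev, n = v, 1
    runs.append((prev, n))
    return runs

def decode_block(runs, q=16):
    """Invert the lossless stage exactly, then approximately invert the lossy stage."""
    flat = [v for v, n in runs for _ in range(n)]
    return idct2(np.array(flat, dtype=float).reshape(8, 8) * q)
```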


Author(s):  
Chanintorn Jittawiriyanukoon ◽  
Vilasinee Srisarkun

A fundamental factor in digital image compression is the conversion process. The intent of this process is to understand the shape of an image and to convert the digital image to a grayscale configuration in which the encoding of the compression technique can operate. This article investigates compression algorithms for images with artistic effects. A key question in image compression is how to effectively preserve the original quality of images. Image compression condenses images by reducing their redundant data so that they can be transmitted and stored cost-effectively. The common techniques include the discrete cosine transform (DCT), the fast Fourier transform (FFT), and the shifted FFT (SFFT). Experimental results report and compare the compression ratios obtained for the original RGB images and their grayscale counterparts. The SFFT technique is the superior algorithm for improving shape comprehension of images with graphic effects.
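As a rough illustration of the pipeline described in the abstract, the sketch below converts an RGB image to grayscale and discards the weakest transform coefficients. The luminance weights and the 5% retention rate are common conventions assumed here, not values taken from the article; the DCT or shifted-FFT variants simply swap in a different transform.

```python
# Grayscale conversion followed by transform-domain coefficient pruning.
import numpy as np

def rgb_to_gray(rgb):
    """Standard luminance conversion (assumed weights, not from the article)."""
    return rgb[..., :3] @ np.array([0.299, 0.587, 0.114])

def fft_compress(gray, keep=0.05):
    """Keep only the strongest `keep` fraction of FFT coefficients, zero the rest."""
    coeffs = np.fft.fft2(gray)
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < threshold] = 0          # the lossy step
    return np.real(np.fft.ifft2(coeffs))

# The DCT variant swaps np.fft.fft2 for a 2-D DCT, and the shifted FFT (SFFT)
# variant applies np.fft.fftshift before thresholding and np.fft.ifftshift after.
```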


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Farhan Hussain ◽  
Jechang Jeong

A compression technique for still digital images is proposed using deep neural networks (DNNs) employing rectified linear units (ReLUs). We exploit the DNN's capability to find a reasonable estimate of the underlying compression/decompression relationships. We aim for a DNN for image compression that generalizes better, trains faster, and supports real-time operation. The use of ReLUs, which map more plausibly to biological neurons, makes the training of our DNN significantly faster, shortens the encoding/decoding time, and improves its generalization ability. ReLUs establish efficient gradient propagation, induce sparsity in the proposed network, and are computationally efficient, making these networks suitable for real-time compression systems. Experiments performed on standard real-world images show that using ReLUs instead of logistic sigmoid units speeds up the training of the DNN, with markedly faster convergence. The evaluation of the objective and subjective quality of the reconstructed images also shows that our DNN generalizes well, as most of the images were never seen by the network during training.
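A minimal sketch of a ReLU autoencoder of the kind described, written in PyTorch; the patch size, layer widths, bottleneck dimension and training loop are assumptions for illustration and not the authors' network.

```python
# ReLU autoencoder over flattened image patches: the bottleneck is the "code"
# that stands in for the compressed representation.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=64, bottleneck=16):
        super().__init__()
        # Encoder compresses a 64-value patch into 16 activations.
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 32), nn.ReLU(),
            nn.Linear(32, bottleneck), nn.ReLU(),
        )
        # Decoder reconstructs the patch from the code.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 32), nn.ReLU(),
            nn.Linear(32, patch_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, patches, optimizer, loss_fn=nn.MSELoss()):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), patches)   # reconstruction loss on the input patches
    loss.backward()
    optimizer.step()
    return loss.item()

# model = PatchAutoencoder(); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for batch in patch_loader: train_step(model, batch, opt)
```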


2020 ◽  
Vol 44 (3) ◽  
pp. 603-623 ◽  
Author(s):  
Lei Li ◽  
Chengzhi Zhang ◽  
Daqing He ◽  
Jia Tina Du

Purpose: Through a two-stage survey, this paper examines how researchers judge the quality of answers on ResearchGate Q&A, an academic social networking site.
Design/methodology/approach: In the first-stage survey, 15 researchers from Library and Information Science (LIS) judged the quality of 157 answers to 15 questions and reported the criteria that they had used. The content of their reports was analyzed, and the results were merged with relevant criteria from the literature to form the second-stage survey questionnaire. This questionnaire was then completed by researchers recognized as accomplished at identifying high-quality LIS answers on ResearchGate Q&A.
Findings: Most of the identified quality criteria for academic answers, such as relevance, completeness, and verifiability, have previously been found applicable to generic answers. The authors also found other criteria, such as comprehensiveness, the answerer's scholarship, and value-added. Providing opinions was found to be the most important criterion, followed by completeness and value-added.
Originality/value: The findings show the importance of studying the quality of answers on academic social Q&A platforms and reveal unique considerations for the design of such systems.


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
Yu. I. Katkov ◽  
◽  
O. S. Zvenigorodsky ◽  
O. V. Zinchenko ◽  
V. V. Onyshchenko ◽  
...  

The article addresses the topical issue of finding new effective compression methods and improving existing widespread ones in order to reduce computational complexity and improve the quality of the reconstructed image, which is important for the introduction of cloud technologies. The problem stated in the article is as follows: to increase the efficiency of cloud storage, it is necessary to determine methods for reducing the information redundancy of digital images by fractal compression of video content and to make recommendations on applying these methods to various practical problems. The necessity of storing high-quality video information in the new HDTV formats 2K, 4K and 8K in cloud storage to meet existing user needs is substantiated. It is shown that the processing and transmission of high-quality video information poses the problem of reducing the redundancy of the video data (image compression) while preserving the image quality the user expects on reconstruction. It is shown that in cloud storage this problem historically stems from the contradiction between consumer requirements for image quality and the volumes of video data that must be transmitted over communication channels and processed on data-center servers, together with the means of reducing their redundancy. The solution to this problem traditionally lies in the search for effective technologies for compressing and archiving video information. An analysis of video compression methods and digital video compression technology, which reduce the amount of data used to represent the video stream, is performed. Approaches to image compression in cloud storage that preserve, or only slightly reduce, the amount of data while providing the user with the specified quality of the restored image are shown. A classification of special compression methods, lossless and lossy, is provided. Based on the analysis, it is concluded that it is advisable to use special lossy compression methods to store high-quality video information in the new HDTV formats 2K, 4K and 8K in cloud storage. The application of video image processing and encoding based on fractal image compression is substantiated. Recommendations for the implementation of these methods are given.
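For readers unfamiliar with fractal coding, the sketch below shows the core idea in a highly simplified form: each small range block is approximated by a contracted, contrast- and brightness-adjusted domain block from the same image. The block sizes, the restriction to non-overlapping domains, and the omission of the usual eight isometries are simplifications for illustration, not the configuration discussed in the article.

```python
# Simplified fractal (PIFS) encoder: map each 4x4 range block to the best
# contracted 8x8 domain block under an affine intensity transform s*D + o.
import numpy as np

def downsample2(block):
    """Average 2x2 neighbourhoods to halve each dimension."""
    return block.reshape(block.shape[0]//2, 2, block.shape[1]//2, 2).mean(axis=(1, 3))

def encode(img, r=4):
    h, w = img.shape
    domains = [(y, x, downsample2(img[y:y+2*r, x:x+2*r].astype(float)))
               for y in range(0, h - 2*r + 1, 2*r)
               for x in range(0, w - 2*r + 1, 2*r)]
    transforms = []
    for ry in range(0, h - r + 1, r):
        for rx in range(0, w - r + 1, r):
            R = img[ry:ry+r, rx:rx+r].astype(float)
            best = None
            for dy, dx, D in domains:
                if D.std() < 1e-6:                    # flat domain: constant fit
                    s, o = 0.0, R.mean()
                else:                                 # least-squares contrast/brightness
                    s, o = np.polyfit(D.ravel(), R.ravel(), 1)
                err = np.sum((s * D + o - R) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            transforms.append((ry, rx) + best[1:])
    return transforms  # decoding iterates these maps starting from any image
```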


2021 ◽  
Vol 15 ◽  
pp. 43-47
Author(s):  
Ahmad Shahin ◽  
Walid Moudani ◽  
Fadi Chakik

In this paper we present a hybrid model for image compression based on segmentation and total variation regularization. The main motivation behind our approach is to offer a decoded image with immediate access to objects/features of interest. We target a high-quality decoded image that is useful on smart devices, for analysis purposes, and for multimedia content-based description standards. The image is approximated as a set of uniform regions: the technique assigns well-defined memberships to homogeneous regions in order to achieve image segmentation. Adaptive fuzzy c-means (AFcM) guides the clustering of the image data. A second coding stage then applies entropy coding to remove the remaining redundancy of the whole image. In the decompression phase, the reverse process is applied, and the decoded image suffers from missing details due to the coarse segmentation. For this reason, we apply total variation (TV) regularization, such as the Rudin-Osher-Fatemi (ROF) model, to enhance the quality of the coded image. Our experimental results show that ROF can increase the PSNR and hence offer better quality for a set of benchmark grayscale images.
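As a stand-in for the adaptive fuzzy c-means stage, the sketch below implements standard fuzzy c-means on pixel intensities; the adaptive variant (AFcM) adds spatial weighting that is omitted here, and the number of clusters, fuzzifier and iteration count are illustrative choices.

```python
# Standard fuzzy c-means on intensities: soft memberships per pixel, then a
# hard segmentation by taking the largest membership.
import numpy as np

def fuzzy_c_means(pixels, c=4, m=2.0, iters=50, seed=0):
    """pixels: 1-D float array of intensities; returns (memberships, centroids)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centroids = um @ pixels / um.sum(axis=1)         # weighted cluster means
        d = np.abs(pixels[None, :] - centroids[:, None]) + 1e-9
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return u, centroids

# Segmentation: assign each pixel to the cluster with the largest membership;
# the resulting homogeneous regions can then be entropy coded far more cheaply.
# labels = fuzzy_c_means(image.ravel().astype(float))[0].argmax(axis=0).reshape(image.shape)
```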


Author(s):  
Kandarpa Kumar Sarma

The explosive growth in data exchange has necessitated the development of new methods of image compression, including the use of learning-based techniques. Learning-based systems aid proper compression and retrieval of image segments. Learning systems such as Artificial Neural Networks (ANNs) have established their efficiency and reliability in achieving image compression. In this work, two approaches to using ANNs for digital image compression are proposed: one in Feed Forward (FF) form and another based on a Self-Organizing Feature Map (SOFM). The image to be compressed is first decomposed into smaller blocks and passed to the FFANN and SOFM networks for the generation of codebooks. The compressed images are reconstructed using a composite block formed by an FFANN and a Discrete Cosine Transform (DCT) based compression-decompression system. Mean Square Error (MSE), Compression Ratio (CR), and Peak Signal-to-Noise Ratio (PSNR) are used to evaluate the performance of the system.
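The evaluation metrics named above can be stated compactly; the sketch below assumes the usual definitions of MSE, PSNR with a peak value of 255, and compression ratio as original bits over compressed bits.

```python
# Standard image-compression quality and size metrics.
import numpy as np

def mse(original, reconstructed):
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    e = mse(original, reconstructed)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits
```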


2003 ◽  
Vol 2003 ◽  
pp. 10-10
Author(s):  
R.M. Kirkland ◽  
D.C. Patterson

A preliminary study at this Institute indicated that inclusion of high quality maize silage in a grass silage-based diet could promote higher forage intakes in beef cattle, but the response to inclusion of maize silage was affected by the quality of grass silage. The objective of this study was to further examine the effects of grass (GS) and maize (MS) silage qualities on intake characteristics, and to evaluate the influence of forage offered on animal performance.


2020 ◽  
Vol 10 (9) ◽  
pp. 3214
Author(s):  
Muhammad Tahir ◽  
Muhammad Usman ◽  
Fazal Muhammad ◽  
Shams ur Rehman ◽  
Imran Khan ◽  
...  

High Blood Pressure (BP) is a vital factor in the development of cardiovascular diseases worldwide. For more than a decade now, patients have searched for quality, easy-to-read Online Health Information (OHI) on symptoms, prevention, therapy and other medical conditions. In this paper, we evaluate the quality and readability of OHI about high BP. To this end, the first 20 links from each of three top-rated search engines were used to collect the pertinent data. After applying the exclusion criteria, 25 unique websites were selected for evaluation. The quality of all included links was evaluated with the DISCERN checklist, a questionnaire for assessing the quality of written information on a health problem. To enhance the reliability of the evaluation, all links were assessed separately by two different groups: a group of Health Professionals (HPs) and a group of Lay Subjects (LS). A readability test was performed using the Flesch-Kincaid tool. Fleiss' kappa was calculated before taking the average value of each group. After evaluation, the average DISCERN value for the HPs was 49.43 ± 14.0 (fair quality), while for the LS it was 48.7 ± 12.2; the mean Flesch Reading Ease Score (FRES) was 58.5 ± 11.1, which is fairly difficult to read, and the Average Grade Level (AGL) was 8.8 ± 1.9. None of the websites scored more than 73 (90%). In both groups, only 4 (16%) of the websites achieved a DISCERN score over 80%. The Mann-Whitney test and Cronbach's alpha were computed to check the statistical significance of the difference between the two groups and the internal consistency of the DISCERN checklist, respectively. Normality and homoscedasticity tests were performed to check the distribution of the scores of both evaluating groups. In both groups, information-category websites achieved high DISCERN scores, but their readability was worse. The highest-scoring websites have a clear aim, succinct sources, and high-quality information on treatment options. High BP is a pervasive disease, yet most of the websites did not provide precise or high-quality information on treatment options.
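For reference, the two Flesch measures reported above can be computed as sketched below; the syllable count uses a simple vowel-group heuristic, so the scores are approximate rather than those of the exact tool used in the study.

```python
# Flesch Reading Ease and Flesch-Kincaid grade level from raw text.
import re

def count_syllables(word):
    """Rough syllable count: number of contiguous vowel groups, at least 1."""
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

def flesch_scores(text):
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences                   # words per sentence
    spw = syllables / max(1, len(words))           # syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw      # Flesch Reading Ease Score
    grade = 0.39 * wps + 11.8 * spw - 15.59        # Flesch-Kincaid grade level
    return fres, grade

# flesch_scores("High blood pressure is a common condition.")  ->  (FRES, grade)
```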

