Effects of Color Quantization on JPEG Compression

2020 ◽  
Vol 20 (03) ◽  
pp. 2050026
Author(s):  
Leonardo C. Araujo ◽  
Joao P. H. Sansao ◽  
Mario C. S. Junior

This paper analyzes the effects of color quantization on standard JPEG compression. Optimized color palettes were used to quantize natural images, with dithering and chroma subsampling as optional steps, and the resulting variations in file size and quantitative quality measures were analyzed. Preliminary results, using a small image database, show that file size increased by 20% on average, with a concomitant loss in quality (on the order of 6 dB in PSNR, 0.16 in SSIM, and 9.6 in Butteraugli score). Color quantization thus presents itself as an ineffective tool for JPEG compression, but if it must be imposed, then on high-quality compressed images it may lead to only a negligible increase in data size and loss in quality. In addition, dithering appears to consistently decrease the JPEG compression ratio.
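As an illustration of the kind of pipeline the study describes, the sketch below quantizes an image to an optimized palette (with dithering as an option), JPEG-encodes both versions, and compares file size and PSNR. It assumes a recent Pillow and NumPy; the file name, palette size, and quality setting are placeholders, not the paper's experimental configuration.

```python
import io
import numpy as np
from PIL import Image

def jpeg_bytes(img, quality=90, subsampling=0):
    """Encode an RGB image as JPEG in memory and return the byte stream (0 = 4:4:4)."""
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality, subsampling=subsampling)
    return buf.getvalue()

def psnr(a, b):
    """Peak signal-to-noise ratio between two uint8 images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# "input.png" is a placeholder for a natural test image.
original = Image.open("input.png").convert("RGB")

# Quantize to a 256-colour optimized palette, with and without dithering,
# then convert back to RGB so the result can be JPEG-compressed.
for dither in (Image.Dither.NONE, Image.Dither.FLOYDSTEINBERG):
    quantized = original.quantize(colors=256, dither=dither).convert("RGB")
    size_ratio = len(jpeg_bytes(quantized)) / len(jpeg_bytes(original))
    print(f"dither={dither}: JPEG size ratio {size_ratio:.2f}, "
          f"PSNR vs. original {psnr(np.array(original), np.array(quantized)):.1f} dB")
```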

Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1214 ◽  
Author(s):  
Kai-Lung Hua ◽  
Ho Trang ◽  
Kathiravan Srinivasan ◽  
Yung-Yao Chen ◽  
Chun-Hao Chen ◽  
...  

The JPEG XR encoding process utilizes two types of transform operations: the Photo Overlap Transform (POT) and the Photo Core Transform (PCT). Using the Device Porting Kit (DPK) provided by Microsoft, we performed encoding and decoding processes on JPEG XR images. It was discovered that when the quantization parameter is greater than 1 (i.e., under lossy compression conditions), the resulting image displays chequerboard block artefacts, border artefacts and corner artefacts. These artefacts are due to the nonlinearity of the transforms used by JPEG XR. Typically, they are not very visible; however, they can cause problems in copying and scanning applications, because the nonlinear transforms become apparent when the source and the target of the image have different configurations. Hence, it is important for document image processing pipelines to take such artefacts into account. Additionally, these artefacts are most problematic for high-quality settings and appear more visible at high compression ratios. In this paper, we analyse the cause of the above artefacts. It was found that the main problem lies in the POT and quantization steps. To solve this problem, the use of a “uniform matrix” is proposed: after the POT (encoding) and before the inverse POT (decoding), an extra step is added that multiplies by this uniform matrix. Results suggest that this is an easy and effective way to decrease chequerboard, border and corner artefacts, thereby improving the image quality of lossy JPEG XR encoding over the original DPK program with no increase in calculation complexity or file size.
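A purely conceptual sketch of where the proposed correction would sit in the codec is given below. The POT/PCT stand-ins and the values of the uniform matrix are placeholders (the paper defines the real transforms and matrix values); the only point illustrated is the placement of the extra element-wise multiplication after the POT on encoding and before the inverse POT on decoding.

```python
import numpy as np

# Placeholder stand-ins for the JPEG XR transforms; the real POT/PCT are the
# lifting-based (nonlinear in integer arithmetic) transforms in the DPK.
def pot(block):     return block.copy()
def inv_pot(block): return block.copy()
def pct(block):     return block.copy()
def inv_pct(block): return block.copy()

# Hypothetical uniform correction matrix; the paper derives the actual values.
UNIFORM = np.ones((4, 4))

def encode_block(block, qp=2):
    x = pot(block)
    x = x * UNIFORM            # proposed extra step after POT (encoding)
    x = pct(x)
    return np.round(x / qp)    # quantization (qp > 1 => lossy)

def decode_block(coeffs, qp=2):
    x = inv_pct(coeffs * qp)
    x = x * UNIFORM            # proposed extra step before inverse POT (decoding)
    return inv_pot(x)
```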


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 421
Author(s):  
Dariusz Puchala ◽  
Kamil Stokfiszewski ◽  
Mykhaylo Yatsymirskyy

In this paper, the authors analyze in more detail an image encryption scheme, proposed in their earlier work, which preserves input image statistics and can be used in connection with the JPEG compression standard. The image encryption process takes advantage of fast linear transforms parametrized with private keys and is carried out prior to the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial from the point of view of the subsequent compression. This feature makes the encryption process transparent to the compression stage and enables the JPEG algorithm to maintain its full compression capabilities even though it operates on encrypted image data. The main advantage of the considered approach is that the JPEG algorithm can be used without any modifications as part of an encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme, allowing for theoretical analysis of the impact of the image encryption step on the effectiveness of the compression process. A combinatorial and statistical analysis of the encryption process is also included, which allows its cryptographic strength to be evaluated. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper contains additional results of experimental studies on the general effectiveness of the presented scheme. The results show that, for a wide range of compression ratios, the considered scheme performs comparably to the JPEG algorithm alone (that is, without the encryption stage) in terms of the quality measures of reconstructed images. Moreover, the results of the statistical analysis, as well as those obtained with generally accepted quality measures of image cryptographic systems, confirm the high strength and efficiency of the scheme’s encryption stage.
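To make the encrypt-then-compress idea concrete, here is a minimal sketch (not the authors' key-parametrized fast transforms) in which a private key seeds a random orthogonal matrix that is applied to non-overlapping 8x8 blocks before an unmodified JPEG encoder and inverted after decoding. The block size, key derivation, and placeholder image are assumptions for illustration only.

```python
import numpy as np

def key_to_orthogonal(key: int, n: int = 8) -> np.ndarray:
    """Derive an orthogonal n x n matrix from a private key (illustrative only;
    the paper uses structured fast transforms parametrized by the key)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def transform_blocks(img: np.ndarray, mat: np.ndarray) -> np.ndarray:
    """Apply a separable linear transform to non-overlapping 8x8 blocks."""
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i + 8, j:j + 8].astype(np.float64)
            out[i:i + 8, j:j + 8] = mat @ block @ mat.T
    return out

# Encrypt, then compress with an unmodified JPEG encoder; decode, then decrypt.
key = 12345                                    # placeholder private key
Q = key_to_orthogonal(key)
image = np.zeros((256, 256))                   # placeholder grayscale image
encrypted = transform_blocks(image, Q)         # this would be fed to the JPEG encoder
decrypted = transform_blocks(encrypted, Q.T)   # inverse transform: Q is orthogonal
```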


2021 ◽  
Vol 2 ◽  
pp. 263348952199419
Author(s):  
Cara C Lewis ◽  
Kayne Mettert ◽  
Aaron R Lyon

Background: Despite their inclusion in Rogers’ seminal diffusion of innovations theory, few implementation studies empirically evaluate the role of intervention characteristics. Now, with growing evidence on the role of adaptation in implementation, high-quality measures of characteristics such as adaptability, trialability, and complexity are needed. Only two systematic reviews of implementation measures captured those related to the intervention or innovation, and their assessment of psychometric properties was limited. This manuscript reports on the results of eight systematic reviews of measures of intervention characteristics, with nuanced data regarding a broad range of psychometric properties.

Methods: The systematic review proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches. Phase II, data extraction, involved coding psychometric information. Phase III, data analysis, involved two trained specialists independently rating each measure using PAPERS (Psychometric And Pragmatic Evidence Rating Scales).

Results: Searches identified 16 measures or scales: zero for intervention source, one for evidence strength and quality, nine for relative advantage, five for adaptability, six for trialability, nine for complexity, and two for design quality and packaging. Information about internal consistency and norms was available for most measures, whereas information about other psychometric properties was most often not available. Ratings for psychometric properties fell in the range of “poor” to “good.”

Conclusion: The results of this review confirm that few implementation scholars are examining the role of intervention characteristics in behavioral health studies. Significant work is needed both to develop new measures (e.g., for intervention source) and to build psychometric evidence for existing measures in this forgotten domain.

Plain Language Summary: Intervention characteristics have long been perceived as critical factors that directly influence the rate of adopting an innovation. The extent to which intervention characteristics, including relative advantage, complexity, trialability, intervention source, design quality and packaging, evidence strength and quality, adaptability, and cost, impact implementation of evidence-based practices in behavioral health settings remains unclear. To unpack the differential influence of these factors, high-quality measures are needed. Systematic reviews can identify measures and synthesize the data regarding their quality to identify gaps in the field and inform measure development and testing efforts. Two previous reviews identified measures of intervention characteristics, but they did not provide information about the extent of the existing evidence, nor did they evaluate the host of evidence available for identified measures. This manuscript summarizes the results of nine systematic reviews (i.e., one for each of the factors listed above), for which 16 unique measures or scales were identified. The nuanced findings will help direct measure development work in this forgotten domain.


2017 ◽  
Vol 13 (10) ◽  
pp. e874-e880 ◽  
Author(s):  
Emily E. Johnston ◽  
Abby R. Rosenberg ◽  
Arif H. Kamal

We must ensure that the 20,000 US children (age 0 to 19 years) who die as a result of serious illness annually receive high-quality end-of-life care. Ensuring high-quality end-of-life care requires recognition that pediatric end-of-life care is conceptually and operationally different than that for adults. For example, in-hospital adult death is considered an outcome to be avoided, whereas many pediatric families may prefer hospital death. Because pediatric deaths are comparatively rare, not all centers offer pediatric-focused palliative care and hospice services. The unique psychosocial issues facing families who are losing a child include challenges for parent decision makers and young siblings. Furthermore, the focus on advance directive documentation in adult care may be less relevant in pediatrics because parental decision makers are available. Health care quality measures provide a framework for tracking the care provided and aid in agency and provider accountability, reimbursement, and educated patient choice for location of care. The National Quality Forum, Joint Commission, and other groups have developed several end-of-life measures. However, none of the current quality measures focus on the unique needs of dying pediatric patients and their caregivers. To evolve the existing infrastructure to better measure and report quality pediatric end-of-life care, we propose two changes. First, we outline how existing adult quality measures may be modified to better address pediatric end-of-life care. Second, we suggest the formation of a pediatric quality measure end-of-life task force. These are the next steps to evolving end-of-life quality measures to better fit the needs of seriously ill children.


2019 ◽  
Vol 101 (5) ◽  
pp. 841-852 ◽  
Author(s):  
Joseph Doyle ◽  
John Graves ◽  
Jonathan Gruber

Hospital quality measures are crucial to a key idea behind health care payment reforms: “paying for quality” instead of quantity. Nevertheless, such measures face major criticisms largely over the potential failure of risk adjustment to overcome endogeneity concerns when ranking hospitals. In this paper, we test whether patients treated at hospitals that score higher on commonly used quality measures have better health outcomes in terms of rehospitalization and mortality. To compare similar patients across hospitals in the same market, we exploit ambulance company preferences as an instrument for hospital choice. We find that a variety of measures that insurers use to measure provider quality are successful: choosing a high-quality hospital compared to a low-quality hospital results in 10% to 15% better outcomes.
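The identification strategy is instrumental variables: ambulance company assignment shifts which hospital a patient reaches but, arguably, does not affect the outcome directly. Below is a schematic two-stage least squares sketch on synthetic data, not the authors' dataset; the variable names and data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: an ambulance-company preference (instrument) nudges patients
# toward high-quality hospitals; unobserved severity confounds naive comparisons.
severity = rng.standard_normal(n)                  # unobserved confounder
instrument = rng.integers(0, 2, n).astype(float)   # ambulance company dummy
high_quality = (0.8 * instrument - 0.3 * severity
                + rng.standard_normal(n) > 0).astype(float)
outcome = 0.15 * high_quality - 0.5 * severity + rng.standard_normal(n)

def ols(y, X):
    """Least-squares coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# First stage: predict hospital choice from the instrument.
first = ols(high_quality, instrument.reshape(-1, 1))
fitted = first[0] + first[1] * instrument

# Second stage: regress the outcome on the fitted (exogenous) hospital choice.
second = ols(outcome, fitted.reshape(-1, 1))
print(f"naive OLS effect: {ols(outcome, high_quality.reshape(-1, 1))[1]:.3f}")
print(f"2SLS effect:      {second[1]:.3f}")   # closer to the true 0.15
```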


Author(s):  
Tianyu Guo ◽  
Chang Xu ◽  
Boxin Shi ◽  
Chao Xu ◽  
Dacheng Tao

Generative Adversarial Networks (GANs) have demonstrated a strong ability to fit complex distributions since they were introduced, especially in the field of generating natural images. Linear interpolation in the noise space produces a continuous change in the image space, which is an impressive property of GANs. However, this property receives no special consideration in the objective function of GANs or their derived models. This paper analyzes perturbations on the input of the generator and their influence on the generated images. A smooth generator is then developed by investigating the tolerable input perturbation. We further integrate this smooth generator with a gradient-penalized discriminator and design a smooth GAN that generates stable and high-quality images. Experiments on real-world image datasets demonstrate the necessity of studying the smooth generator and the effectiveness of the proposed algorithm.
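As a rough illustration of the two ingredients described above, the sketch below (assuming PyTorch and 4D image tensors) shows a WGAN-GP style gradient penalty for the discriminator and a simple output-change penalty under a small input perturbation for the generator; the exact regularizers, perturbation model, and hyperparameters used in the paper may differ.

```python
import torch

def gradient_penalty(discriminator, real, fake):
    # Interpolate between real and fake batches and penalize deviations of the
    # discriminator's input-gradient norm from 1 (WGAN-GP style penalty).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    inter = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(inter)
    grads = torch.autograd.grad(scores.sum(), inter, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def generator_smoothness(generator, z, eps=0.05):
    # Penalize how much the generated image changes under a small noise-space
    # perturbation (an assumed form of the "tolerable input perturbation" idea).
    delta = eps * torch.randn_like(z)
    return (generator(z + delta) - generator(z)).flatten(1).norm(2, dim=1).mean()
```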


2013 ◽  
Vol 380-384 ◽  
pp. 4019-4022
Author(s):  
Feng Zhou ◽  
Yi Jian Pei ◽  
Hao Wu ◽  
Zhi Jun Chen

Although technological progress has solved many problems, medical imaging remains challenging. Physicians usually obtain lesion information by means of medical imaging equipment and computer visualization techniques in order to judge the symptoms and develop appropriate treatment programs. However, it is hard for Magnetic Resonance Imaging (MRI) equipment to obtain a high-quality image of the lesion and then extract high-quality pathological features. This paper introduces a method to overcome this difficulty by adopting compressed sensing technology for image processing in the bone repair process. Lesion images are first acquired via MRI and then processed with the wavelet transform to obtain a sparse matrix through sparse representation of the wavelet coefficients. The method can obtain compressed images and extract the corresponding pathological features without reducing image quality.
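A minimal sketch of the sparsifying step, assuming PyWavelets and NumPy, is shown below: a 2D wavelet decomposition of an image, soft-thresholding of the detail coefficients to obtain a sparse representation, and reconstruction. The wavelet family, decomposition level, threshold, and placeholder input are illustrative choices; the compressed-sensing acquisition and recovery stages of the full method are not reproduced here.

```python
import numpy as np
import pywt

def sparsify(image: np.ndarray, wavelet: str = "db4", level: int = 3,
             threshold: float = 10.0):
    """Wavelet-decompose an image, soft-threshold the detail coefficients to
    obtain a sparse representation, and reconstruct the compressed image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sparse = [coeffs[0]]  # keep the approximation band untouched
    for details in coeffs[1:]:
        sparse.append(tuple(pywt.threshold(d, threshold, mode="soft")
                            for d in details))
    nonzero = sum(int(np.count_nonzero(d)) for band in sparse[1:] for d in band)
    return pywt.waverec2(sparse, wavelet), nonzero

# Placeholder input; in practice this would be the acquired MRI lesion image.
slice_ = np.random.default_rng(0).standard_normal((256, 256)) * 50
reconstructed, kept = sparsify(slice_)
print(f"non-zero detail coefficients kept: {kept}")
```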

