Application Research of Kakadu System Based on JPEG2000

2011 ◽  
Vol 403-408 ◽  
pp. 1933-1936
Author(s):  
Ke Qiang Ren ◽  
Hai Ying Jiang

JPEG2000 is a new compression standard for still images, and Kakadu is an open-source system that implements the JPEG2000 algorithm with high efficiency. This article introduces the codec architecture and the compressed-stream structure of JPEG2000, analyzes the structure of Kakadu and its image compression classes, and then carries out application experiments based on JPEG2000 on the Kakadu 2.2 platform. Experiments show that compressing only the region of interest gives a higher transfer rate and lower memory use than compressing the original image, and that JPEG2000 achieves a higher compression ratio and better visual quality than JPEG.
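As a rough illustration of the JPEG vs. JPEG2000 comparison described above, the following sketch uses Pillow rather than Kakadu (file names are placeholders) to compress the same image with both codecs and compare the resulting file sizes; it does not reproduce the paper's region-of-interest experiments.

```python
# Hypothetical comparison sketch (not the Kakadu experiments themselves):
# compress the same source image with JPEG and JPEG2000 via Pillow and
# compare file sizes. Requires: pip install pillow
import os
from PIL import Image

src = Image.open("original.png").convert("RGB")   # placeholder input file

# Baseline JPEG at a typical quality setting
src.save("out.jpg", format="JPEG", quality=75)

# JPEG2000 at a target compression ratio of roughly 20:1
# (Pillow's JPEG2000 plugin accepts quality_mode="rates" with quality_layers)
src.save("out.jp2", format="JPEG2000", quality_mode="rates", quality_layers=[20])

print("JPEG size:     ", os.path.getsize("out.jpg"), "bytes")
print("JPEG2000 size: ", os.path.getsize("out.jp2"), "bytes")
```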

2020 ◽  
Vol 2020 ◽  
pp. 1-8 ◽  
Author(s):  
Zhengwei Zhang ◽  
Mingjian Zhang ◽  
Liuyang Wang

To improve the visual quality and embedding rate of existing reversible image watermarking algorithms, a reversible image watermarking algorithm based on quadratic difference expansion is proposed. First, the pixel points with grayscale values 0 and 255 are removed from the original image; then half of the scrambled watermark information is embedded into the original image using linear difference expansion. Finally, the remaining half of the watermark information is embedded into the previously generated watermarked image by quadratic difference expansion, the removed pixel points with grayscale values 0 and 255 are merged back, and the final watermarked image is generated. The experimental results show that the algorithm achieves both a high embedding rate and high visual quality and can completely recover the original image. Compared with other difference expansion watermarking algorithms, it has certain advantages and does not need to consider the smoothness of the embedded image region.
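For context, the linear difference expansion that the first embedding pass relies on is the classic pixel-pair scheme sketched below; this is only the standard building block, not the authors' full quadratic, two-pass algorithm, and overflow handling and location maps are omitted.

```python
# Minimal sketch of classic (linear) difference expansion on a single pixel
# pair. Real schemes must also guard against values leaving [0, 255], which
# is why the algorithm above treats pixels with grayscale 0 and 255 separately.

def de_embed(x: int, y: int, bit: int):
    """Embed one bit into the pixel pair (x, y) by expanding their difference."""
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 // 2
    return bit, (l + (h + 1) // 2, l - h // 2)

marked = de_embed(100, 98, 1)          # -> (102, 97)
print(marked, de_extract(*marked))     # -> (102, 97) (1, (100, 98))
```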


2019 ◽  
Vol 11 (7) ◽  
pp. 849 ◽  
Author(s):  
Chengwei Liu ◽  
Xiubao Sui ◽  
Xiaodong Kuang ◽  
Yuan Liu ◽  
Guohua Gu ◽  
...  

In this paper, an optimized contrast enhancement method combining global and local enhancement results is proposed to improve the visual quality of infrared images. Global and local contrast enhancement methods each have their own merits and demerits. The proposed method exploits the complementary characteristics of the two to achieve noticeable contrast enhancement without artifacts. First, the 2D histogram, which contains both global and local gray-level distribution characteristics of the original image, is computed. Then, based on the 2D histogram, the global and local enhanced results are obtained by applying histogram specification globally and locally. Lastly, the enhanced result is computed by solving an optimization equation subject to global and local constraints. The pixel-wise regularization parameters for the optimization equation are determined adaptively from the edge information of the original image, so the proposed method is able to enhance local contrast while preserving the naturalness of the original image. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms block-based methods for improving the visual quality of infrared images.
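To make the fusion step concrete, the sketch below assumes a simple pixel-wise quadratic objective whose closed-form solution is a weighted average of the global and local results, with weights taken from a gradient-magnitude edge map. This is an illustrative stand-in; the paper's actual objective and regularization may differ.

```python
# Illustrative sketch only: a pixel-wise objective of the form
#   min_S  w_g(p) * (S(p) - G(p))^2 + w_l(p) * (S(p) - L(p))^2
# has the closed-form solution S = (w_g*G + w_l*L) / (w_g + w_l).
# Weights here come from a simple gradient-magnitude edge map.
import numpy as np

def fuse_enhancements(orig: np.ndarray, glob: np.ndarray, loc: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(orig.astype(np.float64))
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-12           # normalize edge strength to [0, 1]
    w_local = edges                        # favour the local result near edges
    w_global = 1.0 - edges                 # favour the global result in flat areas
    fused = (w_global * glob + w_local * loc) / (w_global + w_local + 1e-12)
    return np.clip(fused, 0, 255).astype(np.uint8)
```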


Author(s):  
Pradeep Reddy Raamana ◽  
Athena Theyers ◽  
Tharushan Selliah ◽  
Piali Bhati ◽  
Stephen R. Arnott ◽  
...  

Quality control of morphometric neuroimaging data is essential to improve reproducibility. Owing to the complexity of neuroimaging data and, consequently, of the interpretation of their results, visual inspection by trained raters is the most reliable way to perform quality control. Here, we present a protocol for visual quality control of the anatomical accuracy of FreeSurfer parcellations, based on an easy-to-use open-source tool called VisualQC. We comprehensively evaluate its utility in terms of error detection rate and inter-rater reliability on two large multi-site datasets, and discuss site differences in error patterns. This evaluation shows that VisualQC is a practically viable protocol for community adoption.


Author(s):  
Fangfang Li ◽  
Sergey Krivenko ◽  
Vladimir Lukin

Image information technology has become an important perception technology, and an important task is providing lossy image compression with a desired quality using certain encoders. Recent research has shown that a two-step method can perform the compression in a very simple manner and with reduced compression time while providing the desired visual quality with good accuracy. However, different encoders use different compression algorithms, which raises the question of how accurately the desired quality can be provided. This paper considers the application of the two-step method to an encoder based on the discrete wavelet transform (DWT). In the experiments, bits per pixel (BPP) is used as the control parameter to vary and predict the compressed image quality, and three visual quality metrics (PSNR, PSNR-HVS, PSNR-HVS-M) are analyzed. In special cases, the two-step method can be modified, namely when the images subject to lossy compression are either too simple or too complex and a linear approximation of the dependences is no longer valid. Experimental data show that, compared with the single-step method, the two-step method reduces the mean square error between desired and provided quality values by an order of magnitude; for PSNR-HVS-M, the error of the two-step method does not exceed 3.6 dB. The experiments were conducted for Set Partitioning in Hierarchical Trees (SPIHT), a typical DWT-based image encoder, but the proposed method can be expected to apply to other DWT-based image compression techniques. The results show that the application range of the two-step lossy compression method has been expanded: it is not only suitable for encoders based on the discrete cosine transform (DCT) but also works well for DWT-based encoders.
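The two-step idea can be summarized by the following schematic sketch. Here `compress`, `quality_metric`, `bpp_from_average_curve`, and `slope` are all placeholders for, respectively, an SPIHT-style codec, a metric such as PSNR-HVS-M, the average quality-vs-BPP dependence, and its assumed local slope in dB per bpp; the paper's exact prediction curves are not reproduced.

```python
# Schematic sketch of the two-step compression under stated assumptions.

def two_step_compress(image, target_db, compress, quality_metric,
                      bpp_from_average_curve, slope):
    # Step 1: compress at the BPP predicted by the average dependence.
    bpp1 = bpp_from_average_curve(target_db)
    decoded1 = compress(image, bpp1)
    q1 = quality_metric(image, decoded1)

    # Step 2: correct the BPP with a local linear approximation of the
    # quality-vs-BPP dependence and compress again.
    bpp2 = bpp1 + (target_db - q1) / slope
    decoded2 = compress(image, bpp2)
    return decoded2, bpp2
```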


i-Perception ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 204166952110545
Author(s):  
Fumiya Kurosawa ◽  
Taiki Orima ◽  
Kosuke Okada ◽  
Isamu Motoyoshi

The visual system represents textural image regions as simple statistics that are useful for the rapid perception of scenes and surfaces. What counts as a 'texture', however, has so far been defined mostly subjectively. The present study investigated the empirical conditions under which natural images are processed as textures. We first show that 'texturality' – i.e., whether or not an image is perceived as a texture – is strongly correlated with the perceived similarity between an original image and its Portilla-Simoncelli (PS) synthesized counterpart. We found that both judgments are highly correlated with specific PS statistics of the image. We also demonstrate that a discriminant model based on a small set of image statistics can determine whether a given image is perceived as a texture with over 90% accuracy. The results provide a method to determine whether a given image region is represented statistically by the human visual system.
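Conceptually, a discriminant model of this kind can be thought of as a linear classifier over a short vector of per-image statistics. The sketch below shows only the shape of such a pipeline with entirely placeholder features and labels (scikit-learn assumed); it does not use the study's actual PS statistics or data.

```python
# Placeholder sketch of a small-feature discriminant model for "texturality".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: one row of summary image statistics per image (placeholder values here);
# y: 1 if the image was judged to be a texture, 0 otherwise.
X = np.random.rand(200, 8)
y = np.random.randint(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```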


2021 ◽  
pp. 83-91
Author(s):  
Богдан Віталійович Коваленко ◽  
Володимир Васильович Лукін

The subject of the article is the effectiveness of lossy image compression with a BPG encoder using visual metrics as the quality criterion. The aim is to confirm the existence of an operating point for images of varying complexity when visual quality metrics are used. The objectives of the paper are the following: to analyze a set of images of varying complexity distorted by additive white Gaussian noise with different variance values, to build and analyze the dependences of visual image quality metrics, and to provide recommendations on the choice of the compression parameter in the vicinity of the operating point. The methods used are methods of mathematical statistics and methods of digital image processing. The following results were obtained. Dependences of visual quality metrics were constructed for images of various degrees of complexity affected by noise with variance equal to 64, 100, and 196. The constructed dependences show that an operating point is present for images of medium and low complexity for both the PSNR-HVS-M and MS-SSIM metrics. Recommendations are given for choosing the compression parameter based on the obtained dependences. Conclusions. The scientific novelty of the obtained results is the following: for a new compression method using Better Portable Graphics (BPG), research has been conducted and the existence of an operating point for visual quality metrics has been proven; previously, such studies were conducted only for the PSNR metric. The test images were distorted by additive white Gaussian noise and then compressed using the methods implemented in the BPG encoder. The images were compressed with different values of the Q parameter, which made it possible to estimate the image compression quality at different compression ratios. The resulting data made it possible to visualize the dependence of the visual image quality metrics on the Q parameter. Based on the obtained dependences, it can be concluded that the operating point is present for both the PSNR-HVS-M and MS-SSIM metrics for images of medium and low complexity; it is also worth noting that the operating point is most clearly visible at large noise variance values. As a recommendation, a formula is presented for calculating the value of the compression control parameter (for the BPG encoder, the Q parameter) for images distorted by noise with variance varying within a wide range, on the assumption that the noise variance is known a priori or estimated with high accuracy.
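The experimental procedure for locating an operating point can be sketched as below: a noisy image is compressed at a range of Q values and a visual quality metric against the noise-free reference is recorded, with the peak of that curve marking the operating point. Plain PSNR is used here as a stand-in for PSNR-HVS-M / MS-SSIM, file names are placeholders, and the paper's formula for predicting Q from the noise variance is not reproduced.

```python
# Sketch of an operating-point search with the BPG reference tools
# (bpgenc/bpgdec must be on the PATH). Requires scikit-image.
import subprocess
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio

reference = imread("clean.png")             # noise-free original (placeholder name)

results = {}
for q in range(20, 46):
    subprocess.run(["bpgenc", "-q", str(q), "-o", "tmp.bpg", "noisy.png"], check=True)
    subprocess.run(["bpgdec", "-o", "tmp.png", "tmp.bpg"], check=True)
    results[q] = peak_signal_noise_ratio(reference, imread("tmp.png"))

best_q = max(results, key=results.get)      # Q at which the metric peaks
print("operating point near Q =", best_q)
```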


Author(s):  
Hugh A. Cayless

This paper will present the results of ongoing experimentation with the linking of manuscript images to TEI transcriptions. The method being tested involves the automated conversion of images containing text to SVG, using Open Source tools. Once the text has been converted to SVG paths, these can be grouped in the document to mark the words therein and these groups can then be linked using standard methods to tokenized versions of the transcriptions. The goal of these experiments is to achieve a much more fine-grained linking and annotation mechanism than is so far possible with available tools, e.g. the Image Markup Tool and TEI P5 facsimile markup, both of which annotate only rectangular sections of an image. The method envisioned here would produce a legible tracing of the word, expressed in XML, to which transcripts and annotations might be attached and which can be superimposed upon the original image.
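A minimal sketch of the linking step described above is given below: traced SVG paths for a single word are wrapped in a group element that points at the corresponding token in a TEI transcription. The element identifiers and the TEI file name (`transcription.xml#w42`) are purely illustrative assumptions, not a fixed schema from the paper.

```python
# Wrap traced SVG <path> elements for one word in a <g> linked to a TEI token.
import xml.etree.ElementTree as ET

SVG = "http://www.w3.org/2000/svg"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("", SVG)
ET.register_namespace("xlink", XLINK)

svg = ET.Element(f"{{{SVG}}}svg")
word_group = ET.SubElement(svg, f"{{{SVG}}}g", {
    "id": "facs_w42",
    f"{{{XLINK}}}href": "transcription.xml#w42",   # hypothetical TEI token id
})
# Traced outline of the word, e.g. produced by a potrace-style vectorization
ET.SubElement(word_group, f"{{{SVG}}}path", {"d": "M10 10 L60 10 L60 30 Z"})

ET.ElementTree(svg).write("page.svg", xml_declaration=True, encoding="utf-8")
```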


Author(s):  
Hajime Nobuhara ◽  
◽  
Kaoru Hirota

A new style of fuzzy wavelets is proposed by fuzzifying morphological wavelets. Owing to the correspondence between morphological wavelet operations and fuzzy relational ones, wavelet analysis/synthesis schemes can be formulated on the basis of fuzzy relational calculus. To enable efficient image compression/reconstruction, the concept of the alpha-band, a generalization of the alpha-cut, is also proposed for thresholding wavelet coefficients. In an image compression/reconstruction experiment using test images extracted from the Standard Image DataBAse (SIDBA), it is confirmed that the root mean square error (RMSE) of the proposed soft thresholding is reduced to 87.3% of that of conventional hard thresholding when the original image is "Lenna."
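For reference, the hard vs. soft thresholding comparison can be illustrated with a classical orthogonal wavelet transform, as in the sketch below (PyWavelets assumed, random placeholder data instead of the SIDBA images); this is not the fuzzy relational morphological wavelet scheme of the paper, only the generic thresholding step it is compared against.

```python
# Hard vs. soft thresholding of detail coefficients with a classical DWT.
# Requires: pip install pywavelets numpy
import numpy as np
import pywt

def threshold_reconstruct(image: np.ndarray, thr: float, mode: str) -> np.ndarray:
    coeffs = pywt.wavedec2(image.astype(np.float64), "haar", level=2)
    out = [coeffs[0]]                                  # keep approximation band
    for detail_level in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode=mode) for d in detail_level))
    return pywt.waverec2(out, "haar")

def rmse(a, b):
    return float(np.sqrt(np.mean((a.astype(np.float64) - b) ** 2)))

img = np.random.rand(64, 64) * 255        # placeholder for a test image
hard = threshold_reconstruct(img, 20.0, "hard")
soft = threshold_reconstruct(img, 20.0, "soft")
print("RMSE hard:", rmse(img, hard), "RMSE soft:", rmse(img, soft))
```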

