ANALYSIS OF TWO-STEP APPROACH FOR COMPRESSING TEXTURE IMAGES WITH DESIRED QUALITY

2020 ◽  
pp. 50-58
Author(s):  
Fangfang Li ◽  
Sergey S. Krivenko ◽  
Vladimir V. Lukin

The task of compressing texture images while providing a desired quality is considered. Quality is mainly characterized by the peak signal-to-noise ratio (PSNR), but visual quality metrics are briefly studied as well. Potentially, a two-step approach can be used to carry out compression that provides the desired quality in a quite simple way and with reduced compression time. However, the two-step approach can run into problems for the PSNR metric when the required PSNR is quite small (about 30 dB). These problems mainly concern the accuracy of providing the desired quality at the second step. The paper analyzes the reasons why this happens. For this purpose, a set of nine test images of different complexity is analyzed first. Then, the use of the two-step approach is studied for a wide set of complex-structure texture test images. The corresponding experiments are carried out for several values of the desired PSNR. The obtained results show that the two-step approach has limitations when complex texture images have to be compressed to relatively low values of the desired PSNR. The main reason is that the rate-distortion dependence is nonlinear, while a linear approximation is applied at the second step. To get around the aforementioned shortcomings, a simple but efficient solution is proposed based on the performed analysis. It is shown that, due to the proposed modification, the application range of the two-step method of lossy compression becomes considerably wider and covers the PSNR values commonly required in practice. The experiments are performed for the typical image encoder AGU based on the discrete cosine transform (DCT), but it can be expected that the proposed approach is applicable to other DCT-based image compression techniques.
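A minimal sketch of the two-step idea described above, assuming a generic placeholder codec(image, qs) interface, an off-line averaged rate-distortion curve, and an illustrative slope value; it is not the AGU implementation itself:

```python
import numpy as np

def psnr(orig, dec):
    """Peak signal-to-noise ratio for 8-bit images."""
    mse = np.mean((orig.astype(np.float64) - dec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))

def two_step_compress(image, codec, psnr_target, avg_curve, slope_db=-6.0):
    """Two-step selection of the quantization step (QS).

    avg_curve: (qs, mean_psnr) pairs averaged off-line over training images,
    used only for the first guess; codec(image, qs) -> decoded image is a
    placeholder encoder; slope_db is an assumed, illustrative average slope
    (dB of PSNR per doubling of QS).
    """
    # Step 1: pick the QS predicted by the average curve and compress once.
    qs1 = min(avg_curve, key=lambda p: abs(p[1] - psnr_target))[0]
    psnr1 = psnr(image, codec(image, qs1))

    # Step 2: correct QS assuming PSNR is locally linear in log2(QS); the
    # paper shows this approximation fails for low target PSNR on complex
    # textures, which motivates its proposed modification.
    qs2 = qs1 * 2.0 ** ((psnr1 - psnr_target) / abs(slope_db))
    return codec(image, qs2)
```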

Author(s):  
Fangfang Li ◽  
Sergey Krivenko ◽  
Vladimir Lukin

Image compression has become an important technology, and this paper considers the task of providing lossy image compression with a desired quality using certain encoders. Recent research has shown that a two-step method can perform the compression in a very simple manner and with reduced compression time while providing the desired visual quality with good accuracy. However, different encoders rely on different compression algorithms, and this affects the accuracy with which the desired quality is provided. This paper considers the application of the two-step method to an encoder based on the discrete wavelet transform (DWT). In the experiments, bits per pixel (BPP) is used as the control parameter to vary and predict the compressed image quality, and three visual quality metrics (PSNR, PSNR-HVS, PSNR-HVS-M) are analyzed. In special cases, the two-step method has to be modified, namely when the images subject to lossy compression are either too simple or too complex and a linear approximation of the dependences is no longer valid. Experimental data prove that, compared with the single-step method, the two-step compression method reduces the mean square error of the differences between the desired and provided quality values by an order of magnitude. For PSNR-HVS-M, the error of the two-step method does not exceed 3.6 dB. The experiments have been conducted for Set Partitioning in Hierarchical Trees (SPIHT), a typical image encoder based on the DWT, but it can be expected that the proposed method applies to other DWT-based image compression techniques. The results show that the application range of the two-step lossy compression method has been expanded: it is not only suitable for encoders based on the discrete cosine transform (DCT) but also works well for DWT-based encoders.
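For illustration, a comparable sketch with BPP as the control parameter, assuming a placeholder dwt_codec(image, bpp) standing in for SPIHT and an averaged metric-versus-BPP curve; the actual procedure and metric implementations from the paper are not reproduced here:

```python
def two_step_bpp(image, dwt_codec, metric, target, avg_curve):
    """Two-step BPP selection for a DWT-based encoder (illustrative only).

    avg_curve: (bpp, metric_value) pairs averaged over training images;
    dwt_codec(image, bpp) -> decoded image is a placeholder for SPIHT;
    metric(orig, dec) is any quality metric (PSNR, PSNR-HVS, PSNR-HVS-M).
    """
    # Step 1: take the BPP predicted by the average metric-vs-BPP curve.
    bpp1 = min(avg_curve, key=lambda p: abs(p[1] - target))[0]
    q1 = metric(image, dwt_codec(image, bpp1))

    # Step 2: shift BPP along the average slope, assuming the individual
    # image's curve is roughly parallel to the averaged one (the linear
    # approximation that breaks down for too simple or too complex images).
    (b_lo, m_lo), (b_hi, m_hi) = avg_curve[0], avg_curve[-1]
    slope = (m_hi - m_lo) / (b_hi - b_lo)          # dB per bpp (assumed)
    bpp2 = bpp1 + (target - q1) / slope
    return dwt_codec(image, bpp2)
```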


1999 ◽  
Vol 30 (4) ◽  
pp. 324-331 ◽  
Author(s):  
Maria Estela da Silva ◽  
Telma Teixeira Franco

This work investigated the partitioning of β-galactosidase from Kluyveromyces fragilis in aqueous two-phase systems (ATPS) by bioaffinity. PEG 4000 was chemically activated with tresyl chloride, and the biospecific ligand p-aminophenyl 1-thio-β-D-galactopyranoside (APGP) was attached to the activated PEG 4000. A new two-step method for the extraction and purification of the enzyme β-galactosidase from Kluyveromyces fragilis was developed. In the first step, a system composed of 6% PEG 4000-APGP and 8% dextran 505 was used, in which β-galactosidase partitioned strongly to the top phase (K = 2330). In the second step, a system formed of 13% PEG-APGP and 9% phosphate salt was used to revert the partition coefficient of β-galactosidase (K = 2 × 10⁻⁵) in order to purify and recover 39% of the enzyme in the bottom, salt-rich phase.
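For clarity, the partition coefficient quoted above follows the standard ATPS definition (stated here as a reminder, not as a result from the paper):

```latex
% Standard ATPS partition coefficient: enzyme concentration in the top
% phase divided by that in the bottom phase; values quoted from the abstract.
\[
  K = \frac{C_{\text{top}}}{C_{\text{bottom}}},\qquad
  K_{\text{step 1}} \approx 2.3\times 10^{3},\qquad
  K_{\text{step 2}} \approx 2\times 10^{-5}.
\]
```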


2015 ◽  
Vol 4 (2) ◽  
pp. 42-55 ◽  
Author(s):  
L. Balaji ◽  
K.K. Thyagharajan ◽  
A. Dhanalakshmi

H.264/SVC is the scalable extension of H.264/AVC and is applicable in environments that demand video streaming. This paper presents an algorithm that reduces computational complexity and improves coding efficiency by determining the coding mode quickly. The authors describe a fast mode decision algorithm with lower complexity than the traditional joint scalable video model (JSVM). Their algorithm terminates the mode search using a defined probability model. This model is applied to both intra-mode and inter-mode prediction of the base layer and the enhancement layers in a macroblock (MB). The estimated rate-distortion cost (RDC) of the modes across layers is used to determine the best mode of each MB. The experimental results show that the authors' algorithm realizes a 26.9% encoding-time saving compared with the JSVM reference software, with only a small reduction in peak signal-to-noise ratio (PSNR).
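A hedged sketch of the kind of probability-gated mode decision described above, assuming externally supplied mode probabilities and an RD-cost estimator (mode_prob, rd_cost, and the threshold value are illustrative placeholders, not the authors' exact model):

```python
def fast_mode_decision(mb, modes, mode_prob, rd_cost, prob_threshold=0.05):
    """Illustrative fast mode decision for one macroblock (MB).

    mode_prob[m] : probability of mode m estimated from the base layer and
                   neighbouring MBs (assumed to be given by the model).
    rd_cost(mb, m): estimated rate-distortion cost J = D + lambda * R.
    Low-probability modes are skipped early, which is where the
    encoding-time saving comes from; the survivor with minimal RDC wins.
    """
    candidates = [m for m in modes if mode_prob.get(m, 0.0) >= prob_threshold]
    if not candidates:                 # fall back to exhaustive search
        candidates = list(modes)
    return min(candidates, key=lambda m: rd_cost(mb, m))
```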


Electronics ◽  
2019 ◽  
Vol 8 (10) ◽  
pp. 1139 ◽  
Author(s):  
Kai Yang ◽  
Zhitao Huang ◽  
Xiang Wang ◽  
Fenghua Wang

The signal-to-noise ratio (SNR) is a priori information necessary for many signal processing algorithms and techniques. However, conventional SNR estimation techniques suffer from several problems, such as a limited range of applicable modulation types, a narrow effective SNR estimation range, and poor tolerance of non-zero timing and frequency offsets. In this paper, an SNR estimation technique based on deep learning (DL) is proposed, which is a non-data-aided (NDA) technique. The second- and fourth-moment (M2M4) estimator is used as a benchmark, and experimental results show that the proposed method is more accurate and more robust and that it applies to a wider range of modulation types. At the same time, the proposed method is applicable not only to baseband and incoherent signals but can also estimate the SNR of intermediate-frequency signals.
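The M2M4 benchmark mentioned above has a well-known closed form for constant-modulus signals in complex AWGN; a small sketch of that baseline (not the proposed DL estimator) could look like this:

```python
import numpy as np

def m2m4_snr_db(x):
    """Classical second- and fourth-moment (M2M4) SNR estimator, used here
    only as the benchmark; valid for a constant-modulus signal in complex
    additive white Gaussian noise."""
    m2 = np.mean(np.abs(x) ** 2)
    m4 = np.mean(np.abs(x) ** 4)
    s_hat = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))   # estimated signal power
    n_hat = max(m2 - s_hat, 1e-12)                  # estimated noise power
    return 10.0 * np.log10(max(s_hat, 1e-12) / n_hat)
```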


Algorithms ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 130 ◽  
Author(s):  
Dinh Trieu Duong ◽  
Huy Phi Cong ◽  
Xiem Hoang Van

Distributed video coding (DVC) is an attractive and promising solution for low-complexity constrained video applications, such as wireless sensor networks or wireless surveillance systems. In DVC, visual quality consistency is one of the most important issues in evaluating the performance of a DVC codec. However, the quality of the decoded frames achieved by most recent DVC codecs is not consistent and varies with high quality fluctuation. In this paper, we propose a novel DVC solution named Joint exploration model based DVC (JEM-DVC) to solve this problem; it provides not only higher performance compared to traditional DVC solutions but also an effective scheme for quality consistency control. We first employ several advanced techniques provided in the Joint exploration model (JEM) of the future video coding standard (FVC) in the proposed JEM-DVC solution to effectively improve the performance of the JEM-DVC codec. Subsequently, for consistent quality control, we propose two novel methods, named key frame quantization (KF-Q) and Wyner-Ziv frame quantization (WZF-Q), which determine the optimal values of the quantization parameter (QP) and quantization matrix (QM) applied to key frame and WZ frame coding, respectively. The optimal values of QP and QM are adaptively controlled and updated for every key and WZ frame to guarantee consistent video quality for the proposed codec, unlike conventional approaches. Our proposed JEM-DVC is the first DVC codec in the literature that employs the JEM coding technique, and thus all of the results presented in this paper are new. The experimental results show that the proposed JEM-DVC significantly outperforms the relevant DVC benchmarks, notably the DISCOVER DVC and the recent H.265/HEVC-based DVC, in terms of both peak signal-to-noise ratio (PSNR) performance and consistent visual quality.
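As a rough illustration of per-frame quality control (not the KF-Q/WZF-Q rules themselves, which also adapt the quantization matrix), a simple feedback update of QP toward a target PSNR might look like this:

```python
def update_qp(qp_prev, psnr_prev, psnr_target, gain=0.5, qp_min=0, qp_max=51):
    """Minimal sketch of per-frame QP control for consistent quality.

    If the previous frame came out better than the target, raise QP
    (coarser quantization), and vice versa, so the decoded PSNR stays
    close to psnr_target across frames; gain is an illustrative constant.
    """
    qp = qp_prev + gain * (psnr_prev - psnr_target)
    return int(min(max(round(qp), qp_min), qp_max))
```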


2015 ◽  
Vol 8 (4) ◽  
pp. 32
Author(s):  
Sabarish Sridhar

Steganography, watermarking, and encryption are widely used in image processing and communication. A general practice is to use them independently or in combination of two, e.g., data hiding with encryption or steganography alone. This paper aims to combine the features of watermarking, image encryption, and image steganography to provide reliable and secure data transmission. The basics of data hiding and encryption are explained. The first step involves inserting the required watermark into the image at the optimum bit plane. The second step is to use an RSA hash to encrypt the image. The final step involves obtaining a cover image and hiding the encrypted image within it. A set of metrics is used to evaluate the effectiveness of the digital watermarking; the list includes Mean Squared Error, Peak Signal-to-Noise Ratio, and Feature Similarity.
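A minimal sketch of bit-plane hiding and of the quality metrics mentioned above, assuming 8-bit grayscale NumPy images; the watermark bit-plane optimization and the RSA step are not reproduced here:

```python
import numpy as np

def embed_bitplane(cover, payload_bits, plane=0):
    """Hide one bit per pixel in the chosen bit plane of an 8-bit image.

    payload_bits is a 0/1 array; plane=0 is the least significant bit.
    Selecting the 'optimum bit plane' mentioned in the abstract is a
    separate step not shown here.
    """
    stego = cover.copy()
    flat = stego.ravel()
    bits = payload_bits.ravel()[: flat.size]
    flat[: bits.size] &= ~np.uint8(1 << plane)         # clear the target bit
    flat[: bits.size] |= (bits.astype(np.uint8) << plane)
    return stego

def mse_psnr(a, b):
    """Two of the metrics listed in the abstract (MSE and PSNR)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return mse, 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))
```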


2019 ◽  
Vol 10 (5) ◽  
pp. 65
Author(s):  
Abu-Hussain Jamal ◽  
Oleg Tilchin

The suggested comprehensive three-step method for managing employees' accountability for innovation is aimed at intensifying innovation activity in an organization. The innovation process is characterized by the suitability, feasibility, and applicability of ideas. It proceeds through the phases of finding new ideas, evaluating ideas, and developing ideas, including their experimentation and implementation. Changes in the characteristics of the innovation process create the need for accountability management. As a result of this management, the accountability characteristics, such as the sphere, the level, and the measure of the employees' accountability for innovation, are changed. The method is realized through a sequence of steps: setting accountability, evaluating accountability, and managing accountability. The steps are aligned with the phases of the innovation process. At the first step, the spheres and levels of employees' accountability for generating ideas are set. At the second step, the spheres, levels, and measures of employees' accountability for developing the ideas are determined. The measure of accountability characterizes the accountability of the members of the dynamic and heterogeneous group that is self-formed by employees as a result of the idea assessment; it is set equal to the idea value. The idea value is calculated by summing assessments of the innovation process characteristics. At the third step, the spheres, levels, and measures of employees' accountability during the development of the ideas are guided. Accountability is shared among the group members on the basis of their knowledge and skills. The preferable innovation direction and the key idea are revealed.


Molecules ◽  
2019 ◽  
Vol 24 (22) ◽  
pp. 4013 ◽  
Author(s):  
Artur Bukowczan ◽  
Edyta Hebda ◽  
Maciej Czajkowski ◽  
Krzysztof Pielichowski

In this work, we report for the first time on the influence of polyhedral oligomeric silsesquioxanes (POSS) on the structure and properties of liquid crystalline polyurethane (LCPU). LCPU/POSS hybrids were synthesized via a two-step method. In the first step, 4,4′-methylene diphenyl diisocyanate (MDI) and polytetramethylene ether glycol (PTMG) reacted with functionalized trisilanolphenyl POSS (TSP-POSS) bearing three hydroxyl groups. In the second step, the growing chain was extended with 4,4′-bis(hydroxyhexoxy)biphenyl (BHHBP). FTIR measurements confirmed the chemical bonding between the POSS and the LCPU matrix and showed the influence of the silsesquioxane modification on the intensity of hydrogen bonds. The DSC and POM techniques confirmed the formation of liquid crystalline phases. The incorporation of silsesquioxanes into the LC matrix leads to higher melting and isotropization temperatures along with a broadening of the phase transitions. Scanning electron microscopy showed a good distribution of POSS moieties, both in the bulk and on the surface of the liquid crystalline PU matrix, while wide-angle X-ray diffraction (WAXD) patterns revealed halos from both the liquid crystalline and the unmodified polyurethane matrix. The stress at break of the LCPU/POSS hybrids containing 50% and 60% of elastic segments is greater than that of the reference material (LCPU), which is due to the good dispersion of POSS in the less elastic matrix. The thermal properties of the obtained LCPU/POSS materials, determined by TGA, revealed that the char residue increased with the amount of POSS for the materials with 40% of elastic segments.


2020 ◽  
Vol 12 (7) ◽  
pp. 120 ◽  
Author(s):  
Thanuja Mallikarachchi ◽  
Dumidu Talagala ◽  
Hemantha Kodikara Arachchi ◽  
Chaminda Hewage ◽  
Anil Fernando

Video playback on mobile consumer electronic (CE) devices is plagued by fluctuations in the network bandwidth and by limitations in processing and energy availability at the individual devices. Seen as a potential solution, state-of-the-art adaptive streaming mechanisms address the first aspect, yet the efficient control of the decoding complexity and of the energy used when decoding the video remains unaddressed. The end-users' quality of experience (QoE), however, depends on the capability to adapt the bit streams to both of these constraints (i.e., network bandwidth and the device's energy availability). As a solution, this paper proposes an encoding framework that is capable of generating video bit streams with arbitrary bit rates and decoding-complexity levels using a decoding-complexity–rate–distortion model. The proposed algorithm allocates rate and decoding-complexity levels across frames and coding tree units (CTUs) and adaptively derives the CTU-level coding parameters to achieve the imposed targets with minimal distortion. The experimental results reveal that the proposed algorithm can achieve the target bit rate and decoding complexity with 0.4% and 1.78% average errors, respectively, for multiple bit rate and decoding-complexity levels. The proposed algorithm also demonstrates a stable frame-wise rate and decoding-complexity control capability when achieving a decoding-complexity reduction of 10.11 (%/dB). The resultant decoding-complexity reduction translates into an overall energy-consumption reduction of up to 10.52 (%/dB) for a 1 dB peak signal-to-noise ratio (PSNR) quality loss compared to the HM 16.0 encoded bit streams.
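As a toy illustration of the allocation step only (not the paper's decoding-complexity–rate–distortion model or its parameter derivation), frame-level targets could be spread over CTUs in proportion to assumed per-CTU weights:

```python
def allocate_ctu_targets(frame_rate_target, frame_cplx_target, ctu_weights):
    """Distribute frame-level rate and decoding-complexity budgets to
    coding tree units (CTUs) in proportion to per-CTU weights, e.g. a
    texture-activity measure (ctu_weights is an assumed input)."""
    total = float(sum(ctu_weights))
    return [
        (frame_rate_target * w / total, frame_cplx_target * w / total)
        for w in ctu_weights
    ]
```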


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1285
Author(s):  
WenLin Li ◽  
DeYu Qi ◽  
ChangJian Zhang ◽  
Jing Guo ◽  
JiaJun Yao

This paper proposes a video summarization algorithm called the Mutual Information and Entropy based adaptive Sliding Window (MIESW) method, designed specifically for the static summarization of gesture videos. Considering that gesture videos usually contain uncertain transition postures, unclear movement boundaries, or inexplicable frames, we propose a three-step method in which the first step browses the video, the second step applies the MIESW method to select candidate key frames, and the third step removes most of the redundant key frames. In detail, the first step converts the video into a sequence of frames and adjusts the frame size. In the second step, the key frame extraction algorithm MIESW is executed: the inter-frame mutual information value is used as a metric to adaptively adjust the size of the sliding window and group similar video content; then, based on the entropy value of each frame and the average mutual information value of the frame group, a threshold method is applied to optimize the grouping, and the key frames are extracted. In the third step, speeded up robust features (SURF) analysis is performed to eliminate redundant frames among these candidate key frames. The calculation of Precision, Recall, and F-measure is optimized from the perspective of practicality and feasibility. Experiments demonstrate that the key frames extracted using our method provide high-quality video summaries and essentially cover the main content of the gesture video.
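A small sketch of the two quantities the MIESW grouping relies on, computed from grayscale intensity histograms; the bin counts and value ranges are illustrative assumptions:

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(frame_a, frame_b, bins=64):
    """Inter-frame mutual information from the joint intensity histogram,
    the similarity measure used to grow or cut the sliding window."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of frame_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of frame_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```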

