LSF Vector Quantizer for 2.4kb/s Codec Based on Speech Unvoiced/Voiced Classification

2014 ◽  
Vol 644-650 ◽  
pp. 2185-2188
Author(s):  
Qiang Li ◽  
Xiao Hong Zhang ◽  
Qing Yu Niu

In order to reduce the bit rate while maintaining good distortion performance, this paper proposes a method of LSF quantization based on speech unvoiced/voiced classification. The method trains codebooks on differential LSF parameters drawn from separate unvoiced and voiced databases, which suppresses the quantization error propagation caused by directly vector-quantizing the LSF parameters. Experimental results show that, for the same bit allocation, quantizing LSFs with this method yields better quality.
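A minimal sketch of the class-dependent codebook idea described above: one codebook is trained per unvoiced/voiced class on mean-removed (differential) LSF vectors, and each frame is quantized with the codebook of its class. The function names, the k-means training, and the mean-removal form of the differential are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_uv_codebooks(lsf_frames, uv_labels, codebook_size=256):
    """Train one codebook per class on differential (mean-removed) LSFs."""
    codebooks = {}
    for label in (0, 1):  # 0 = unvoiced, 1 = voiced
        frames = lsf_frames[uv_labels == label]
        mean = frames.mean(axis=0)
        centroids, _ = kmeans2(frames - mean, codebook_size, minit='++', seed=0)
        codebooks[label] = (centroids, mean)
    return codebooks

def quantize_lsf(lsf, uv, codebooks):
    """Quantize one LSF vector with the codebook of its U/V class."""
    centroids, mean = codebooks[uv]
    index, _ = vq((lsf - mean)[None, :], centroids)
    # return the codebook index and the reconstructed LSF vector
    return int(index[0]), centroids[index[0]] + mean
```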

2020 ◽  
Vol 34 (01) ◽  
pp. 51-58 ◽  
Author(s):  
Xinyan Dai ◽  
Xiao Yan ◽  
Kelvin K. W. Ng ◽  
Jiu Liu ◽  
James Cheng

Vector quantization (VQ) techniques are widely used in similarity search for data compression, computation acceleration, and so on. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. In this paper, we present a new angle on the quantization error, decomposing it into a norm error and a direction error. We show that quantization errors in norm have a much greater influence on inner products than quantization errors in direction, and that a small quantization error does not necessarily lead to good performance in maximum inner product search (MIPS). Based on this observation, we propose norm-explicit quantization (NEQ), a general paradigm that improves existing VQ techniques for MIPS. NEQ quantizes the norms of the items in a dataset explicitly to reduce errors in norm, which is crucial for MIPS. For the direction vectors, NEQ can simply reuse an existing VQ technique without modification. We conducted extensive experiments on a variety of datasets and parameter configurations. The results show that NEQ improves the performance of various VQ techniques for MIPS, including PQ, OPQ, RQ, and AQ.
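A minimal sketch of the norm-explicit decomposition: each vector is split into its norm and its unit-norm direction, the norms are quantized separately (here with a 1-D k-means, standing in for whatever scalar quantizer one prefers), and a plain k-means stands in for the reused direction quantizer (PQ/OPQ/RQ/AQ in the paper). Function names and codebook sizes are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def neq_train(X, n_norm_codes=256, n_dir_codes=256):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    dirs = X / norms                                   # unit-norm directions
    norm_cb, _ = kmeans2(norms, n_norm_codes, minit='++', seed=0)
    dir_cb, _ = kmeans2(dirs, n_dir_codes, minit='++', seed=0)
    return norm_cb, dir_cb

def neq_encode(X, norm_cb, dir_cb):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    n_idx, _ = vq(norms, norm_cb)                      # explicit norm codes
    d_idx, _ = vq(X / norms, dir_cb)                   # direction codes
    return n_idx, d_idx

def neq_inner_product(query, n_idx, d_idx, norm_cb, dir_cb):
    # approximate <x, q> as ||x||_quantized * <dir_quantized, q>
    return norm_cb[n_idx, 0] * (dir_cb[d_idx] @ query)
```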


1993 ◽  
Vol 34 (1) ◽  
pp. 19-31 ◽  
Author(s):  
F. Lavagetto ◽  
S. Zappatore

2010 ◽  
Vol 7 (1) ◽  
pp. 189-200 ◽  
Author(s):  
Haitao Wei ◽  
Yu Junqing ◽  
Li Jiang

As a video coding standard, H.264 achieves a high compression rate while maintaining good fidelity, but it requires far more intensive computation than earlier standards to reach this coding performance. A Hierarchical Multi-Level Parallelisms (HMLP) framework for the H.264 encoder is proposed that integrates four levels of parallelism, frame-level, slice-level, macroblock-level, and data-level, into one implementation. Each level is designed within a hierarchical parallel framework and mapped onto the cores and SIMD units of a multi-core architecture. Based on an analysis of the coding performance at each level, we propose a method for combining the parallel levels to reach a good compromise between high speedup and low bit rate. Experimental results show that, for CIF-format video, our method achieves speedups of 33.57x-42.3x with a 1.04x-1.08x bit-rate increase on an 8-core Intel Xeon processor with SIMD technology.
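A toy sketch of how the outer two levels of such a hierarchy can nest: frames are dispatched to one worker pool and the slices of each frame to another, with macroblock- and data-level (SIMD) parallelism living inside the slice encoder on a real target. The encode_slice stub and the pool sizes are placeholders, not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_slice(frame, slice_idx):
    # placeholder for real macroblock-level and SIMD encoding work
    return (frame, slice_idx)

def encode_frame(frame, n_slices, slice_pool):
    # slice-level parallelism: independent slices of one frame in parallel
    futures = [slice_pool.submit(encode_slice, frame, s) for s in range(n_slices)]
    return [f.result() for f in futures]

def encode_gop(frames, n_slices=4, frame_workers=2, slice_workers=4):
    # frame-level parallelism on the outside, slice-level on the inside
    with ThreadPoolExecutor(frame_workers) as frame_pool, \
         ThreadPoolExecutor(slice_workers) as slice_pool:
        futures = [frame_pool.submit(encode_frame, f, n_slices, slice_pool)
                   for f in frames]
        return [f.result() for f in futures]
```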


2014 ◽  
Vol 939 ◽  
pp. 600-606 ◽  
Author(s):  
Eiki Okuyama ◽  
Shingo Asano ◽  
Yuichi Suzuki ◽  
Hiromi Ishikawa

In the straightness profile measurement of a mechanical workpiece, hardware datums have been the traditional standard. However, when the straightness profile is measured by a scanning displacement sensor mounted on an X-stage serving as the hardware datum, the sensor output contains both the straightness profile and the sensor's parasitic motion, i.e., its straightness error motion. Error separation techniques have therefore been developed to separate the straightness profile from the parasitic motions. For example, the two-point method uses two displacement sensors to separate the sensor's straightness error motion from the straightness profile. The conventional two-point method, however, cannot measure a large-scale workpiece because the large sampling number amplifies the random error. This article examines the influence of random error on the generalized two-point method. Theoretical and numerical analyses show that the random error propagation of the generalized method decreases as the sampling number increases. Furthermore, experimental results obtained by the generalized two-point method with a large sampling number are analyzed using the wavelet transform, and the influence of the method's error is discussed in the space-spatial-frequency domain.
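A minimal numerical sketch of the conventional two-point method, to make the error-amplification mechanism concrete: two sensors a pitch d apart see the same stage error motion e(x), so their difference cancels e(x) and leaves the first difference of the profile, which is recovered by cumulative summation; independent sensor noise accumulates through that summation like a random walk. All signals and noise levels here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 1.0                               # sample count, sensor pitch (= step)
x = np.arange(n) * d
profile = 1e-3 * np.sin(2 * np.pi * x / 50)   # true straightness profile
e = np.cumsum(rng.normal(0, 1e-4, n))         # stage straightness error motion
noise = rng.normal(0, 1e-6, (2, n))           # independent sensor noise

m1 = profile + e + noise[0]                   # sensor 1 output at x
m2 = np.roll(profile, -1) + e + noise[1]      # sensor 2 output at x + d

diff = (m2 - m1)[:-1]                         # e(x) cancels: first difference of profile
recon = np.concatenate(([0.0], np.cumsum(diff)))   # integrate to recover the profile
rms_err = np.sqrt(np.mean((recon - (profile - profile[0]))**2))
print(f"RMS reconstruction error: {rms_err:.2e}")  # grows with n (random walk)
```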


Author(s):  
Manoranjan Paul ◽  
Manzur Murshed ◽  
Laurence S. Dooley

This chapter presents a contemporary review of the various strategies available to facilitate Very Low Bit-Rate (VLBR) coding for video communications over mobile and fixed transmission channels as well as the Internet. VLBR media is typically classified as having a bit rate between 8 and 64 Kbps. The techniques analyzed include Vector Quantization, various parametric model-based representations, the Discrete Wavelet and Cosine Transforms, and fixed and arbitrary-shaped pattern-based coding. In addition to discussing the underlying theoretical principles and relevant features of each approach, the chapter examines their benefits and disadvantages, together with some of the major challenges that remain to be solved. It concludes with some judgments on the likely focus of future research in the VLBR coding field.


Author(s):  
Wei Li ◽  
Fan Zhao ◽  
Peng Ren ◽  
Zheng Xiang

The block-transform video coder of H.265/HEVC was formulated for more flexible content representation to satisfy various implementation demands. Three coefficient-scanning methods, diagonal, horizontal, and vertical scans, are employed to map a 2-D array to a 1-D vector in order to further reduce redundancy in entropy coding. However, a fixed scanning pattern does not fully exploit the correlation among quantized coefficients, so coding redundancy remains. In this paper, a new adaptive coefficient scanning (ACS) method is proposed for effective H.265/HEVC entropy coding. The characteristics of the syntax elements of the quantized coefficients are first studied, and the related context probability of each symbol is estimated by combining local properties. Guided by the principles of entropy coding, the scanning approach is established for higher coding performance. Experimental results demonstrate a bit-rate reduction of about 3.6%-4.2% with almost no increase in coding complexity.
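For context, a sketch of the three fixed scan orders that ACS adapts between: each maps a 2-D block of quantized coefficients to a 1-D vector. The exact diagonal traversal direction here (up-right within the block) is an assumption for illustration; the paper's adaptive selection logic is not reproduced.

```python
import numpy as np

def scan_order(n, mode):
    idx = [(r, c) for r in range(n) for c in range(n)]
    if mode == "horizontal":                  # row by row
        return idx
    if mode == "vertical":                    # column by column
        return sorted(idx, key=lambda rc: (rc[1], rc[0]))
    if mode == "diagonal":                    # up-right diagonals
        return sorted(idx, key=lambda rc: (rc[0] + rc[1], -rc[0]))
    raise ValueError(mode)

def scan(block, mode):
    """Map a square 2-D coefficient block to a 1-D vector in the given order."""
    return np.array([block[r, c] for r, c in scan_order(block.shape[0], mode)])

block = np.arange(16).reshape(4, 4)           # toy 4x4 coefficient block
for mode in ("diagonal", "horizontal", "vertical"):
    print(mode, scan(block, mode))
```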

