The Comparison of Rice Codes and Even-Rodeh Codes Algorithms for Text Compression

Author(s): Mohammad Andri Budiman, Dian Rachmawati, Sari Wardhatul Jannah
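
The listing carries no abstract for this article, so as context here is a minimal sketch of standard Rice coding (Golomb coding with a power-of-two divisor M = 2**k) in Python. It illustrates the general technique only; the bit-string representation and the parameter k are illustrative assumptions, not details taken from the paper, and the Even-Rodeh code is not shown.

def rice_encode(n: int, k: int) -> str:
    """Rice code of a non-negative integer n with parameter k >= 1 (divisor M = 2**k):
    the quotient in unary ('1' * q followed by a terminating '0'),
    then the remainder in exactly k binary digits."""
    q, r = divmod(n, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits: str, k: int) -> int:
    """Decode a single Rice codeword produced by rice_encode."""
    q = bits.index("0")                  # number of leading '1's is the quotient
    r = int(bits[q + 1 : q + 1 + k], 2)  # the next k bits are the remainder
    return (q << k) + r

# Example: with k = 2, the symbol 9 encodes as '11001' (quotient 2, remainder 1).
assert rice_encode(9, 2) == "11001" and rice_decode("11001", 2) == 9
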
1993 · Author(s): Cleopas Angaye, Paul Fisher

1993 · Vol. 29 (24), pp. 2155 · Author(s): H.U. Khan, J. Ahmad, A. Mahmood, H.A. Fatmi

1999 · Vol. 12 (4-5), pp. 755-765 · Author(s): P.M. Long, A.I. Natsev, J.S. Vitter

1995 · Vol. 1 (2), pp. 163-190 · Author(s): Kenneth W. Church, William A. Gale

Abstract: Shannon (1948) showed that a wide range of practical problems can be reduced to the problem of estimating probability distributions of words and n-grams in text. It has become standard practice in text compression, speech recognition, information retrieval, and many other applications of Shannon's theory to introduce a “bag-of-words” assumption. But obviously, word rates vary from genre to genre, author to author, topic to topic, document to document, section to section, and paragraph to paragraph. The proposed Poisson mixture captures much of this heterogeneous structure by allowing the Poisson parameter θ to vary over documents subject to a density function φ, which is intended to capture dependencies on hidden variables such as genre, author, topic, etc. (The Negative Binomial is a well-known special case where φ is a Γ distribution.) Poisson mixtures fit the data better than standard Poissons, producing more accurate estimates of the variance over documents (σ²), entropy (H), inverse document frequency (IDF), and adaptation (Pr(x ≥ 2 | x ≥ 1)).
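
Written out, the mixture described in this abstract has the standard Gamma-Poisson form; the shape/scale parameterisation below is one common convention and is assumed here, not necessarily the one used by Church and Gale:

\[
\Pr(X = k) \;=\; \int_0^\infty \frac{\theta^k e^{-\theta}}{k!}\,\varphi(\theta)\,d\theta ,
\qquad k = 0, 1, 2, \dots
\]

Taking \(\varphi\) to be a Gamma density with shape \(r\) and scale \(s\) gives the Negative Binomial special case mentioned above:

\[
\Pr(X = k) \;=\; \binom{k + r - 1}{k}\left(\frac{s}{1+s}\right)^{k}\left(\frac{1}{1+s}\right)^{r},
\]

whose variance \(rs(1+s)\) exceeds the Poisson variance \(rs\), which is how the mixture accounts for the extra spread of word rates across documents.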


1993 · Vol. 24 (1), pp. 68-74 · Author(s): Andrew Davison
