Word Frequency Distribution of Literature Information: Zipf’s Law

Informetrics ◽  
2017 ◽  
pp. 121-143 ◽  
Author(s):  
Junping Qiu ◽  
Rongying Zhao ◽  
Siluo Yang ◽  
Ke Dong


Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 179 ◽  
Author(s):  
Álvaro Corral ◽  
Montserrat García del Muro

The word-frequency distribution provides the fundamental building blocks that generate discourse in natural language. It is well known, from empirical evidence, that the word-frequency distribution of almost any text is described by Zipf’s law, at least approximately. Following Stephens and Bialek (2010), we interpret the frequency of any word as arising from the interaction potentials between its constituent letters. Indeed, Jaynes’ maximum-entropy principle, with the constraints given by every empirical two-letter marginal distribution, leads to a Boltzmann distribution for word probabilities, with an energy-like function given by the sum of the all-to-all pairwise (two-letter) potentials. The so-called improved iterative-scaling algorithm allows us to find the potentials from the empirical two-letter marginals. We considerably extend Stephens and Bialek’s results, applying this formalism to words of up to six letters in length from the English subset of the recently created Standardized Project Gutenberg Corpus. We find that the model is able to reproduce Zipf’s law, but with some limitations: the general Zipf’s power-law regime is obtained, but the probability of individual words shows considerable scattering. In this way, a pure statistical-physics framework is used to describe the probabilities of words. As a by-product, we find that both the empirical two-letter marginal distributions and the interaction-potential distributions follow well-defined statistical laws.
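The following is a minimal sketch, not the authors’ code, of the kind of model the abstract describes: word probabilities take a Boltzmann form P(w) ~ exp(-E(w)), with E(w) a sum of all-to-all pairwise (two-letter) potentials, and the potentials are fitted so that the model’s two-letter marginals match the empirical ones. For brevity it uses plain iterative proportional fitting rather than the paper’s improved iterative-scaling algorithm, and the toy alphabet, word length, and corpus are illustrative assumptions.

```python
# Sketch of a maximum-entropy word model with pairwise letter potentials.
# Assumptions: tiny alphabet, length-3 words, made-up counts; fitted with
# iterative proportional fitting (IPF), not the paper's improved iterative
# scaling.
import itertools
from collections import Counter

import numpy as np

ALPHABET = "abcd"   # toy alphabet; the paper uses the 26 English letters
L = 3               # word length; the paper treats lengths up to six
A = len(ALPHABET)
IDX = {c: i for i, c in enumerate(ALPHABET)}
PAIRS = list(itertools.combinations(range(L), 2))
WORDS = ["".join(w) for w in itertools.product(ALPHABET, repeat=L)]

def marginal(probs, pair):
    """Two-letter marginal p_ij(a, b) of a distribution over WORDS."""
    i, j = pair
    m = np.zeros((A, A))
    for w, p in zip(WORDS, probs):
        m[IDX[w[i]], IDX[w[j]]] += p
    return m

def boltzmann(V):
    """Boltzmann distribution P(w) ~ exp(-E(w)), E(w) = sum of pair potentials."""
    E = np.array([sum(V[(i, j)][IDX[w[i]], IDX[w[j]]] for (i, j) in PAIRS)
                  for w in WORDS])
    q = np.exp(-E)
    return q / q.sum()

def fit_potentials(corpus, sweeps=100):
    """Fit potentials so model two-letter marginals match the empirical ones."""
    total = sum(corpus.values())
    emp = np.array([corpus.get(w, 0) / total for w in WORDS])
    target = {p: np.maximum(marginal(emp, p), 1e-12) for p in PAIRS}
    V = {p: np.zeros((A, A)) for p in PAIRS}
    for _ in range(sweeps):
        for p in PAIRS:                       # one constraint per step
            cur = np.maximum(marginal(boltzmann(V), p), 1e-12)
            V[p] -= np.log(target[p] / cur)   # lower energy where model undershoots
    return V

# Toy word counts; real input would be Project Gutenberg word frequencies.
corpus = Counter({"abc": 50, "cab": 30, "bad": 12, "dab": 5, "add": 3})
probs = boltzmann(fit_potentials(corpus))
ranking = sorted(zip(WORDS, probs), key=lambda t: -t[1])
print(ranking[:5])   # model's most probable words, to compare with corpus ranks
```

Updating one pairwise constraint at a time makes the model’s marginal for that pair match the target exactly after each step, the standard iterative-proportional-fitting guarantee; sweeping over all pairs repeatedly drives the full set of constraints toward agreement.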


2016 ◽  
Vol 55 (1) ◽  
pp. 61-69
Author(s):  
Neringa Bružaitė ◽  
Tomas Rekašius

The paper examines Lithuanian texts of different authors and genres. The main points of interest are the number of words, the number of distinct words, and word frequencies. The structural-type distribution and Zipf’s law are applied to describe the frequency distribution of words in a text. The lexical diversity of any text can be characterized by the distinct words used in it, also called its vocabulary. It is shown that the information contained in a reduced vocabulary is sufficient to divide the texts analyzed in this article into groups by genre and author using a hierarchical clustering method. In this case, distances are measured using the Jaccard distance measure, and clusters are aggregated using the Ward method.
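As a rough illustration, not the authors’ code, the sketch below reduces a few toy texts to binary vocabulary vectors (which words occur in each text), computes pairwise Jaccard distances, and groups the texts by agglomerative clustering with Ward aggregation. The sample texts and the number of clusters are assumptions.

```python
# Sketch: cluster texts by reduced vocabulary with Jaccard distance + Ward linkage.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

texts = {   # toy stand-ins for texts by different authors and genres
    "author1_novel": "the sea was calm and the ship sailed on the calm sea",
    "author1_story": "the ship left the harbour and sailed into the sea",
    "author2_poem":  "moon and stars above the silent silver field",
    "author2_poem2": "silver moon above the field of silent stars",
}

# Reduced vocabulary: the set of distinct words across all texts;
# each text becomes a binary presence/absence vector over that vocabulary.
vocab = sorted({w for t in texts.values() for w in t.split()})
X = np.array([[w in t.split() for w in vocab] for t in texts.values()],
             dtype=bool)

dist = pdist(X, metric="jaccard")   # Jaccard distance between text pairs
# Ward linkage is strictly defined for Euclidean distances, but scipy accepts
# any condensed distance matrix; this mirrors the combination in the abstract.
Z = linkage(dist, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(texts, labels):
    print(lab, name)   # texts are expected to pair up by author
```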


2002 ◽  
Vol 05 (01) ◽  
pp. 1-6 ◽  
Author(s):  
Ramon Ferrer i Cancho ◽  
Ricard V. Solé

Random-text models have been proposed as an explanation for the power-law relationship between word frequency and rank, the so-called Zipf's law. They are generally regarded as null hypotheses rather than models in the strict sense. In this context, recent theories of language emergence and evolution assume this law as a priori information with no need of explanation. Here, random texts and real texts are compared through (a) the so-called lexical spectrum and (b) the distribution of words having the same length. It is shown that real texts fill the lexical spectrum much more efficiently, regardless of word length, suggesting that the meaningfulness of Zipf's law is high.
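A minimal sketch of this kind of comparison (an assumption on my part, not the authors’ code): generate a random text by typing letters and spaces at random, then compare its lexical spectrum, i.e. the fraction of distinct words occurring exactly f times, against that of a real text. The space probability and the placeholder corpus file are assumptions.

```python
# Sketch: lexical spectrum of a random "monkey-typing" text vs. a real text.
import random
from collections import Counter

def random_text(n_chars, p_space=0.18, seed=0):
    """Random-text model: letters uniform at random, space with prob. p_space."""
    rng = random.Random(seed)
    letters = "abcdefghijklmnopqrstuvwxyz"
    chars = [" " if rng.random() < p_space else rng.choice(letters)
             for _ in range(n_chars)]
    return "".join(chars)

def lexical_spectrum(text):
    """Map frequency f -> fraction of distinct words occurring exactly f times."""
    counts = Counter(text.split())
    spectrum = Counter(counts.values())
    n_types = len(counts)
    return {f: k / n_types for f, k in sorted(spectrum.items())}

# "real_text.txt" is a placeholder path for any real corpus file.
real = open("real_text.txt").read().lower()
fake = random_text(len(real))
for label, txt in [("real", real), ("random", fake)]:
    spec = lexical_spectrum(txt)
    print(label, dict(list(spec.items())[:5]))  # low-frequency end of spectrum
```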


Nature ◽  
1956 ◽  
Vol 178 (4545) ◽  
pp. 1308-1308 ◽  
Author(s):  
A. F. Parker-Rhodes ◽  
T. Joyce
