Markov Source
Recently Published Documents


TOTAL DOCUMENTS: 60 (five years: 6)
H-INDEX: 10 (five years: 1)

2019 · Vol. 65 (10) · pp. 6355–6384
Author(s): Peida Tian, Victoria Kostina

2019 · Vol. 29 (08) · pp. 1950003
Author(s): Agnieszka Pregowska, Ehud Kaplan, Janusz Szczepanski

The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in their firing rates (rate code) or through the precise timing of individual spikes (temporal code)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to link information to uncertainty: the higher the uncertainty of an event, the more information that event conveys. Correlations can reduce uncertainty and hence the amount of information, but by how much? In this paper we address this question by directly comparing the information per symbol conveyed by words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated symbols, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter $s$, the sum of the transition probabilities from the no-spike state to the spike state and vice versa; here we find that the same parameter $s$ again plays the crucial role. We calculated maximal and minimal bounds on the quotient of the ITRs of these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source relative to the corresponding Bernoulli source for a given word length. Our results show that for correlated signals the loss of information is relatively small, so temporal codes, which are more energetically efficient, can effectively replace rate codes. These results were confirmed by experiments.


2019 · Vol. 65 (9) · pp. 5737–5749
Author(s): Sudheer Poojary, Sanidhay Bhambay, Parimal Parag

2016 · Vol. 18, no. 3 (Analysis of Algorithms)
Author(s): Sara Kropf

The partial sum of the states of a Markov chain, or more generally of a Markov source, is asymptotically normally distributed under suitable conditions. One of these conditions is that the variance is unbounded. A simple combinatorial characterization of the Markov sources satisfying this condition is given in terms of the cycles of the underlying graph of the Markov chain. Markov sources with higher-dimensional alphabets are also considered. Furthermore, the case of an unbounded covariance between two coordinates of the Markov source is characterized combinatorially. If the covariance is bounded, then the two coordinates are asymptotically independent. The results are illustrated by several examples, such as the number of specific blocks in $0$-$1$ sequences and the Hamming weight of the width-$w$ non-adjacent form.

