Spatial patterns of primary seed dispersal and adult tree distributions: Genipa americana dispersed by Cebus capucinus – CORRIGENDUM

2015 · Vol. 32 (1) · p. 88
Author(s):  
Kim Valenta ◽  
Mariah E. Hopkins ◽  
Melanie Meeking ◽  
Colin A. Chapman ◽  
Linda M. Fedigan

Within the second paragraph of page 494, incorrect language was used to characterize the summary characteristics used. Sentences 3–11 of this paragraph should have read: Second, we calculated three univariate summary characteristics: the nearest neighbour distribution function D(r), the pair-correlation function g(r) and the K-function K(r). The use of multiple summary characteristics holds increased power to characterize variation in spatial patterns (Wiegand et al. 2013). The univariate nearest neighbour distribution function D(r) can be interpreted as the probability that the typical adult tree has its nearest neighbouring adult tree within radius r (or alternatively, the probability that the typical defecation has its nearest neighbouring defecation within radius r). The univariate pair-correlation function g(r) is a non-cumulative normalized neighbourhood density function that gives the expected number of points within rings of radius r and width w centred on a typical point, divided by the mean density of points λ in the study region (Wiegand et al. 2009). We applied g(r) to trees and defecation point patterns separately, using a ring width of 10 m. The K-function K(r) provides a cumulative counterpart to the non-cumulative pair-correlation function g(r) by analysing dispersion and aggregation up to distance r rather than at distance r (Wiegand & Moloney 2004). The K-function can be defined as the number of expected points (i.e. either trees or defecations) within circles of radius r extending from a typical point, divided by the mean density of points λ within the study region. Here, we apply the square root transformation L(r) to the K-function to remove scale dependence and stabilize the variance: $L(r) = \sqrt{\frac{K(r)}{\pi}} - r$ (Besag 1977, Wiegand & Moloney 2014).
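The estimators described above can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch, not the authors' analysis code: it implements naive estimators of the K-function K(r), its variance-stabilized transform L(r), and the pair-correlation function g(r) for a two-dimensional point pattern, without the edge corrections a published analysis would normally apply. The function names and the simulated "trees" pattern are assumptions introduced here for illustration.

```python
# Minimal sketch (not the authors' code): naive K(r), L(r) and g(r) estimators
# for a 2-D point pattern, ignoring edge corrections.
import numpy as np

def ripley_k(points, r_values, area):
    """Naive Ripley's K: mean number of neighbours within r per point,
    divided by the overall density lambda = n / area."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    lam = n / area
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    return np.array([(d < r).sum() / n / lam for r in r_values])

def l_function(points, r_values, area):
    """Square-root transform L(r) = sqrt(K(r)/pi) - r (Besag 1977)."""
    return np.sqrt(ripley_k(points, r_values, area) / np.pi) - r_values

def pair_correlation(points, r_values, area, ring_width=10.0):
    """Non-cumulative g(r): neighbour density in rings of width w centred on
    a typical point, divided by the overall density lambda."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    lam = n / area
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    g = []
    for r in r_values:
        lo, hi = max(r - ring_width / 2, 0.0), r + ring_width / 2
        ring_area = np.pi * (hi ** 2 - lo ** 2)
        mean_neighbours = ((d >= lo) & (d < hi)).sum() / n
        g.append(mean_neighbours / (lam * ring_area))
    return np.array(g)

# Example: 200 uniformly random "trees" in a 500 m x 500 m plot.
# Under complete spatial randomness, L(r) fluctuates around 0 and g(r) around 1.
rng = np.random.default_rng(0)
trees = rng.uniform(0, 500, size=(200, 2))
r = np.arange(5, 105, 5, dtype=float)
print(l_function(trees, r, area=500 * 500))
print(pair_correlation(trees, r, area=500 * 500))
```

Departures of L(r) above zero (or g(r) above one) at a given r would indicate aggregation at that scale; values below would indicate regularity, which is how the cumulative and non-cumulative statistics are read in practice.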

Author(s):  
John Ross ◽  
Igor Schreiber ◽  
Marcel O. Vlad

In this chapter we develop further the theory of the correlation method introduced in chapter 7. Consider the expression of the pair correlation function in which an ensemble average replaces the average over a time series of experiments on a single system. The pair correlation function defined in eq. (7.1) is the second moment of the pair distribution function and is obtained by integration. We choose a new measure of the correlation distance, one based on an information-theoretical formulation. A natural measure of the correlation distance between two variables is the number of states jointly available to them (the size of the support set) compared to the number of states available to them individually. We therefore require that the measure of the statistical closeness between variables X and Y be the fraction of the number of states jointly available to them versus the total possible number of states available to X and Y individually. Further, we demand that the measure of the support sets weigh the states according to their probabilities. Thus, two variables are close and the support set is small if knowledge of one predicts the most likely state of the other, even if a substantial number of other states exists simultaneously. The information entropy provides a measure of distance that meets these requirements. The effective size of the support set of a continuous variable is $S(X) = e^{h(X)}$ (9.2), in which the entropy h(X) is defined by $h(X) = -\int_{S(X)} p(x)\,\ln p(x)\, dx$, where S(X) is the support set of X and p(x) is the probability density of X. Similarly, we denote the entropy of a pair of continuous variables X, Y as h(X,Y), which is related to the pair distribution function p(x,y) by an equation analogous to that for h(X).
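To make the notion of an effective support set concrete, here is a minimal Python sketch (an illustration, not code from the chapter) that evaluates $S(X) = e^{h(X)}$ for unit-variance Gaussian variables, where the differential entropies h(X), h(Y) and h(X,Y) are known in closed form. The function names and the choice of a bivariate Gaussian example are assumptions made here for illustration.

```python
# Minimal sketch (illustration only): effective support set sizes S = exp(h)
# for unit-variance Gaussians, and the joint-to-individual support ratio
# S(X,Y) / (S(X) * S(Y)), which shrinks as the correlation grows.
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy h = 0.5 * ln((2*pi*e)^k * det(cov)), in nats."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def support_ratio(rho):
    """S(X,Y) / (S(X) * S(Y)) for unit-variance Gaussians with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    h_x = gaussian_entropy([[1.0]])
    h_y = gaussian_entropy([[1.0]])
    h_xy = gaussian_entropy(cov)
    return np.exp(h_xy) / (np.exp(h_x) * np.exp(h_y))  # = sqrt(1 - rho**2)

for rho in (0.0, 0.5, 0.9, 0.99):
    print(f"rho = {rho:4.2f}  ->  joint/individual support ratio = {support_ratio(rho):.3f}")
```

In this Gaussian case the ratio equals $\sqrt{1-\rho^2}$: it is 1 for independent variables and approaches 0 as knowledge of one variable pins down the other, which is exactly the closeness criterion described above.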

