null expectation
Recently Published Documents

Total documents: 22 (five years: 13)
H-index: 5 (five years: 2)

Author(s):  
Tad Dallas ◽  
Andrew Kramer

Species with broader niches may have the opportunity to occupy larger geographic areas, assuming no limitations on dispersal and a relatively homogeneous environmental space. While there is general support for positive geographic range size – climatic niche area relationships, a great deal of variation exists across taxonomic and spatial gradients. Here, we use data on a large set of mammal (n = 1225), bird (n = 1829), and tree (n = 341) species distributed across the Americas to examine 1) the relationship between geographic range size and climatic niche area, 2) the influence of species traits on species' departures from the best-fit geographic range size – climatic niche area relationship, and 3) how detection of these relationships is sensitive to how species range size and climatic niche area are estimated. We find positive geographic range size – climatic niche area relationships for all taxa. Residual variation in this relationship contained a strong latitudinal signal. Subsampling the occurrence data to create a null expectation, we found that residual variation did not strongly deviate from the null expectation. Together, we provide support for the generality of geographic range size – climatic niche area relationships, which may be constrained by latitude but are agnostic to species identity, suggesting that species traits are far less responsible than geographic barriers and the distribution of land area and available environmental space.
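A minimal sketch of the core analysis described above, assuming log-log linearity and using simulated stand-in data (none of the variable names or values come from the study): fit range size against climatic niche area and inspect the residuals that the authors relate to latitude.

import numpy as np

rng = np.random.default_rng(42)

def fit_loglog(range_size, niche_area):
    # Ordinary least squares of log10(range size) on log10(niche area);
    # returns slope, intercept and per-species residuals.
    x, y = np.log10(niche_area), np.log10(range_size)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, y - (slope * x + intercept)

# Toy stand-ins for per-species estimates (not the mammal/bird/tree data).
niche_area = rng.lognormal(mean=2.0, sigma=0.5, size=300)
range_size = niche_area ** 0.8 * rng.lognormal(0.0, 0.3, size=300)
latitude = rng.uniform(-55, 70, size=300)

slope, intercept, resid = fit_loglog(range_size, niche_area)
# A latitudinal signal in the residuals would show up as a non-zero correlation.
print(round(slope, 2), round(np.corrcoef(latitude, resid)[0, 1], 2))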


2021 ◽  
Author(s):  
Benjamin J Burgess ◽  
Michelle C Jackson ◽  
David J Murrell

1. Most ecosystems are subject to co-occurring, anthropogenically driven changes, and understanding how these multiple stressors interact is a pressing concern. Stressor interactions are typically studied using null models, with the additive and multiplicative null expectations being those most widely applied. Such approaches classify interactions as synergistic, antagonistic, reversal, or indistinguishable from the null expectation. Despite their widespread use, there has been no thorough analysis of these null models, nor a systematic test of the robustness of their results to sample size or sampling error in the estimates of the responses to stressors. 2. We use data simulated from food web models where the true stressor interactions are known, together with analytical results based on the null model equations, to uncover how (i) sample size, (ii) variation in biological responses to the stressors, and (iii) statistical significance affect the ability to detect non-null interactions. 3. Our analyses lead to three main results. Firstly, it is clear that the additive and multiplicative null models are not directly comparable, and over one third of all simulated interactions had classifications that were model dependent. Secondly, both null models have weak power to correctly classify interactions at commonly implemented sample sizes (i.e., ≤6 replicates), unless data uncertainty is unrealistically low. This means all but the most extreme interactions are indistinguishable from the null model expectation. Thirdly, we show that increasing sample size increases the power to detect the true interactions, but only very slowly; the biggest gains come from increasing replicates from 3 up to 25, and we provide an R function for users to determine the sample sizes required to detect a critical effect size of biological interest under the additive model. 4. Our results will aid researchers in the design of their experiments and the subsequent interpretation of results. We find no clear statistical advantage of using one null model over the other and argue that null model choice should be based on biological relevance rather than statistical properties. However, there is a pressing need to increase experimental sample sizes, otherwise many biologically important synergistic and antagonistic stressor interactions will continue to be missed.
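For readers unfamiliar with the two null models, the sketch below (a simplification with invented values, not the authors' R function) shows how an observed two-stressor response is compared against the additive and multiplicative expectations; it assumes both stressors reduce the response and ignores sampling error, which is exactly the uncertainty the paper shows is decisive.

def additive_null(control, a, b):
    # Expected combined response if the two individual effects simply sum.
    return control + (a - control) + (b - control)

def multiplicative_null(control, a, b):
    # Expected combined response if proportional effects multiply.
    return control * (a / control) * (b / control)

def classify(control, observed, null):
    # Coarse classification assuming both stressors depress the response;
    # a real analysis compares confidence intervals rather than point values.
    if observed > control:
        return "reversal"
    if observed < null:
        return "synergistic"
    if observed > null:
        return "antagonistic"
    return "indistinguishable from null"

control, a, b, combined = 100.0, 80.0, 70.0, 53.0   # invented response values
for name, null in [("additive", additive_null(control, a, b)),
                   ("multiplicative", multiplicative_null(control, a, b))]:
    print(name, round(null, 1), classify(control, combined, null))

With these invented numbers the same observation is antagonistic under the additive expectation (null of 50) but synergistic under the multiplicative one (null of 56), illustrating the model dependence described in the first result.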


Author(s):  
Elysia Saputra ◽  
Amanda Kowalczyk ◽  
Luisa Cusick ◽  
Nathan Clark ◽  
Maria Chikina

Abstract Many evolutionary comparative methods seek to identify associations between phenotypic traits or between traits and genotypes, often with the goal of inferring potential functional relationships between them. Comparative genomics methods aimed at this goal measure the association between evolutionary changes at the genetic level with traits evolving convergently across phylogenetic lineages. However, these methods have complex statistical behaviors that are influenced by nontrivial and oftentimes unknown confounding factors. Consequently, using standard statistical analyses in interpreting the outputs of these methods leads to potentially inaccurate conclusions. Here, we introduce phylogenetic permulations, a novel statistical strategy that combines phylogenetic simulations and permutations to calculate accurate, unbiased P values from phylogenetic methods. Permulations construct the null expectation for P values from a given phylogenetic method by empirically generating null phenotypes. Subsequently, empirical P values that capture the true statistical confidence given the correlation structure in the data are directly calculated based on the empirical null expectation. We examine the performance of permulation methods by analyzing both binary and continuous phenotypes, including marine, subterranean, and long-lived large-bodied mammal phenotypes. Our results reveal that permulations improve the statistical power of phylogenetic analyses and correctly calibrate statements of confidence in rejecting complex null distributions while maintaining or improving the enrichment of known functions related to the phenotype. We also find that permulations refine pathway enrichment analyses by correcting for nonindependence in gene ranks. Our results demonstrate that permulations are a powerful tool for improving statistical confidence in the conclusions of phylogenetic analysis when the parametric null is unknown.
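The final step of the approach, computing an empirical P value from the null phenotypes, can be sketched as follows; the permulated statistics are mocked with random numbers here, since generating them requires the phylogenetic simulation machinery the paper describes.

import numpy as np

def empirical_p(observed_stat, null_stats):
    # Two-sided empirical P value against statistics computed on permulated
    # (null) phenotypes; the +1 keeps the estimate away from exactly zero.
    null_stats = np.asarray(null_stats)
    exceed = np.sum(np.abs(null_stats) >= abs(observed_stat))
    return (exceed + 1) / (null_stats.size + 1)

# Toy usage: one gene's correlation statistic against 1,000 permulated phenotypes.
rng = np.random.default_rng(0)
print(empirical_p(0.42, rng.normal(0.0, 0.15, size=1000)))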


Author(s):  
Boris Kryštufek ◽  
Georgy Shenbrot ◽  
Tina Klenovšek ◽  
Franc Janžekovič

Abstract We explore the pattern of spatial variation in mandibular morphology in relation to subspecific taxonomy in the dwarf fat-tailed jerboa, Pygeretmus pumilio. Unguided k-means clustering on mandible shape scores partitioned populations into two clusters, corresponding to western and eastern populations. These clusters nearly perfectly matched the two subspecies groups (the pumilio and potanini groups) recognized in an independent study based on the morphology of the glans penis. The mandible, although under environmental pressure, has retained a sufficient amount of taxonomic information to retrieve a grouping closely resembling the one derived from a sexually selected trait. We recommend morphometrics of the mandible as a routine step in addressing variation in mammals at the species and subspecies levels. We also stress the advantage of unsupervised k-means clustering for testing the null expectation in subspecies taxonomies. However, the power of this approach has its limits; in our analysis, the k-means clustering failed to retrieve subspecies within the potanini group.
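The unsupervised step can be reproduced in outline as below (a sketch on simulated shape scores, not the published geometric-morphometric data; scikit-learn's KMeans stands in for whatever implementation the authors used).

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Placeholder per-population mandible shape scores (e.g. leading PC axes).
western = rng.normal(-1.0, 0.4, size=(30, 3))
eastern = rng.normal(+1.0, 0.4, size=(30, 3))
shape_scores = np.vstack([western, eastern])

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(shape_scores)
print(np.bincount(labels))  # cluster sizes, to be compared against subspecies groups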


2020 ◽  
Author(s):  
Elysia Saputra ◽  
Amanda Kowalczyk ◽  
Luisa Cusick ◽  
Nathan Clark ◽  
Maria Chikina

Abstract The wealth of high-quality genomes for numerous species has motivated many investigations into the genetic underpinnings of phenotypes. Comparative genomics methods approach this task by identifying convergent shifts at the genetic level that are associated with traits evolving convergently across independent lineages. However, these methods have complex statistical behaviors that are influenced by non-trivial and oftentimes unknown confounding factors. Consequently, using standard statistical analyses in interpreting the outputs of these methods leads to potentially inaccurate conclusions. Here, we introduce phylogenetic permulations, a novel statistical strategy that combines phylogenetic simulations and permutations to calculate accurate, unbiased p-values from phylogenetic methods. Permulations construct the null expectation for p-values from a given phylogenetic method by empirically generating null phenotypes. Subsequently, empirical p-values that capture the true statistical confidence given the correlation structure in the data are directly calculated based on the empirical null expectation. We examine the performance of permulation methods by analyzing both binary and continuous phenotypes, including marine, subterranean, and long-lived large-bodied mammal phenotypes. Our results reveal that permulations improve the statistical power of phylogenetic analyses and correctly calibrate statements of confidence in rejecting complex null distributions while maintaining or improving the enrichment of known functions related to the phenotype. We also find that permulations refine pathway enrichment analyses by correcting for non-independence in gene ranks. Our results demonstrate that permulations are a powerful tool for improving statistical confidence in the conclusions of phylogenetic analysis when the parametric null is unknown.


Viruses ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 462 ◽  
Author(s):  
Spyros Lytras ◽  
Joseph Hughes

Distinct patterns of dinucleotide representation, such as CpG and UpA suppression, are characteristic of certain viral genomes. Recent research has uncovered vertebrate immune mechanisms that select against specific dinucleotides in targeted viruses. This evidence highlights the importance of systematically examining the dinucleotide composition of viral genomes. We have developed a novel metric, called synonymous dinucleotide usage (SDU), for quantifying dinucleotide representation in coding sequences. Our method compares the abundance of a given dinucleotide to the null hypothesis of equal synonymous codon usage in the sequence. We present a Python3 package, DinuQ, for calculating SDU and other relevant metrics. We have applied this method to two sets of invertebrate- and vertebrate-specific flaviviruses and rhabdoviruses. The SDU shows that the vertebrate viruses exhibit consistently greater under-representation of CpG dinucleotides in all three codon positions in both datasets. In comparison to existing metrics for dinucleotide quantification, the SDU allows for a statistical interpretation of its values by comparing it to a null expectation based on the codon table. Here we apply the method to viruses, but coding sequences of other living organisms can be analysed in the same way.
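The core of the SDU can be illustrated with a stripped-down version for the first codon position pair only (positions 1-2 within a codon); this is a simplification written for this summary, not the DinuQ implementation, and it assumes Biopython's standard codon table is available.

from collections import defaultdict
from Bio.Data import CodonTable  # standard genetic code

FWD = CodonTable.standard_dna_table.forward_table   # codon -> amino acid (stops excluded)
SYN = defaultdict(list)
for codon, aa in FWD.items():
    SYN[aa].append(codon)

def sdu_pos12(cds, dinuc="CG"):
    # Observed count of the dinucleotide at codon positions 1-2, divided by the
    # count expected if every synonymous codon for each residue were used equally.
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    codons = [c for c in codons if c in FWD]         # drop stop/ambiguous codons
    observed = sum(c[:2] == dinuc for c in codons)
    expected = sum(sum(s[:2] == dinuc for s in SYN[FWD[c]]) / len(SYN[FWD[c]])
                   for c in codons)
    return observed / expected if expected else float("nan")

print(sdu_pos12("ATGCGTCGCGAAAGCTTTCGA"))   # toy sequence; values above 1 indicate over-representation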


Author(s):  
Spyros Lytras ◽  
Joseph Hughes

Abstract Distinct patterns of dinucleotide representation, such as CpG and UpA suppression, are characteristic of certain viral genomes. Recent research has uncovered vertebrate immune mechanisms that select against specific dinucleotides in targeted viruses. This evidence highlights the importance of systematically examining the dinucleotide composition of viral genomes. We have developed a novel metric, called Synonymous Dinucleotide Usage (SDU), for quantifying dinucleotide representation in coding sequences. Our method compares the abundance of a given dinucleotide to the null hypothesis of equal synonymous codon usage in the sequence. We present a Python3 package, DinuQ, for calculating SDU and other relevant metrics. We have applied this method to two sets of invertebrate- and vertebrate-specific flaviviruses and rhabdoviruses. The SDU shows that the vertebrate viruses exhibit consistently greater under-representation of CpG dinucleotides in all three codon positions in both datasets. In comparison to existing metrics for dinucleotide quantification, the SDU allows for a statistical interpretation of its values by comparing it to a null expectation based on the codon table. Here we apply the method to viruses, but coding sequences of other living organisms can be analysed in the same way.


2019 ◽  
Vol 64 (2) ◽  
pp. 163-172
Author(s):  
Ekaphan Kraichak ◽  
Luis Allende ◽  
Walter Obermayer ◽  
Robert Lücking ◽  
H. Thorsten Lumbsch

Abstract The ‘competition-relatedness’ hypothesis postulates that co-occurring taxa should be more distantly related, because of lower competition. This hypothesis has been criticized for its dependence on untested assumptions and its exclusion of other assembly forces beyond competition and habitat filtering to explain the co-existence of closely related taxa. Here we analyzed the patterns of co-occurring individuals of lichenized fungi in the Graphis scripta complex, a monophyletic group of species occurring in temperate forests throughout the Northern Hemisphere. We generated sequences for three nuclear ribosomal and protein markers (nuLSU, RPB2, EF-1) and combined them with previously generated sequences to reconstruct an updated phylogeny for the complex. The resulting phylogeny was used to determine the patterns of co-occurrences at regional and at sample (tree) scales by calculating standard effect size of mean pairwise distance (SES.MPD) among co-occurring samples to determine whether they were more clustered than expected from chance. The resulting phylogeny revealed multiple distinct lineages, suggesting the presence of several phylogenetic species in this complex. At the regional and local (site) levels, SES.MPD exhibited significant clustering for five out of six regions. The sample (tree) scale SES.MPD values also suggested some clustering but the corresponding metrics did not deviate significantly from the null expectation. The differences in the SES.MPD values and their significance indicated that habitat filtering and/or local diversification may be operating at the regional level, while the local assemblies on each tree are interpreted as being the result of local competition or random colonization.
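The SES.MPD calculation itself is compact; a minimal version with a random-draw null is sketched below (the real analysis would typically use R's picante on the reconstructed phylogeny, and the distance matrix here is random placeholder data).

import numpy as np

def mpd(dist, idx):
    # Mean pairwise phylogenetic distance among the co-occurring samples in idx.
    sub = dist[np.ix_(idx, idx)]
    iu = np.triu_indices(len(idx), k=1)
    return sub[iu].mean()

def ses_mpd(dist, idx, n_null=999, seed=0):
    # Standardised effect size: (observed - mean of null) / sd of null,
    # with the null built by drawing random taxa from the whole pool.
    rng = np.random.default_rng(seed)
    obs = mpd(dist, idx)
    null = np.array([mpd(dist, rng.choice(dist.shape[0], size=len(idx), replace=False))
                     for _ in range(n_null)])
    return (obs - null.mean()) / null.std()   # negative values indicate clustering

# Toy usage with a random symmetric distance matrix for 20 taxa.
rng = np.random.default_rng(1)
d = rng.random((20, 20)); d = (d + d.T) / 2; np.fill_diagonal(d, 0.0)
print(ses_mpd(d, np.arange(5)))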


2019 ◽  
Author(s):  
William D. Pearse ◽  
T. Jonathan Davies

To date, our understanding of how species have shifted in response to recent climate warming has been based on a few studies with a limited number of species. Here we present a comprehensive, global overview of species’ distributional responses to changing climate across a broad variety of taxa (animals, plants, and fungi). We characterise species’ responses using a metric that describes the realised velocity of climate change: how closely species’ responses have tracked changing climate through time. In contrast to existing ‘climate velocity’ metrics that have focused on space, we focus on species and index their responses to a null expectation of change in order to examine drivers of inter-specific variation. Here we show that species are tracking climate on average, but not sufficiently to keep up with the pace of climate change. Further, species’ responses are highly idiosyncratic, and thus highlight that projections assuming uniform responses may be misleading. This is in stark contrast to species’ present-day and historical climate niches, which show strong evidence of the imprint of evolutionary history and functional traits. Our analyses are a first step in exploring the vast wealth of empirical data on species’ historic responses to recent climate change.


2019 ◽  
Vol 116 (34) ◽  
pp. 16892-16898 ◽  
Author(s):  
Daliang Ning ◽  
Ye Deng ◽  
James M. Tiedje ◽  
Jizhong Zhou

Understanding the community assembly mechanisms controlling biodiversity patterns is a central issue in ecology. Although it is generally accepted that both deterministic and stochastic processes play important roles in community assembly, quantifying their relative importance is challenging. Here we propose a general mathematical framework to quantify ecological stochasticity under different situations in which deterministic factors drive the communities to be more similar or more dissimilar than the null expectation. An index, the normalized stochasticity ratio (NST), was developed with 50% as the boundary point between more deterministic (<50%) and more stochastic (>50%) assembly. NST was tested with simulated communities by considering abiotic filtering, competition, environmental noise, and spatial scales. All tested approaches showed limited performance at large spatial scales or under very high environmental noise. However, in all of the other simulated scenarios, NST showed high accuracy (0.90 to 1.00) and precision (0.91 to 0.99), with averages of 0.37 higher accuracy (0.1 to 0.7) and 0.33 higher precision (0.0 to 1.8) than previous approaches. NST was also applied to estimate stochasticity in the succession of a groundwater microbial community in response to organic carbon (vegetable oil) injection. Our results showed that community assembly was shifted from more deterministic (NST = 21%) to more stochastic (NST = 70%) right after organic carbon input. As the vegetable oil was consumed, the community gradually returned to being more deterministic (NST = 27%). In addition, our results demonstrated that null model algorithms and community similarity metrics had strong effects on quantifying ecological stochasticity.
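The published NST involves a normalisation that bounds the index between 0% and 100%; the sketch below captures only the underlying idea, a stochasticity ratio comparing observed pairwise community similarities with their null expectations, and is written for illustration rather than as a reimplementation of the paper's index.

import numpy as np

def simple_stochasticity_ratio(obs_sim, null_sims):
    # obs_sim: observed pairwise community similarities in (0, 1), one per pair.
    # null_sims: null-model similarities, shape (n_null, n_pairs).
    e = null_sims.mean(axis=0)                    # null expectation per pair
    ratio = np.where(obs_sim >= e,
                     e / obs_sim,                 # determinism pulls pairs together
                     (1 - e) / (1 - obs_sim))     # determinism pushes pairs apart
    return float(ratio.mean())                    # closer to 1 = more stochastic

# Toy usage: 10 community pairs, 999 null-model draws.
rng = np.random.default_rng(0)
obs = rng.uniform(0.2, 0.8, size=10)
null = rng.uniform(0.2, 0.8, size=(999, 10))
print(simple_stochasticity_ratio(obs, null))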

