Ecological theory predicts ecosystem stressor interactions in freshwater ecosystems, but highlights the strengths and weaknesses of the additive null model

2020
Author(s):
Benjamin J. Burgess
Drew Purves
Georgina Mace
David J. Murrell

Abstract. Understanding and predicting how multiple co-occurring environmental stressors combine to affect biodiversity and ecosystem services is an ongoing grand challenge for ecology. So far, progress has been made by accumulating large numbers of smaller-scale individual studies that are then investigated by meta-analyses to look for general patterns. In particular, there has been interest in checking for so-called ecological surprises, where stressors interact in a synergistic manner. Recent reviews suggest that such synergisms do not dominate, but few other generalities have emerged. This lack of general prediction and understanding may be due in part to a dearth of ecological theory that can generate clear hypotheses and predictions to be tested against empirical data. Here we close this gap by analysing food web models based upon classical ecological theory and comparing their predictions to a large (546 interactions) dataset for the effects of pairs of stressors on freshwater communities, using trophic- and population-level metrics of abundance, density, and biomass as responses. We find excellent overall agreement between the stochastic version of our models and the experimental data: both conclude that additive stressor interactions are the most frequent, but that meta-analyses report antagonistic summary interaction classes. Additionally, we show that the statistical tests used to classify the interactions are very sensitive to sampling variation. It is therefore likely that current weak sampling and low sample sizes are masking many non-additive stressor interactions, which our theory predicts to dominate when sampling variation is removed. This leads us to suspect that ecological surprises may be more common than currently reported. Our results highlight the value of developing theory in tandem with empirical tests, and the need to examine the robustness of statistical machinery, especially the widely used null models, before strong conclusions can be drawn about how environmental drivers combine.
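
The sensitivity to sampling variation described above is easy to demonstrate in miniature. The sketch below is a hypothetical simulation, not the authors' food-web models: the treatment means, noise levels, and replicate counts are invented, a truly synergistic stressor pair is assumed, and the significance test simplifies matters by treating the additive prediction as fixed. It shows how noisy, low-replication experiments tend to classify a true synergism as additive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# True (noise-free) treatment means for a hypothetical abundance response.
# Each stressor alone lowers abundance to 70; the additive null model then
# predicts 70 + 70 - 100 = 40 for the combination, so the true combined
# mean of 20 is synergistic (worse than additive).
TRUE_MEANS = {"control": 100.0, "a": 70.0, "b": 70.0, "ab": 20.0}

def classify(control, a, b, ab, alpha=0.05):
    """Classify one simulated experiment against the additive null model."""
    predicted = a.mean() + b.mean() - control.mean()   # additive expectation
    deviation = ab - predicted                         # per-replicate deviation
    t, p = stats.ttest_1samp(deviation, 0.0)           # simplified: treats the
    if p >= alpha:                                     # prediction as fixed
        return "additive"
    return "synergistic" if t < 0 else "antagonistic"  # below prediction = worse

def experiment(n_rep, noise_sd):
    draws = {k: rng.normal(m, noise_sd, n_rep) for k, m in TRUE_MEANS.items()}
    return classify(draws["control"], draws["a"], draws["b"], draws["ab"])

for n_rep, noise_sd in [(3, 30.0), (3, 5.0), (30, 30.0)]:
    calls = [experiment(n_rep, noise_sd) for _ in range(1000)]
    print(f"n_rep={n_rep:2d}, sd={noise_sd:4.1f}: "
          f"classified additive in {calls.count('additive') / 10:.0f}% of runs")
```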

Author(s):  
Rob Alkemade
Jan Janse
Wilbert van Rooij
Yongyut Trisurat

Biodiversity is decreasing at high rates due to a number of human impacts. The GLOBIO3 model has been developed to assess human-induced changes in terrestrial biodiversity at national, regional, and global levels. Recently, GLOBIO-aquatic has been developed for inland aquatic ecosystems. These models are built on simple cause–effect relationships between environmental drivers and biodiversity, based on meta-analyses of literature data. The mean abundance of original species relative to their abundance in undisturbed ecosystems (MSA) is used as the indicator for biodiversity. Changes in drivers are derived from the IMAGE 2.4 model. The drivers considered are land-cover change, land-use intensity, fragmentation, climate change, atmospheric nitrogen deposition, nutrient excess, infrastructure development, and river flow deviation. GLOBIO addresses (1) the impacts of environmental drivers on MSA and their relative importance; (2) expected trends under various future scenarios; and (3) the likely effects of various policy-response options. The GLOBIO model can assess changes in biodiversity at different geographical levels, although application depends largely on the availability of future projections of the drivers. The analyses at the different geographical levels indicate that biodiversity loss, in terms of MSA, will continue and that current policies may only reduce the rate of loss.
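
As a rough sketch of the MSA bookkeeping, GLOBIO3-style models combine the per-driver MSA values of a grid cell multiplicatively; the helper below assumes that convention, and the driver values are purely illustrative, not GLOBIO3 output.

```python
def overall_msa(per_driver_msa):
    """Multiply per-driver MSA values (each in [0, 1]; 1 = undisturbed)."""
    msa = 1.0
    for driver, value in per_driver_msa.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"MSA for {driver} outside [0, 1]: {value}")
        msa *= value
    return msa

# Hypothetical grid cell; the driver names follow the list above,
# the values are invented.
grid_cell = {
    "land_use": 0.70,
    "nitrogen_deposition": 0.90,
    "infrastructure": 0.95,
    "fragmentation": 0.90,
    "climate_change": 0.85,
}
print(f"overall MSA = {overall_msa(grid_cell):.2f}")   # -> 0.46
```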


2017
Author(s):  
Andreas Schneck

Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests for publication bias exist, no in-depth evaluations are available that suggest which test to use for a specific research problem. Methods In the study at hand, four tests for publication bias, Egger's test (FAT), p-uniform, the test of excess significance (TES), and the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias, as well as its degree (0%, 50%, 100%), were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of primary studies available to the publication bias tests (K = 100, 1000) were varied. Results All tests evaluated were able to identify publication bias in both the file-drawer and the p-hacking conditions. The false positive rates were, with the exception of the 15%- and 20%-caliper tests, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The caliper test, however, was inferior to the other tests under effect homogeneity and had decent statistical power only in conditions with 1000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If no clear direction of publication bias is suspected, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity, which may be found if publication bias is examined in a discipline-wide setting where primary studies cover different research problems.
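
Among the four tests, the caliper test is the simplest to sketch: under no publication bias, test statistics should fall just above and just below the significance threshold about equally often, so an excess just above signals bias. The toy version below uses simulated z-statistics, a crude file-drawer filter, and a caliper width taken as 5% of the critical value (conventions for the width vary); it is an illustration, not the simulation design of the study above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate |z| statistics, then suppress most non-significant ones
# (a crude file-drawer mechanism; all numbers are hypothetical).
z = np.abs(rng.normal(0.4, 1.0, 5000))
keep = rng.random(z.size) < np.where(z > 1.96, 1.0, 0.3)
z = z[keep]

crit = 1.96
caliper = 0.05 * crit                        # the "5%-caliper" variant
above = int(np.sum((z > crit) & (z <= crit + caliper)))
below = int(np.sum((z > crit - caliper) & (z <= crit)))

# Under no bias, counts just above and below the threshold are ~binomial(0.5).
test = stats.binomtest(above, above + below, 0.5, alternative="greater")
print(f"above = {above}, below = {below}, one-sided p = {test.pvalue:.4f}")
```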


2018
Author(s):
Diana Domanska
Chakravarthi Kanduri
Boris Simovski
Geir Kjetil Sandve

Abstract. Background: The difficulties associated with sequencing and assembling some regions of the DNA sequence result in gaps in the reference genomes that are typically represented as stretches of Ns. Although the presence of assembly gaps causes a slight reduction in the mapping rate in many experimental settings, this does not invalidate the typical statistical testing comparing read count distributions across experimental conditions. However, we hypothesize that not handling assembly gaps in the null model may confound statistical testing of co-localization of genomic features. Results: First, we performed a series of explorative analyses to understand whether and how public genomic tracks intersect the assembly gaps track (hg19). The findings confirm that the genomic regions in public genomic tracks intersect very little with assembly gaps, and that the intersections observed occur only at the start and end regions of the gaps rather than across their whole length. Further, we simulated a set of query and reference genomic tracks in a way that nullified any dependence between them, to test our hypothesis that not avoiding assembly gaps in the null model would result in spurious inflation of statistical significance. We then contrasted the distributions of test statistics and p-values of Monte Carlo simulation-based permutation tests that either avoided or did not avoid assembly gaps in the null model when testing for significant co-localization between a pair of query and reference tracks. We observed that the statistical tests that did not account for assembly gaps in the null model produced a distribution of the test statistic shifted to the right and a distribution of p-values shifted to the left (leading to inflated significance). Conclusion: Our results show that not accounting for assembly gaps in statistical testing of co-localization analysis may lead to false positives and over-optimistic findings.
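
The contrast described above can be reproduced in a toy setting: one hypothetical chromosome, equal-length intervals, and a Monte Carlo null that relocates the query track either anywhere or only outside assembly gaps. Because the reference track (like real tracks) avoids gaps, the gap-ignoring null yields systematically lower overlap and therefore smaller p-values. All coordinates below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
CHROM_LEN = 1_000_000
GAPS = [(200_000, 300_000), (700_000, 760_000)]   # hypothetical N-stretches

def in_gap(start, length):
    return any(start < g_end and start + length > g_start
               for g_start, g_end in GAPS)

def random_track(n, length, avoid_gaps):
    """Place n interval start positions uniformly, optionally outside gaps."""
    starts = []
    while len(starts) < n:
        s = int(rng.integers(0, CHROM_LEN - length))
        if avoid_gaps and in_gap(s, length):
            continue
        starts.append(s)
    return np.array(starts)

def overlap_count(query, reference, length):
    """Number of query intervals overlapping at least one reference interval."""
    ref = np.sort(reference)
    hits = (np.searchsorted(ref, query + length) -
            np.searchsorted(ref, query - length))
    return int(np.sum(hits > 0))

L, N = 1_000, 200
reference = random_track(N, L, avoid_gaps=True)   # real tracks barely touch gaps
query = random_track(N, L, avoid_gaps=True)
observed = overlap_count(query, reference, L)

for avoid in (False, True):
    null = [overlap_count(random_track(N, L, avoid), reference, L)
            for _ in range(500)]
    p = (1 + sum(o >= observed for o in null)) / (1 + len(null))
    print(f"avoid_gaps={avoid}: mean null overlap = {np.mean(null):.1f}, "
          f"p = {p:.3f}")
```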


2018
Vol 374 (1764)
pp. 20180011
Author(s):
Josefa Velasco
Cayetano Gutiérrez-Cánovas
María Botella-Cruz
David Sánchez-Fernández
Paula Arribas
...  

Under global change, the ion concentration of aquatic ecosystems is changing worldwide. Many freshwater ecosystems are being salinized by anthropogenic salt inputs, whereas many naturally saline ones are being diluted by agricultural drainages. This occurs concomitantly with changes in other stressors, which can result in additive, antagonistic or synergistic effects on organisms. We reviewed experimental studies that manipulated salinity and other abiotic stressors in inland and transitional aquatic habitats to (i) synthesize their main effects on organisms' performance, (ii) quantify the frequency of joint effect types across studies and (iii) determine the overall individual and joint effects and their variation among salinity–stressor pairs and organism groups using meta-analyses. Additive effects were slightly more frequent (54%) than non-additive ones (46%) across all the studies (n = 105 responses). However, antagonistic effects were dominant for the stressor pair salinity and toxicants (44%, n = 43), for transitional habitats (48%, n = 31) and for vertebrates (71%, n = 21). Meta-analyses showed detrimental additive joint effects of salinity and other stressors on organism performance and a greater individual impact of salinity than of the other stressors. These results were consistent across stressor pairs and organism types. These findings suggest that strategies to mitigate multiple stressor impacts on aquatic ecosystems should prioritize restoring natural salinity concentrations. This article is part of the theme issue ‘Salt in freshwaters: causes, ecological consequences and future prospects’.
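
The joint-effect classification used in this literature typically rests on a factorial interaction effect size with a confidence interval: additive when the interval covers zero, synergistic or antagonistic otherwise. Below is a hedged sketch loosely following factorial Hedges' d with a simplified standard error (the exact formulas in the reviewed studies may differ), applied to invented four-group summaries.

```python
import numpy as np

def interaction_effect(m_c, m_a, m_b, m_ab, sds, ns):
    """Factorial interaction effect size: (AB - A - B + C) / pooled SD."""
    sds, ns = np.asarray(sds, float), np.asarray(ns, float)
    s_pool = np.sqrt(np.sum((ns - 1) * sds**2) / (np.sum(ns) - 4))
    d = (m_ab - m_a - m_b + m_c) / s_pool
    se = np.sqrt(np.sum(1.0 / ns))     # simplified: omits the d^2 variance term
    return d, se

# Invented summaries: control, stressor A, stressor B, combined treatment.
d, se = interaction_effect(m_c=100, m_a=70, m_b=70, m_ab=20,
                           sds=[15, 15, 15, 15], ns=[10, 10, 10, 10])
lo, hi = d - 1.96 * se, d + 1.96 * se
label = ("additive" if lo <= 0 <= hi
         else "synergistic" if d < 0 else "antagonistic")
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] -> {label}")
```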


2020
Vol 77 (9)
pp. 1574-1591
Author(s):
Kyleisha J. Foote
Pascale M. Biron
James W.A. Grant

Owing to declines in salmonid populations, in-stream restoration structures have been used for over 80 years to increase fish abundance. However, the relative effectiveness of these structures remains unclear for some species and regions, partly because of contrasting conclusions from two previous meta-analyses. To update and reconcile these previous analyses, we conducted a meta-analysis using data available from 1969 to 2019 to estimate the effect of in-stream structures on salmonid abundance (number and density) and biomass. Data from 100 stream restoration projects showed a significant increase in salmonid abundance (effect size 0.636) and biomass (0.621), consistent with previous reviews and studies, with a stronger effect in adults than in juvenile fish. Despite a shift towards using more natural structures (wood and boulders) since the 1990s, structures have not become more effective. However, most projects monitor for less than 5 years, which may be insufficient time in some systems for channel morphology to adjust and for population changes to become apparent. Process-based techniques, which give the river more space, allow more long-term, self-sustaining restoration.
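
For readers wondering where a summary number such as 0.636 comes from: it is a weighted pooled effect size across projects. A minimal DerSimonian–Laird random-effects sketch with fabricated per-project data follows; the paper's actual effect-size metric and weighting scheme may differ.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool effect sizes y with within-study variances v (random effects)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2

rng = np.random.default_rng(7)
y = rng.normal(0.6, 0.3, 20)                 # fabricated project effect sizes
v = rng.uniform(0.02, 0.1, 20)               # fabricated within-study variances
pooled, se, tau2 = dersimonian_laird(y, v)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (tau^2 = {tau2:.3f})")
```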


2019
Vol 25 (1)
pp. 27-32
Author(s):
Lifeng Lin
Linyu Shi
Haitao Chu
Mohammad Hassan Murad

Publication bias, more generally termed the small-study effect, is a major threat to the validity of meta-analyses. Most meta-analysts rely on the p values from statistical tests to make a binary decision about the presence or absence of small-study effects. Measures are available to quantify the magnitude of small-study effects, but the current literature lacks clear rules to help evidence users judge whether such effects are minimal or substantial. This article aims to provide rules of thumb for interpreting these measures. We use six measures to evaluate small-study effects in 29 932 meta-analyses from the Cochrane Database of Systematic Reviews: Egger’s regression intercept and the skewness under both the fixed-effect and random-effects settings, the proportion of suppressed studies, and the relative change in the estimated overall result due to small-study effects. The cut-offs for different extents of small-study effects are determined from the quantiles of the empirical distributions of these measures. We present these distributions and propose a rough guide for interpreting the measures’ magnitude. The proposed rules of thumb may help evidence users grade the certainty in evidence as impacted by small-study effects.
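
The first of the six measures, Egger’s regression intercept, is compact enough to sketch in the fixed-effect setting. The study data below are simulated with a deliberate small-study drift; the exact estimators used in the article may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k = 40
se = rng.uniform(0.05, 0.5, k)         # hypothetical study standard errors
y = rng.normal(0.3, se) + 0.8 * se     # effects drifting upward in small studies

# Egger regression: standardized effect z_i = a + b * precision_i; the
# intercept a is the asymmetry measure (0 = symmetric funnel).
reg = stats.linregress(1.0 / se, y / se)
t = reg.intercept / reg.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=k - 2)
print(f"Egger intercept = {reg.intercept:.2f} (p = {p:.3g})")
```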


Author(s):  
Joshua B. Burt
Markus Helmer
Maxwell Shinn
Alan Anticevic
John D. Murray

Abstract. Studies of large-scale brain organization have revealed interesting relationships between spatial gradients in brain maps across multiple modalities. Evaluating the significance of these findings requires establishing statistical expectations under a null hypothesis of interest. Through generative modeling of synthetic data that instantiate a specific null hypothesis, quantitative benchmarks can be derived for arbitrarily complex statistical measures. Here, we present a generative null model, provided as an open-access software platform, that generates surrogate maps with spatial autocorrelation (SA) matched to the SA of a target brain map. SA is a prominent and ubiquitous property of brain maps that violates the assumptions of independence in conventional statistical tests. Our method can simulate surrogate brain maps, constrained by empirical data, that preserve the SA of cortical, subcortical, parcellated, and dense brain maps. We characterize how SA impacts p-values in pairwise brain map comparisons. Furthermore, we demonstrate how SA-preserving surrogate maps can be used in gene ontology enrichment analyses to test hypotheses of interest related to brain map topography. Our findings demonstrate the utility of SA-preserving surrogate maps for hypothesis testing in complex statistical analyses, and underscore the need to disambiguate meaningful relationships from chance associations in studies of large-scale brain organization.
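
A toy illustration of why SA-matched surrogates matter: the 1D sketch below preserves a map's autocorrelation by randomizing Fourier phases while keeping the amplitude spectrum. This is a stand-in for intuition only, not the authors' method, which matches variograms on real cortical geometry; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512

# Build an SA-rich 1D "brain map" by smoothing white noise (toy target).
smooth = np.convolve(rng.normal(size=n + 50), np.ones(51) / 51, mode="valid")
target = (smooth - smooth.mean()) / smooth.std()

def sa_preserving_surrogate(x, rng):
    """Randomize Fourier phases while keeping the amplitude spectrum."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
    phases[0] = 0.0                             # keep the DC component real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(x))

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

surrogates = [sa_preserving_surrogate(target, rng) for _ in range(1000)]
print(f"target lag-1 SA:    {lag1_autocorr(target):.2f}")
print(f"surrogate lag-1 SA: "
      f"{np.mean([lag1_autocorr(s) for s in surrogates]):.2f}")

# SA-aware p-value for the correlation between the target and an independent
# map with the same SA: compare against correlations with the surrogates.
other = sa_preserving_surrogate(target, rng)
r_obs = np.corrcoef(target, other)[0, 1]
r_null = [np.corrcoef(s, other)[0, 1] for s in surrogates]
p = (1 + sum(abs(r) >= abs(r_obs) for r in r_null)) / (1 + len(r_null))
print(f"r = {r_obs:.3f}, SA-aware p = {p:.3f}")
```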


2019
Vol 227 (1)
pp. 83-89
Author(s):
Michael Kossmeier
Ulrich S. Tran
Martin Voracek

Abstract. The funnel plot is widely used in meta-analyses to assess potential publication bias. However, experimental evidence suggests that informal, merely visual inspection of funnel plots is frequently prone to incorrect conclusions, while formal statistical tests (Egger regression and others) focus entirely on funnel plot asymmetry. We suggest routinely using the visual inference framework with funnel plots, including for didactic purposes. In this framework, the type I error is controlled by design, while the explorative, holistic, and open nature of visual graph inspection is preserved. Specifically, the funnel plot of the actually observed data is presented simultaneously, in a lineup, with null funnel plots showing data simulated under the null hypothesis. Only when the real-data funnel plot is identifiable among all the funnel plots presented might funnel plot-based conclusions be warranted. Software to implement visual funnel plot inference is provided via a tailored R function.
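
The lineup protocol is straightforward to prototype. In the hedged Python sketch below (the article's tailored R function is the reference implementation), an observed funnel plot, given an artificial small-study bias, is hidden among 19 panels simulated under the null of a common effect and no bias.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(11)
k, true_effect = 30, 0.4

se_obs = rng.uniform(0.05, 0.6, k)
y_obs = rng.normal(true_effect, se_obs)
small = se_obs > 0.35
y_obs[small] += 0.7 * se_obs[small]   # inject small-study bias into "real" data

def null_data():
    """One meta-analysis simulated under the null: common effect, no bias."""
    se = rng.uniform(0.05, 0.6, k)
    return rng.normal(true_effect, se), se

fig, axes = plt.subplots(4, 5, figsize=(12, 9), sharex=True, sharey=True)
slot = int(rng.integers(axes.size))   # where the real data hide in the lineup
for i, ax in enumerate(axes.flat):
    y, se = (y_obs, se_obs) if i == slot else null_data()
    ax.scatter(y, se, s=8)
axes[0, 0].invert_yaxis()             # funnel convention: precise studies on top
fig.suptitle("Lineup: which funnel plot shows the real data?")
plt.show()
print(f"(real data were in panel {slot + 1})")
```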

