Urbanisation of floodplain ecosystems: Weight-of-evidence and network meta-analysis elucidate multiple stressor pathways

2019 ◽  
Vol 684 ◽  
pp. 741-752 ◽  
Author(s):  
Wendy A. Monk ◽  
Zacchaeus G. Compson ◽  
Catherine B. Choung ◽  
Kathryn L. Korbel ◽  
Natalie K. Rideout ◽  
...  
2002 ◽  
Vol 2 ◽  
pp. 169-189 ◽  
Author(s):  
Lawrence W. Barnthouse ◽  
Douglas G. Heimbuch ◽  
Vaughn C. Anthony ◽  
Ray W. Hilborn ◽  
Ransom A. Myers

We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of “adverse environmental impact” (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the “BIC” analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the “Trends” analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the “Stock Jeopardy” analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.
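
The spawner-recruit meta-analysis behind the Stock Jeopardy benchmark is not reproduced in the abstract. As a rough illustration of the underlying model class, the sketch below fits a Ricker spawner-recruit curve by log-linear least squares, with the productivity parameter alpha serving as one common index of compensatory capacity. The function name, the numpy-based fitting, and the data are hypothetical illustrations, not the assessment models used at Salem.

```python
# Hypothetical sketch: estimating compensatory capacity from spawner-recruit
# data by fitting a Ricker model, R = alpha * S * exp(-beta * S).
# Taking logs gives a linear form, log(R / S) = log(alpha) - beta * S,
# so ordinary least squares recovers both parameters. The productivity
# parameter alpha (recruits per spawner as S -> 0) indexes a stock's
# capacity to compensate for added mortality.
import numpy as np

def fit_ricker(spawners, recruits):
    """Return (alpha, beta) of a Ricker curve fit by log-linear OLS."""
    S = np.asarray(spawners, dtype=float)
    R = np.asarray(recruits, dtype=float)
    y = np.log(R / S)                            # response: log survival ratio
    X = np.column_stack([np.ones_like(S), -S])   # intercept and -S
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    log_alpha, beta = coef
    return np.exp(log_alpha), beta

# Illustrative (made-up) spawner and recruit indices for one stock.
alpha, beta = fit_ricker([10, 25, 40, 60, 85], [55, 110, 140, 150, 130])
print(f"alpha = {alpha:.2f} recruits per spawner at low abundance, beta = {beta:.4f}")
```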


Author(s):  
Richard Connon ◽  
Simone Hasenbein ◽  
Susanne Brander ◽  
Helen Poynton ◽  
Erika Holland ◽  
...  

Legacy and current-use contaminants enter and accumulate throughout the San Francisco Bay−Delta (Bay−Delta), and are present at concentrations with known effects on species important to this diverse watershed. Despite previous and ongoing emphasis on the need for it, there remains major uncertainty and a lack of focused research addressing effects across multiple biological scales. Meeting these needs is challenging, specifically because established regulatory programs often monitor on a chemical-by-chemical basis or ground decisions in lethality-based endpoints. To best address issues of contaminants in the Bay−Delta, monitoring efforts should consider effects of environmentally relevant mixtures and sub-lethal impacts that can affect ecosystem health. These efforts need to consider the complex environment in the Bay−Delta, including variable abiotic (e.g., temperature, salinity) and biotic (e.g., pathogens) factors. This calls for controlled and focused research, and the development of a multi-disciplinary contaminant monitoring and assessment program that provides information across biological scales. Information gained in this manner will contribute toward evaluating parameters that could alleviate ecologically detrimental outcomes. This review is the result of a Special Symposium convened at the University of California−Davis (UCD) on January 31, 2017, to address critical information needed on how contaminants affect the Bay−Delta. The UCD Symposium focused on new tools and approaches for assessing multiple stressor effects on freshwater and estuarine systems. Our approach is similar to the recently proposed framework laid out by the U.S. Environmental Protection Agency (USEPA) that uses weight of evidence to scale toxicological responses to chemical contaminants in a laboratory, and to guide the conservation of priority species and habitats. As such, we also aimed to recommend multiple endpoints that could be used to promote a multi-disciplinary understanding of contaminant risks in the Bay−Delta while supporting management needs.


2016 ◽  
Vol 45 (1) ◽  
pp. 223-237 ◽  
Author(s):  
Lila Ramaiah ◽  
Mary Jane Hinrichs ◽  
Elizabeth V. Skuba ◽  
William O. Iverson ◽  
Daniela Ennulat

The continuing education course on integrating clinical and anatomical pathology data was designed to communicate the importance of using a weight of evidence approach to interpret safety findings in toxicology studies. This approach is necessary, as neither clinical nor anatomic pathology data can be relied upon in isolation to fully understand the relationship between study findings and the test article. Basic principles for correlating anatomic pathology and clinical pathology findings and for integrating these with other study end points were reviewed. To highlight these relationships, a series of case examples, presented jointly by a clinical pathologist and an anatomic pathologist, was used to illustrate the collaborative effort required between clinical and anatomical pathologists. In addition, the diagnostic utility of traditional liver biomarkers was discussed using results from a meta-analysis of rat hepatobiliary marker and histopathology data. This discussion also included examples of how traditional and novel liver and renal biomarker data are applied in nonclinical toxicology studies, to illustrate the relationship between discrete changes in biochemistry and tissue morphology.


2019 ◽  
Vol 29 (3) ◽  
pp. e2042 ◽  
Author(s):  
Zohaib Akram ◽  
Khulud Abdulrahman Al‐Aali ◽  
Mohammed Alrabiah ◽  
Faisal Abdullah Alonaizan ◽  
Tariq Abduljabbar ◽  
...  

2016 ◽  
Vol 79 (12) ◽  
pp. 2196-2210 ◽  
Author(s):  
IAN YOUNG ◽  
BARBARA J. WILHELM ◽  
SARAH CAHILL ◽  
REI NAKAGAWA ◽  
PATRICIA DESMARCHELIER ◽  
...  

Pork is one of the major food sources of human salmonellosis worldwide, and beef products have been implicated in numerous foodborne outbreaks. Thus, effective interventions to reduce Salmonella contamination during beef and pork processing are of interest to both regulators and the meat industry. We conducted a rapid systematic review and meta-analysis of literature investigating the efficacy of slaughter and processing interventions to control Salmonella in beef and pork. Review steps included a comprehensive search strategy, relevance screening of abstracts, relevance confirmation of articles, data extraction, risk-of-bias assessment, meta-analysis (where appropriate), and a weight-of-evidence assessment. A total of 191 relevant experimental studies were identified. The results of two controlled trials indicated that hot water and steam treatments were effective for reducing the prevalence of Salmonella on beef carcasses (relative risk [RR] = 0.11; 95% confidence interval [CI]: 0.02, 0.58), whereas in four trials, prechill organic acid washes were effective for reducing Salmonella on pork carcasses (RR = 0.32; 95% CI: 0.13, 0.78), with high confidence in the estimates of effect. In four quasi-experimental studies, postexsanguination chemical washes were effective for reducing the prevalence of Salmonella on cattle hides, with low confidence in the specific estimate of effect; however, moderate confidence was found for the effect estimates of scalding (RR = 0.20; 95% CI: 0.14, 0.29) and singeing (RR = 0.34; 95% CI: 0.22, 0.52) of pork carcasses. The overall evidence supports enhanced reductions of Salmonella through a multihurdle approach. Various slaughter and processing interventions can contribute to reduction of Salmonella on beef and pork carcasses, depending on the context of application, and an appropriate combination should be selected, validated, and verified by establishment operators under their local conditions.
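
The review reports pooled relative risks with 95% confidence intervals for each intervention. As a hedged sketch of how such pooled estimates are commonly obtained, the snippet below combines per-trial relative risks by inverse-variance weighting on the log scale. The function, the simple fixed-effect weighting, and the input values are illustrative assumptions, not the review's actual data or estimator.

```python
# Hypothetical sketch of inverse-variance (fixed-effect) pooling of log
# relative risks from individual trials. Each trial contributes a point
# estimate and 95% CI; standard errors are recovered from the CI width
# on the log scale.
import numpy as np

def pooled_rr(rrs, ci_lows, ci_highs):
    """Fixed-effect pooled relative risk and 95% CI from per-trial RRs and CIs."""
    log_rr = np.log(rrs)
    # A 95% CI spans +/- 1.96 SE on the log scale.
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2                               # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp(pooled), np.exp(lo), np.exp(hi)

# Illustrative (made-up) per-trial estimates, not the values from the review.
rr, lo, hi = pooled_rr([0.15, 0.30], [0.05, 0.10], [0.45, 0.90])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```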


2020 ◽  
Author(s):  
Rosanna Pinto ◽  
Lucia Ardoino ◽  
Paola Giardullo ◽  
Paola Villani ◽  
Carmela Marino

Background: An Italian project aims to review the scientific literature on the possible carcinogenicity of radiofrequency (100 kHz – 300 GHz) electromagnetic field (RF-EMF) exposure. The ENEA team is to carry out a systematic review of the in vivo studies on this topic.
Objectives: Development of a protocol for a systematic review (meta-analysis included) to investigate the potential carcinogenic risk following RF-EMF in vivo exposure to doses above or within legal limits. The aims of this review are (1) to provide descriptive and, if possible, quantitative summaries of the results of the examined RF-EMF in vivo studies, together with an assessment of the consistency of observations and of the causes of heterogeneity; and (2) to assess the weight of evidence supporting or refuting the hypothesis of carcinogenic effects caused by RF-EMF exposure and to draw conclusions about the potential for carcinogenicity of RF-EMF exposure.
Methods: We will search for relevant studies in electronic academic databases and in the reference lists of selected papers and reviews on the topic, including the descriptive reviews on RF-EMF carcinogenic effects carried out by international panels of experts since 2011. PECO statements were defined: experimental studies on rodents of both sexes, all ages, species, and genetic backgrounds (Population) exposed to RF-EMF alone or in combination with other physical or chemical agents (Exposure); only studies reporting outcome data in exposed and sham control groups (Comparison); and all types of cancer with all tumor-related outcome measures (Outcome) will be included. Only peer-reviewed articles written in English will be considered, without limit on publication date. Eligibility criteria were defined for screening of papers; moreover, risk-of-bias assessment will be performed using a risk-of-bias tool specific to animal studies. Only studies with "definitely low and/or probably low risk of bias" will be included in the analysis. A meta-analysis will be performed, if feasible, for all outcome measures; for subgroup analysis, a minimum of three studies per subgroup will be required. If meta-analysis is not possible, data will be reported in a descriptive summary.
Systematic review registration: PROSPERO CRD42020191105.
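
The protocol leaves the choice of meta-analytic estimator open and fixes only the feasibility rule of at least three studies per subgroup. A minimal sketch of one common implementation, DerSimonian-Laird random-effects pooling combined with that feasibility check, is given below; the estimator, the function name, and the input values are illustrative assumptions rather than the protocol's prescribed method.

```python
# A minimal sketch of how the planned meta-analysis might be implemented,
# assuming log-scale effect sizes and their variances per study. The protocol
# does not prescribe a specific estimator; DerSimonian-Laird random-effects
# pooling is shown purely as an illustration, together with the protocol's
# rule that a subgroup is only pooled when it contains at least three studies.
import numpy as np

def dersimonian_laird(effects, variances, min_studies=3):
    """Random-effects pooled estimate, its SE, and tau^2; None if too few studies."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    k = len(y)
    if k < min_studies:
        return None                              # subgroup too small to pool
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / C)           # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Illustrative (made-up) log effect sizes and variances from three studies.
print(dersimonian_laird([0.10, 0.25, -0.05], [0.04, 0.09, 0.02]))
```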


2019 ◽  
Author(s):  
Tomas Havranek ◽  
Zuzana Irsova ◽  
Sebastian Gechert ◽  
Dominika Kolcunova

We show that the large elasticity of substitution between capital and labor estimated in the literature, 0.9 on average, can be explained by three factors: publication bias, use of aggregated data, and omission of the first-order condition for capital. The mean elasticity conditional on the absence of publication bias, disaggregated data, and inclusion of information from the first-order condition for capital is 0.3. To obtain this result, we collect 3,186 estimates of the elasticity reported in 121 studies, codify 71 variables that reflect the context in which researchers produce their estimates, and address model uncertainty by Bayesian and frequentist model averaging. We employ nonlinear techniques to correct for publication bias, which is responsible for at least half of the overall reduction in the mean elasticity from 0.9 to 0.3. Our findings also suggest that a failure to normalize the production function leads to a substantial upward bias in the estimated elasticity. The weight of evidence accumulated in the empirical literature emphatically rejects the Cobb-Douglas specification.
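
The study corrects for publication bias with nonlinear techniques that are not reproduced here. As a hedged baseline sketch of the general idea, the snippet below runs the standard linear FAT-PET regression of reported estimates on their standard errors, whose intercept is read as the mean effect conditional on no publication bias. The function name, the weighted-least-squares setup, and the numbers are illustrative assumptions, not the paper's method or data.

```python
# Hedged sketch of the standard linear FAT-PET correction for publication
# bias (the paper itself applies more elaborate nonlinear techniques).
# Reported estimates are regressed on their standard errors; the intercept
# approximates the mean effect absent publication bias, and the slope
# reflects the strength of selection for significant results.
import numpy as np

def fat_pet(estimates, std_errors):
    """Return (corrected_mean, bias_slope) from a weighted FAT-PET regression."""
    b = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    X = np.column_stack([np.ones_like(se), se])
    w = 1.0 / se**2                              # precision weights (inverse variance)
    XtW = X.T * w                                # X^T W with W = diag(w)
    coef = np.linalg.solve(XtW @ X, XtW @ b)     # weighted least squares
    intercept, slope = coef
    return intercept, slope

# Illustrative (made-up) elasticity estimates and their standard errors.
mean_corrected, selection = fat_pet([0.9, 0.7, 0.5, 0.35, 0.3],
                                    [0.40, 0.30, 0.20, 0.10, 0.05])
print(f"bias-corrected mean elasticity ~ {mean_corrected:.2f}, FAT slope ~ {selection:.2f}")
```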


2011 ◽  
Vol 16 (7) ◽  
pp. 3207-3220
Author(s):  
Michael Goodman ◽  
Katherine Squibb ◽  
Eric Youngstrom ◽  
Laura Gutermuth Anthony ◽  
Lauren Kenworthy ◽  
...  

We examined prospective cohort studies evaluating the relation between prenatal and neonatal exposure to polychlorinated biphenyls (PCBs) and neurodevelopment in children to assess the feasibility of conducting a meta-analysis to support decision making. We described studies in terms of exposure and end point categorization, statistical analysis, and reporting of results. We used this evaluation to assess the feasibility of grouping studies into reasonably uniform categories. The most consistently used tests included Brazelton's Neonatal Behavioral Assessment Scale, the neurologic optimality score in the neonatal period, the Bayley Scales of Infant Development at 5-8 months of age, and the McCarthy Scales of Children's Abilities in 5-year-olds. Despite administering the same tests at similar ages, the studies were too dissimilar to allow a meaningful quantitative examination of outcomes across cohorts. These analyses indicate that our ability to conduct weight-of-evidence assessments of the epidemiologic literature on neurotoxicants may be limited, even in the presence of multiple studies, if the available study methods, data analysis, and reporting lack comparability.


2007 ◽  
Vol 26 (4) ◽  
pp. 283-293 ◽  
Author(s):  
William K Boyes ◽  
Virginia C Moser ◽  
Andrew M Geller ◽  
Vernon A Benignus ◽  
Philip J Bushnell ◽  
...  

Neurotoxicity risk assessments depend on the best available scientific information, including data from animal toxicity studies, human experimental studies and human epidemiology studies. There are several factors to consider when evaluating the comparability of data from studies. Regarding the epidemiology literature, issues include choice of study design, use of appropriate controls, methods of exposure assessment, subjective or objective evaluation of neurological status, and assessment and statistical control of potential confounding factors, including co-exposure to other agents. Animal experiments must be evaluated regarding factors such as dose level and duration, procedures used to assess neurological or behavioural status, and appropriateness of inference from the animal model to human neurotoxicity. Major factors that may explain apparent differences between animal and human studies include: animal neurological status may be evaluated with different procedures than those used in humans; animal studies may involve shorter exposure durations and higher dose levels; and most animal studies evaluate a single substance whereas humans typically are exposed to multiple agents. The comparability of measured outcomes in animals and humans may be improved by considering functional domains rather than individual test measures. The application of predictive models, weight of evidence considerations and meta-analysis can help evaluate the consistency of outcomes across studies. An appropriate blend of scientific information from toxicology and epidemiology studies is necessary to evaluate potential human risks of exposure to neurotoxic substances.

