Accurate Estimation of Viral Abundance by Epifluorescence Microscopy

2004 ◽  
Vol 70 (7) ◽  
pp. 3862-3867 ◽  
Author(s):  
Kevin Wen ◽  
Alice C. Ortmann ◽  
Curtis A. Suttle

ABSTRACT Virus enumeration by epifluorescence microscopy (EFM) is routinely done on preserved, refrigerated samples. Concerns about obtaining accurate and reproducible estimates led us to examine procedures for counting viruses by EFM. Our results indicate that aldehyde fixation results in rapid decreases in viral abundance. By 1 h postfixation, the abundance dropped by 16.4% ± 5.2% (n = 6), and by 4 h, the abundance was 20 to 35% lower. The average loss rates for glutaraldehyde- and formaldehyde-fixed samples over the first 2 h were 0.12 and 0.13 h−1, respectively. By 16 days, viral abundance had decreased by 72% (standard deviation, 6%; n = 6). Aldehyde fixation of samples followed by storage at 4°C, for even a few hours, resulted in large underestimates of viral abundance. The viral loss rates were not constant, and in glutaraldehyde- and formaldehyde-fixed samples they decreased from 0.13 and 0.17 h−1 during the first hour to 0.01 h−1 between 24 and 48 h. Although decay rates changed over time, the abundance was predicted by using separate models to describe decay over the first 8 h and decay beyond 8 h. Accurate estimates of abundance were easily made with unfixed samples stained with Yo-Pro-1, SYBR Green I, or SYBR Gold, and slides could be stored at −20°C for at least 2 weeks or, for Yo-Pro-1, at least 1 year. If essential, samples can be fixed and flash frozen in liquid nitrogen upon collection and stored at −86°C. Determinations performed with fixed samples result in large underestimates of abundance unless slides are made immediately or samples are flash frozen. If protocols outlined in this paper are followed, EFM yields accurate estimates of viral abundance.
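The loss rates above are first-order (exponential) decay constants, so they translate directly into percent losses. A minimal sketch, assuming simple first-order decay N(t) = N0·exp(−kt) within each phase; the paper's two-stage model is not fully specified in the abstract, and the true rate declines continuously:

```python
import math

def fraction_lost(k_per_hour, hours):
    """Fractional loss under first-order decay: N(t) = N0 * exp(-k * t)."""
    return 1.0 - math.exp(-k_per_hour * hours)

# Loss rates quoted in the abstract (h^-1)
for label, k, t in [("glutaraldehyde, first hour", 0.13, 1.0),
                    ("formaldehyde, first hour", 0.17, 1.0),
                    ("late phase (24-48 h), per hour", 0.01, 1.0)]:
    print(f"{label}: {100 * fraction_lost(k, t):.1f}% lost")
```

Run as written, this reproduces the roughly 12 to 16% first-hour losses implied by the quoted rate constants, consistent with the 16.4% drop reported at 1 h postfixation.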

2006 ◽  
Vol 72 (7) ◽  
pp. 4767-4774 ◽  
Author(s):  
Rebekah R. Helton ◽  
Ling Liu ◽  
K. Eric Wommack

ABSTRACT Accurate enumeration of viruses within environmental samples is critical for investigations of the ecological role of viruses and viral infection within microbial communities. This report evaluates differences in viral and bacterial direct counts between estuarine sediment samples that were either immediately processed onboard ship or frozen at −20°C and processed later. Viral and bacterial abundances were recorded at three stations spanning the length of the Chesapeake Bay in April and June 2003 within three sediment fractions: pore water (PW), whole sediment (WS), and sediment after pore water removal (AP). No significant difference in viral abundance was apparent between extracts from fresh or frozen sediments. In contrast, bacterial abundance was significantly lower in the samples subjected to freezing. Both bacterial and viral abundance showed significant differences between sediment fractions (PW, WS, or AP) regardless of fresh or frozen status. Although pore water viral abundance has been used in the past as a measurement of viral abundance in sediments, this fraction accounted for only ca. 5% of the total sediment viral abundance across all samples. The effect of refrigerated storage of sediment viral extracts was also examined and showed that, within the first 2 h, viral abundance decreased ca. 30% in formalin-fixed extracts and 66% in unfixed extracts. Finally, the reliability of direct viral enumeration via epifluorescence microscopy was tested by using DNase treatment of WS extractions. These tests indicated that a large fraction (>86%) of the small SYBR Gold fluorescing particles are likely viruses.


Author(s):  
Leif M. Burge ◽  
Laurence Chaput-Desrochers ◽  
Richard Guthrie

Pipelines can be exposed at water crossings where rivers lower the channel bed. Channel bed scour may damage linear infrastructure such as pipelines by exposing the pipe to the flow of water and sediment. Accurate estimation of the depth of scour is therefore critical in limiting damage to infrastructure. Channel bed scour has three main components: (1) general scour, (2) bed degradation, and (3) pool depth. General scour is the temporary lowering of the channel bed during a flood event. Channel bed degradation is the systematic lowering of a channel bed over time. Pool depth is the depth of pools below the general bed elevation and includes the relocation of pools that results from river dynamics. Channel degradation is assessed in the field using indicators of channel incision such as channel bed armoring and bank characteristics, through the analysis of long profiles, and through sediment transport modelling. Pool depth is assessed using long profiles and channel movement over time. The catastrophic nature of bed lowering due to general scour requires a different assessment. A design depth of cover is based on an analysis of the depth of scour for a given return period (e.g., 100 years). There are three main steps to predicting general scour: (1) regional flood frequency analysis, (2) estimation of hydraulic variables, and (3) scour depth modelling. Typically, four scour models are employed: Lacey (1930), Blench (1969), Neill (1973), and Zeller (1981), with the average or maximum value used for the design depth. We provide herein case studies of potential scour for pipeline water crossings at the Little Smoky River and Joachim Creek, AB. Using the four models above, and an analysis of channel degradation and pool depth, recommended minimum depths of cover of 0.75 m and 0.142 m, respectively, were prescribed. Variability between scour models is large: the general scour model results varied from 0.45 m to 0.75 m for the Little Smoky River and from 0.16 m to 0.51 m for Joachim Creek. While these models are more than 30 years old and do not adequately account for factors such as sediment mobility, they nevertheless provide usable answers and should form part of the usual toolbox for water crossing scour calculations.
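As an illustration of step (3), here is a minimal sketch of the Lacey (1930) regime-depth calculation, one of the four models named above, in its commonly cited metric form: regime depth D = 0.473·(Q/f)^(1/3) with silt factor f = 1.76·√d50 (d50 in mm). The multiplier Z and the example inputs are illustrative assumptions, not values from the case studies:

```python
import math

def lacey_regime_depth(q_m3s, d50_mm):
    """Lacey (1930) regime depth in metres.

    q_m3s  -- design discharge (m^3/s), e.g. the 100-year flood
    d50_mm -- median bed-material grain size (mm)
    """
    silt_factor = 1.76 * math.sqrt(d50_mm)          # Lacey silt factor f
    return 0.473 * (q_m3s / silt_factor) ** (1.0 / 3.0)

def general_scour(q_m3s, d50_mm, flow_depth_m, z_factor=1.5):
    """Scour below the existing bed: Z times regime depth, minus flow depth.

    z_factor accounts for channel type (straight reach vs. bend); the
    value 1.5 here is purely illustrative.
    """
    return max(0.0, z_factor * lacey_regime_depth(q_m3s, d50_mm) - flow_depth_m)

# Hypothetical 100-year flood on a small gravel-bed stream
print(f"{general_scour(q_m3s=250.0, d50_mm=2.0, flow_depth_m=2.5):.2f} m below bed")
```

In practice the same inputs would also be run through the Blench, Neill, and Zeller models, with the average or maximum taken as the design value, as the abstract describes.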


2015 ◽  
Vol 9 (3) ◽  
pp. 1039-1062 ◽  
Author(s):  
J. J. Fürst ◽  
H. Goelzer ◽  
P. Huybrechts

Abstract. Continuing global warming will have a strong impact on the Greenland ice sheet in the coming centuries. During the last decade (2000–2010), both increased melt-water runoff and enhanced ice discharge from calving glaciers have contributed 0.6 ± 0.1 mm yr−1 to global sea-level rise, with a relative contribution of 60 and 40% respectively. Here we use a higher-order ice flow model, spun up to present day, to simulate future ice volume changes driven by both atmospheric and oceanic temperature changes. For these projections, the flow model accounts for runoff-induced basal lubrication and ocean warming-induced discharge increase at the marine margins. For a suite of 10 atmosphere and ocean general circulation models and four representative concentration pathway scenarios, the projected sea-level rise between 2000 and 2100 lies in the range of +1.4 to +16.6 cm. For two low emission scenarios, the projections are conducted up to 2300. Ice loss rates are found to abate for the most favourable scenario where the warming peaks in this century, allowing the ice sheet to maintain a geometry close to the present-day state. For the other moderate scenario, loss rates remain at a constant level over 300 years. In any scenario, volume loss is predominantly caused by increased surface melting as the contribution from enhanced ice discharge decreases over time and is self-limited by thinning and retreat of the marine margin, reducing the ice–ocean contact area. As confirmed by other studies, we find that the effect of enhanced basal lubrication on the volume evolution is negligible on centennial timescales. Our projections show that the observed rates of volume change over the last decades cannot simply be extrapolated over the 21st century on account of a different balance of processes causing ice loss over time. Our results also indicate that the largest source of uncertainty arises from the surface mass balance and the underlying climate change projections, not from ice dynamics.


Circulation ◽  
2015 ◽  
Vol 131 (suppl_1) ◽  
Author(s):  
Paulina Kaiser ◽  
Lynda Lisabeth ◽  
Philippa Clarke ◽  
Sara Adar ◽  
Mahasin Mujahid ◽  
...  

Introduction: Research on the association between neighborhood environments and systolic blood pressure (SBP) is limited, predominantly cross-sectional, and has produced mixed results. Investigating specific aspects of neighborhood environments in relation to changes in SBP may help to identify the most important interventions for reducing the population burden of hypertension. Hypothesis: Better neighborhood food, physical activity, and social environments will be associated with lower baseline levels of SBP and smaller increases in SBP over time. Methods: The Multi-Ethnic Study of Atherosclerosis recruited participants from six sites in the U.S., aged 45-84 (mean 59) and free of clinical cardiovascular disease at baseline. Those with non-missing data for key variables were included (N=5,997); the analytic sample was 52.5% female, 39.1% White, 27.3% Hispanic, 11.9% Black, and 21.7% Chinese, with median follow-up time of 9.2 years (IQR 4.5) and SBP measured at three or more exams for 91.3% of participants. SBP values for participants taking anti-hypertensive medication were replaced with multiply imputed estimates of unmedicated SBP at each exam. Summary measures of neighborhood food and physical activity environments incorporated survey-based scales (healthy food availability and walking environment) and GIS-based measures (density of favorable food stores and recreational resources). The summary measure of the social environment combined survey-based measures of social cohesion and safety. Neighborhoods were defined by a one-mile buffer around each participant’s home address. Linear mixed models were used to model associations of time-varying cumulative average neighborhood environmental summary measures with SBP over time, adjusting for individual-level covariates (demographics, individual- and neighborhood-level SES); models with and without adjustment for baseline SBP were used to evaluate associations of neighborhood environments with SBP trajectories. Results: In models mutually adjusted for all three neighborhood domains and covariates, living in a better physical activity environment was associated with lower SBP at baseline (-1.34 mmHg [95% CI: -2.24, -0.45] per standard deviation higher cumulative average physical activity summary score), while living in a better social environment was associated with higher SBP at baseline (1.00 mmHg [0.39, 1.63] per standard deviation higher); food environment scores were not associated with baseline SBP. After adjustment for baseline SBP, there was no association between any neighborhood environments and trajectories of SBP. Conclusions: Better food and physical activity environments were associated with lower baseline SBP, while better social environments were associated with higher baseline SBP. Neighborhood environments appear to have minimal direct effect on SBP trajectories.
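A minimal sketch of the longitudinal model described above, using the linear mixed-effects API in Python's statsmodels. The file and column names are hypothetical stand-ins for the MESA variables, and the actual covariate set and exposure coding (time-varying cumulative averages) are richer than shown:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant-exam (hypothetical long-format file)
df = pd.read_csv("mesa_long.csv")

# Random intercept and slope per participant; exposure-by-time
# interactions capture associations with SBP *trajectories*.
model = smf.mixedlm(
    "sbp ~ years * (pa_env + food_env + social_env) + age + sex + income",
    data=df,
    groups=df["participant_id"],
    re_formula="~years",
)
result = model.fit()
print(result.summary())
```

The main-effect coefficients correspond to baseline associations, while the `years:` interaction terms correspond to the trajectory associations reported in the Results.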


2020 ◽  
pp. 121-148
Author(s):  
Nicole Baerg

This chapter moves from studying developed countries to a sample of Latin American countries over time. The chapter presents evidence that an increase in the precision of the information environment exerts a significant attenuating effect on the mean and standard deviation of forecasters’ inflation expectations, ultimately lowering inflation outcomes. The finding is robust to the inclusion of policy credibility, persistence in inflation, economic output, and month and country effects. Similarly signed results hold under instrumental variable analysis. The main results imply that an increase in information precision helps to lower aggregate levels of inflation, and that the channel through which this works is by lowering the weight of prior expectations, as predicted by the theoretical argument. Importantly, the results persist even in a sample of countries with relatively variable inflation outcomes and less established (and therefore less credible) economic institutions.


2020 ◽  
Vol 27 (2) ◽  
pp. 8-15
Author(s):  
J.A. Oyewole ◽  
F.O. Aweda ◽  
D. Oni

There is a crucial need in Nigeria to enhance the development of wind technology in order to boost our energy supply. Adequate knowledge of the wind speed distribution is essential for establishing Wind Energy Conversion Systems (WECS). The two-parameter Weibull Probability Density Function (PDF) is widely accepted and commonly used for modelling, characterizing and predicting wind resource and wind power, as well as for assessing the optimum performance of WECS. It is therefore paramount to precisely estimate the scale and shape parameters for all regions or sites of interest. Here, wind data from 2000 to 2010 for four locations (Port Harcourt, Ikeja, Kano and Jos) were analysed and the Weibull parameters were determined. The three methods employed were the Mean Standard Deviation Method (MSDM), the Energy Pattern Factor Method (EPFM) and the Method of Moments (MOM). The method that gave the most accurate estimation of the wind speed was the MSDM, while the EPFM was the most reliable and consistent method for estimating the probability density function of wind.
Keywords: Weibull Distribution, Method of Moment, Mean Standard Deviation Method, Energy Pattern Method
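A minimal sketch of two of the estimators named in the abstract, assuming a one-dimensional array of wind speeds in m/s. The formulas are the standard ones: the MSDM derives the shape parameter from the coefficient of variation, and the EPFM derives it from the energy pattern factor; the MOM additionally requires a numerical root-find and is omitted for brevity:

```python
import numpy as np
from math import gamma

def weibull_msdm(v):
    """MSDM: shape k from the coefficient of variation, scale c from the mean."""
    v = np.asarray(v, dtype=float)
    k = (v.std(ddof=1) / v.mean()) ** -1.086      # shape (dimensionless)
    c = v.mean() / gamma(1 + 1 / k)               # scale (m/s)
    return k, c

def weibull_epfm(v):
    """EPFM: shape k from the energy pattern factor E = <v^3> / <v>^3."""
    v = np.asarray(v, dtype=float)
    epf = (v ** 3).mean() / v.mean() ** 3
    k = 1 + 3.69 / epf ** 2
    c = v.mean() / gamma(1 + 1 / k)
    return k, c

# Sanity check on synthetic data drawn from a known Weibull (k=2, c=6 m/s)
rng = np.random.default_rng(0)
v = rng.weibull(2.0, 10_000) * 6.0
print(weibull_msdm(v), weibull_epfm(v))
```

Both functions should recover parameters close to (2.0, 6.0) on the synthetic sample, which is a useful check before applying them to station records.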


2020 ◽  
pp. 002200272097509
Author(s):  
Erin Baggott Carter ◽  
Brett L. Carter

Does propaganda reduce the rate of popular protest in autocracies? To answer this question, we draw on an original dataset of state-run newspapers from thirty countries, encompassing six languages and over four million articles. We find that propaganda diminishes the rate of protest, and that its effects persist over time. By increasing the level of pro-regime propaganda by one standard deviation, autocrats have reduced the odds of protest the following day by 15%. The half-life of this effect is between five and ten days, and very little of the initial effect persists after one month. This temporal persistence is remarkably consistent with campaign advertisements in democracies.
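A small worked example of the persistence arithmetic: if an effect decays exponentially with half-life h days, the fraction remaining after t days is 2^(−t/h). The half-life range is taken from the abstract; the exponential decay form is an assumption for illustration:

```python
# Fraction of the initial propaganda effect remaining after t days,
# for the 5-10 day half-life range reported in the abstract.
for h in (5.0, 10.0):
    for t in (1, 7, 30):
        remaining = 2 ** (-t / h)
        print(f"half-life {h:>4.0f} d, day {t:>2}: {100 * remaining:.1f}% remaining")
```

At a 5-day half-life, under 2% of the effect survives a month, matching the abstract's observation that very little persists after one month.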


2020 ◽  
Author(s):  
Leah Palapar ◽  
Ngaire Kerse ◽  
Anna Rolleston ◽  
Wendy P J den Elzen ◽  
Jacobijn Gussekloo ◽  
...  

Abstract Objective To determine the physical and mental health of very old people (aged 80+) with anaemia. Methods Individual level meta-analysis from five cohorts of octogenarians (n = 2,392): LiLACS NZ Māori, LiLACS NZ non-Māori, Leiden 85-plus Study, Newcastle 85+ Study, and TOOTH. Mixed models of change in functional ability, cognitive function, depressive symptoms, and self-rated health over time were separately fitted for each cohort. We combined individual cohort estimates of differences according to the presence of anaemia at baseline, adjusting for age at entry, sex, and time elapsed. Combined estimates are presented as differences in standard deviation units (i.e. standardised mean differences–SMDs). Results The combined prevalence of anaemia was 30.2%. Throughout follow-up, participants with anaemia, on average, had: worse functional ability (SMD −0.42 of a standard deviation across cohorts; CI -0.59,-0.25); worse cognitive scores (SMD -0.27; CI -0.39,-0.15); worse depression scores (SMD -0.20; CI -0.31,-0.08); and lower ratings of their own health (SMD -0.36; CI -0.47,-0.25). Differential rates of change observed were: larger declines in functional ability for those with anaemia (SMD −0.12 over five years; CI -0.21,-0.03) and smaller mean difference in depression scores over time between those with and without anaemia (SMD 0.18 over five years; CI 0.05,0.30). Conclusion Anaemia in the very old is a common condition associated with worse functional ability, cognitive function, depressive symptoms, and self-rated health, and a more rapid decline in functional ability over time. The question remains as to whether anaemia itself contributes to worse outcomes or is simply a marker of chronic diseases and nutrient deficiencies.
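A minimal sketch of how per-cohort estimates can be combined by inverse-variance weighting. The abstract does not state whether a fixed- or random-effects combination was used; this shows the fixed-effect case, and the per-cohort numbers below are made up for illustration:

```python
import numpy as np

def pool_fixed_effect(estimates, ses):
    """Inverse-variance weighted mean of effect estimates and its standard error."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weights = 1 / SE^2
    pooled = (w * est).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    return pooled, se

# Hypothetical SMDs for functional ability from five cohorts, with SEs
smds = [-0.50, -0.38, -0.45, -0.30, -0.44]
ses = [0.15, 0.12, 0.10, 0.18, 0.20]
pooled, se = pool_fixed_effect(smds, ses)
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f})")
```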


2011 ◽  
Vol 68 (3) ◽  
pp. 285-294 ◽  
Author(s):  
Carlos Rogério de Mello ◽  
Léo Fernandes Ávila ◽  
Lloyd Darrell Norton ◽  
Antônio Marciano da Silva ◽  
José Márcio de Mello ◽  
...  

Soil water content is essential to understanding the hydrological cycle, as it controls surface runoff generation, water infiltration, soil evaporation and plant transpiration. This work analyzes the spatial distribution of top soil water content and characterizes the spatial mean and standard deviation of top soil water content over time in an experimental catchment located in the Mantiqueira Range region, state of Minas Gerais, Brazil. Measurements of top soil water content were carried out every 15 days between May 2007 and May 2008. Using time-domain reflectometry (TDR) equipment, 69 points were sampled in the top 0.2 m of the soil profile. Geostatistical procedures were applied in all steps of the study: first, the spatial continuity was evaluated and the experimental semi-variogram was modeled; then, to develop maps of top soil water content over time, a co-kriging procedure was used with slope as a secondary variable. The rainfall regime controlled the top soil water content during the wet season; land use was another fundamental local factor. The spatial standard deviation was low under dry conditions and high under wet conditions; thus, more variability occurs under wet conditions.
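A minimal sketch of the first geostatistical step described above: computing an experimental semi-variogram from point measurements of top soil water content. The coordinates and values below are synthetic; fitting a variogram model and the subsequent co-kriging with slope as the secondary variable are not shown:

```python
import numpy as np

def empirical_semivariogram(xy, z, lags, tol):
    """gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs with |d_ij - h| < tol."""
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))  # pairwise distances
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2                      # semivariance per pair
    iu = np.triu_indices(len(z), k=1)                              # each pair counted once
    d, sq = d[iu], sq[iu]
    return [sq[np.abs(d - h) < tol].mean() for h in lags]

# Synthetic stand-in: 69 sample points, as in the study's TDR campaign
rng = np.random.default_rng(1)
xy = rng.uniform(0, 500, size=(69, 2))       # point coordinates (m)
z = rng.normal(0.30, 0.05, size=69)          # volumetric water content (m3/m3)
print(empirical_semivariogram(xy, z, lags=[50, 100, 150, 200], tol=25))
```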

