An evaluation of gridded weather data sets for the purpose of estimating reference evapotranspiration in the United States

2020 ◽  
Vol 242 ◽  
pp. 106376 ◽  
Author(s):  
Philip A. Blankenau ◽  
Ayse Kilic ◽  
Richard Allen
2019 ◽  
Vol 35 (5) ◽  
pp. 823-835
Author(s):  
Rogério De Souza Nóia Júnior ◽  
Clyde William Fraisse ◽  
Vinicius Andrei Cerbaro ◽  
Mauricio Alex Z. Karrei ◽  
Noemi Guindin

Abstract. Methods of estimating evapotranspiration require weather variables as their main input data, so the lack of complete weather data sets is one of the main challenges in evaluating and mitigating the effects of climate variability and climate change on agricultural production systems. The Hargreaves-Samani (HS) method is one way to estimate reference evapotranspiration (ETo) when only temperature observations are available, a common situation in many agricultural enterprises. Another option for regions not served by weather stations is the use of gridded weather data (GWD). Accordingly, the main objective of this study was to evaluate the performance of the HS method for estimating ETo in different regions of the United States, and to assess the suitability of two gridded weather data sources (PRISM and NOAA-RTMA) for estimating ETo, by comparing the results with ETo estimated by the Penman-Monteith (FAO-PM) method, the methodology recommended by FAO Irrigation and Drainage Paper 56 when all weather variables are available. Weather observations were obtained for 17 locations across the United States, representing regions with subtropical humid and semi-arid continental climates, over a one-year period (2017). These observations were used to estimate daily ETo with the HS and Penman-Monteith methods. Our results showed that HS performance varied with location and month of the year. Because of high relative humidity (RH) during the winter and high air temperature (Ta) during the summer, the locations selected in Florida showed the worst performance, while the HS method performed well at many other locations, such as Froid, MT. ETo estimated by the HS method and from the PRISM and NOAA-RTMA gridded weather databases showed good agreement with ETo estimated by FAO-PM from weather station observations. Keywords: Penman-Monteith, PRISM, RTMA, Water Management.
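To illustrate why the HS method needs only temperature, here is a minimal sketch of the standard Hargreaves-Samani (1985) equation, ETo = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax - Tmin); the extraterrestrial-radiation term Ra and the example values below are assumptions for demonstration, not data from the study.

```python
import math

def hargreaves_samani_eto(tmax_c, tmin_c, ra_mm_day):
    """Daily reference ET (mm/day) from the Hargreaves-Samani equation.

    tmax_c, tmin_c : daily maximum/minimum air temperature (deg C)
    ra_mm_day      : extraterrestrial radiation expressed as mm/day of
                     equivalent evaporation (a function of latitude and day of year)
    """
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

# Hypothetical mid-summer day (values chosen for illustration only)
print(round(hargreaves_samani_eto(tmax_c=33.0, tmin_c=19.0, ra_mm_day=16.5), 2))
```

By contrast, the FAO-PM equation additionally requires solar radiation, humidity, and wind speed, which is why complete station records or gridded products such as PRISM and NOAA-RTMA are needed to apply it.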


2011 ◽  
Vol 9 (1-2) ◽  
pp. 58-69
Author(s):  
Marlene Kim

Asian Americans and Pacific Islanders (AAPIs) in the United States face problems of discrimination, the glass ceiling, and very high long-term unemployment rates. AAPIs are a diverse population: although some Asian Americans are more successful than average, others, such as those from Southeast Asia and Native Hawaiians and Pacific Islanders (NHPIs), work in low-paying jobs and suffer from high poverty rates, high unemployment rates, and low earnings. Collecting more detailed and additional data from employers, oversampling AAPIs in current data sets, making administrative data available to researchers, providing more resources for research on AAPIs, and enforcing nondiscrimination laws and affirmative action mandates would assist this population.


2016 ◽  
Vol 55 (11) ◽  
pp. 2509-2527 ◽  
Author(s):  
Jordane A. Mathieu ◽  
Filipe Aires

Abstract. Statistical meteorological impact models are intended to represent the impact of weather on socioeconomic activities using a statistical approach. The calibration of such models is difficult because the relationships are complex and historical records are limited. Often, such models succeed in reproducing past data but perform poorly on unseen new data (a problem known as overfitting). This difficulty emphasizes the need for regularization techniques and reliable assessment of model quality. This study illustrates, in a general way, how to extract pertinent information from weather data and exploit it in impact models designed to support decision-making. For a given socioeconomic activity, this type of impact model can be used to 1) study its sensitivity to weather anomalies (e.g., corn sensitivity to water stress), 2) perform seasonal forecasting for it (e.g., yield forecasting), and 3) quantify the longer-term (several decades) impact of weather on it. The size of the training database can be increased by pooling data from various locations, but this requires statistical models that can use the localization information, for example mixed-effect (ME) models. Linear, neural-network, and ME models are compared using a real-world application: corn-yield forecasting over the United States. Many of the challenges faced in this paper arise in other weather-impact analyses: the results show that much care is required when using space-time data because they are often highly spatially correlated. In addition, forecast quality is strongly influenced by the training spatial scale. For the application described herein, learning at the state scale is a good trade-off: it is specific to local conditions while keeping enough data for calibration.
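As a hedged illustration of the pooling idea described above, the sketch below fits a mixed-effects model with a random intercept per state, so that locations share a common weather response while keeping location-specific baselines. The column names, covariates, and synthetic values are assumptions for demonstration, not the paper's actual data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic pooled panel: several states, several years, two weather covariates.
rng = np.random.default_rng(0)
states = ["IA", "IL", "NE", "MN", "IN"]
rows = []
for s_idx, state in enumerate(states):
    for year in range(2000, 2016):
        precip = rng.normal(500, 80)      # growing-season precipitation (mm), assumed
        gdd = rng.normal(1400, 120)       # growing degree days, assumed
        base = 9.0 + 0.4 * s_idx          # state-specific baseline yield
        crop_yield = base + 0.004 * precip + 0.002 * gdd + rng.normal(0, 0.3)
        rows.append((state, year, precip, gdd, crop_yield))
df = pd.DataFrame(rows, columns=["state", "year", "precip", "gdd", "crop_yield"])

# Mixed-effects model: common (fixed) weather response, random intercept per state.
# This is one simple way to pool locations without discarding where each
# observation comes from.
model = smf.mixedlm("crop_yield ~ precip + gdd", data=df, groups=df["state"])
result = model.fit()
print(result.summary())
```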


Author(s):  
Joseph L. Breault

The National Academy of Sciences convened in 1995 for a conference on massive data sets. The presentation on health care noted that “massive applies in several dimensions . . . the data themselves are massive, both in terms of the number of observations and also in terms of the variables . . . there are tens of thousands of indicator variables coded for each patient” (Goodall, 1995, paragraph 18). We multiply this by the number of patients in the United States, which is hundreds of millions.
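A rough back-of-envelope calculation makes the scale concrete; the figures below are assumed orders of magnitude for illustration, not numbers from Goodall's presentation.

```python
# Assumed orders of magnitude, for illustration only.
patients = 300_000_000           # "hundreds of millions" of patients
indicators_per_patient = 20_000  # "tens of thousands" of indicator variables

cells = patients * indicators_per_patient
# Even at one byte per indicator this is terabytes of raw table cells.
print(f"{cells:.2e} cells, ~{cells / 1e12:.0f} TB at 1 byte per cell")
```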


2019 ◽  
Vol 47 (1) ◽  
pp. 88-96 ◽  
Author(s):  
Juli M. Bollinger ◽  
Abhi Sanka ◽  
Lena Dolman ◽  
Rachel G. Liao ◽  
Robert Cook-Deegan

Accessing BRCA1/2 data facilitates the detection of disease-associated variants, which is critical to informing clinical management of risk. BRCA1/2 data sharing is complex, and many different practices exist. We describe current BRCA1/2 data-sharing practices in the United States and globally, and discuss obstacles and incentives to sharing, based on 28 interviews with personnel at U.S. and non-U.S. clinical laboratories and databases. Our examination of the BRCA1/2 data-sharing landscape demonstrates strong support for, and robust sharing of, BRCA1/2 data around the world, increasing global access to diverse data sets.


2018 ◽  
Vol 40 ◽  
pp. 06021
Author(s):  
David Abraham ◽  
Tate McAlpin ◽  
Keaton Jones

The movement of bed forms (sand dunes) in large sand-bed rivers is being used to determine the transport rate of bed load. The ISSDOTv2 (Integrated Section Surface Difference Over Time, version 2) methodology uses time-sequenced differences of measured bathymetric surfaces to compute the bed-load transport rate. The method was verified using flume studies [1]. In general, the method provides very consistent and repeatable results and shows very good fidelity with most other measurement techniques. Over the last 7 years we have measured, computed, and compiled what we believe to be the most extensive data set anywhere of bed-load measurements on large sand-bed rivers. Most of the measurements have been taken on the Mississippi, Missouri, Ohio, and Snake Rivers in the United States. For cases where multiple measurements were made at varying flow rates, bed-load rating curves have been produced. This paper provides references for the methodology, but is intended mainly to discuss the measurements, the resulting data sets, and current and potential uses for the bed-load data.
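The core idea of differencing time-sequenced bathymetric surfaces can be sketched as follows; this is a simplified illustration of surface differencing under an assumed porosity and synthetic grids, not the full ISSDOTv2 procedure or its corrections.

```python
import numpy as np

def bedload_from_surfaces(z_t1, z_t2, dx, dy, dt_seconds, porosity=0.4):
    """Rough bed-load flux estimate from two gridded bathymetric surfaces.

    z_t1, z_t2 : 2-D arrays of bed elevation (m) at times t1 and t2
    dx, dy     : grid cell size (m)
    dt_seconds : time between surveys (s)
    porosity   : assumed bed porosity (dimensionless)

    Returns a volumetric transport rate (m^3/s of solids) based only on the
    erosion (negative-change) volume between the two surfaces.
    """
    dz = z_t2 - z_t1
    erosion_volume = -dz[dz < 0].sum() * dx * dy   # m^3 of eroded bed
    return (1.0 - porosity) * erosion_volume / dt_seconds

# Hypothetical 2 m x 2 m grid surveyed 6 hours apart (values for illustration only).
rng = np.random.default_rng(0)
z1 = rng.normal(0.0, 0.1, size=(50, 200))
z2 = z1 + rng.normal(-0.01, 0.05, size=z1.shape)
print(bedload_from_surfaces(z1, z2, dx=2.0, dy=2.0, dt_seconds=6 * 3600))
```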


2008 ◽  
Vol 41 (1) ◽  
pp. 1-18 ◽  
Author(s):  
Peverill Squire

Abstract. Legislative scholars have paid almost no attention to explanations for the level of compensation provided to legislators, either within a country or cross-nationally, despite its importance to members and institutions. I posit a simple theory based on state wealth to explain differences in legislative pay. I test this theory using two novel data sets, one covering 35 national assemblies and the other covering subnational assemblies in Australia, Canada, Germany, and the United States. Analysis of these data reveals that national or state wealth is strongly associated with legislator compensation. This finding is consistent with an intriguing analog in the labour economics literature.


2008 ◽  
Vol 228 (5-6) ◽  
Author(s):  
Patrick A. Puhani

Summary. I extend a two-skill-group model by Katz and Murphy (1992) to estimate the relative demand and supply for skills as well as wage rigidity in Germany. Using three data sets for Germany, two for Britain, and one for the United States, I simulate the change in relative wage rigidity (wage compression) in all three countries during the early and mid-1990s, the period when unemployment increased in Germany but fell in Britain and the US. I show that in this period Germany experienced wage compression (relative wage rigidity), whereas Britain and the US experienced wage decompression. This evidence is consistent with the Krugman (1994) hypothesis.
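For readers unfamiliar with the Katz-Murphy framework, the sketch below shows the standard two-skill-group regression, with the log relative wage regressed on a time trend (proxying relative demand growth) and log relative supply, whose coefficient is read as -1/sigma. The series used here are synthetic placeholders, not Puhani's German, British, or US data.

```python
import numpy as np

# Synthetic yearly series (placeholders, for illustration only).
years = np.arange(1984, 1998)
t = years - years[0]
log_rel_supply = np.linspace(-0.60, -0.30, years.size)   # ln(skilled/unskilled employment)
noise = np.random.default_rng(1).normal(0.0, 0.01, years.size)
log_rel_wage = 0.05 + 0.012 * t - 0.7 * log_rel_supply + noise

# Katz-Murphy style regression: ln(wH/wL) on a linear demand trend and ln(H/L).
# The coefficient on relative supply estimates -1/sigma, where sigma is the
# elasticity of substitution between the two skill groups.
X = np.column_stack([np.ones(years.size), t, log_rel_supply])
beta, *_ = np.linalg.lstsq(X, log_rel_wage, rcond=None)
print(f"demand trend: {beta[1]:.3f}, implied sigma: {-1.0 / beta[2]:.2f}")
```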


2009 ◽  
pp. 28-41 ◽  
Author(s):  
E. Lynn Usery ◽  
Michael P. Finn ◽  
Michael Starbuck

The integration of geographic data layers in multiple raster and vector formats, from many different organizations and at a variety of resolutions and scales, is a significant problem for The National Map of the United States being developed by the U.S. Geological Survey. Our research has examined data integration with a layer-based approach for five of The National Map data layers: digital orthoimages, elevation, land cover, hydrography, and transportation. An empirical approach used visual assessment by a set of respondents, with statistical analysis, to establish the meaning of various types of integration. A separate theoretical approach, in which established hypotheses were tested against actual data sets, has resulted in an automated procedure for integrating specific layers that is now being tested. The empirical analysis has established resolution bounds on the meaning of integration for raster datasets and distance bounds for vector data. The theoretical approach has combined theories of cartographic transformation and generalization, such as Töpfer's radical law, with research on optimum viewing scales for digital images to establish a set of guiding principles for integrating data of different resolutions.
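One of the generalization rules mentioned above, Töpfer's radical law, can be illustrated with a short sketch; the feature count and scale denominators below are assumed example values, not those used for The National Map layers.

```python
import math

def topfer_radical_law(n_source, source_scale_denom, target_scale_denom):
    """Number of features to retain when generalizing to a smaller scale.

    Toepfer's radical law: n_target = n_source * sqrt(M_source / M_target),
    where M is the map scale denominator (e.g. 24_000 for 1:24,000).
    """
    return n_source * math.sqrt(source_scale_denom / target_scale_denom)

# Example (assumed values): 1,200 hydrography features compiled at 1:24,000,
# generalized for display at 1:100,000.
print(round(topfer_radical_law(1200, 24_000, 100_000)))
```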


2002 ◽  
Vol 16 (4) ◽  
pp. 137-160 ◽  
Author(s):  
Lawrence J White

I assemble two rarely used data sets to measure aggregate concentration in the U.S. in the 1980s and 1990s. Despite the merger waves of those decades, aggregate concentration declined in the 1980s and the early 1990s and rose only modestly in the late 1990s. Levels at the end of the 1990s were at or below those of the late 1980s or early 1990s. The average firm size and the relative importance of the larger size classes of firms increased, however. Gini coefficients for the employment and payroll shares of companies showed moderate but steady increases from 1988 through 1999.
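As background for the last result, here is a minimal sketch of how a Gini coefficient over firms' employment shares can be computed; the toy employment counts are assumptions for illustration, not the Census data used in the article.

```python
import numpy as np

def gini(values):
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mad = np.abs(x[:, None] - x[None, :]).sum()
    return mad / (2.0 * n * n * x.mean())

# Toy employment counts for five firms (illustrative only).
print(round(gini([12, 40, 75, 300, 5000]), 3))
```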

