Development of a semi-parametric PAR (Photosynthetically Active Radiation) partitioning model for the United States, version 1.0

2014 ◽  
Vol 7 (5) ◽  
pp. 2477-2484 ◽  
Author(s):  
J. C. Kathilankal ◽  
T. L. O'Halloran ◽  
A. Schmidt ◽  
C. V. Hanson ◽  
B. E. Law

Abstract. A semi-parametric PAR diffuse radiation model was developed using commonly measured climatic variables from 108 site-years of data from 17 AmeriFlux sites. The model has a logistic form and improves upon previous efforts, using a larger data set and physically viable climate variables as predictors, including relative humidity, clearness index, surface albedo and solar elevation angle. Model performance was evaluated by comparison with a simple cubic polynomial model developed for the PAR spectral range. The logistic model outperformed the polynomial model with an improved coefficient of determination and slope relative to measured data (logistic: R2 = 0.76; slope = 0.76; cubic: R2 = 0.73; slope = 0.72), making this the most robust PAR-partitioning model for the United States currently available.
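For readers who want to experiment with this kind of partitioning, the sketch below shows the general shape of a logistic diffuse-fraction model driven by the predictors named in the abstract. The coefficients and the exact functional form are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

def diffuse_par_fraction(kt, rh, albedo, sol_elev_deg, beta=None):
    """Hedged sketch of a logistic diffuse-PAR partitioning model.

    kt           : clearness index (0-1)
    rh           : relative humidity (0-1)
    albedo       : surface albedo (0-1)
    sol_elev_deg : solar elevation angle (degrees)
    beta         : coefficient vector; the defaults are illustrative
                   placeholders, not the fitted coefficients from the paper.
    """
    if beta is None:
        beta = np.array([0.5, -5.0, 1.0, 0.5, 0.8])  # placeholder values
    elev_rad = np.radians(sol_elev_deg)
    x = beta[0] + beta[1] * kt + beta[2] * rh + beta[3] * albedo + beta[4] * elev_rad
    return 1.0 / (1.0 + np.exp(-x))  # logistic form keeps the fraction in (0, 1)

# Example: diffuse fraction under a fairly clear sky
print(diffuse_par_fraction(kt=0.7, rh=0.4, albedo=0.2, sol_elev_deg=45.0))
```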

2014 ◽  
Vol 7 (2) ◽  
pp. 1649-1669
Author(s):  
J. C. Kathilankal ◽  
T. L. O'Halloran ◽  
A. Schmidt ◽  
C. V. Hanson ◽  
B. E. Law

Abstract. A semi-parametric PAR diffuse radiation model was developed using commonly measured climatic variables from 44 site-years of data from 9 AmeriFlux sites. The model has a logistic form and improves upon previous efforts, using a larger data set and physically viable climate variables as predictors, including relative humidity, clearness index, surface albedo, and solar elevation angle. Model performance was evaluated by comparison with a simple cubic polynomial model developed for the PAR spectral range. The logistic model outperformed the polynomial model with an improved coefficient of determination and slope relative to measured data (logistic: R2 = 0.85; slope = 0.86; cubic: R2 = 0.82; slope = 0.83), making this the most robust PAR-partitioning model for the US subcontinent currently available.


2015 ◽  
Vol 19 (1) ◽  
pp. 209-223 ◽  
Author(s):  
A. J. Newman ◽  
M. P. Clark ◽  
K. Sampson ◽  
A. Wood ◽  
L. E. Hay ◽  
...  

Abstract. We present a community data set of daily forcing and hydrologic response data for 671 small- to medium-sized basins across the contiguous United States (median basin size of 336 km2) that spans a very wide range of hydroclimatic conditions. Area-averaged forcing data for the period 1980–2010 was generated for three basin spatial configurations – basin mean, hydrologic response units (HRUs) and elevation bands – by mapping daily, gridded meteorological data sets to the subbasin (Daymet) and basin polygons (Daymet, Maurer and NLDAS). Daily streamflow data was compiled from the United States Geological Survey National Water Information System. The focus of this paper is to (1) present the data set for community use and (2) provide a model performance benchmark using the coupled Snow-17 snow model and the Sacramento Soil Moisture Accounting Model, calibrated using the shuffled complex evolution global optimization routine. After optimization minimizing daily root mean squared error, 90% of the basins have Nash–Sutcliffe efficiency scores ≥0.55 for the calibration period and 34% ≥ 0.8. This benchmark provides a reference level of hydrologic model performance for a commonly used model and calibration system, and highlights some regional variations in model performance. For example, basins with a more pronounced seasonal cycle generally have a negative low flow bias, while basins with a smaller seasonal cycle have a positive low flow bias. Finally, we find that data points with extreme error (defined as individual days with a high fraction of total error) are more common in arid basins with limited snow and, for a given aridity, fewer extreme error days are present as the basin snow water equivalent increases.
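The benchmark above is reported in terms of Nash–Sutcliffe efficiency (NSE). As a reference, a minimal NSE implementation is sketched below; the streamflow values in the example are made up and are not drawn from the data set.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE = 0 means the model does no better than
    the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Example with made-up daily streamflow values (m^3/s)
obs = np.array([1.2, 1.5, 2.0, 5.5, 3.1, 2.2])
sim = np.array([1.0, 1.6, 2.3, 4.9, 3.4, 2.0])
print(nash_sutcliffe(obs, sim))
```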


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Richard Johnston ◽  
Xiaohan Yan ◽  
Tatiana M. Anderson ◽  
Edwin A. Mitchell

Abstract. The effect of altitude on the risk of sudden infant death syndrome (SIDS) has been reported previously, but with conflicting findings. We aimed to examine whether the risk of sudden unexpected infant death (SUID) varies with altitude in the United States. Data from the Centers for Disease Control and Prevention (CDC)’s Cohort Linked Birth/Infant Death Data Set for births between 2005 and 2010 were examined. County of birth was used to estimate altitude. Logistic regression and a generalized additive model (GAM) were used, adjusting for year, mother’s race, Hispanic origin, marital status, age, education and smoking, father’s age and race, number of prenatal visits, plurality, live birth order, and infant’s sex, birthweight and gestation. There were 25,305,778 live births over the 6-year study period. The total number of deaths from SUID in this period was 23,673 (rate = 0.94/1000 live births). In the logistic regression model there was a small, but statistically significant, increased risk of SUID associated with birth at > 8000 feet compared with < 6000 feet (aOR = 1.93; 95% CI 1.00–3.71). The GAM showed a similar increased risk over 8000 feet, but this was not statistically significant. Only 9245 (0.037%) of mothers gave birth at > 8000 feet during the study period and 10 deaths (0.042%) were attributed to SUID. The number of SUID deaths at this altitude in the United States is very small (10 deaths in 6 years).
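A minimal sketch of the kind of adjusted logistic regression described above is given below, using statsmodels. The file name, column names, altitude bands, and the (abbreviated) covariate list are hypothetical stand-ins, not the actual CDC linked birth/infant death variable codes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the CDC linked birth/infant death files
# use different variable codes, and the covariate list here is abbreviated.
births = pd.read_csv("linked_birth_infant_death.csv")
births["alt_band"] = pd.cut(births["altitude_ft"],
                            bins=[-1, 6000, 8000, 30000],
                            labels=["<6000", "6000-8000", ">8000"])

# Logistic regression of the binary SUID outcome on altitude band plus covariates
model = smf.logit("suid ~ C(alt_band, Treatment('<6000')) + C(year) "
                  "+ C(mother_race) + mother_age + birthweight + gestation",
                  data=births).fit()

# Adjusted odds ratios with 95% confidence intervals
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```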


2006 ◽  
Vol 36 (11) ◽  
pp. 3015-3028 ◽  
Author(s):  
Martin E Alexander ◽  
Miguel G Cruz

We evaluated the predictive capacity of a rate of spread model for active crown fires (M.G. Cruz, M.E. Alexander, and R.H. Wakimoto. 2005. Can. J. For. Res. 35: 1626–1639) using a relatively large (n = 57) independent data set originating from wildfire observations undertaken in Canada and the United States. The assembled wildfire data were characterized by more severe burning conditions and fire behavior, in terms of rate of spread and the degree of crowning activity, than the data set used to parameterize the crown fire rate of spread model. The statistics used to evaluate model adequacy showed good fit and a level of uncertainty considered acceptable for a wide variety of fire management and fire research applications. The crown fire rate of spread model predicted 42% of the data with an error lower than ±25%. Mean absolute percent errors of 51% and 60% were obtained for Canadian and American wildfires, respectively. The characteristics of the data set did not allow us to determine where model performance was weaker and consequently identify its shortcomings and areas of future improvement. The level of uncertainty observed suggests that the model can be readily utilized in support of operational fire management decision making and for simulations in fire research studies.
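The two headline statistics, mean absolute percent error and the share of predictions falling within ±25% of observations, can be computed as in the short sketch below; the example rates of spread are illustrative and are not taken from the wildfire data set.

```python
import numpy as np

def evaluate_spread_predictions(observed, predicted, tolerance=0.25):
    """Summarize rate-of-spread predictions against observations: mean absolute
    percent error (MAPE) and the percentage of cases within +/- tolerance."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    pct_error = (predicted - observed) / observed
    mape = np.mean(np.abs(pct_error)) * 100.0
    within_tol = np.mean(np.abs(pct_error) <= tolerance) * 100.0
    return mape, within_tol

# Illustrative rates of spread (m/min), not the wildfire data set from the study
obs = [25.0, 40.0, 18.0, 60.0]
pred = [30.0, 33.0, 20.0, 75.0]
print(evaluate_spread_predictions(obs, pred))
```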


2021 ◽  
pp. 215336872110389
Author(s):  
Andrew J. Baranauskas

In the effort to prevent school shootings in the United States, policies that aim to arm teachers with guns have received considerable attention. Recent research on public support for these policies finds that African Americans are substantially less likely to support them, indicating that support for arming teachers is a racial issue. Given the racialized nature of support for punitive crime policies in the United States, it is possible that racial sentiment shapes support for arming teachers as well. This study aims to determine the association between two types of racial sentiment—explicit negative feelings toward racial/ethnic minority groups and racial resentment—and support for arming teachers using a nationally representative data set. While explicit negative feelings toward African Americans and Hispanics are not associated with support for arming teachers, those with racial resentment are significantly more likely to support arming teachers. Racial resentment also weakens the effect of other variables found to be associated with support for arming teachers, including conservative ideology and economic pessimism. Implications for policy and research are discussed.


2021 ◽  
pp. 1-29
Author(s):  
Eric Sonny Mathew ◽  
Moussa Tembely ◽  
Waleed AlAmeri ◽  
Emad W. Al-Shalabi ◽  
Abdul Ravoof Shaik

Two of the most critical properties for multiphase flow in a reservoir are relative permeability (Kr) and capillary pressure (Pc). To determine these parameters, careful interpretation of coreflooding and centrifuge experiments is necessary. In this work, a machine learning (ML) technique was incorporated to assist in the determination of these parameters quickly and synchronously for steady-state drainage coreflooding experiments. A state-of-the-art framework was developed in which a large database of Kr and Pc curves was generated based on existing mathematical models. This database was used to perform thousands of coreflood simulation runs representing oil-water drainage steady-state experiments. The results obtained from the corefloods, including pressure drop and water saturation profile, along with other conventional core analysis data, were fed as features into the ML model. The entire data set was split into 70% for training, 15% for validation, and the remaining 15% for the blind testing of the model. The training portion teaches the model to capture fluid flow behavior inside the core; 15% of the data set was then used to validate the trained model and to optimize the hyperparameters of the ML algorithm. The remaining 15% of the data set was used for testing the model and assessing the model performance scores. In addition, a K-fold split technique was used to split the 15% testing data set to provide an unbiased estimate of the final model performance. The trained/tested model was thereby used to estimate Kr and Pc curves based on available experimental results. The values of the coefficient of determination (R2) were used to assess the accuracy and efficiency of the developed model. The respective crossplots indicate that the model is capable of making accurate predictions with an error percentage of less than 2% when history matching experimental data. This implies that the artificial-intelligence- (AI-) based model is capable of determining Kr and Pc curves. The present work could be an alternative approach to existing methods for interpreting Kr and Pc curves. In addition, the ML model can be adapted to produce results that include multiple options for Kr and Pc curves from which the best solution can be determined using engineering judgment. This is unlike solutions from some of the existing commercial codes, which usually provide only a single solution. The model currently focuses on the prediction of Kr and Pc curves for drainage steady-state experiments; however, the work can be extended to capture the imbibition cycle as well.
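A minimal sketch of the 70/15/15 split described above is shown below. The abstract does not name the ML algorithm, so a random forest regressor stands in for it, and the synthetic feature and target arrays are placeholders for the simulated coreflood database.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the simulated coreflood database: each row holds
# features (pressure drop, saturation profile, core properties) and targets
# (sampled points on the Kr/Pc curves). Shapes and values are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((5000, 40))   # coreflood features
y = rng.random((5000, 20))   # discretized Kr/Pc curve values

# 70% training, 15% validation, 15% blind test, as described in the abstract
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=42)

# Stand-in regressor; the study's actual ML algorithm is not specified here
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("validation R2:", model.score(X_val, y_val))
print("blind-test R2:", model.score(X_test, y_test))
```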


2021 ◽  
Author(s):  
Marni Mack ◽  
Argo Easston

In the United States, sepsis, the body's response to infection in a typically sterile circulation, is a leading cause of death (1). To assess the primary transcriptional alterations associated with each illness state, I utilized a microarray data set from a cohort of thirty-one individuals with septic shock or systemic inflammatory response syndrome (SIRS) (2). At the transcriptional level, I discovered that the granulocytes of patients with SIRS were similar to those of patients with septic shock. For numerous genes expressed in the granulocyte, SIRS showed an "intermediate" gene expression state between that of control patients and that of septic shock patients. The discovery of the most differentially expressed genes in the granulocytic immune cells of patients with septic shock might aid the development of new therapies or diagnostics for an illness with a 14.7% to 29.9% in-hospital death rate despite decades of study (1).
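A minimal sketch of one common way to rank differentially expressed genes (per-gene t-tests with Benjamini-Hochberg correction) is shown below; the expression matrix and group labels are simulated placeholders rather than the cohort data, and the original analysis may have used a different method.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical expression matrix: rows = genes, columns = samples, with group
# labels for SIRS vs septic shock. Values are simulated, not the cohort data.
rng = np.random.default_rng(1)
expr = rng.normal(size=(10000, 31))            # log2 expression values
groups = np.array([0] * 16 + [1] * 15)         # 0 = SIRS, 1 = septic shock

# Per-gene two-sample t-test, then false discovery rate control
t, p = stats.ttest_ind(expr[:, groups == 1], expr[:, groups == 0], axis=1)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

top = np.argsort(p_adj)[:20]   # candidate most differentially expressed genes
print(top, p_adj[top])
```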


Author(s):  
John S. Lapinski

This chapter introduces a new measure of legislative accomplishment. Understanding lawmaking requires moving beyond studying political behavior in Congress alone and beyond a complete empirical reliance on roll call votes. Moreover, legislative behavior and legislative outputs must be studied in tandem to gain a proper understanding of the lawmaking process in the United States. Although the idea of studying important lawmaking across time is not controversial, constructing an appropriate measure is not a trivial exercise. The chapter constructs a comprehensive lawmaking data set that provides measures of legislative accomplishment at the aggregate level as well as by specific policy issue areas for a 118-year period. It also explains the construction of Congress-by-Congress measures of legislative accomplishment, including measures broken down by the policy-coding schema.


Author(s):  
Sonia Gantman ◽  
Lorrie Metzger

We present a data cleaning project that utilizes real vendor master data from a large public university in the United States. Our main objective when developing this case was to identify the areas where students need guidance in order to apply a problem-solving approach to the project. This includes initial analysis of the data and the task at hand, planning for cleaning and testing activities, executing this plan, and communicating the results in a written report. We provide a data set with 29K records of vendor master data, and a subset of the same data with 800 records. The assignment has two parts, planning and the actual cleaning, each with its own deliverable. It can be used in many different courses and completed with almost any data analytics software. We provide suggested solutions and detailed solution notes for Excel and for Alteryx Designer.
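Although the case is written up for Excel and Alteryx Designer, the same planning-then-cleaning workflow can be prototyped in pandas. The sketch below assumes hypothetical file and column names (vendor_name, zip, tax_id) rather than the actual layout of the case's vendor master file.

```python
import pandas as pd

# Hypothetical file and column names; the case's vendor master file will differ.
vendors = pd.read_csv("vendor_master_29k.csv", dtype=str)

# Standardize text fields before testing for problems
vendors["vendor_name"] = vendors["vendor_name"].str.strip().str.upper()
vendors["zip"] = vendors["zip"].str.extract(r"(\d{5})", expand=False)

# Common vendor-master issues: duplicate vendors and missing tax IDs
exact_dupes = vendors[vendors.duplicated(subset=["vendor_name", "zip"], keep=False)]
missing_tax_id = vendors[vendors["tax_id"].isna() | (vendors["tax_id"].str.strip() == "")]

print(f"{len(exact_dupes)} rows share a name and ZIP; {len(missing_tax_id)} lack a tax ID")
vendors.drop_duplicates(subset=["vendor_name", "zip"]).to_csv("vendor_master_clean.csv", index=False)
```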

