Improved Quality Assurance for Historical Hourly Temperature and Humidity: Development and Application to Environmental Analysis

2004 ◽  
Vol 43 (11) ◽  
pp. 1722-1735 ◽  
Author(s):  
Daniel Y. Graybeal ◽  
Arthur T. DeGaetano ◽  
Keith L. Eggleston

Abstract: Historical hourly surface synoptic (airways) meteorological reports from around the United States have been digitized as part of the NOAA Climate Database Modernization Program. An important component is the improvement of quality assurance procedures for hourly meteorological data. This paper presents the development and testing of two components of a new quality assurance framework, as well as their application toward the construction, for the first time, of a 75-yr time series of apparent temperature. A pilot study indicated that the majority of flags raised by an existing algorithm represent single-hour blips rather than steps, and that frontal passages were being flagged incorrectly. A model focused on flagging blips is therefore developed; two blip-magnitude measures are compared, each defining a blip as a departure from temporally neighboring observations. Switches of dewpoint with dewpoint depression have also been noted among observer/digitizer errors, so an additional check was developed to screen for these cases, based on a relationship between dewpoint depression and diurnal temperature range. Tests using artificial replication of common errors indicate that the new blip model considerably outperforms traditional step models and flags an order of magnitude fewer frontal passages. Operational use of this check suggests that type-I and type-II error rates are similar in magnitude, at approximately 5%. More than two-fifths of known dewpoint-depression switch errors are caught. However, poor performance with systematic errors suggests that applying the depression-range check at a coarser temporal scale than hour to hour may be more fruitful.
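The distinction the abstract draws between single-hour blips and genuine steps (such as frontal passages) can be illustrated with a minimal sketch. This is not the authors' algorithm; the threshold and data are invented, and the rule shown (a blip departs from both temporal neighbors in the same direction, a step from only one side) is simply one plausible reading of that definition.

```python
# Hypothetical blip check: flag an observation that departs sharply from
# BOTH temporal neighbors (a single-hour spike), while leaving genuine
# step changes (e.g., a frontal passage) unflagged.

def flag_blips(series, threshold=10.0):
    """Return indices whose value departs from both neighbors by more
    than `threshold` in the same direction (a single-hour blip)."""
    flags = []
    for i in range(1, len(series) - 1):
        d_prev = series[i] - series[i - 1]
        d_next = series[i] - series[i + 1]
        # A blip departs from both neighbors in the same direction;
        # a step departs from only one side, so it is not flagged.
        if abs(d_prev) > threshold and abs(d_next) > threshold and d_prev * d_next > 0:
            flags.append(i)
    return flags

hourly_temp = [61, 62, 61, 95, 62, 63, 48, 47, 46]  # blip at index 3, step at index 6
print(flag_blips(hourly_temp))  # → [3]
```

A step model comparing only consecutive pairs would flag both index 3 and the frontal-passage-like drop at index 6, which is exactly the over-flagging behavior the abstract describes.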


2020 ◽  
Vol 49 (6) ◽  
pp. 1492-1498
Author(s):  
Brian P Griffin ◽  
Jennifer L Chandler ◽  
Jeremy C Andersen ◽  
Nathan P Havill ◽  
Joseph S Elkinton

Abstract: Winter moth, Operophtera brumata L. (Lepidoptera: Geometridae), causes widespread defoliation in both its native and introduced distributions. Invasive populations of winter moth are currently established in the United States and Canada, and pheromone-baited traps have been widely used to track its spread. Unfortunately, a native species, the Bruce spanworm, O. bruceata (Hulst), and O. bruceata × brumata hybrids respond to the same pheromone, complicating efforts to detect novel winter moth populations. Previously, differences in measurements of a part of the male genitalia called the uncus have been used to differentiate the species; however, the accuracy of these measurements has not been quantified using independent data. To establish morphological cutoffs and estimate the accuracy of uncus-based identifications, we compared morphological measurements with molecular identifications based on microsatellite genotyping. We found significant differences in some uncus measurements and that, in general, uncus measurements have low type I error rates (i.e., the probability of false positives for the presence of winter moth). However, uncus measurements had high type II error rates (i.e., the probability of false negatives for the presence of winter moth). Our results show that uncus measurements can be useful for preliminary identifications when monitoring the spread of winter moth, though accurate monitoring still requires molecular methods. As such, efforts to study the spread of winter moth into interior portions of North America should combine pheromone trapping with uncus measurements, while maintaining vouchers for molecular identification.
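Evaluating a morphological cutoff against molecular identifications, as the abstract describes, amounts to counting false positives and false negatives relative to the genotype-based truth. The sketch below is hypothetical: the measurement values, the cutoff, and the direction of the classification rule are all invented, not the paper's actual uncus cutoffs.

```python
# Hypothetical evaluation of a morphological cutoff against molecular
# ("true") identifications: type I rate = false positives / true negatives,
# type II rate = false negatives / true positives.

def error_rates(measurements, is_winter_moth, cutoff):
    """Classify a specimen as winter moth when its measurement exceeds
    `cutoff`; return (type_I_rate, type_II_rate) versus molecular truth."""
    false_pos = sum(m > cutoff and not t for m, t in zip(measurements, is_winter_moth))
    false_neg = sum(m <= cutoff and t for m, t in zip(measurements, is_winter_moth))
    negatives = sum(not t for t in is_winter_moth)
    positives = sum(is_winter_moth)
    return false_pos / negatives, false_neg / positives

vals  = [0.9, 1.1, 1.3, 1.6, 1.8, 2.0]           # hypothetical uncus widths (mm)
truth = [False, False, True, False, True, True]  # hypothetical molecular IDs
print(error_rates(vals, truth, cutoff=1.5))      # → (0.333..., 0.333...)
```

The abstract's pattern (low type I, high type II) would correspond to choosing the cutoff so that few non-winter-moth specimens exceed it, at the cost of missing more true winter moths.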



BMJ Open ◽  
2014 ◽  
Vol 4 (10) ◽  
pp. e006531 ◽  
Author(s):  
Sameer Parpia ◽  
Jim A Julian ◽  
Lehana Thabane ◽  
Chushu Gu ◽  
Timothy J Whelan ◽  
...  

Background: In non-inferiority trials of radiotherapy in patients with early stage breast cancer, it is inevitable that some patients will cross over from the experimental arm to the standard arm prior to initiation of any treatment due to complexities in treatment planning or subject preference. Although the intention-to-treat (ITT) analysis is the preferred approach for superiority trials, its role in non-inferiority trials is still under debate. This has led to the use of alternative approaches such as the per-protocol (PP) analysis or the as-treated (AT) analysis, despite the inherent biases of such approaches. Methods: Using simulations, we investigate the effect of 2%, 5% and 10% random and non-random crossovers prior to radiotherapy initiation on the ITT, PP, AT and the combination of ITT and PP analyses with respect to type I error in trials with time-to-event outcomes. We also evaluate bias and SE of the estimates from the ITT, PP and AT approaches. Results: The AT approach had the best performance in terms of type I error, but was anticonservative as non-random crossover increased. The ITT and PP approaches were anticonservative under all percentages of random and non-random crossover. Similarly, the lowest bias was seen with the AT approach; however, bias increased as the percentage of non-random crossover increased. The ITT and PP approaches had poor performance in terms of bias as crossovers increased. Conclusions: If minimal crossovers were to occur, we have shown that the AT approach has the lowest type I error rates and the smallest opportunity for bias. Results of trials with a high number of crossovers should be interpreted with caution, especially when crossover is non-random. Attempts to prevent crossovers should be maximised.





2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical type I error rates for the cut-offs 2/3 and 4/7 of Table 4, Table 5 and Table 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases the kind of representation of numeric values in SAS has resulted in wrong categorization due to a numeric representation error of differences. We corrected the simulation by using the round function of SAS in the calculation process with the same seeds as before. For Table 4 the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5 the value for the cut-off 4/7 changes from 0.144729 to 0.139626 and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6 the value for the cut-off 4/7 changes from 0.125528 to 0.122144 and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141 “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).” has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. There were only minor changes, all smaller than 0.03. These changes do not affect the interpretation of the results or our recommendations.
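The numeric-representation error the correction describes is a generic binary floating-point effect, not specific to SAS: a difference that is mathematically equal to a cut-off can land on the wrong side of it. A minimal illustration, with invented values (Python's round plays the role of SAS's round function here):

```python
# A difference that is mathematically zero can carry a tiny positive
# residue in binary floating point, pushing a value across a cut-off
# and causing a wrong categorization.

diff = 0.1 + 0.2 - 0.3      # mathematically 0.0
print(diff)                 # a tiny positive residue, not 0.0
print(diff > 0)             # → True: categorized on the wrong side

# Rounding before the comparison restores the intended categorization:
print(round(diff, 10) > 0)  # → False
```

Rounding to a fixed number of decimals before comparing against cut-offs like 2/3 or 4/7 is exactly the style of fix the erratum reports.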



Author(s):  
Timnit Gebru

This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.



2021 ◽  
pp. 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor from the literature on linear mixed-effects models (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has greater power than the indices of Wollack and Eckerly (2017) and Sinharay (2018). In addition, the new index has satisfactory Type I error rates. A real data example is also included.
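The general ingredient the abstract borrows from linear mixed-effects models can be sketched in isolation. In a random-intercept model, the empirical best linear unbiased predictor (EBLUP) of a group effect shrinks the group's observed mean deviation toward zero by a reliability weight. This is not the authors' EDI; the variances below are treated as known and all values are invented.

```python
# EBLUP of a group effect in a random-intercept model:
# shrink (group mean - grand mean) by tau^2 / (tau^2 + sigma^2 / n),
# where tau^2 is the between-group variance, sigma^2 the within-group
# variance, and n the group size.

def eblup(group_mean, grand_mean, n, tau2, sigma2):
    """Predicted group effect under a random-intercept model."""
    shrinkage = tau2 / (tau2 + sigma2 / n)
    return shrinkage * (group_mean - grand_mean)

# A large group with an elevated mean erasure score is shrunk less
# toward zero than a small group with the same raw deviation.
print(eblup(group_mean=5.0, grand_mean=2.0, n=100, tau2=1.0, sigma2=4.0))
print(eblup(group_mean=5.0, grand_mean=2.0, n=4,   tau2=1.0, sigma2=4.0))  # → 1.5
```

The shrinkage is what lets an aggregate-level index avoid over-flagging small groups whose mean erasure counts are noisy.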





2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract: Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods: We used simulated habitat and tracking data to compare some of the statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs) and their interplay with habitat and animal movement properties in terms of statistical hypothesis testing. Results: We demonstrated that only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs may slightly exceed, and SLRMs may frequently and strongly exceed, these levels. iSSMs appeared to have, on average, more robust and higher statistical power than ST-PPMs. Conclusions: Based on our results, we recommend the use of iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.



1996 ◽  
Vol 26 (2) ◽  
pp. 149-160 ◽  
Author(s):  
J. K. Belknap ◽  
S. R. Mitchell ◽  
L. A. O'Toole ◽  
M. L. Helms ◽  
J. C. Crabbe

