Performance of Quality Assurance Procedures for an Applied Climate Information System
2005 · Vol 22 (1) · pp. 105-112
Author(s): K. G. Hubbard, S. Goddard, W. D. Sorensen, N. Wells, T. T. Osugi

Abstract Valid data are required to make climate assessments and climate-related decisions. The objective of this paper is threefold: to introduce an explicit treatment of Type I and Type II errors in evaluating the performance of quality assurance procedures, to illustrate a quality control approach that can be tailored to regions and subregions, and to introduce a new spatial regression test. Threshold testing, step change, persistence, and spatial regression were included in a test of three decades of temperature and precipitation data at six weather stations representing different climate regimes. The magnitude of the thresholds was addressed in terms of the climatic variability, and multiple thresholds were tested to determine the number of Type I errors generated. In a separate test, random errors were seeded into the data; most Type II errors occurred for temperature errors in the range of ±1°C, which is comparable to the sensors' field accuracy. The study underscores the fact that precipitation is more difficult to quality control than temperature. The new spatial regression test presented here outperformed all the other tests, which together identified only a few errors beyond those identified by the spatial regression test.
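As an illustration of the kinds of checks the abstract describes (threshold, step change, persistence, and a spatial regression estimate built from neighboring stations), here is a minimal sketch in Python. The bounds, step limit, run length, flagging factor, and the unweighted least-squares form are illustrative assumptions, not the ACIS implementation described in the paper.

```python
import numpy as np

def threshold_check(temps, lo=-40.0, hi=50.0):
    """Flag values outside climatologically plausible bounds (deg C)."""
    return (temps < lo) | (temps > hi)

def step_change_check(temps, max_step=25.0):
    """Flag days whose change from the previous day exceeds max_step."""
    flags = np.zeros(len(temps), dtype=bool)
    flags[1:] = np.abs(np.diff(temps)) > max_step
    return flags

def persistence_check(temps, run_length=5):
    """Flag runs of identical values longer than run_length (a stuck sensor)."""
    flags = np.zeros(len(temps), dtype=bool)
    run = 1
    for i in range(1, len(temps)):
        run = run + 1 if temps[i] == temps[i - 1] else 1
        if run > run_length:
            flags[i - run + 1:i + 1] = True
    return flags

def spatial_regression_check(target, neighbors, f=3.0):
    """Estimate the target station from neighboring stations by least squares
    and flag days whose residual exceeds f residual standard deviations."""
    X = np.column_stack([np.ones(len(target))] + list(neighbors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return np.abs(resid) > f * resid.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = 20 + 8 * np.sin(np.linspace(0, 6, 365))   # one season of daily Tmax
    station = truth + rng.normal(0, 1.0, 365)
    station[100] += 12.0                               # seeded error
    neighbors = [truth + rng.normal(0, 1.0, 365) for _ in range(3)]
    print("threshold flags:  ", threshold_check(station).sum())
    print("step-change flags:", step_change_check(station).sum())
    print("spatial flags at days:",
          np.where(spatial_regression_check(station, neighbors))[0])
```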

2006 · Vol 7 (1) · pp. 29-35
Author(s): Charles S. Tapiero

The purpose of this paper is to provide a strategic (game) approach to Quality Assurance. Unlike previous approaches, which presume non-motivated sources of risk, we assume in this paper that risk may arise strategically from other parties' motivations, for example the supply risks faced by a producer-buyer. As a result, strategic quality assurance problems are formulated as a random payoff game, which we solve using the traditional approach to risk specification embedded in quantile risks (Type I and Type II errors in statistics, or producers' and consumers' risks). Technically, the approach consists of solving risk-constrained (random payoff) games involving strategic partners who are potentially in conflict. The approach is then applied to a number of problems, chiefly mutual sampling (quality assurance) between a buyer and a supplier, and strategic quality control in supply chains, where potential conflict and asymmetries of information and power are inherent parts of the operational problem. In such circumstances, contractual agreements may be violated if the parties do not apply strategic control tools to ensure that what was agreed is actually performed.
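The quantile-risk framing above corresponds to the producer's and consumer's risks of classical acceptance sampling: the probability of rejecting a good lot (Type I) and of accepting a bad lot (Type II). Below is a minimal, non-strategic sketch under binomial sampling; the plan (n, c) and the quality levels are illustrative assumptions, not values from the paper, which treats the sampling decisions as a game between the parties.

```python
from math import comb

def accept_prob(p, n, c):
    """Probability of accepting a lot with defect rate p under a plan that
    inspects n items and accepts the lot if at most c are defective."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative single-sampling plan and quality levels (assumed).
n, c = 50, 2
aql, ltpd = 0.01, 0.10        # acceptable / rejectable defect rates

producer_risk = 1 - accept_prob(aql, n, c)   # Type I: reject a good lot
consumer_risk = accept_prob(ltpd, n, c)      # Type II: accept a bad lot
print(f"producer's risk (alpha) = {producer_risk:.3f}")
print(f"consumer's risk (beta)  = {consumer_risk:.3f}")
```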


2020 · pp. 37-55
Author(s): A. E. Shastitko, O. A. Markova

Digital transformation has led to changes in the business models of traditional players in existing markets. Moreover, new entrants and new markets have appeared, in particular platforms and multisided markets. The emergence and rapid development of platforms are driven primarily by so-called indirect network externalities. In this regard, a question arises: are the existing instruments of competition law enforcement and market analysis still relevant when analyzing markets with digital platforms? This paper discusses the advantages and disadvantages of various tools for defining markets with platforms. In particular, we identify the features of the SSNIP test when it is applied to markets with platforms. Furthermore, we analyze adjustments to tests for platform market definition in terms of possible Type I and Type II errors. It turns out that, to reduce the likelihood of Type I and Type II errors when applying market definition techniques to markets with platforms, one should consider the type of platform analyzed: transaction platforms without pass-through and non-transaction matching platforms should be treated as players in a multisided market, whereas non-transaction platforms should be analyzed as players in several interrelated markets. However, if the platform is allowed to adjust prices, an additional challenge emerges: the regulator and companies may manipulate the results of the SSNIP test by applying different models of competition.
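For reference, a minimal sketch of the one-sided SSNIP calculation discussed above: would a hypothetical monopolist profit from a small but significant non-transitory price increase, given an assumed constant own-price elasticity? The demand form, elasticity, and margin are illustrative assumptions, and the paper's point is precisely that such a single-sided calculation can mislead for platforms with indirect network externalities.

```python
def ssnip_profitable(price, quantity, marginal_cost, elasticity, increase=0.05):
    """Compare profit before and after an `increase` (e.g. 5%) price rise,
    using a constant-elasticity approximation of demand (elasticity < 0)."""
    new_price = price * (1 + increase)
    new_quantity = quantity * (new_price / price) ** elasticity
    profit_before = (price - marginal_cost) * quantity
    profit_after = (new_price - marginal_cost) * new_quantity
    return profit_after > profit_before, profit_after - profit_before

# If the price rise is profitable, the candidate market need not be widened.
ok, delta = ssnip_profitable(price=10.0, quantity=1000.0,
                             marginal_cost=6.0, elasticity=-2.0)
print("5% increase profitable:", ok, "| profit change:", round(delta, 2))
```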


2018 · Vol 41 (1) · pp. 1-30
Author(s): Chelsea Rae Austin

ABSTRACT While not explicitly stated, many tax avoidance studies seek to investigate tax avoidance that is the result of firms' deliberate actions. However, measures of firms' tax avoidance can also be affected by factors outside the firms' control—tax surprises. This study examines potential complications caused by tax surprises when measuring tax avoidance by focusing on one specific type of surprise tax savings—the unanticipated tax benefit from employees' exercise of stock options. Because the cash effective tax rate (ETR) includes the benefits of this tax surprise, the cash ETR mismeasures firms' deliberate tax avoidance. The analyses conducted show this mismeasurement is material and can lead to both Type I and Type II errors in studies of deliberate tax avoidance. Suggestions to aid researchers in mitigating these concerns are also provided.
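To make the measurement issue concrete, a minimal sketch: the cash ETR divides cash taxes paid by pretax income, so an unanticipated tax benefit from employee option exercises lowers the measured rate even when the firm took no deliberate action. All figures here are illustrative assumptions, not data from the study.

```python
def cash_etr(cash_taxes_paid, pretax_income):
    """Cash effective tax rate = cash taxes paid / pretax book income."""
    return cash_taxes_paid / pretax_income

pretax_income = 1_000.0         # assumed pretax book income
taxes_before_surprise = 250.0   # cash taxes absent the option benefit (assumed)
option_tax_benefit = 50.0       # unanticipated benefit from option exercises (assumed)

measured = cash_etr(taxes_before_surprise - option_tax_benefit, pretax_income)
deliberate = cash_etr(taxes_before_surprise, pretax_income)
print(f"measured cash ETR:         {measured:.1%}")    # looks like more avoidance
print(f"cash ETR without surprise: {deliberate:.1%}")  # the deliberate component
```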


1999 · Vol 18 (1) · pp. 37-54
Author(s): Andrew J. Rosman, Inshik Seol, Stanley F. Biggs

The effect of different task settings within an industry on auditor behavior is examined for the going-concern task. Using an interactive computer process-tracing method, experienced auditors from four Big 6 accounting firms examined cases based on real data that differed on two dimensions of task settings: stage of organizational development (start-up and mature) and financial health (bankrupt and nonbankrupt). Auditors made judgments about each entity's ability to continue as a going concern and, if they had substantial doubt about continued existence, they listed evidence they would seek as mitigating factors. There are seven principal results. First, information acquisition and, by inference, problem representations were sensitive to differences in task settings. Second, financial mitigating factors dominated nonfinancial mitigating factors in both start-up and mature settings. Third, auditors' behavior reflected configural processing. Fourth, categorizing information into financial and nonfinancial dimensions was critical to understanding how auditors' information acquisition and, by inference, problem representations differed across settings. Fifth, Type I errors (determining that a healthy company is a going-concern problem) differed from correct judgments in terms of information acquisition, although Type II errors (determining that a problem company is viable) did not. This may indicate that Type II errors are primarily due to deficiencies in other stages of processing, such as evaluation. Sixth, auditors who were more accurate tended to follow flexible strategies for financial information acquisition. Finally, accurate performance in the going-concern task was found to be related to acquiring (1) fewer information cues, (2) proportionately more liquidity information and (3) nonfinancial information earlier in the process.


PEDIATRICS · 1973 · Vol 51 (4) · pp. 753-753
Author(s): Emperor Watcher, C. A. S.

Was the layout editor making a sly comment on the present state of American pediatrics by juxtaposing Mrs. Seymour's letter with the articles concerning Child Health Associates in the January issue (Pediatrics 51:1-16, 1973)? If the word "pediatrician" is substituted for "surgeon" in the 1754 letter, it has a surprisingly modern ring. One gets the impression from reading the four articles that CHAs have demonstrated that they are capable of doing good when compared with practicing pediatricians, but it is not clear whether evidence has been collected to address the question of whether the associates cause less harm (in testing hypotheses one is liable to two kinds of error, and the relationship between Type I and Type II errors is the basis of the Neyman-Pearson theory).


1989 · Vol 25 (3) · pp. 451-454
Author(s): Joel Berger, Michael D. Kock
Keyword(s): Type I, Type II, The Real

2019 · Vol 8 (4) · pp. 1849-1853

Nowadays many people seek bank loans for their needs, but banks cannot extend loans to everyone, so they use various measures to identify eligible customers. Sensitivity and specificity are widely used to measure the performance of categorical classifiers in medicine and, to a lesser extent, in econometrics. Even with such measures, granting loans to customers who cannot repay and denying loans to customers who could repay lead to Type I and Type II errors. To minimize these errors, this study explains, first, how to judge whether sensitivity is large or small and, second, how to set benchmarks for the forecasting model using fuzzy analysis based on fuzzy weights, which is then compared with the sensitivity analysis.
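A minimal sketch of the confusion-matrix measures mentioned above, with a simple weighted combination standing in for the fuzzy-weight idea; the counts and weights are illustrative assumptions, not the paper's fuzzy analysis, and which misclassification counts as Type I versus Type II depends on how the null hypothesis is framed.

```python
def confusion_rates(tp, fn, fp, tn):
    """Sensitivity (true positive rate) and specificity (true negative rate),
    with 'positive' meaning a customer predicted able to repay. Lending to a
    defaulter (fp) and refusing a customer who would have repaid (fn) are the
    two error types discussed above."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative loan-screening outcomes (assumed counts).
sens, spec = confusion_rates(tp=420, fn=80, fp=60, tn=440)

# Assumed weights standing in for fuzzy-derived weights.
w_sens, w_spec = 0.6, 0.4
score = w_sens * sens + w_spec * spec
print(f"sensitivity={sens:.3f}  specificity={spec:.3f}  weighted score={score:.3f}")
```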

