The Effect of Noisy Fair Value Measures on Bank Capital Adequacy Ratios

2013 ◽  
Vol 27 (4) ◽  
pp. 693-710 ◽  
Author(s):  
Adrian Valencia ◽  
Thomas J. Smith ◽  
James Ang

SYNOPSIS Fair value accounting has been a hotly debated topic during the recent financial crisis. Supporters argue that fair values are more relevant to investors, while detractors point to the measurement error in the estimation of reported fair values to question their reliability. This study examines how noise in reported fair values impacts bank capital adequacy ratios. If measurement error causes reported capital levels to deviate from fundamental levels, then regulators could misidentify a financially healthy bank as troubled (type I error) or a financially troubled bank as safe (type II error), leading to suboptimal resource allocations for banks, regulators, and investors. We use a Monte Carlo simulation to generate our data, and find that while noise leads to both type I and type II errors around key Federal Deposit Insurance Corporation (FDIC) capital adequacy benchmarks, the type I error dominates. Specifically, noise is associated with 2.58 (2.60) [1.092], 5.67 (6.44) [1.94], and 10.60 (26.83) [3.423] times more type I errors than type II errors around the Tier 1 (Total) [Leverage] well-capitalized, adequately capitalized, and significantly undercapitalized benchmarks, respectively. Economically, our results suggest that noise can lead to inefficient allocation of resources on the part of regulators (increased monitoring costs) and banks (increased compliance costs). JEL Classifications: D52; M41; C15; G21.
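The mechanism described above can be illustrated with a minimal Monte Carlo sketch. The parameter values below (threshold, ratio distribution, noise scale) are purely illustrative assumptions, not the paper's calibration: a fundamental capital ratio plus zero-mean measurement noise is compared against a single regulatory benchmark, and the two misclassification rates are tallied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's calibration).
N = 100_000
BENCHMARK = 0.06          # illustrative "well-capitalized" Tier 1 cutoff
true_ratio = rng.normal(loc=0.08, scale=0.02, size=N)   # fundamental capital ratios
noise = rng.normal(loc=0.0, scale=0.01, size=N)         # fair-value measurement error
reported_ratio = true_ratio + noise

truly_healthy = true_ratio >= BENCHMARK
looks_troubled = reported_ratio < BENCHMARK

# Type I error: a fundamentally healthy bank is misclassified as troubled.
type_1 = np.mean(truly_healthy & looks_troubled)
# Type II error: a fundamentally troubled bank is misclassified as safe.
type_2 = np.mean(~truly_healthy & ~looks_troubled)

print(f"Type I rate:  {type_1:.4f}")
print(f"Type II rate: {type_2:.4f}")
print(f"Type I / Type II ratio: {type_1 / type_2:.2f}")
```

With most banks fundamentally above the benchmark, as assumed here, there are simply more opportunities for noise to push a healthy bank below the cutoff than the reverse, which is one intuition for why the type I error can dominate.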

Author(s):  
Narayan Prasad Nagendra ◽  
Gopalakrishnan Narayanamurthy ◽  
Roger Moser

Abstract Farmers submit claims to insurance providers when affected by sowing/planting risk, standing crop risk, post-harvest risk, and localized calamities risk. Decision making for the settlement of claims submitted by farmers has been observed to involve type-I and type-II errors. The existence of these errors reduces confidence in agri-insurance providers and in government in general, as the system fails to serve needy farmers (type-I error) and sometimes serves ineligible farmers (type-II error). Gaps in the underlying data, methods, and timelines currently used, including anomalies in the locational data used in crop sampling, the inclusion of invalid data points in computation, the estimation of crop yield, and the determination of the total sown area, create barriers to executing indemnity payments for small and marginal farmers in India. In this paper, we present a satellite big data analytics based case study of a region in India and explain how anomalies in the legacy processes were addressed to minimize type-I and type-II errors and thereby make ethical decisions when approving farmer claims. Our study demonstrates what big data analytics can offer to increase the ethicality of decisions and the confidence with which they are made, especially when the beneficiaries of those decisions are poor and powerless.
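As a small illustration of the two error types in this setting, the sketch below tallies them from hypothetical claim-level data (the field names and values are invented for illustration, not taken from the case study): ground-truth eligibility is compared with the approval decision actually taken.

```python
import numpy as np
import pandas as pd

# Hypothetical claim-level data: 'eligible' is the ground truth established
# after verification; 'approved' is the decision actually taken.
claims = pd.DataFrame({
    "eligible": [True, True, False, True, False, False, True, False],
    "approved": [True, False, True, True, False, True, False, False],
})

# Type I error (in the paper's usage): a needy, eligible farmer is denied.
type_1_rate = np.mean(claims["eligible"] & ~claims["approved"])
# Type II error: an ineligible claim is paid out.
type_2_rate = np.mean(~claims["eligible"] & claims["approved"])

print(f"Type I rate (eligible but denied):  {type_1_rate:.2f}")
print(f"Type II rate (ineligible but paid): {type_2_rate:.2f}")
```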


2001 ◽  
Vol 13 (1) ◽  
pp. 63-84 ◽  
Author(s):  
Susan C. Borkowski ◽  
Mary Jeanne Welsh ◽  
Qinke (Michael) Zhang

Attention to statistical power and effect size can improve the design and reporting of behavioral accounting research. Three accounting journals representative of current empirical behavioral accounting research are analyzed for their power (1−β), or control of Type II errors (β), and compared to research in other disciplines. Given this study's findings, additional attention should be directed to the adequacy of sample sizes and to study design to ensure sufficient power when Type I error is controlled at α = .05 as a baseline. We do not suggest replacing traditional significance testing, but rather augmenting it with the reporting of β to complement and interpret the relevance of a reported α in any given study. In addition, presenting results in alternative formats, such as those suggested in this study, will enhance the current reporting of significance tests. In turn, this will allow the reader a richer understanding of, and increased trust in, a study's results and implications.
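The relationship between α, β, effect size, and sample size can be made concrete with a simulation-based power estimate. The sketch below is a minimal example assuming a two-sample t-test design with an illustrative effect size and sample size (neither taken from the study); power, i.e. 1−β, is estimated as the fraction of simulated samples in which the null is rejected at α = .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

ALPHA = 0.05        # Type I error rate controlled at the usual baseline
EFFECT_SIZE = 0.5   # assumed standardized mean difference (Cohen's d)
N_PER_GROUP = 30    # assumed sample size per condition
N_SIMS = 10_000

rejections = 0
for _ in range(N_SIMS):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treated = rng.normal(EFFECT_SIZE, 1.0, N_PER_GROUP)
    _, p = stats.ttest_ind(treated, control)
    rejections += p < ALPHA

power = rejections / N_SIMS   # estimated 1 - beta
print(f"Estimated power: {power:.2f}  (beta = {1 - power:.2f})")
```

Under these assumed values the estimated power falls well short of the 0.80 convention, which is exactly the kind of design shortfall that reporting β alongside α would make visible.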


2015 ◽  
Vol 15 (2) ◽  
Author(s):  
Kong-Pin Chen ◽  
Tsung-Sheng Tsai

Abstract Judicial torture to extract information or to elicit a confession was a common practice in pre-modern societies, both in the East and the West. This paper proposes a positive theory of judicial torture. It is shown that torture reflects the magistrate's attempt to balance type I and type II errors in decision-making: by forcing the guilty to confess with a higher probability than the innocent, torture decreases the type I error at the cost of the type II error. Moreover, there is a non-monotonic relationship between the superiority of torture and the informativeness of investigation: when investigation is relatively uninformative, an improvement in investigative technology actually lends an advantage to torture, making it even more attractive to magistrates; however, once technological progress reaches a certain threshold, the advantage of torture is weakened, so that a judicial system based on torture becomes inferior to one based on evidence. This result can explain the historical development of the judicial system.
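A stylized numerical sketch of the trade-off described above is given below. The probabilities are invented for illustration and the error convention is inferred from the abstract (type I: a guilty defendant escapes punishment; type II: an innocent defendant is punished); this is not the paper's formal model.

```python
# Illustrative probabilities, not the paper's model.
P_GUILTY = 0.5             # assumed prior share of guilty defendants
P_CONFESS_GUILTY = 0.9     # assumed confession probability of the guilty under torture
P_CONFESS_INNOCENT = 0.3   # assumed (lower) confession probability of the innocent

# Decision rule: punish if and only if the defendant confesses.
type_1 = P_GUILTY * (1 - P_CONFESS_GUILTY)     # guilty but no confession -> goes free
type_2 = (1 - P_GUILTY) * P_CONFESS_INNOCENT   # innocent but confesses -> punished

print(f"Type I error (guilty unpunished):  {type_1:.2f}")
print(f"Type II error (innocent punished): {type_2:.2f}")
```

Because the guilty confess more readily than the innocent, raising the overall confession probability shrinks the first term while inflating the second, which is the balancing act the theory attributes to the magistrate.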


Genetics ◽  
1994 ◽  
Vol 138 (3) ◽  
pp. 871-881 ◽  
Author(s):  
R C Jansen

Abstract Although the interval mapping method is widely used for mapping quantitative trait loci (QTLs), it is not very well suited for mapping multiple QTLs. Here, we present the results of a computer simulation studying the application of exact and approximate models for multiple QTLs. In particular, we focus on an automatic two-stage procedure in which, in the first stage, "important" markers are selected by multiple regression on markers. In the second stage a QTL is moved along the chromosomes, using the preselected markers as cofactors, except for the markers flanking the interval under study. A refined procedure for cases with large numbers of marker cofactors is described. Our approach will be called MQM mapping, where MQM is an acronym for "multiple-QTL models" as well as for "marker-QTL-marker." Our simulation work demonstrates the great advantage of MQM mapping over interval mapping in reducing the chance of a type I error (i.e., a QTL is indicated at a location where no QTL is actually present) and in reducing the chance of a type II error (i.e., a QTL is not detected).
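A much-simplified sketch of the two-stage idea follows. The data are simulated toy values, marker selection uses a simple p-value screen rather than the paper's procedure, the "scan" tests marker positions rather than a full interval scan, and "flanking" is approximated by excluding adjacent markers; it is meant only to convey the cofactor logic, not to reproduce MQM mapping.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated toy data: 200 individuals, 20 markers coded 0/1, true QTLs at markers 4 and 12.
n, m = 200, 20
X = rng.integers(0, 2, size=(n, m)).astype(float)
y = 1.0 * X[:, 4] + 0.8 * X[:, 12] + rng.normal(0, 1, n)

# Stage 1: multiple regression on all markers; keep "important" ones by a p-value screen.
fit_all = sm.OLS(y, sm.add_constant(X)).fit()
cofactors = [j for j in range(m) if fit_all.pvalues[j + 1] < 0.02]

# Stage 2: scan each marker position, using the preselected markers as cofactors,
# except those adjacent to the position under study (a stand-in for flanking markers).
for j in range(m):
    keep = [c for c in cofactors if abs(c - j) > 1]
    design = sm.add_constant(np.column_stack([X[:, j], X[:, keep]])) if keep \
        else sm.add_constant(X[:, [j]])
    p = sm.OLS(y, design).fit().pvalues[1]   # p-value for the scanned position
    flag = "QTL?" if p < 0.001 else ""
    print(f"marker {j:2d}: p = {p:.4g} {flag}")
```

Absorbing the other QTLs' effects through the cofactors is what sharpens the scan: residual variance falls (fewer type II errors) and spurious peaks caused by linked or unlinked QTLs are suppressed (fewer type I errors).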


2016 ◽  
Vol 32 (2) ◽  
pp. 155-181 ◽  
Author(s):  
Bikki Jaggi ◽  
Leo Tang

We document in this study that a lack of soft information resulting from greater distance between a firm's headquarters and the rating agency's headquarters leads to more errors in bond ratings, reflected in Type I and Type II errors for missed defaults and false warnings, respectively. Our results show that for each additional 100 km between a firm's and the rating agency's headquarters, the likelihood of missed defaults (Type I error) increases by 4.9% and that of false warnings (Type II error) by 2.1%. In addition, our analyses show that downgrades are less timely for firms located further from the rating agency's headquarters. The results also show that missed defaults are especially frequent, and downgrades especially untimely, for firms with greater complexity, lower analyst following, and lower accessibility to the rating agency headquarters. Although analysts adjust their ratings downward when soft information is lacking as a result of longer distance, their adjustments do not fully compensate for the lack of soft information.


1973 ◽  
Vol 37 (2) ◽  
pp. 647-652 ◽  
Author(s):  
Richard L. Rogers

The cognitive dimension called category width may be related to decision-making behavior in the following way: broad categorizers tend to make more Type I errors, while narrow categorizers are more inclined to make Type II errors. This contention was investigated using measures of decision-making performance on an auditory detection task. No correlations were obtained between category-width measures and decision measures for the 43 female Ss. For the 38 male Ss, however, the correlation between category-width score and Type I error rate was significant, as predicted, as was the correlation between category-width scores and overall correct decision rate. For males, a substantial relationship was also found between quantitative aptitude and both category-width scores and the factor scores for the two category-width factors. For women, quantitative aptitude did not appear to be related to category-width measures.


2020 ◽  
pp. 37-55 ◽  
Author(s):  
A. E. Shastitko ◽  
O. A. Markova

Digital transformation has changed the business models of traditional players in existing markets. Moreover, new entrants and new markets have appeared, in particular platforms and multisided markets. The emergence and rapid development of platforms are driven primarily by so-called indirect network externalities. This raises the question of whether the existing instruments of competition law enforcement and market analysis remain relevant when analyzing markets with digital platforms. This paper discusses the advantages and disadvantages of various tools for defining markets with platforms. In particular, we identify the features of the SSNIP test when it is applied to markets with platforms. Furthermore, we analyze adjustments to tests for platform market definition in terms of possible type I and type II errors. Overall, it turns out that to reduce the likelihood of type I and type II errors when applying market definition techniques to markets with platforms, one should consider the type of platform analyzed: transaction platforms without pass-through and non-transaction matching platforms should be treated as players in a multisided market, whereas non-transaction platforms should be analyzed as players in several interrelated markets. However, if the platform is allowed to adjust prices, an additional challenge emerges: the regulator and companies may manipulate the results of the SSNIP test by applying different models of competition.
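For readers unfamiliar with the mechanics behind a SSNIP test, the sketch below shows the standard critical-loss arithmetic for an ordinary one-sided market, with invented numbers; it deliberately ignores the platform-specific adjustments and indirect network effects the paper is concerned with, and is included only to make the baseline test concrete.

```python
# Standard (one-sided) critical-loss check behind a SSNIP test; illustrative numbers.
# Critical loss = X / (X + m), where X is the hypothetical price increase
# and m is the price-cost margin.

X = 0.05   # 5% hypothetical price increase
m = 0.40   # assumed price-cost margin

critical_loss = X / (X + m)
actual_loss = 0.15   # assumed share of sales lost after the price increase

print(f"Critical loss: {critical_loss:.1%}")
if actual_loss > critical_loss:
    print("Price rise unprofitable: broaden the candidate market definition.")
else:
    print("Price rise profitable: the candidate market can stand on its own.")
```

In a multisided setting, a price rise on one side also erodes participation and profits on the other side, which is why applying this one-sided arithmetic naively to platforms invites the type I and type II errors the paper analyzes.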


2018 ◽  
Vol 41 (1) ◽  
pp. 1-30 ◽  
Author(s):  
Chelsea Rae Austin

ABSTRACT While not explicitly stated, many tax avoidance studies seek to investigate tax avoidance that is the result of firms' deliberate actions. However, measures of firms' tax avoidance can also be affected by factors outside the firms' control—tax surprises. This study examines potential complications caused by tax surprises when measuring tax avoidance by focusing on one specific type of surprise tax savings—the unanticipated tax benefit from employees' exercise of stock options. Because the cash effective tax rate (ETR) includes the benefits of this tax surprise, the cash ETR mismeasures firms' deliberate tax avoidance. The analyses conducted show this mismeasurement is material and can lead to both Type I and Type II errors in studies of deliberate tax avoidance. Suggestions to aid researchers in mitigating these concerns are also provided.
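A small numerical illustration of the mismeasurement follows, using entirely hypothetical figures: the cash ETR is cash taxes paid divided by pretax income, so an unanticipated tax benefit from option exercises lowers the observed rate even though the firm took no additional deliberate action.

```python
# Hypothetical figures illustrating how a surprise tax benefit distorts the cash ETR.
pretax_income = 1_000.0
cash_taxes_planned = 300.0      # taxes reflecting the firm's deliberate tax positions
option_exercise_benefit = 50.0  # unanticipated benefit from employees exercising options

cash_etr_deliberate = cash_taxes_planned / pretax_income
cash_etr_observed = (cash_taxes_planned - option_exercise_benefit) / pretax_income

print(f"Cash ETR from deliberate actions only: {cash_etr_deliberate:.1%}")  # 30.0%
print(f"Cash ETR actually observed:            {cash_etr_observed:.1%}")   # 25.0%
# The observed rate overstates deliberate avoidance, which can produce Type I or
# Type II errors in tests that treat a low cash ETR as evidence of avoidance.
```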

