Optimal estimators of the position of a mass extinction when recovery potential is uniform

Paleobiology ◽  
2009 ◽  
Vol 35 (3) ◽  
pp. 447-459 ◽  
Author(s):  
Steve C. Wang ◽  
David J. Chudzicki ◽  
Philip J. Everson

Numerous methods have been developed to estimate the position of a mass extinction boundary while accounting for the incompleteness of the fossil record. Here we describe the point estimator and confidence interval for the extinction that are optimal under the assumption of uniform preservation and recovery potential, and independence among taxa. First, one pools the data from all taxa into one combined “supersample.” Next, one applies methods proposed by Strauss and Sadler (1989) for a single taxon. This gives the optimal point estimator in the sense that it has the smallest variance among all possible unbiased estimators. The corresponding confidence interval is optimal in the sense that it has the shortest average width among all possible intervals that are invariant to measurement scale. These optimality properties hold even among methods that have not yet been discovered. Using simulations, we show that the optimal estimators substantially improve upon the performance of other existing methods. Because the assumptions of uniform recovery and independence among taxa are strong ones, it is important to assess to what extent they are satisfied by the data. We demonstrate the use of probability plots for this purpose. Finally, we use simulations to explore the sensitivity of the optimal point estimator and confidence interval to nonuniformity and lack of independence, and we compare their performance under these conditions with existing methods. We find that nonuniformity strongly biases the point estimators for all methods studied, inflates their standard errors, and degrades the coverage probabilities of confidence intervals. Lack of independence has less effect on the accuracy of point estimates as long as recovery potential is uniform, but it, too, inflates the standard errors and degrades confidence interval coverage probabilities.
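The pooled Strauss–Sadler estimator is simple to compute. A minimal sketch in Python (not the authors' code; the function name and toy data are illustrative), assuming occurrence heights are measured above a known base of the section:

```python
import numpy as np

def pooled_extinction_estimate(heights, alpha=0.05):
    """Point estimate and one-sided confidence bound for an extinction
    horizon under uniform recovery, in the style of Strauss & Sadler (1989).

    heights: stratigraphic positions of all fossil occurrences, pooled
    across taxa into one "supersample", measured above a known base.
    """
    y = np.max(heights)              # highest pooled occurrence
    n = len(heights)
    point = y * (n + 1) / n          # unbiased point estimate of the horizon
    upper = y * alpha ** (-1 / n)    # (1 - alpha) one-sided upper bound
    return point, upper

# Toy example: 50 pooled occurrences uniform below a true horizon at 100
rng = np.random.default_rng(1)
pt, ub = pooled_extinction_estimate(rng.uniform(0, 100, size=50))
```

With pooled data the point estimate always lies above the highest find, and the confidence bound widens as the per-taxon sample sizes shrink.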

2021 ◽  
Vol 23 ◽  
Author(s):  
Peyton Cook

This article is intended to help students understand the concept of a coverage probability involving confidence intervals. Mathematica is used as a language for describing an algorithm to compute the coverage probability for a simple confidence interval based on the binomial distribution. Then, higher-level functions are used to compute probabilities of expressions in order to obtain coverage probabilities. Several examples are presented: two confidence intervals for a population proportion based on the binomial distribution, an asymptotic confidence interval for the mean of the Poisson distribution, and an asymptotic confidence interval for a population proportion based on the negative binomial distribution.
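The article's computations are in Mathematica; the same exact-coverage calculation can be sketched in Python (the function name and the choice of the Wald interval are this sketch's assumptions, not the article's):

```python
from math import comb, sqrt

def wald_coverage(n, p, z=1.96):
    """Exact coverage probability of the Wald interval for a binomial
    proportion: sum the binomial probabilities of every outcome x whose
    interval p_hat +/- z*sqrt(p_hat*(1-p_hat)/n) contains the true p."""
    cover = 0.0
    for x in range(n + 1):
        phat = x / n
        half = z * sqrt(phat * (1 - phat) / n)
        if phat - half <= p <= phat + half:
            cover += comb(n, x) * p**x * (1 - p)**(n - x)
    return cover

# The Wald interval is well known to under-cover for small n and
# extreme p: wald_coverage(20, 0.1) falls well below the nominal 0.95
```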


Author(s):  
Christiana Petrou

This case examines the issue of compliance by patients at the University of Louisville School of Dentistry (ULSD). The focus is defining compliance and constructing a measurement scale. Confidence interval estimation and bootstrapping were explored to assist with the allocation of patients to compliance levels. Patients who were within the 95% confidence interval of the median for visit intervals at least 80% of the time were defined as fully compliant, with decreasing levels of compliance as the percentage decreases. Research on compliance has developed over the past few years, but much work remains to be done. A new measure of compliance could assist in understanding patients’ needs and concerns beyond the obvious financial, fear-related, and psychological reasons, as well as shedding light on how dentists operate and how that affects compliance. A new way of defining and measuring compliance is proposed in this research study.


Paleobiology ◽  
2018 ◽  
Vol 44 (2) ◽  
pp. 199-218 ◽  
Author(s):  
Steve C. Wang ◽  
Ling Zhong

The Signor–Lipps effect states that even a sudden mass extinction will invariably appear gradual in the fossil record, due to incomplete fossil preservation. Most previous work on the Signor–Lipps effect has focused on testing whether taxa in a mass extinction went extinct simultaneously or gradually. However, many authors have proposed scenarios in which taxa went extinct in distinct pulses. Little methodology has been developed for quantifying characteristics of such pulsed extinction events. Here we introduce a method for estimating the number of pulses in a mass extinction, based on the positions of fossil occurrences in a stratigraphic section. Rather than using a hypothesis test and assuming simultaneous extinction as the default, we reframe the question by asking what number of pulses best explains the observed fossil record. Using a two-step algorithm, we are able to estimate not just the number of extinction pulses but also a confidence level or posterior probability for each possible number of pulses. In the first step, we find the maximum likelihood estimate for each possible number of pulses. In the second step, we calculate the Akaike information criterion and Bayesian information criterion weights for each possible number of pulses, and then apply a k-nearest neighbor classifier to these weights. This method gives us a vector of confidence levels for the number of extinction pulses—for instance, we might be 80% confident that there was a single extinction pulse, 15% confident that there were two pulses, and 5% confident that there were three pulses. Equivalently, we can state that we are 95% confident that the number of extinction pulses is one or two. Using simulation studies, we show that the method performs well in a variety of situations, although it has difficulty in the case of decreasing fossil recovery potential, and it is most effective for small numbers of pulses unless the sample size is large. We demonstrate the method using a data set of Late Cretaceous ammonites.
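The model-weighting step can be illustrated with standard Akaike weights (a sketch only; the paper's full procedure also trains a k-nearest neighbor classifier on simulated sections, which is not reproduced here, and the AIC scores below are made up for illustration):

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: relative support for each candidate model
    (here, each candidate number of extinction pulses)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()               # normalize to sum to 1

# e.g. hypothetical AIC scores for 1-, 2-, and 3-pulse models of a section
w = akaike_weights([210.3, 212.1, 215.8])
# w[0] is largest: the 1-pulse model carries most of the weight
```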


Paleobiology ◽  
2016 ◽  
Vol 42 (2) ◽  
pp. 240-256 ◽  
Author(s):  
Steve C. Wang ◽  
Philip J. Everson ◽  
Heather Jianan Zhou ◽  
Dasol Park ◽  
David J. Chudzicki

Numerous methods exist for estimating the true stratigraphic range of a fossil taxon based on the stratigraphic positions of its fossil occurrences. Many of these methods require the assumption of uniform fossil recovery potential—that fossils are equally likely to be found at any point within the taxon's true range. This assumption is unrealistic, because factors such as stratigraphic architecture, sampling effort, and the taxon's abundance and geographic range affect recovery potential. Other methods do not make this assumption, but they instead require a priori quantitative knowledge of recovery potential that may be difficult to obtain. We present a new Bayesian method, the Adaptive Beta method, for estimating the true stratigraphic range of a taxon that works for both uniform and non-uniform recovery potential. In contrast to existing methods, we explicitly estimate recovery potential from the positions of the occurrences themselves, so that a priori knowledge of recovery potential is not required. Using simulated datasets, we compare the performance of our method with existing methods. We show that the Adaptive Beta method performs well in that it achieves or nearly achieves nominal coverage probabilities and provides reasonable point estimates of the true extinction in a variety of situations. We demonstrate the method using a dataset of the Cambrian mollusc Anabarella.
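Why non-uniform recovery matters can be seen in a small simulation (this is not the Adaptive Beta method itself, only an illustration of the bias it is designed to correct; the Beta(1, 3) recovery profile and all constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
true_ext = 100.0                    # true extinction horizon
n, reps = 30, 2000
est = lambda y: y * (n + 1) / n     # uniform-theory point estimate

bias_uniform, bias_beta = [], []
for _ in range(reps):
    # uniform recovery over the true range
    y_u = rng.uniform(0, true_ext, n).max()
    # recovery declining toward the boundary: Beta(1, 3) profile
    y_b = (true_ext * rng.beta(1.0, 3.0, n)).max()
    bias_uniform.append(est(y_u) - true_ext)
    bias_beta.append(est(y_b) - true_ext)

# Under uniform recovery the estimator is nearly unbiased; under
# declining recovery it badly underestimates the extinction horizon.
```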


2018 ◽  
Vol 285 (1886) ◽  
pp. 20181191 ◽  
Author(s):  
Rafał Nawrot ◽  
Daniele Scarponi ◽  
Michele Azzarone ◽  
Troy A. Dexter ◽  
Kristopher M. Kusnerik ◽  
...  

Stratigraphic patterns of last occurrences (LOs) of fossil taxa potentially fingerprint mass extinctions and delineate rates and geometries of those events. Although empirical studies of mass extinctions recognize that random sampling causes LOs to occur earlier than the time of extinction (Signor–Lipps effect), sequence stratigraphic controls on the position of LOs are rarely considered. By tracing stratigraphic ranges of extant mollusc species preserved in the Holocene succession of the Po coastal plain (Italy), we demonstrated that, if a mass extinction took place today, complex but entirely false extinction patterns would be recorded regionally due to shifts in local community composition and non-random variation in the abundance of skeletal remains, both controlled by relative sea-level changes. Consequently, rather than following the apparent gradual pattern expected from the Signor–Lipps effect, LOs were concentrated within intervals of stratigraphic condensation and strong facies shifts, mimicking sudden extinction pulses. Methods assuming uniform recovery potential of fossils falsely supported stepwise extinction patterns among the studied species and systematically underestimated their stratigraphic ranges. Such effects of stratigraphic architecture, co-produced by ecological, sedimentary and taphonomic processes, can easily confound interpretations of the timing, duration and selectivity of mass extinction events. Our results highlight the necessity of accounting for palaeoenvironmental and sequence stratigraphic context when inferring extinction dynamics from the fossil record.


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9662
Author(s):  
Noppadon Yosboonruang ◽  
Sa-Aat Niwitpong ◽  
Suparat Niwitpong

The coefficient of variation is often used to illustrate the variability of precipitation. Moreover, the difference of two independent coefficients of variation can describe the dissimilarity of rainfall between two areas or times. Several studies have reported that rainfall data follow a delta-lognormal distribution. To estimate the dynamics of precipitation, confidence interval construction is an effective method of statistical inference for rainfall data. In this study, we propose confidence intervals for the difference of two independent coefficients of variation of two delta-lognormal distributions, using approaches that include the fiducial generalized confidence interval, Bayesian methods, and the standard bootstrap. The performance of the proposed methods was gauged in terms of coverage probabilities and expected lengths via Monte Carlo simulations. The simulation studies showed that the highest posterior density Bayesian method using the Jeffreys’ rule prior outperformed the other methods in virtually all cases, except for the cases of large variance, for which the standard bootstrap was best. Rainfall series from Songkhla, Thailand, are used to illustrate the proposed confidence intervals.
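As a simplified illustration of the standard-bootstrap approach (ignoring the delta-lognormal zero-inflation the paper handles; the function below is a generic sketch, not the authors' procedure):

```python
import numpy as np
from statistics import NormalDist

def bootstrap_cv_diff_ci(x, y, B=2000, alpha=0.05, seed=0):
    """Standard (normal-theory) bootstrap CI for cv(x) - cv(y)."""
    rng = np.random.default_rng(seed)
    cv = lambda a: a.std(ddof=1) / a.mean()    # sample coefficient of variation
    diff = cv(x) - cv(y)                       # observed difference
    boot = np.array([cv(rng.choice(x, x.size)) - cv(rng.choice(y, y.size))
                     for _ in range(B)])
    se = boot.std(ddof=1)                      # bootstrap standard error
    z = NormalDist().inv_cdf(1 - alpha / 2)    # normal critical value
    return diff - z * se, diff + z * se
```

The fiducial and Bayesian intervals the paper prefers require the delta-lognormal model itself and are not sketched here.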


2011 ◽  
Vol 1 (2) ◽  
pp. 52 ◽  
Author(s):  
Richard L. Gorsuch ◽  
Curtis S. Lehmann

Non-zero correlation coefficients have non-normal distributions, affecting both means and standard deviations. Previous research suggests that the z transformation may effectively correct mean bias for N's less than 30. In this study, simulations with small (20 and 30) and large (50 and 100) N's found that mean bias adjustments for larger N's are seldom needed. However, z transformations improved confidence intervals even for N = 100. The improvement was not so much in the estimated standard errors as in the asymmetrical CI estimates based upon the z transformation. The resulting observed probabilities were generally accurate to within 1 point in the first non-zero digit. These issues are an order of magnitude less important for accuracy than design issues influencing the accuracy of the results, such as reliability, restriction of range, and N. DOI: 10.2458/azu_jmmss_v1i2_gorsuch
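The asymmetric interval the study evaluates can be sketched with the textbook Fisher z construction (this is the standard formula, not the authors' simulation code):

```python
from math import atanh, tanh, sqrt

def fisher_z_ci(r, n, z_crit=1.96):
    """Asymmetric confidence interval for a correlation via Fisher's
    z transformation: build a symmetric interval on the z scale,
    then back-transform with tanh."""
    z = atanh(r)            # z = 0.5 * ln((1 + r) / (1 - r))
    se = 1 / sqrt(n - 3)    # approximate standard error on the z scale
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

# e.g. r = 0.50 with n = 30 gives an interval that extends farther
# below r than above it, reflecting the skew of r's sampling distribution
```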



Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 484 ◽  
Author(s):  
Gadde Srinivasa Rao ◽  
Mohammed Albassam ◽  
Muhammad Aslam

This paper assesses bootstrap confidence intervals for a newly proposed process capability index (PCI) for the Weibull distribution, using the logarithm of the analyzed data. These methods can be applied when the quality characteristic of interest has a non-symmetrical distribution. Bootstrap confidence intervals, consisting of the standard bootstrap (SB), percentile bootstrap (PB), and bias-corrected percentile bootstrap (BCPB) intervals, are constructed for the proposed method. A Monte Carlo simulation study is used to determine the efficiency of the newly proposed index Cpkw over the existing method by addressing coverage probabilities and average widths. The results show that the BCPB confidence interval is recommended. The methodology of the proposed index is illustrated using real data on the breaking stress of carbon fibers.
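The percentile bootstrap (PB) step can be sketched generically (the paper's index Cpkw is not reproduced here; the mean of the log data stands in as a placeholder statistic, and all names below are illustrative):

```python
import numpy as np

def percentile_bootstrap_ci(data, stat, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap (PB) confidence interval for any statistic:
    resample with replacement, recompute the statistic, and take the
    alpha/2 and 1 - alpha/2 quantiles of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, data.size)) for _ in range(B)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Example: CI for the mean of log-transformed (Weibull-like) data
x = np.random.default_rng(1).weibull(2.0, size=100)
lo, hi = percentile_bootstrap_ci(np.log(x), np.mean)
```

The bias-corrected variant (BCPB) the paper recommends additionally shifts the quantile levels by a bias-correction factor estimated from the replicates.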

