How many grains are needed for quantifying catchment erosion from tracer thermochronology?

2021
Author(s):
Andrea Madella,
Christoph Glotzbach,
Todd A. Ehlers

Abstract. Detrital tracer thermochronology exploits the relationship between bedrock thermochronometric age-elevation profiles and the distribution of detrital grain ages collected from river, glacial, or other sediment to study spatial changes in the distribution of catchment erosion. If ages increase linearly with elevation, spatially uniform erosion is expected to yield a detrital age distribution that mirrors the catchment's hypsometric curve. Alternatively, a mismatch between the detrital and hypsometric distributions may indicate non-uniform erosion within a catchment. For studies seeking to identify the pattern of erosion, measured grain-age populations rarely exceed 100 grains, owing largely to the time and cost of individual measurements. With sample sizes of this order, discriminating between two detrital age distributions produced by different catchment erosion scenarios can be difficult at a high statistical confidence level. However, there is no established method to quantify the sample-size-dependent uncertainty inherent to detrital tracer thermochronology, and practitioners are often left wondering: how many grains are enough? Here, we investigate how sample size affects the uncertainty of detrital age distributions and how this uncertainty affects the ability to uniquely infer the erosional pattern of the upstream area. We do this using the Kolmogorov-Smirnov statistic as a metric of dissimilarity among distributions, from which the statistical confidence of detecting an erosional pattern is determined through Monte Carlo sampling. The techniques are implemented in a new tool (ESD_thermotrace) that consistently reports confidence levels as a function of sample size and application-specific variables. The tool is made available as an open-source Python-based script along with test data. Testing between different hypothesized erosion scenarios with this tool provides thermochronologists with the minimum sample size (i.e., the number of bedrock and detrital grain ages) required to answer their specific scientific question at their desired level of statistical confidence. Furthermore, in cases of unavoidably small sample size (e.g., due to poor grain quality or low sample volume), we provide a means to calculate the confidence level of interpretations made from the data.
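To make the Monte Carlo logic concrete, here is a minimal sketch of the kind of resampling experiment described above. It is not the ESD_thermotrace code itself: the two grain-age populations, the 5 % significance threshold, and all other parameter values are hypothetical placeholders, and the Kolmogorov-Smirnov statistic is computed directly from the empirical CDFs.

```python
import numpy as np

rng = np.random.default_rng(42)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance
    between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def detection_confidence(ages_uniform, ages_alt, n_grains, n_trials=1000):
    """Fraction of trials in which a detrital sample of n_grains drawn from
    the alternative erosion scenario is more dissimilar to the uniform-erosion
    prediction than expected from sampling noise alone (95th percentile)."""
    null_d = np.array([ks_statistic(rng.choice(ages_uniform, n_grains), ages_uniform)
                       for _ in range(n_trials)])
    alt_d = np.array([ks_statistic(rng.choice(ages_alt, n_grains), ages_uniform)
                      for _ in range(n_trials)])
    return np.mean(alt_d > np.quantile(null_d, 0.95))

# Hypothetical grain-age populations (Ma) predicted by two erosion scenarios.
uniform_ages = rng.normal(10.0, 3.0, 10_000)   # spatially uniform erosion
focused_ages = rng.normal(8.0, 2.0, 10_000)    # erosion focused at low elevations
for n in (30, 60, 100, 200):
    conf = detection_confidence(uniform_ages, focused_ages, n)
    print(f"n = {n:3d} grains: confidence = {conf:.2f}")
```

In this sketch the reported confidence is the fraction of trials in which a sample from the alternative scenario looks more dissimilar to the uniform-erosion prediction than sampling noise alone would allow, so sweeping n shows directly how many grains a given discrimination requires.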



Author(s):  
Zhigang Wei,
Limin Luo,
Burt Lin,
Dmitri Konson,
Kamran Nikbin

Good durability and reliability performance of products can be achieved by properly constructing and implementing design curves, which are usually obtained by analyzing test data such as fatigue S-N data. A good design-curve construction approach should account for sample size, failure probability, and confidence level, and these considerations are especially critical when the test sample size is small. The authors have developed a design S-N curve construction method based on the tolerance limit concept. However, recent studies have shown that the analytical solutions based on the tolerance limit approach may not be accurate for very small sample sizes because of the assumptions and approximations introduced in the analytical approach. In this paper, a Monte Carlo simulation approach is used to construct design curves for test data with an assumed underlying normal (or lognormal) distribution. The factor K, which measures the confidence level of the test data, is compared between the analytical solution and the Monte Carlo simulation solutions. Finally, the design curves constructed with these methods are demonstrated and compared using fatigue S-N data with small sample size.
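As an illustration of the comparison described above, the sketch below contrasts a Monte Carlo estimate of the one-sided tolerance factor K with the exact value from the noncentral t distribution, assuming a normal population (take logarithms first for lognormal data) and a design curve of the form mean − K · standard deviation. This quantile-based formulation is a common textbook one and is an assumption here, not necessarily the authors' exact method.

```python
import numpy as np
from scipy import stats

def k_monte_carlo(n, p=0.95, conf=0.95, n_sim=100_000, seed=0):
    """Tolerance factor K by simulation: choose K so that the design value
    mean - K*std lies below the population quantile exceeded by a proportion
    p of the population, with probability `conf` over repeated samples."""
    rng = np.random.default_rng(seed)
    z_p = stats.norm.ppf(p)
    samples = rng.standard_normal((n_sim, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    # For each simulated sample, the smallest K that would have succeeded:
    return np.quantile((xbar + z_p) / s, conf)

def k_analytical(n, p=0.95, conf=0.95):
    """Exact one-sided tolerance factor via the noncentral t distribution."""
    delta = stats.norm.ppf(p) * np.sqrt(n)
    return stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)

for n in (4, 6, 10, 30):
    print(f"n = {n:2d}:  K_MC = {k_monte_carlo(n):.3f}   K_exact = {k_analytical(n):.3f}")
```

The divergence between the two columns at the smallest n values mirrors the paper's motivation: approximate analytical K factors become least trustworthy exactly where small-sample design curves matter most.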


2020
Vol 21
Author(s):  
Roberto Gabbiadini,
Eirini Zacharopoulou,
Federica Furfaro,
Vincenzo Craviotto,
Alessandra Zilli,
...  

Background: Intestinal fibrosis and the strictures that follow represent an important burden in inflammatory bowel disease (IBD). Detecting and grading the degree of fibrosis in stricturing Crohn's disease (CD) is important for selecting the best therapeutic strategy (medical anti-inflammatory therapy, endoscopic dilation, or surgery). Ultrasound elastography (USE) is a non-invasive technique that has been proposed in the field of IBD for evaluating intestinal stiffness as a biomarker of intestinal fibrosis. Objective: The aim of this review is to discuss the ability and current role of ultrasound elastography in the assessment of intestinal fibrosis. Results and Conclusion: Data on USE in IBD come from pilot and proof-of-concept studies with small sample sizes. The first variant investigated was strain elastography; shear wave elastography was introduced more recently. Despite the methodological heterogeneity of these studies, USE has been shown to be capable of assessing intestinal fibrosis in patients with stricturing CD. However, before this technique is introduced into current practice, further studies with larger sample sizes and homogeneous parameters, testing of reproducibility, and identification of validated cut-off values are needed.


Author(s):  
Jonah T Hansen,
Luca Casagrande,
Michael J Ireland,
Jane Lin

Abstract. Statistical studies of exoplanets and the properties of their host stars have been critical to informing models of planet formation. Numerous trends have arisen from the rich Kepler dataset in particular, including that exoplanets are more likely to be found around stars with high metallicity and that there is a "gap" in the distribution of planetary radii at 1.9 R⊕. Here we present a new analysis of the Kepler field, using the APOGEE spectroscopic survey to build a metallicity calibration based on Gaia, 2MASS, and Strömgren photometry. This calibration, along with masses and radii derived from a Bayesian isochrone-fitting algorithm, is used to test a number of these trends with unbiased, photometrically derived parameters, albeit with a smaller sample size than recent studies. We recover the result that planets are more frequently found around higher-metallicity stars: over the entire sample, planetary frequencies are 0.88 ± 0.12 percent for [Fe/H] < 0 and 1.37 ± 0.16 percent for [Fe/H] ≥ 0, but at the two-sigma level we find that the size of exoplanets influences the strength of this trend. We also recover the planet radius gap, along with a slight positive correlation with stellar mass. We conclude that this method shows promise for deriving robust statistics of exoplanets. We also remark that spectrophotometry from Gaia DR3 will have an effective resolution similar to narrow-band filters and will allow us to overcome the small sample size inherent in this study.
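For illustration, the short sketch below shows how occurrence rates of this kind can be computed with a simple binomial error model. The host and star counts are hypothetical, chosen only to roughly reproduce the quoted frequencies, and the authors' actual counts and uncertainty estimation may differ.

```python
import numpy as np

def occurrence(n_hosts, n_stars):
    """Planet occurrence rate with a simple binomial standard error."""
    f = n_hosts / n_stars
    return f, np.sqrt(f * (1.0 - f) / n_stars)

# Hypothetical counts, chosen only to roughly match the quoted frequencies.
for label, hosts, stars in [("[Fe/H] < 0 ", 53, 6000), ("[Fe/H] >= 0", 96, 7000)]:
    f, err = occurrence(hosts, stars)
    print(f"{label}: {100 * f:.2f} +/- {100 * err:.2f} percent")
```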


2021
Vol 11 (1)
Author(s):  
Shinya Hosokawa,
Kyosuke Momota,
Anthony A. Chariton,
Ryoji Naito,
Yoshiyuki Nakamura

Abstract. Diversity indices are commonly used to measure changes in marine benthic communities. However, the reliability (and therefore suitability) of these indices for detecting environmental change is often unclear because of small sample sizes and the inappropriate choice of communities for analysis. This study explored uncertainties in taxonomic density and in two indices of community structure in our target region, Japan, and in two local areas within that region, and explored potential solutions. Our analysis of the Japanese regional dataset showed a decrease in family density and dominance by a few species as sediment conditions become degraded. Local case studies showed that species density is affected by sediment degradation at sites where multiple communities coexist. However, the two indices of community structure can become insensitive because of masking by community variability, and small sample sizes sometimes caused misleading or inaccurate estimates of these indices. We conclude that species density is a sensitive indicator of change in marine benthic communities, and we emphasise that indices of community structure should only be used when the structure of the target community is distinguishable from that of other coexisting communities and the sample size is sufficient.
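The small-sample bias described above can be demonstrated with a short bootstrap experiment. The abundance vector below is entirely hypothetical (a few dominant taxa and many rare ones, as in degraded sediments), not the study's data; it simply shows how observed taxonomic density is underestimated and more variable at small sample sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical community: a few dominant taxa and many rare ones.
abundances = np.array([400, 200, 100] + [5] * 40)
individuals = np.repeat(np.arange(abundances.size), abundances)

def species_density(sample):
    """Number of distinct taxa observed in a sample of individuals."""
    return np.unique(sample).size

# Bootstrap: resample individuals at several sample sizes.
for n in (25, 50, 100, 400):
    counts = [species_density(rng.choice(individuals, n)) for _ in range(1000)]
    print(f"n = {n:3d} individuals: taxa observed = "
          f"{np.mean(counts):.1f} +/- {np.std(counts):.1f} "
          f"(true richness = {abundances.size})")
```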

