Efficiency is Doing Things Right: High Throughput, Automated, 3D Methods in the Modern Era of Otolith Morphometrics

Author(s):  
Micah James Quindazzi ◽  
Adam Summers ◽  
Francis Juanes

The morphometrics of fish otoliths are commonly used to investigate population structure and the environmental influences on ontogeny. These studies can require hundreds, if not thousands, of otoliths to be collected and processed, which consumes time, money, and resources that automation can save. Otoliths also contain relevant information in three dimensions that is lost with 2D morphometric methods based on photographic analysis. In this study, the otoliths of three populations of Coho Salmon (Oncorhynchus kisutch) were examined with manual 2D, automated 2D, and automated 3D measurement methods. The automated 3D method detected an 8% difference in average otolith density that neither 2D method could. Because of the information lost along the z-axis and the slower per-sample processing, 2D methods can take up to 100 times longer to reach the same statistical power as automated 3D methods. Automated 3D methods are faster, can answer a wider range of questions, and allow fisheries scientists to automate otherwise monotonous tasks.
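The 100-fold figure follows from standard power arithmetic: the sample size a two-sample t-test needs scales with the inverse square of the standardized effect size, so a method that resolves the same density difference with roughly a tenth of the effect-to-noise ratio needs on the order of 100 times the otoliths. A back-of-envelope sketch, not the study's own calculation; both effect sizes below are illustrative assumptions.

```python
# Required n per group for a two-sample t-test at 80% power, for a strong
# standardized effect (assumed for the 3D density measurement) versus a
# tenfold weaker one (assumed for a noisy 2D proxy). Illustrative values.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for method, effect_size in (("automated 3D", 0.8), ("2D proxy", 0.08)):
    n = solver.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
    print(f"{method}: n ≈ {n:.0f} otoliths per group")
# The ratio of the two sample sizes is roughly 100x, matching the order of
# magnitude quoted in the abstract.
```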

2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display to critically examine and to present study-level power in the context of meta-analysis.
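The study-level power such a display encodes can be computed from each study's standard error alone. Below is a minimal sketch of that computation using the standard two-sided z-test power formula; it is not the authors' tailored R function, and the assumed true effect and standard errors are illustrative.

```python
# Two-sided z-test power of a study with standard error `se` against an
# assumed true effect `delta` -- the quantity a sunset funnel plot
# color-codes. `delta` and the SEs are illustrative assumptions.
from scipy.stats import norm

def study_power(se, delta, alpha=0.05):
    """Power of a two-sided z-test given the study's standard error."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / se - z_crit) + norm.cdf(-delta / se - z_crit)

for se in (0.30, 0.15, 0.08):   # studies of increasing precision
    print(f"SE = {se:.2f}: power = {study_power(se, delta=0.3):.2f}")
```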


2005 ◽  
Vol 62 (12) ◽  
pp. 2716-2726 ◽  
Author(s):  
Michael J Bradford ◽  
Josh Korman ◽  
Paul S Higgins

There is considerable uncertainty about the effectiveness of fish habitat restoration programs, and reliable monitoring programs are needed to evaluate them. Statistical power analysis based on traditional hypothesis tests is usually used for monitoring program design, but here we argue that effect size estimates and their associated confidence intervals are more informative because results can be compared with both the null hypothesis of no effect and effect sizes of interest, such as restoration goals. We used a stochastic simulation model to compare alternative monitoring strategies for a habitat alteration that would change the productivity and capacity of a coho salmon (Oncorhynchus kisutch) producing stream. Estimates of the effect size using a freshwater stock–recruit model were more precise than those from monitoring the abundance of either spawners or smolts. Less than ideal monitoring programs can produce ambiguous results, that is, cases in which the confidence interval includes both the null hypothesis and the effect size of interest. Our model is a useful planning tool because it allows the evaluation of the utility of different types of monitoring data, which should stimulate discussion on how the results will ultimately inform decision-making.
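The notion of an "ambiguous" result can be made concrete with a small simulation. The sketch below is not the authors' stochastic model: it simulates before/after monitoring of log abundance with process error and counts how often the 95% confidence interval contains both the null (0) and an assumed restoration goal; all parameter values are illustrative.

```python
# Fraction of simulated monitoring programs whose 95% CI on the effect
# (difference of mean log abundance, after minus before) contains both 0
# and the restoration goal. True effect, goal, and error SD are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def p_ambiguous(n_years, true_effect=0.3, goal=0.4, sigma=0.5, reps=2000):
    ambiguous = 0
    for _ in range(reps):
        before = rng.normal(0.0, sigma, n_years)        # log abundance
        after = rng.normal(true_effect, sigma, n_years)
        diff = after.mean() - before.mean()
        se = np.sqrt((after.var(ddof=1) + before.var(ddof=1)) / n_years)
        lo, hi = diff - 1.96 * se, diff + 1.96 * se
        if lo < 0 < hi and lo < goal < hi:
            ambiguous += 1
    return ambiguous / reps

for n in (3, 5, 10, 20):
    print(f"{n:2d} years of monitoring: P(ambiguous) ≈ {p_ambiguous(n):.2f}")
```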


1983 ◽  
Vol 40 (8) ◽  
pp. 1212-1223 ◽  
Author(s):  
Randall M. Peterman ◽  
Richard D. Routledge

Large-scale experimental manipulation of juvenile salmon (Oncorhynchus spp.) abundance can provide a test of the hypothesis of linearity in the smolt-to-adult abundance relation. However, not all manipulations will be equally informative owing to large variability in marine survival. We use Monte Carlo simulation and an analytical approximation to calculate for Oregon coho salmon (O. kisutch) the statistical power of the test involving different controlled smolt abundances and durations of experiments. One recently proposed experimental release of 48 million smolts for each of 3 yr has a relatively low power and, as a consequence, is unlikely to show clearly whether the smolt-to-adult relationship is linear. The number of smolts required for a powerful test of the hypothesis of linearity is closer to the 88 million suggested in another proposal. To prevent confounding of interpretation of results, all other human sources of variability in fish should be minimized by establishing standardized rearing and release procedures during the experiment. In addition, appropriate preexperiment data on coho food, predators, and competitors will increase effectiveness of experiments by providing information on mechanisms of change in marine survival.
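The power calculation can be sketched in miniature. The following Monte Carlo is an illustrative stand-in rather than the authors' model: it treats adult returns as adults = a·smolts^b with lognormal marine-survival noise, so that linearity corresponds to b = 1, and estimates the probability that a log-log regression rejects b = 1 under each proposed design. All parameter values are assumptions.

```python
# Power of a t-test on the log-log slope against H0: b = 1 (linearity),
# for experiments releasing smolts at a baseline and a high level for
# several years each. b, CV, and release sizes are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power_linearity(smolt_levels, years_per_level, b=0.8, cv=0.5,
                    alpha=0.05, reps=2000):
    sigma = np.sqrt(np.log(1 + cv**2))           # lognormal survival noise
    x = np.log(np.repeat(smolt_levels, years_per_level))
    rejections = 0
    for _ in range(reps):
        y = b * x + rng.normal(0.0, sigma, x.size)   # log adults (a = 1)
        fit = stats.linregress(x, y)
        t = (fit.slope - 1.0) / fit.stderr           # test H0: slope = 1
        if 2 * stats.t.sf(abs(t), x.size - 2) < alpha:
            rejections += 1
    return rejections / reps

# Two proposed designs: 48 vs 88 million smolts against a baseline release.
for high in (48e6, 88e6):
    p = power_linearity([10e6, high], years_per_level=3)
    print(f"high release {high/1e6:.0f}M: power ≈ {p:.2f}")
```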


Author(s):  
Natasha Leal Rivas

The present study proposes the analysis of the different (sub)cognitive processes that are activated in written practice in order to improve the academic competency profile in Spanish Foreign Language (SFL) of Italian university students. The qualitative analysis provides relevant information from a group of 26 participants, basic users of the language. The textual samples collected have formed a linguistic corpus examined by means of evaluation rubrics, created in previous studies but adapted to the type of diagnostic test and the results revealed in the Needs Analysis on the academic competence level of informants. The diagnostic test has provided two actions: one guided and the other free that analyze the three dimensions of planning, textualization and review of written production. The results provide interesting information about the cognitive processes inherent in textual creation and especially those developed through guided planning and understanding of discursive genres and textual typologies useful for academic writing. It also highlights optimal results in the application of the Critical Intercultural Communicative Competence (CICC) model which allows the teaching/learning of SFL to a functional and social approach to the language from the experiential and interpersonal meaning of the texts. In particular, the results show an improvement in written discursive competence and an activation of (sub)cognitive processes both in the activity of critical understanding and in the process of conscious writing: (1) less errors in textual planning, in the organization of ideas and in the superstructure of the text-abstract; (2) an increase in the wealth of specialized vocabulary and the use of discursive markers and textual organization in the abstract genre; (3) simple syntax but with a textual construction that shows communicative intentionality (4) improvement of intercultural competence of the language, effective academic texts and socially committed to gender stereotypes in contrast to the culture itself and the L1.


2012 ◽  
Vol 5 (1) ◽  
pp. 13-17 ◽  
Author(s):  
Gina S. Lovasi ◽  
Lindsay J. Underhill ◽  
Darby Jack ◽  
Catherine Richards ◽  
Christopher Weiss ◽  
...  

Purpose: Research on obesity and the built environment has often featured logistic regression and the corresponding parameter, the odds ratio. Use of odds ratios for common outcomes such as obesity may unnecessarily hinder the validity, interpretation, and communication of research findings. Methods: We identified three key issues raised by the use of odds ratios, illustrating them with data on walkability and body mass index from a study of 13,102 New York City residents. Results: First, dichotomization of continuous measures such as body mass index discards theoretically relevant information, reduces statistical power, and amplifies measurement error. Second, odds ratios are systematically higher (further from the null) than prevalence ratios; this inflation is trivial for rare outcomes but substantial for common outcomes like obesity. Third, odds ratios can lead to incorrect conclusions in tests of interactions: the odds ratio in a particular subgroup might be higher simply because the outcome is more common (and the odds ratio more inflated) there than in other subgroups. Conclusion: Our recommendations are to take full advantage of continuous outcome data when feasible and to use prevalence ratios in place of odds ratios for common dichotomous outcomes. When odds ratios must be used, authors should document outcome prevalence across exposure groups.
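The second point is easy to verify numerically. With made-up prevalences, the same prevalence ratio yields a visibly inflated odds ratio when the outcome is common but not when it is rare:

```python
# Odds-ratio inflation for a common outcome. All prevalences below are
# illustrative values, not figures from the study.
def odds(p):
    return p / (1 - p)

p_unexposed, p_exposed = 0.25, 0.375   # obesity prevalence by exposure group
print(f"prevalence ratio = {p_exposed / p_unexposed:.2f}")        # 1.50
print(f"odds ratio       = {odds(p_exposed) / odds(p_unexposed):.2f}")  # 1.80

# The same prevalence ratio for a rare outcome barely inflates:
p_unexposed, p_exposed = 0.01, 0.015
print(f"rare-outcome OR  = {odds(p_exposed) / odds(p_unexposed):.2f}")  # ~1.51
```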


1989 ◽  
Vol 17 (4_part_1) ◽  
pp. 569-578 ◽  
Author(s):  
Richard W. Morris

Tests of statistical hypotheses concerning treatment effect on the development of hepatocellular foci can be carried out directly on two-dimensional observations made on histologic sections or on estimates of the density and volume of foci in three dimensions. Inferences about differences in the density or size of foci from tests based on two-dimensional observations, however, can be misleading. This is because both the number of focus cross-sections observed in a tissue section and the percent area occupied by foci can be expressed in terms of the number of foci per unit volume of liver tissue and the mean focus size. As a consequence, a treatment difference may be caused by a difference in the density of foci, their average size, or both. Of more serious concern is the possibility that failure to detect a treatment effect may occur not only when there is no treatment effect but also when the density and size of foci differ between treatments in such a way that their product is unchanged. This can happen if the effect of treatment is to increase the number of foci and decrease their average size, or vice versa. A similar difficulty of interpretation is associated with hypothesis tests based on average focus cross-section area. Tests based on estimates of the number of foci per unit volume and mean focus volume allow direct inference about the quantities of interest, but these estimates are unstable because they have large variances. Empirical estimates of statistical power for the Wilcoxon rank sum test and the t-test from data on control rats suggest power may be limited in experiments with group sizes of ten and low observed numbers of focus cross-sections. If hypothesis tests based on estimates of the density and size of foci are to form the basis for a bioassay, then the power of statistical tests used to identify treatment effects should be investigated.
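The confounding of density and size in 2D observations follows from a standard stereological identity: for approximately spherical foci, the expected number of focus cross-sections per unit section area equals the number of foci per unit volume times their mean diameter. A minimal numeric sketch with illustrative values:

```python
# N_A = N_V * D_bar for spherical foci: cross-sections per unit area equal
# foci per unit volume times mean diameter, so opposite changes in density
# and size cancel in 2D. Values are illustrative assumptions.
def profiles_per_area(n_v, mean_diameter):
    """Expected focus cross-sections per unit section area (spherical foci)."""
    return n_v * mean_diameter

control = profiles_per_area(n_v=100.0, mean_diameter=0.02)   # foci/mm^3, mm
treated = profiles_per_area(n_v=200.0, mean_diameter=0.01)   # 2x density, half size
print(control, treated)   # identical: a 2D count cannot separate the two
```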


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e5133 ◽  
Author(s):  
Lloyd A. Courtenay ◽  
Miguel Ángel Maté-González ◽  
Julia Aramendi ◽  
José Yravedra ◽  
Diego González-Aguilera ◽  
...  

The analysis of bone surface modifications (BSMs) is a prominent part of paleoanthropological studies, particularly taphonomic research. Behavioral interpretations of the fossil record hinge strongly upon correct assessment of BSMs. With the significant impact of microscopic analysis on the study of BSMs, multiple authors have discussed the reliability of these technological improvements for gaining resolution in BSM discrimination. While a certain optimism is present, some important questions are ignored and others overemphasized without appropriate empirical support. This specifically affects the study of cut marks. A diversity of geometric morphometric approaches applied to the study of cut marks has resulted in the coexistence (and competition) of different 2D and 3D methods. The present work builds upon the foundation of experiments presented by Maté-González et al. (2015), Courtenay et al. (2017) and Otárola-Castillo et al. (2018) to contrast for the first time 2D and 3D methods in their resolution of cut mark interpretation and classification. The results presented here show that both approaches are equally valid and that the use of sophisticated 3D methods does not contribute to an improvement in accuracy.
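The comparison can be schematized as follows. This sketch uses simulated landmark coordinates, not the cited experimental data, and a cross-validated linear discriminant classifier as a stand-in for the papers' geometric morphometric pipelines; when the discriminating shape signal lies mostly in the 2D profile plane, the third axis adds little accuracy.

```python
# Classify simulated cut-mark landmarks from 2D vs 3D coordinates and
# compare cross-validated accuracy. The class signal is placed in-plane
# (x, y only) by construction; all settings are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_landmarks = 40, 7

marks = rng.normal(0, 1.0, (2 * n_per_class, n_landmarks, 3))
marks[n_per_class:, :, :2] += 0.5          # class difference in x, y only
y = np.repeat([0, 1], n_per_class)

X3d = marks.reshape(len(y), -1)            # all x, y, z coordinates
X2d = marks[:, :, :2].reshape(len(y), -1)  # drop the z axis

for name, X in (("2D", X2d), ("3D", X3d)):
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name} landmarks: CV accuracy ≈ {acc:.2f}")
```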


2011 ◽  
Vol 12 (3) ◽  
pp. 72 ◽  
Author(s):  
Shih-Mo Lin ◽  
Karen Craft Denning ◽  
K. Victor Chow

This paper analyzes the use of three-dimensional spectral analysis. We highlight the power of this technique by examining the economic interrelationships between Pacific Basin countries and the Tokyo, New York, and London equity markets. To the best of our knowledge, no research in financial economics has employed the spectral technique in more than two dimensions. Our use of three-way spectral analysis, or partial coherence measures, allows us to ferret out the influence of correlated variables, in this case the major world markets (New York, London, Tokyo), before examining the relationship among the variables of primary interest, the Pacific Basin markets. To illustrate our methodology, we hold constant the influence of New York, London, and Tokyo before examining the relationships among the equity indices of the Pacific Basin newly industrialized countries. Simulation techniques enable us to document the statistical power of our technique.
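Partial coherence is built from ordinary cross-spectral densities: at each frequency, the linear influence of the conditioning series is subtracted from the cross-spectrum before normalizing. A minimal sketch of the construction (not the paper's data), with simulated series standing in for two regional markets driven by a common world-market factor:

```python
# Coherence between x and y before and after conditioning on z, via
# S_xy|z = S_xy - S_xz * S_zy / S_zz. The series and couplings are
# simulated illustrative stand-ins.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
n, fs, nseg = 4096, 1.0, 256

z = rng.normal(size=n)                    # common driver (e.g., a world market)
x = 0.8 * z + rng.normal(size=n)          # regional market 1
y = 0.8 * z + rng.normal(size=n)          # regional market 2, linked only via z

def spec(a, b):
    return csd(a, b, fs=fs, nperseg=nseg)[1]

Sxx, Syy, Szz = spec(x, x), spec(y, y), spec(z, z)
Sxy, Sxz, Szy = spec(x, y), spec(x, z), spec(z, y)

coh = np.abs(Sxy)**2 / (Sxx * Syy)                  # ordinary coherence
Sxy_z = Sxy - Sxz * Szy / Szz                       # remove z's influence
Sxx_z = Sxx - np.abs(Sxz)**2 / Szz
Syy_z = Syy - np.abs(Szy)**2 / Szz
pcoh = np.abs(Sxy_z)**2 / (Sxx_z * Syy_z)           # partial coherence

print(f"mean coherence:         {coh.real.mean():.2f}")   # nonzero, via z
print(f"mean partial coherence: {pcoh.real.mean():.2f}")  # near zero
```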


2020 ◽  
Author(s):  
Alyssa R. Lindrose ◽  
Lauren W. Y. McLester-Davis ◽  
Renee I. Tristano ◽  
Leila Kataria ◽  
Shahinaz M. Gadalla ◽  
...  

Abstract. Use of telomere length (TL) as a biomarker for various environmental exposures and diseases has increased in recent years. Various methods have been developed to measure telomere length. PCR-based methods remain widespread for population-based studies due to their high-throughput capability. While several studies have evaluated TL measurement methods, the results have been variable. We conducted a literature review of TL measurement cross-method comparison studies that included a PCR-based method, published between January 1, 2002 and May 25, 2020. A total of 25 articles matched the inclusion criteria. Papers were reviewed for quality of methodologic reporting of sample and DNA quality, PCR assay characteristics, sample blinding, and analytic approaches to determine final TL. Overall, methodologic reporting was low as assessed by two different reporting guidelines for qPCR-based TL measurement. There was a wide range in the reported correlation between methods (as assessed by Pearson's r), and few studies utilized the recommended intra-class correlation coefficient (ICC) for assessment of assay repeatability and methodologic comparisons. The sample size for nearly all studies was less than 100, raising concerns about statistical power. Overall, this review found that the current literature on the relation between TL measurement methods is lacking in validity and scientific rigor. In light of these findings, we present reporting guidelines for PCR-based TL measurement methods and results of analyses of the effect of assay repeatability (ICC) on the statistical power of cross-sectional and longitudinal studies. Additional cross-laboratory studies with rigorous methodologic and statistical reporting, adequate sample size, and blinding are essential to accurately determine assay repeatability and replicability as well as the relation between TL measurement methods.
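The power consequence of imperfect repeatability can be sketched with standard formulas: measurement error attenuates an observed correlation roughly by √ICC, and the sample size required under the Fisher z approximation grows accordingly. This is a generic illustration, not the authors' analysis; the true correlation below is an assumption.

```python
# Required n to detect an ICC-attenuated correlation at 80% power, using
# the standard Fisher z-transform approximation. r_true is illustrative.
import numpy as np
from scipy.stats import norm

def n_required(r, alpha=0.05, power=0.80):
    """Approximate n to detect correlation r via the Fisher z-transform."""
    z = np.arctanh(r)
    return int(np.ceil(((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / z) ** 2 + 3))

r_true = 0.20                       # assumed true TL-exposure correlation
for icc in (1.0, 0.9, 0.7, 0.5):
    r_obs = r_true * np.sqrt(icc)   # attenuation from assay unreliability
    print(f"ICC = {icc:.1f}: observed r ≈ {r_obs:.3f}, n ≈ {n_required(r_obs)}")
```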

