Experimental Management of Oregon Coho Salmon (Oncorhynchus kisutch): Designing for Yield of Information

1983 ◽  
Vol 40 (8) ◽  
pp. 1212-1223 ◽  
Author(s):  
Randall M. Peterman ◽  
Richard D. Routledge

Large-scale experimental manipulation of juvenile salmon (Oncorhynchus spp.) abundance can provide a test of the hypothesis of linearity in the smolt-to-adult abundance relation. However, not all manipulations will be equally informative owing to large variability in marine survival. We use Monte Carlo simulation and an analytical approximation to calculate for Oregon coho salmon (O. kisutch) the statistical power of the test involving different controlled smolt abundances and durations of experiments. One recently proposed experimental release of 48 million smolts for each of 3 yr has a relatively low power and, as a consequence, is unlikely to show clearly whether the smolt-to-adult relationship is linear. The number of smolts required for a powerful test of the hypothesis of linearity is closer to the 88 million suggested in another proposal. To prevent confounding of interpretation of results, all other human sources of variability in fish should be minimized by establishing standardized rearing and release procedures during the experiment. In addition, appropriate preexperiment data on coho food, predators, and competitors will increase effectiveness of experiments by providing information on mechanisms of change in marine survival.
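As a rough illustration of the kind of Monte Carlo power calculation described above, the sketch below simulates a density-dependent alternative (marine survival declining exponentially with smolt abundance, with lognormal error) and counts how often a slope test on log survival rejects linearity. The functional form, baseline years, and all parameter values are our assumptions, not those of Peterman and Routledge.

```python
# Minimal Monte Carlo power sketch for the linearity test (illustrative values only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def linearity_test_power(release_millions, n_exp_years=3, n_baseline_years=10,
                         b=0.01, s0=0.05, sigma=0.5, alpha=0.05, n_sim=2000):
    """Fraction of simulated experiments rejecting H0: linear (density-independent)
    smolt-to-adult relation, when survival truly declines with smolt abundance."""
    rejections = 0
    for _ in range(n_sim):
        # Historical smolt outputs (millions, assumed) plus the controlled releases.
        smolts = np.concatenate([rng.uniform(10, 30, n_baseline_years),
                                 np.full(n_exp_years, release_millions)])
        survival = s0 * np.exp(-b * smolts) * rng.lognormal(0.0, sigma, smolts.size)
        slope, _, _, p_value, _ = stats.linregress(smolts, np.log(survival))
        rejections += (p_value < alpha) and (slope < 0)
    return rejections / n_sim

# Compare the two proposed release levels discussed above (values illustrative).
for release in (48, 88):
    print(f"{release} million smolts: power ~ {linearity_test_power(release):.2f}")
```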


1989 ◽  
Vol 46 (7) ◽  
pp. 1183-1187 ◽  
Author(s):  
Randall M. Peterman

Nickelson (1986; Can. J. Fish. Aquat. Sci. 43: 527–535) was unable to reject the null hypothesis (Ho) of density-independent marine survival rate for Oregon coho salmon (Oncorhynchus kisutch) when wild, private hatchery, and public hatchery stocks were analyzed separately. Thus, even though there appears to have been no consistent increase in adult abundance in recent years in spite of large increases in smolt abundance, Nickelson's analysis does not support the alternative hypothesis (HA) of density-dependent marine survival. Some fishery managers are using Nickelson's results to support proposals to increase smolt production further. I calculated statistical power for these cases, i.e. the probability that the null hypothesis of density-independence could have been rejected, even if marine survival were truly density-dependent. Power was below 0.19 for all cases, which meant that Nickelson (1986) had at least an 81% chance of making a Type II error (incorrectly accepting Ho), if Ho was actually false. Therefore, Oregon fishery managers should be cautious about making decisions on increased smolt production based on current data; they run a high risk of mistakenly assuming density-independent marine survival. More generally, managers should not take action based on a failure to reject a null hypothesis unless power is high.
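The general point, that a non-rejection is informative only when power (1 − β, where β is the Type II error rate) is high, can also be illustrated analytically. The sketch below computes the power of a two-sided t-test on a regression slope from the noncentral t distribution; the slope, residual standard deviation, and sample size are placeholders, not Nickelson's data.

```python
# Analytical power of a t-test on a regression slope (illustrative values only).
import numpy as np
from scipy import stats

def slope_test_power(true_slope, sigma_resid, x, alpha=0.05):
    """Power of the two-sided test of H0: slope = 0 in simple linear regression."""
    n = len(x)
    se_slope = sigma_resid / np.sqrt(np.sum((x - x.mean()) ** 2))
    ncp = true_slope / se_slope            # noncentrality parameter
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|t| > t_crit) when the true noncentrality is ncp, i.e. 1 - beta
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

x = np.linspace(1, 20, 12)                 # hypothetical smolt abundances (millions)
power = slope_test_power(true_slope=-0.02, sigma_resid=0.5, x=x)
print(f"power = {power:.2f}; Type II error rate if H0 is false = {1 - power:.2f}")
```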



2013 ◽  
Vol 718-720 ◽  
pp. 1872-1877 ◽  
Author(s):  
Xu Xi Chang ◽  
Xie Jian Ming ◽  
Jiang Ling Fa ◽  
Chen Shan Xiong

Soil-aggregate mixtures are now widely used as fill in large-scale site preparation projects, and their compaction characteristics have attracted increasing attention from engineers and researchers. However, systematic research on how to select the filler is lacking, and industry regulations differ in their requirements for filler. Drawing on a large site preparation project, this paper examines the statistical characteristics of, and correlations among, the maximum grain size, coarse-grain content, gradation, and other parameters of the soil-aggregate mixture. The results show that the maximum and median grain sizes have little scatter and follow a normal distribution, indicating that the site filler readily meets the requirements, whereas the coefficient of curvature, the coefficient of nonuniformity, and the coarse-grain content are widely scattered and do not follow a normal distribution, indicating large variability in the filler. The median grain size is strongly correlated with the coarse-grain content, while the maximum grain size is not correlated with the coefficient of nonuniformity, the coefficient of curvature, or the coarse-grain content. Based on the correlation analysis, we suggest that the control parameters for the filler be prioritized in the order coarse-grain content, maximum grain size, and gradation. These findings may be relevant to other similar projects.
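For readers unfamiliar with the gradation parameters named above, the sketch below computes the coefficient of nonuniformity Cu = d60/d10 and the coefficient of curvature Cc = d30²/(d10·d60) (standard geotechnical definitions), then runs the kind of normality and correlation checks the paper describes. The grain-size data here are synthetic, not the project's.

```python
# Gradation statistics on synthetic grain-size data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples = 60
d10 = rng.lognormal(np.log(0.5), 0.3, n_samples)   # mm, synthetic characteristic sizes
d30 = d10 * rng.uniform(2, 4, n_samples)
d60 = d30 * rng.uniform(2, 5, n_samples)
coarse_content = rng.normal(55, 12, n_samples)     # % coarse grains, synthetic

cu = d60 / d10                                     # coefficient of nonuniformity
cc = d30 ** 2 / (d10 * d60)                        # coefficient of curvature

# Normality check (Shapiro-Wilk) and a correlation between two parameters.
for name, values in [("Cu", cu), ("Cc", cc), ("coarse %", coarse_content)]:
    _, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")
r, p = stats.pearsonr(coarse_content, d60)
print(f"corr(coarse %, d60): r = {r:.2f}, p = {p:.3f}")
```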



Author(s):  
Bat-hen Nahmias-Biran ◽  
Yafei Han ◽  
Shlomo Bekhor ◽  
Fang Zhao ◽  
Christopher Zegras ◽  
...  

Smartphone-based travel surveys have attracted much attention recently for their potential to improve data quality and response rates. One of the first such survey systems, Future Mobility Sensing (FMS), leverages smartphone sensors and machine learning techniques to collect detailed personal travel data. The main purpose of this research is to compare data collected by FMS and traditional methods, and to study the implications of using FMS data for travel behavior modeling. Since its initial field test in Singapore, FMS has been used in several large-scale household travel surveys, including one in Tel Aviv, Israel. We present comparative analyses that make use of the rich datasets from Singapore and Tel Aviv, focusing on three main aspects: (1) richness of the activity behaviors observed, (2) completeness of travel and activity data, and (3) data accuracy. Results show that FMS has clear advantages over traditional travel surveys: it has higher resolution and better accuracy for times, locations, and paths; it represents out-of-work and leisure activities well; and it reveals large variability in day-to-day activity patterns, which is inadequately captured by the one-day snapshot of typical traditional surveys. FMS also captures travel and activities that tend to be under-reported in traditional surveys, such as multiple stops in a tour and work-based sub-tours. These richer, more complete, and more accurate data can improve future activity-based modeling.



2005 ◽  
Vol 62 (12) ◽  
pp. 2716-2726 ◽  
Author(s):  
Michael J Bradford ◽  
Josh Korman ◽  
Paul S Higgins

There is considerable uncertainty about the effectiveness of fish habitat restoration programs, and reliable monitoring programs are needed to evaluate them. Statistical power analysis based on traditional hypothesis tests is usually used for monitoring program design, but here we argue that effect size estimates and their associated confidence intervals are more informative because results can be compared with both the null hypothesis of no effect and effect sizes of interest, such as restoration goals. We used a stochastic simulation model to compare alternative monitoring strategies for a habitat alteration that would change the productivity and capacity of a coho salmon (Oncorhynchus kisutch)-producing stream. Estimates of the effect size using a freshwater stock–recruit model were more precise than those from monitoring the abundance of either spawners or smolts. Less-than-ideal monitoring programs can produce ambiguous results, that is, cases in which the confidence interval includes both the null hypothesis and the effect size of interest. Our model is a useful planning tool because it allows evaluation of the utility of different types of monitoring data, which should stimulate discussion on how the results will ultimately inform decision-making.
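A minimal sketch of the effect-size-with-confidence-interval idea, using a simple before–after comparison of log abundance rather than the authors' stock–recruit model; the years of monitoring, variability, true effect, and restoration goal are all assumed values.

```python
# Effect size and 95% CI for a hypothetical habitat alteration (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_before, n_after = 8, 8                 # years of monitoring (assumed)
true_effect = np.log(1.3)                # true 30% increase in mean abundance (assumed)
goal = np.log(1.5)                       # restoration goal: 50% increase (assumed)
cv = 0.5                                 # interannual variability (assumed)

before = rng.lognormal(np.log(10_000), cv, n_before)
after = rng.lognormal(np.log(10_000) + true_effect, cv, n_after)

x, y = np.log(before), np.log(after)
effect = y.mean() - x.mean()
se = np.sqrt(x.var(ddof=1) / n_before + y.var(ddof=1) / n_after)
df = n_before + n_after - 2
ci = effect + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"estimated effect = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# "Ambiguous" in the paper's sense: the CI covers both no effect and the goal.
print("ambiguous" if (ci[0] < 0 < ci[1]) and (ci[0] < goal < ci[1]) else "informative")
```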



2017 ◽  
Author(s):  
Sebastiaan Mathôt ◽  
Jasper Fabius ◽  
Elle van Heusden ◽  
Stefan Van der Stigchel

Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, that is, analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size / baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We make four recommendations for safe and sensible baseline correction of pupil-size data: 1) use subtractive baseline correction; 2) visually compare your corrected and uncorrected data; 3) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and 4) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
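The two correction rules are exactly as stated in the text; the toy trace below simply shows how a distorted (unrealistically small) baseline sample affects each of them.

```python
# Subtractive vs. divisive baseline correction with a blink-distorted baseline.
import numpy as np

pupil = np.array([5.0, 5.1, 5.2, 5.3, 5.4])    # pupil size over time, arbitrary units
baseline_blink = 0.5                            # unrealistically small baseline sample

subtractive = pupil - baseline_blink            # corrected = pupil - baseline
divisive = pupil / baseline_blink               # corrected = pupil / baseline

print("subtractive:", subtractive)              # shifted, but the shape is preserved
print("divisive:   ", divisive)                 # inflated ~10x by the distorted baseline
```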



2016 ◽  
Author(s):  
Hieab HH Adams ◽  
Hadie Adams ◽  
Lenore J Launer ◽  
Sudha Seshadri ◽  
Reinhold Schmidt ◽  
...  

Joint analysis of data from multiple studies in collaborative efforts strengthens scientific evidence, with the gold-standard approach being the pooling of individual participant data (IPD). However, sharing IPD is often subject to legal, ethical, and logistic constraints for sensitive or high-dimensional data, such as in clinical trials, observational studies, and large-scale omics studies. Therefore, meta-analysis of study-level effect estimates is routinely done, but this compromises statistical power, accuracy, and flexibility. Here we propose a novel meta-analytical approach, named partial derivatives meta-analysis, that is mathematically equivalent to using IPD yet requires only the sharing of aggregate data. It not only yields identical results to pooled IPD analyses, but also allows post-hoc adjustment for covariates and stratification without the need for site-specific re-analysis. Thus, in cases where IPD cannot be shared, partial derivatives meta-analysis still produces gold-standard results, which can be used to better inform guidelines and policies on clinical practice.
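The published method is more general than this, but the core idea can be sketched for ordinary least squares: if each site shares only the aggregates X'X and X'y, summing them across sites and solving once reproduces the pooled-IPD estimate exactly. The example below is our illustration, not the authors' implementation.

```python
# Aggregate-data "meta-analysis" reproducing the pooled-IPD OLS estimate (sketch).
import numpy as np

rng = np.random.default_rng(3)

def make_site(n):
    """Synthetic site data: intercept plus two covariates, known coefficients."""
    X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.2, size=n)
    return X, y

sites = [make_site(n) for n in (200, 350, 150)]

# Each site shares only X'X and X'y; sum the aggregates and solve once.
xtx = sum(X.T @ X for X, _ in sites)
xty = sum(X.T @ y for X, y in sites)
beta_aggregate = np.linalg.solve(xtx, xty)

# Pooled individual participant data (IPD) analysis for comparison.
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_ipd = np.linalg.lstsq(X_all, y_all, rcond=None)[0]

print(np.allclose(beta_aggregate, beta_ipd))    # True: identical estimates
```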



2019 ◽  
Author(s):  
Eduard Klapwijk ◽  
Wouter van den Bos ◽  
Christian K. Tamnes ◽  
Nora Maria Raschle ◽  
Kathryn L. Mills

Many workflows and tools that aim to increase the reproducibility and replicability of research findings have been suggested. In this review, we discuss the opportunities that these efforts offer for the field of developmental cognitive neuroscience, in particular developmental neuroimaging. We focus on issues broadly related to statistical power and to flexibility and transparency in data analyses. Critical considerations relating to statistical power include challenges in the recruitment and testing of young populations, how to increase the value of studies with small samples, and the opportunities and challenges of working with large-scale datasets. Developmental studies involve challenges such as choices about age groupings, lifespan modelling, analyses of longitudinal change, and data that can be processed and analyzed in a multitude of ways. Flexibility in data acquisition, analysis, and description may thereby greatly impact results. We discuss methods for improving transparency in developmental neuroimaging and how preregistration can improve methodological rigor. While outlining challenges and issues that may arise before, during, and after data collection, we highlight solutions and resources for overcoming some of them. Since the number of useful tools and techniques is ever-growing, we emphasize that many practices can be implemented stepwise.



2020 ◽  
Author(s):  
Joshua Conrad Jackson ◽  
Joseph Watts ◽  
Johann-Mattis List ◽  
Ryan Drabble ◽  
Kristen Lindquist

Humans have been using language for thousands of years, but psychologists seldom consider what natural language can tell us about the mind. Here we propose that language offers a unique window into human cognition. After briefly summarizing the legacy of language analyses in psychological science, we show how methodological advances have made these analyses more feasible and insightful than ever before. In particular, we describe how two forms of language analysis—comparative linguistics and natural language processing—are already contributing to how we understand emotion, creativity, and religion, and overcoming methodological obstacles related to statistical power and culturally diverse samples. We summarize resources for learning both of these methods, and highlight the best way to combine language analysis techniques with behavioral paradigms. Applying language analysis to large-scale and cross-cultural datasets promises to provide major breakthroughs in psychological science.



2018 ◽  
Author(s):  
Easton R White

Long-term time series are necessary to better understand population dynamics, assess species' conservation status, and make management decisions. However, population data are often expensive, requiring a lot of time and resources. What is the minimum population time series length required to detect significant trends in abundance? I first present an overview of the theory and past work that has tried to address this question. As a test of these approaches, I then examine 822 populations of vertebrate species. I show that 72% of time series required at least 10 years of continuous monitoring in order to achieve a high level of statistical power. However, the large variability between populations casts doubt on commonly used simple rules of thumb, like those employed by the IUCN Red List. I argue that statistical power needs to be considered more often in monitoring programs. Short time series are likely under-powered and potentially misleading.
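A hedged sketch of the underlying power question: how often does a log-linear regression detect a 5% per year decline as the series lengthens? The decline rate, observation error, and significance level are assumptions, not values from the analysis of the 822 populations.

```python
# Power to detect a declining trend as a function of time-series length (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def trend_power(n_years, r=-0.05, sigma=0.3, alpha=0.05, n_sim=2000):
    """Fraction of simulated series in which the log-linear trend is significant."""
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        log_n = np.log(1000) + r * years + rng.normal(0, sigma, n_years)
        slope, _, _, p_value, _ = stats.linregress(years, log_n)
        hits += (p_value < alpha) and (slope < 0)
    return hits / n_sim

for n_years in (5, 10, 15, 20):
    print(n_years, "years:", round(trend_power(n_years), 2))
```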



Author(s):  
Spencer A. Hill ◽  
Simona Bordoni ◽  
Jonathan L. Mitchell

How far the Hadley circulation’s ascending branch extends into the summer hemisphere is a fundamental but incompletely understood characteristic of Earth’s climate. Here, we present a predictive, analytical theory for this ascending edge latitude based on the extent of supercritical forcing. Supercriticality sets the minimum extent of a large-scale circulation based on the angular momentum and absolute vorticity distributions of the hypothetical state were the circulation absent. We explicitly simulate this latitude-by-latitude radiative-convective equilibrium (RCE) state. Its depth-averaged temperature profile is suitably captured by a simple analytical approximation that increases linearly with sin φ, where φ is latitude, from the winter to the summer pole. This, in turn, yields a one-third power-law scaling of the supercritical forcing extent with the thermal Rossby number. In moist and dry idealized GCM simulations under solsticial forcing performed with a wide range of planetary rotation rates, the ascending edge latitudes largely behave according to this scaling.
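A symbolic restatement of the approximation and scaling named above may help; the notation and the thermal Rossby number definition below are standard forms we assume, not necessarily the paper's exact expressions.

```latex
% Sketch: linear-in-sin(phi) RCE profile and the resulting one-third power-law scaling.
\[
  \overline{\theta}_{\mathrm{RCE}}(\varphi) \;\approx\; \theta_0 + \Delta\theta\,\sin\varphi ,
  \qquad
  \varphi_a \;\propto\; \mathrm{Ro}^{1/3},
  \qquad
  \mathrm{Ro} \;\equiv\; \frac{g H\,\Delta\theta}{\Omega^2 a^2 \theta_0},
\]
% where \(\varphi_a\) is the supercritical-forcing (ascending-edge) extent, \(\Omega\) the
% planetary rotation rate, \(a\) the planetary radius, \(H\) a tropospheric depth scale,
% and \(\Delta\theta\) the depth-averaged pole-to-pole temperature contrast of the RCE state.
```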


