size estimates
Recently Published Documents

Total documents: 556 (five years: 138)
H-index: 42 (five years: 6)

2021
Author(s): Rebecca Hooper, Becky Brett, Alex Thornton

There are multiple hypotheses for the evolution of cognition. The most prominent are the Social Intelligence Hypothesis (SIH) and the Ecological Intelligence Hypothesis (EIH), which are often pitted against one another. These hypotheses tend to be tested using broad-scale comparative studies of brain size, where brain size is used as a proxy for cognitive ability and various social and/or ecological variables are included as predictors. Here, we test how methodologically robust such analyses are. First, we investigate variation in brain and body size measurements across >1000 species of bird. We demonstrate that there is substantial variation in brain and body size estimates across datasets, indicating that conclusions drawn from comparative brain size models are likely to differ depending on the source of the data. We then subset our data to the Corvides infraorder and interrogate how modelling decisions affect results. We show that model results change substantially depending on variable inclusion, source, and classification; indeed, we could have drawn multiple contradictory conclusions about the principal drivers of brain size evolution. These results reflect recent concerns that current methods in comparative brain size studies are not robust. We add our voices to a growing community of researchers suggesting that we move on from such methods when investigating cognitive evolution. We suggest that a more fruitful way forward is instead to use direct measures of cognitive performance to interrogate why variation in cognition arises within species and between closely related taxa.
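The data-sensitivity problem the authors describe is easy to reproduce in miniature. The sketch below uses entirely simulated data with hypothetical variable names, and deliberately ignores phylogenetic non-independence, so it is an illustration rather than a reconstruction of the paper's models: it fits two ordinary regressions of log brain size on log body size plus a "social" predictor, with and without a correlated "ecological" predictor, to show how the apparent driver of brain size can shift with variable inclusion.

```python
# Minimal, non-phylogenetic sketch on simulated data (hypothetical variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
log_body = rng.normal(2.0, 0.5, n)                        # log body mass
group_size = 0.8 * log_body + rng.normal(0, 0.4, n)       # "social" predictor
diet_breadth = 0.6 * group_size + rng.normal(0, 0.4, n)   # correlated "ecological" predictor
log_brain = 0.6 * log_body + 0.3 * diet_breadth + rng.normal(0, 0.2, n)

df = pd.DataFrame(dict(log_brain=log_brain, log_body=log_body,
                       group_size=group_size, diet_breadth=diet_breadth))

# Model 1: social predictor only -- it appears to "drive" brain size.
m1 = smf.ols("log_brain ~ log_body + group_size", df).fit()
# Model 2: add the correlated ecological predictor -- the social effect shrinks.
m2 = smf.ols("log_brain ~ log_body + group_size + diet_breadth", df).fit()

print("group_size coefficient without diet_breadth:", round(m1.params["group_size"], 3))
print("group_size coefficient with diet_breadth:   ", round(m2.params["group_size"], 3))
```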


2021
Author(s): Maximilian Primbs, Charlotte Rebecca Pennington, Daniel Lakens, Miguel Alejandro Silan, Dwayne Sean Noah Lieck, ...

Götz et al. (2021) argue that small effects are the indispensable foundation for a cumulative psychological science. Whilst we applaud their efforts to bring this important discussion to the forefront, we argue that their core arguments do not hold up under scrutiny and, if left uncorrected, could undermine best practices in reporting and interpreting effect size estimates. Their article can be used as a convenient blanket defense to justify 'small' effects as meaningful. In our reply, we first argue that comparisons between psychological science and genetics are fundamentally flawed because the two disciplines have vastly different goals and methodologies. Second, we argue that p-values, not effect sizes, are the main currency for publication in psychology, meaning that any biases in the literature are caused by the pressure to publish statistically significant results, not by a pressure to publish large effects. Third, we contend that claims that small effects are important and consequential must be supported by empirical evidence, or at least rest on a falsifiable line of reasoning. Finally, we propose that researchers should evaluate effect sizes in relative, not absolute, terms, and we outline several approaches for doing so.
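As one way of reading "relative, not absolute" evaluation, the sketch below computes a standardized mean difference (Cohen's d) from toy data and then locates it within a hypothetical benchmark distribution of effect sizes from the same literature. The benchmark values and data are invented for illustration and are not drawn from Götz et al. or from the reply.

```python
# Relative evaluation of an effect size against a (hypothetical) literature benchmark.
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
treatment = rng.normal(0.12, 1.0, 200)   # toy data, illustration only
control = rng.normal(0.00, 1.0, 200)
d = cohens_d(treatment, control)

# Hypothetical benchmark: absolute effect sizes from comparable published studies.
benchmark_ds = np.array([0.08, 0.15, 0.21, 0.25, 0.30, 0.38, 0.45, 0.60])
percentile = 100 * np.mean(benchmark_ds < abs(d))
print(f"d = {d:.2f}, larger than {percentile:.0f}% of benchmark effects")
```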


Author(s): Gaute Lyngstad, Per Skjelbred, David M. Swanson, Lasse A. Skoglund

Abstract
Purpose: Effect size estimates of analgesic drugs can be misleading. Ibuprofen (400 mg, 600 mg, 800 mg), paracetamol (1000 mg, 500 mg), paracetamol 1000 mg/codeine 60 mg, and placebo were investigated to establish the multidimensional pharmacodynamic profile of each drug on acute pain, with calculated effect size estimates.
Methods: A randomized, double-blind, single-dose, placebo-controlled, parallel-group, single-centre, outpatient study enrolled 350 patients (mean age 25 years, range 18 to 30 years) of homogenous ethnicity after third molar surgery. The primary outcome was sum pain intensity over 6 h. Secondary outcomes were time to analgesic onset, duration of analgesia, time to rescue drug intake, number of patients taking rescue drug, sum pain intensity difference, maximum pain intensity difference, time to maximum pain intensity difference, number needed to treat values, adverse effects, overall drug assessment as a patient-reported outcome measure (PROM), and the effect size estimates NNT and NNTp.
Results: Ibuprofen doses above 400 mg do not significantly increase the analgesic effect. Paracetamol has a very flat analgesic dose–response profile. Paracetamol 1000 mg/codeine 60 mg gives analgesia similar to that of ibuprofen at doses from 400 mg, but with a shorter time to analgesic onset. The active drugs show no significant difference in maximal analgesic effect, and the other secondary outcomes support these findings. Adverse effects were infrequent and mild to moderate in all active groups. NNT and NNTp values did not coincide well with the PROMs.
Conclusion: Ibuprofen doses above 400 mg offer limited analgesic gain for acute pain. Paracetamol 1000 mg/codeine 60 mg is comparable to ibuprofen doses from 400 mg. In our study, calculated effect size estimates and the PROM did not appear to relate well to each other as estimators of clinical analgesic efficacy.
Trial registration: NCT00699114.
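For readers unfamiliar with the effect size estimate used here, NNT is simply the reciprocal of the difference in responder proportions between active drug and placebo. The snippet below shows the arithmetic on made-up proportions; the numbers are not taken from this trial.

```python
# NNT (number needed to treat) from responder proportions; illustrative values only.
def number_needed_to_treat(p_active: float, p_placebo: float) -> float:
    """NNT = 1 / absolute risk reduction (difference in responder rates)."""
    arr = p_active - p_placebo
    if arr <= 0:
        raise ValueError("active treatment shows no benefit over placebo")
    return 1.0 / arr

# e.g. 60% responders on the active drug vs 25% on placebo -> NNT of about 2.9
print(round(number_needed_to_treat(0.60, 0.25), 1))
```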


2021, Vol 8
Author(s): Mingming Liu, Mingli Lin, Xiaoming Tang, Lijun Dong, Peijun Zhang, ...

Observer-based counts and photo-identification are two well-established methods used extensively in cetacean studies. Using these two methods, group size has been widely reported, especially for small dolphins. Both methods carry potential errors in estimating group size, yet comparisons between them over a broad range of group sizes are still lacking. In particular, biogeographical variation in group size estimates is often confounded with methodological variation, making it difficult to compare estimates from different geographic regions. Here, group size estimates of a small, shallow-water, near-shore delphinid species, the Indo-Pacific humpback dolphin (Sousa chinensis), were simultaneously sampled using observer-based counts and photo-identification at three regions in the northern South China Sea. The data showed that dolphin group size estimates from the two methods were highly variable and associated with sampling region. Generalized linear mixed models (GLMMs) indicated that dolphin group size differed significantly among regions. Statistical examination further demonstrated that dolphin group size estimates could be affected by a complex combination of methodological and biogeographical variation. A common hurdle in examining the factors that influence the estimation process is the inability to know the true group size for each sample; methods that could generate comparable estimates of true group size are therefore warranted in future studies. In conclusion, our findings provide a better understanding of methodological and biogeographical variation in group size estimates of humpback dolphins, and help yield more robust abundance and density estimates for these vulnerable animals.
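The modelling approach described above can be sketched in simplified form. The example below uses toy data and a plain Poisson GLM rather than the GLMMs used in the study (i.e. no random effects), so it is only an illustration of testing whether group-size counts differ by region and by survey method.

```python
# Simplified Poisson GLM on simulated group-size counts (not the study's GLMM).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
regions = np.repeat(["A", "B", "C"], 100)
methods = np.tile(["observer", "photo_id"], 150)
mean_by_region = {"A": 5, "B": 9, "C": 14}
group_size = [rng.poisson(mean_by_region[r] * (1.2 if m == "photo_id" else 1.0))
              for r, m in zip(regions, methods)]

df = pd.DataFrame({"group_size": group_size, "region": regions, "method": methods})
model = smf.glm("group_size ~ C(region) + C(method)", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```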


Author(s): Stephanie Manzo, E. Griffin Nicholson, Zachary Devereux, Robert N. Fisher, Chris W. Brown, ...

Accurate status assessments of long-lived, widely distributed taxa depend on the availability of long-term monitoring data from multiple populations. However, monitoring populations across large temporal and spatial scales is often beyond the scope of any one researcher or research group. Consequently, wildlife managers may be tasked with utilizing limited information from different sources to detect range-wide evidence of population declines and their causes. When assessments need to be made under such constraints, the research and management communities must determine how to extrapolate from variable population data to species-level inferences. Here, using three different approaches, we integrate and analyze data from the peer-reviewed literature and government agency reports to inform conservation for northwestern pond turtles (NPT) Actinemys marmorata and southwestern pond turtles (SPT) Actinemys pallida. Both NPT and SPT are long-lived freshwater turtles distributed along the west coast of the United States and Mexico. Conservation concerns exist for both species; however, SPT may face more severe threats and are thought to exist at lower densities throughout their range than NPT. For each species, we ranked the impacts of 13 potential threats, estimated population sizes, and modeled population viability with and without long-term droughts. Our results suggest that predation of hatchlings by invasive predators, such as American bullfrogs Lithobates catesbeianus and Largemouth Bass Micropterus salmoides, is a high-ranking threat for NPT and SPT. Southwestern pond turtles may also face more severe impacts associated with natural disasters (droughts, wildfires, and floods) than NPT. Population size estimates from trapping surveys indicate that SPT have smaller population sizes on average than NPT (p = 0.0003), suggesting they may be at greater risk of local extirpation. Population viability analysis models revealed that long-term droughts are a key environmental parameter; as the frequency of severe droughts increases with climate change, the likelihood of population recovery decreases, especially when census sizes are low. Given current population trends and vulnerability to natural disasters throughout their range, we suggest that conservation and recovery actions first focus on SPT to prevent further population declines.
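A generic population-viability-style simulation makes the drought result concrete. The sketch below is not the authors' model: it uses invented growth rates and drought effects to show how extinction probability over 50 years rises as drought frequency increases and as the starting census size shrinks.

```python
# Generic PVA-style Monte Carlo with drought years (invented parameters).
import numpy as np

def extinction_probability(n0, years=50, drought_prob=0.1, reps=2000,
                           lam_normal=1.03, lam_drought=0.80, seed=0):
    """Fraction of simulated trajectories that hit zero within `years`."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            lam = lam_drought if rng.random() < drought_prob else lam_normal
            n = rng.poisson(n * lam)   # demographic stochasticity
            if n == 0:
                extinct += 1
                break
    return extinct / reps

for n0 in (20, 100):
    for p in (0.05, 0.20, 0.40):
        print(f"N0={n0:3d}  drought_prob={p:.2f}  "
              f"P(extinct in 50 yr)={extinction_probability(n0, drought_prob=p):.2f}")
```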


Linguistics, 2021, Vol 0 (0)
Author(s): Shravan Vasishth, Andrew Gelman

Abstract
The use of statistical inference in linguistics and related areas like psychology typically involves a binary decision: either reject or accept some null hypothesis using statistical significance testing. When statistical power is low, this frequentist data-analytic approach breaks down: null results are uninformative, and effect size estimates associated with significant results are overestimated. Using an example from psycholinguistics, several alternative approaches are demonstrated for reporting inconsistencies between the data and a theoretical prediction. The key is to commit to a falsifiable prediction, to quantify uncertainty statistically, and to accept that, in almost all practical data analysis situations, we can only draw uncertain conclusions from data, regardless of whether we obtain statistical significance. A focus on uncertainty quantification is likely to lead to fewer excessively bold claims that, on closer investigation, turn out not to be supported by the data.
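The overestimation problem described here (sometimes called a Type M, or magnitude, error) can be demonstrated with a short simulation: when power is low, the subset of estimates that reach p < .05 systematically exaggerates the true effect. The numbers below are illustrative only.

```python
# Low-power simulation: significant estimates exaggerate a small true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, sd, n, sims = 2.0, 20.0, 30, 5000   # a genuinely small, noisy effect

significant_estimates = []
for _ in range(sims):
    sample = rng.normal(true_effect, sd, n)
    result = stats.ttest_1samp(sample, 0.0)
    if result.pvalue < 0.05:
        significant_estimates.append(sample.mean())

power = len(significant_estimates) / sims
exaggeration = np.mean(np.abs(significant_estimates)) / true_effect
print(f"approx. power: {power:.2f}; mean |significant estimate| is "
      f"{exaggeration:.1f}x the true effect")
```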

