Discomfort, pain and fatigue levels of 160 cyclists after a kinematic bike-fitting method: an experimental study

2021, Vol 7 (3), pp. e001096
Author(s): Robson Dias Scoz, Cesar Ferreira Amorim, Thiago Espindola, Mateus Santiago, Jose Joao Baltazar Mendes, ...

Objective: To analyse riders' subjective responses after a standardised bicycle ergonomic adjustment method. Methods: Experimental study of 160 healthy, amateur mountain bikers assessed before and 30 days after a bike-fitting session. The main outcome measures were subjective comfort level (Feeling Scale, FEEL), fatigue (OMNI Scale) and pain (Visual Analogue Scale, VAS). Results: All variables differed significantly between the pre- and post-bike-fit sessions (p<0.001). FEEL, OMNI and VAS-knee demonstrated large effect sizes (d=1.30, d=1.39 and d=0.86, respectively). VAS-hands, VAS-neck and VAS-back indicated moderate effect sizes (d=0.58, d=0.52 and d=0.43, respectively). VAS-groin and VAS-ankle indicated small effect sizes (d=0.46 and d=0.43, respectively). Conclusions: Overall discomfort, fatigue and pain in healthy adult mountain bikers improved according to all three scales. The largest improvements in pain levels were detected in the knee, hands, back and neck compared with pre-session values. Groin and ankle pain showed smaller but still significant improvements. Future clinical trials should address the bias effects of this experimental study.
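For readers who want to reproduce this kind of pre/post comparison, the following minimal sketch (not the authors' code; the data, scale ranges and choice of standardiser are assumptions for illustration) shows how a paired effect size such as the Cohen's d values reported above can be computed in Python.

```python
import numpy as np
from scipy import stats

def paired_cohens_d(pre, post):
    """Standardised mean change for paired pre/post scores.
    Here the pre-session SD is used as the standardiser; other conventions
    (e.g. the SD of the differences) are also common."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (pre.mean() - post.mean()) / pre.std(ddof=1)

# Illustrative data only -- not the study's measurements.
rng = np.random.default_rng(0)
vas_knee_pre = rng.normal(5.0, 1.5, 160)                    # 0-10 VAS before the bike fit
vas_knee_post = vas_knee_pre - rng.normal(1.3, 1.0, 160)    # 30 days after the session

t, p = stats.ttest_rel(vas_knee_pre, vas_knee_post)
d = paired_cohens_d(vas_knee_pre, vas_knee_post)
print(f"paired t = {t:.2f}, p = {p:.4g}, d = {d:.2f}")
```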

2021, pp. 1-6
Author(s): David M. Garner, Gláucia S. Barreto, Vitor E. Valenti, Franciele M. Vanderlei, Andrey A. Porto, ...

Abstract Introduction: Approximate Entropy is a widely applied metric for evaluating chaotic responses and irregularities in RR intervals derived from an electrocardiogram. To estimate these responses, however, it has one major problem: the accurate determination of tolerance and embedding dimension. We therefore aimed to address this potential pitfall by computing numerous alternatives and detecting their optimal values in malnourished children. Materials and methods: We evaluated 70 subjects split equally into malnourished children and controls. To estimate autonomic modulation, heart rate was recorded in the absence of any physical, sensory or pharmacological stimuli. From the time series obtained, Approximate Entropy was computed for tolerances from 0.1 to 0.5 in steps of 0.1 and embedding dimensions from 1 to 5 in steps of 1, and the statistical significance of the differences between groups was quantified with Cohen's ds and Hedges's gs effect sizes. Results: The greatest statistical significance achieved for the effect sizes across all combinations was −0.2897 (Cohen's ds) and −0.2865 (Hedges's gs); this was obtained with embedding dimension = 5 and tolerance = 0.3. Conclusions: Approximate Entropy was able to identify a reduction in chaotic response in malnourished children. The best values of embedding dimension and tolerance for identifying malnourished children were, respectively, embedding dimension = 5 and tolerance = 0.3. Nevertheless, Approximate Entropy remains an unreliable mathematical marker for this purpose.
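As a point of reference, the sketch below shows a straightforward Python implementation of Approximate Entropy in Pincus' formulation, swept over the tolerance and embedding-dimension grid described in the abstract; the RR-interval series is synthetic and the code is not the authors' implementation.

```python
import numpy as np

def approximate_entropy(u, m, r_factor):
    """Approximate Entropy of series u for embedding dimension m and
    tolerance r = r_factor * SD(u) (Pincus' formulation, self-matches included)."""
    u = np.asarray(u, float)
    n = len(u)
    r = r_factor * u.std(ddof=0)

    def phi(dim):
        # Embedded vectors of length `dim`
        x = np.array([u[i:i + dim] for i in range(n - dim + 1)])
        # Chebyshev (maximum) distance between every pair of vectors
        dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # Fraction of vectors within tolerance r for each template vector
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Synthetic RR-interval series (ms) for illustration only.
rr = np.random.default_rng(1).normal(800, 50, 500)
for m in range(1, 6):                            # embedding dimensions 1..5
    for r_factor in (0.1, 0.2, 0.3, 0.4, 0.5):   # tolerances 0.1..0.5 of the SD
        print(m, r_factor, round(approximate_entropy(rr, m, r_factor), 3))
```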


Author(s): María Vicent, Cándido J. Inglés, Carolina Gonzálvez, Ricardo Sanmartín, José Manuel García-Fernández

The aim of this study was to examine the relationship between Socially Prescribed Perfectionism (SPP) and the Big Five personality traits in a sample of 804 Primary School students between 8 and 11 years old (M=9.57; SD=1.12). The SPP subscale of the Child and Adolescent Perfectionism Scale (CAPS) and the Big Five Questionnaire for Children (BFQ-N), which evaluates the traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, were used. The mean difference analysis showed that students with high levels of SPP scored significantly higher on Conscientiousness, Agreeableness, Extraversion and Openness, with small effect sizes in all cases. In contrast, no significant differences were observed for Neuroticism. Logistic regression analysis revealed that all personality traits except Neuroticism, whose results did not reach statistical significance, significantly and positively predicted higher scores on SPP, with OR values ranging from 1.01 (for Conscientiousness and Agreeableness) to 1.03 (for Openness and Extraversion).
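As an illustration of how odds ratios of this kind are typically obtained, the sketch below fits a logistic regression of a high/low SPP indicator on the five trait scores with statsmodels; the data, trait scales and coefficients are simulated assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data for illustration only: five trait scores and a high/low SPP indicator.
rng = np.random.default_rng(0)
n = 804
traits = ["Conscientiousness", "Agreeableness", "Extraversion", "Openness", "Neuroticism"]
X = pd.DataFrame(rng.normal(50, 10, (n, len(traits))), columns=traits)

# Weak positive association between the first four traits and high SPP (assumption).
linear_predictor = -4 + 0.01 * X[traits[:4]].sum(axis=1)
y = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)   # OR per one-point increase in each trait score
print(odds_ratios.round(3))
```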


2021, pp. 1-11
Author(s): Valentina Escott-Price, Karl Michael Schmidt

Background: Genome-wide association studies (GWAS) were successful in identifying SNPs showing association with disease, but their individual effect sizes are small and require large sample sizes to achieve statistical significance. Methods of post-GWAS analysis, including gene-based, gene-set and polygenic risk scores, combine the SNP effect sizes in an attempt to boost the power of the analyses. To avoid giving undue weight to SNPs in linkage disequilibrium (LD), the LD needs to be taken into account in these analyses. Objectives: We review methods that attempt to adjust the effect sizes (β-coefficients) of summary statistics, instead of simple LD pruning. Methods: We subject LD adjustment approaches to a mathematical analysis, recognising Tikhonov regularisation as a framework for comparison. Results: Observing the similarity of the processes involved with the more straightforward Tikhonov-regularised ordinary least squares estimate for multivariate regression coefficients, we note that current methods based on a Bayesian model for the effect sizes effectively provide an implicit choice of the regularisation parameter, which is convenient, but at the price of reduced transparency and, especially in smaller LD blocks, a risk of incomplete LD correction. Conclusions: There is no simple answer to the question of which method is best, but where interpretability of the LD adjustment is essential, as in research aiming at identifying the genomic aetiology of disorders, our study suggests that a more direct choice of mild regularisation in the correction of effect sizes may be preferable.
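To make the comparison concrete, here is a minimal numpy sketch of the Tikhonov-regularised adjustment the review uses as its reference point: given the LD (correlation) matrix R of a block of SNPs and their marginal GWAS effect sizes, the jointly adjusted effects are approximated by (R + λI)⁻¹β̂. The matrix, effect sizes and λ values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def tikhonov_adjust(R, beta_marginal, lam):
    """Tikhonov-regularised LD adjustment of marginal effect sizes:
    beta_joint ≈ (R + lam * I)^(-1) beta_marginal."""
    p = R.shape[0]
    return np.linalg.solve(R + lam * np.eye(p), beta_marginal)

# Toy LD block of three SNPs in strong LD (illustrative numbers only).
R = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.7],
              [0.6, 0.7, 1.0]])
beta_hat = np.array([0.12, 0.11, 0.09])    # marginal (per-SNP) GWAS effect sizes

for lam in (0.01, 0.1, 1.0):               # milder versus stronger regularisation
    print(lam, tikhonov_adjust(R, beta_hat, lam).round(3))
```

Smaller λ keeps the result close to the fully LD-corrected joint estimate R⁻¹β̂, while larger λ stabilises the inversion at the cost of leaving part of the LD uncorrected, which is the trade-off discussed in the abstract.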


Author(s): H. S. Styn, S. M. Ellis

The determination of the significance of differences in means and of relationships between variables is important in many empirical studies. Usually only statistical significance is reported, which does not necessarily indicate an important (practically significant) difference or relationship. In studies based on probability samples, effect size indices should be reported in addition to statistical significance tests in order to comment on practical significance. Where complete populations or convenience samples are used, the determination of statistical significance is, strictly speaking, no longer relevant, while effect size indices can still be used as a basis for judging significance. In this article attention is paid to the use of effect size indices to establish practical significance. It is also shown how these indices are utilized in a few fields of statistical application and how they receive attention in the statistical literature and in computer packages. The use of effect sizes is illustrated by a few examples from the research literature.
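The distinction the article draws can be shown in a few lines of Python (an illustrative sketch with simulated data, not taken from the article): with very large samples a trivially small difference in means reaches statistical significance, while the effect size index signals that it is not practically significant.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(2)
group_a = rng.normal(102, 15, 5000)   # very large samples: even a 2-point mean
group_b = rng.normal(100, 15, 5000)   # difference will usually reach p < 0.05

t, p = stats.ttest_ind(group_a, group_b)
d = cohens_d(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.2g}, d = {d:.2f}")
# d comes out around 0.13 -- below the conventional 0.2 "small effect" guideline,
# so the difference is statistically but not practically significant.
```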


Author(s): Adam Thomas Biggs, Hugh M. Dainer, Lanny F. Littlejohn

Hyperbaric oxygen therapy has been proposed as a method to treat traumatic brain injuries. The combination of pressure and increased oxygen concentration produces a higher content of dissolved oxygen in the bloodstream, which could generate a therapeutic benefit for brain injuries. This dissolved oxygen penetrates deeper into damaged brain tissue than otherwise possible and promotes healing. The result includes improved cognitive functioning and an alleviation of symptoms. However, randomized controlled trials have failed to produce consistent conclusions across multiple studies. There are numerous explanations that might account for the mixed evidence, although one possibility is that prior evidence focuses primarily on statistical significance. The current analyses explored existing evidence by calculating an effect size from each active treatment group and each control group among previous studies. An effect size measure offers several advantages when comparing across studies as it can be used to directly contrast evidence from different scales, and it provides a proximal measure of clinical significance. When exploring the therapeutic benefit through effect sizes, there was a robust and consistent benefit to individuals who underwent hyperbaric oxygen therapy. Placebo effects from the control condition could account for approximately one-third of the observed benefits, but there appeared to be a clinically significant benefit to using hyperbaric oxygen therapy as a treatment intervention for traumatic brain injuries. This evidence highlights the need for design improvements when exploring interventions for traumatic brain injury as well as the importance of focusing on clinical significance in addition to statistical significance.


Author(s): Valentin Amrhein, Fränzi Korner-Nievergelt, Tobias Roth

The widespread use of 'statistical significance' as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (American Statistical Association, Wasserstein & Lazar 2016). We review why degrading p-values into 'significant' and 'nonsignificant' contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value but mistrust results with larger p-values. In either case, p-values can tell us little about the reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Significance (p≤0.05) itself is also hardly replicable: at a realistic statistical power of 40%, given that there is a true effect, only one in six studies will significantly replicate the significant result of another study. Even at a good power of 80%, results from two studies will be conflicting, in terms of significance, in one third of the cases if there is a true effect. This means that a replication cannot be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgement based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on the replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to publication bias against nonsignificant findings. Data dredging, p-hacking and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Larger p-values also offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis by falsely concluding that 'there is no effect'. Information on possible true effect sizes that are compatible with the data must be obtained from the observed effect size, e.g. from a sample average, and from a measure of uncertainty, such as a confidence interval. We review how confusion about the interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, such as 'we need more stringent decision rules', 'sample sizes will decrease' or 'we need to get rid of p-values'.
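The two replication figures quoted above follow from a simple calculation, reproduced below under the simplifying assumption that each study's chance of reaching p ≤ 0.05 equals its statistical power when a true effect exists.

```python
# Probability calculations behind the "one in six" and "one third" figures,
# assuming each study's chance of significance equals its power when a true effect exists.
for power in (0.4, 0.8):
    both_significant = power * power        # both studies reach p <= 0.05
    conflicting = 2 * power * (1 - power)   # exactly one of the two is significant
    print(f"power = {power:.0%}: both significant = {both_significant:.2f}, "
          f"conflicting = {conflicting:.2f}")

# power = 40%: both significant = 0.16 (about one in six)
# power = 80%: conflicting = 0.32 (about one third of cases)
```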


1998, Vol 15 (2), pp. 103-118
Author(s): Vinson H. Sutlive, Dale A. Ulrich

The unqualified use of statistical significance tests for interpreting the results of empirical research has been called into question by researchers in a number of behavioral disciplines. This paper reviews what statistical significance tells us and what it does not, with particular attention paid to criticisms of using the results of these tests as the sole basis for evaluating the overall significance of research findings. In addition, implications for adapted physical activity research are discussed. Based on the recent literature of other disciplines, several recommendations for evaluating and reporting research findings are made. They include calculating and reporting effect sizes, selecting an alpha level larger than the conventional .05 level, placing greater emphasis on replication of results, evaluating results in a sample size context, and employing simple research designs. Adapted physical activity researchers are encouraged to use specific modifiers when describing findings as significant.


2019, Vol 35 (2), pp. 350-356
Author(s): Juan Botella, Juan I. Durán

Meta-analysis is a firmly established methodology and an integral part of the process of generating knowledge across the empirical sciences. Meta-analysis has also turned its attention to methodology itself and has become a prominent critic of methodological shortcomings. We highlight several problematic issues in how research is done in psychology: excessive heterogeneity in results and difficulties with replication, publication bias, suboptimal methodological quality, and questionable researcher practices. These and other problems have led to a “crisis of confidence” in psychology. We discuss how the meta-analytical perspective and its procedures can help to overcome the crisis. A more cooperative, rather than competitive, perspective can shift the field to regard replication as a more valuable contribution. Knowledge cannot be based on isolated studies. Given the nature of psychology's object of study, the natural unit for generating knowledge must be the estimated distribution of effect sizes, not dichotomous decisions about statistical significance in individual studies. Some suggestions are offered on how to redirect researchers' practices so that their personal interests and those of science are better aligned.
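To illustrate what estimating a distribution of effect sizes (rather than making per-study significance decisions) can look like in practice, here is a small sketch of a DerSimonian-Laird random-effects summary; the study-level effect sizes and standard errors are invented for illustration and do not come from the article.

```python
import numpy as np

def random_effects_summary(d, se):
    """DerSimonian-Laird random-effects estimate of the mean effect size
    and the between-study variance tau^2."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)               # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)          # between-study variance
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    d_random = np.sum(w_star * d) / np.sum(w_star)
    return d_random, tau2

# Invented study-level effect sizes (Cohen's d) and their standard errors.
d = np.array([0.42, 0.15, 0.60, 0.05, 0.33])
se = np.array([0.12, 0.20, 0.15, 0.18, 0.10])
mean_d, tau2 = random_effects_summary(d, se)
print(f"pooled effect = {mean_d:.2f}, between-study variance tau^2 = {tau2:.3f}")
```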

