Kernel Regression Coefficients for Practical Significance

2022 ◽  
Vol 15 (1) ◽  
pp. 32
Author(s):  
Hrishikesh D. Vinod

Quantitative researchers often use Student’s t-test (and its p-values) to claim that a particular regressor is important (statistically significant) for explaining the variation in a response variable. A study is subject to the p-hacking problem when its author relies too much on formal statistical significance while ignoring the size of what is at stake. We suggest reporting estimates using nonlinear kernel regressions and the standardization of all variables to avoid p-hacking. We are filling an essential gap in the literature because p-hacking-related papers do not even mention kernel regressions or standardization. Although our methods have general applicability in all sciences, our illustrations refer to risk management for a cross-section of firms and financial management in macroeconomic time series. We estimate nonlinear, nonparametric kernel regressions for both examples to illustrate the computation of scale-free generalized partial correlation coefficients (GPCCs). We suggest supplementing the usual p-values with the “practical significance” revealed by scale-free GPCCs. We show that GPCCs also yield new pseudo regression coefficients that measure each regressor’s relative (nonlinear) contribution in a kernel regression.
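The two steps recommended above, standardizing every variable and fitting a nonlinear, nonparametric kernel regression, can be sketched as follows. This is a minimal Nadaraya-Watson illustration on synthetic data with a fixed Gaussian bandwidth, not the authors' GPCC implementation; all names and parameter choices are illustrative.

```python
import numpy as np

def standardize(x):
    """Center each column and scale it to unit standard deviation."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.5):
    """Gaussian-kernel estimate of E[y | x] at each row of x_eval."""
    # pairwise squared distances between evaluation and training points
    d2 = ((x_eval[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = np.sin(x[:, 0]) + 0.5 * x[:, 1] + 0.1 * rng.normal(size=200)

# Standardize everything, then fit the kernel regression.
xs = standardize(x)
ys = standardize(y.reshape(-1, 1)).ravel()
fit = nadaraya_watson(xs, ys, xs)
print(np.corrcoef(fit, ys)[0, 1])  # in-sample fit quality
```

Because all variables are standardized, fitted effects are scale-free, which is what makes comparisons of each regressor's contribution meaningful.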

2017 ◽  
Vol 16 (3) ◽  
pp. 1
Author(s):  
Laura Badenes-Ribera ◽  
Dolores Frias-Navarro

Abstract: “Evidence-based practice” requires professionals to critically assess the results of psychological research. However, incorrect interpretations of p-values are abundant and recurrent. These misconceptions affect professional decisions and compromise the quality of interventions and the accumulation of valid scientific knowledge. Identifying the type of fallacy that underlies statistical decisions is fundamental for planning statistical-education strategies designed to correct these misinterpretations. Therefore, the aim of this study is to analyze the interpretation of the p-value among psychology students and academic psychologists. The sample was composed of 161 participants (43 academics and 118 students). The mean length of service as an academic was 16.7 years (SD = 10.07); the mean age of the students was 21.59 years (SD = 1.3). The findings suggest that neither students nor academics know the correct interpretation of p-values. The inverse-probability fallacy presents the greatest comprehension problems. In addition, statistical significance is confused with practical or clinical significance. These results highlight the need for statistical education and re-education.
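The inverse-probability fallacy described above, reading a p-value as the probability that the null hypothesis is true, can be illustrated with a short simulation (synthetic data, illustrative parameter choices): even at the 0.05 level, the fraction of "significant" results for which the null is actually true can be several times larger than 0.05.

```python
import numpy as np
from scipy import stats

# A p-value is P(data at least this extreme | H0), not P(H0 | data).
rng = np.random.default_rng(1)
n_tests, n = 10000, 30
null_true = rng.random(n_tests) < 0.5        # half the hypotheses are truly null
mu = np.where(null_true, 0.0, 0.3)           # modest true effect when H0 is false
a = rng.normal(0.0, 1.0, (n_tests, n))
b = rng.normal(mu[:, None], 1.0, (n_tests, n))
pvals = stats.ttest_ind(a, b, axis=1).pvalue

sig = pvals < 0.05
fdr = null_true[sig].mean()                  # share of significant results that are null
print(f"fraction of 'significant' results where H0 is true: {fdr:.2f}")
```

With modest power, the printed fraction lands well above 0.05, which is exactly the confusion between alpha and P(H0 | significance) that the study probes.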


2019 ◽  
Vol 29 (3) ◽  
pp. 765-777 ◽  
Author(s):  
Giovanna Cilluffo ◽  
Gianluca Sottile ◽  
Stefania La Grutta ◽  
Vito MR Muggeo

This paper focuses on hypothesis testing in lasso regression, when one is interested in judging the statistical significance of the coefficients in a regression equation involving many covariates. To obtain reliable p-values, we propose a new lasso-type estimator relying on the idea of induced smoothing, which allows an appropriate covariance matrix and Wald statistics to be obtained relatively easily. Simulation experiments reveal that our approach performs well when contrasted with recent inferential tools in the lasso framework. Two real data analyses are presented to illustrate the proposed framework in practice.
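For contrast with the induced-smoothing estimator proposed above (not reproduced here), a naive baseline is to refit ordinary least squares on the lasso-selected covariates and form standard Wald statistics; the sketch below shows that baseline on synthetic data. Such post-selection p-values are known to be optimistic, which is part of what motivates dedicated lasso inference tools.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # only three active covariates
y = X @ beta + rng.normal(size=n)

# Step 1: lasso selects a sparse support.
support = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_)

# Step 2: refit OLS on the selected covariates and form Wald statistics.
Xs = X[:, support]
bhat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ bhat
sigma2 = resid @ resid / (n - Xs.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xs.T @ Xs)))
pvals = 2 * stats.norm.sf(np.abs(bhat / se)) # naive post-selection p-values
print(dict(zip(support.tolist(), np.round(pvals, 4))))
```

These p-values ignore the selection step, so they understate uncertainty; an estimator with a proper covariance matrix, like the one proposed in the paper, avoids that.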


The present study explored the relationship between spot and futures coffee prices. Correlation and regression analyses were carried out on monthly observations of International Coffee Organization (ICO) indicator prices of the four groups (Colombian Milds, Other Milds, Brazilian Naturals, and Robustas), representing spot markets, and the averages of the 2nd and 3rd positions of the Intercontinental Exchange (ICE) New York for Arabica and ICE Europe for Robusta, representing the futures market, for the period 1990 to 2019. The study also used the monthly average prices paid to coffee growers in India from 1990 to 2019. The estimated correlation coefficients indicated that futures prices and spot prices of coffee are highly correlated. Further, the estimated regression coefficients revealed a very strong relationship between futures prices and spot prices for all four ICO group indicator prices. Hence, the ICE New York (Arabica) and ICE Europe (Robusta) coffee futures prices are very closely related to spot prices. The estimated regression coefficients between futures prices and the price paid to coffee growers in India confirmed a positive relationship, but the greater dispersion of prices around the trend line indicates a weaker correlation between the price paid to growers in India and futures market prices during the study period.
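The analysis pattern described above, a correlation coefficient plus a fitted trend line between two monthly price series, can be sketched on synthetic data (the ICO/ICE series themselves are not reproduced here; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
months = 360                                          # monthly, 1990-2019
futures = 100 + np.cumsum(rng.normal(0, 3, months))   # synthetic futures series
spot = 5 + 0.95 * futures + rng.normal(0, 4, months)  # spot tracks futures closely

r = np.corrcoef(spot, futures)[0, 1]                  # correlation coefficient
slope, intercept = np.polyfit(futures, spot, 1)       # regression (trend line)
print(f"r = {r:.3f}, slope = {slope:.3f}")
```

The "dispersion around the trend line" the study mentions corresponds to the residual noise term: raising its standard deviation lowers r even when the regression slope stays essentially unchanged.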


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 603
Author(s):  
Leonid Hanin

I uncover previously underappreciated systematic sources of false and irreproducible results in the natural, biomedical and social sciences that are rooted in statistical methodology. They include the inevitably occurring deviations from the basic assumptions behind statistical analyses and the use of various approximations. I show through a number of examples that (a) arbitrarily small deviations from distributional homogeneity can lead to arbitrarily large deviations in the outcomes of statistical analyses; (b) samples of random size may violate the Law of Large Numbers and are therefore generally unsuitable for conventional statistical inference; (c) the same is true, in particular, when random sample size and observations are stochastically dependent; and (d) the use of the Gaussian approximation based on the Central Limit Theorem has dramatic implications for p-values and statistical significance, essentially making the pursuit of small significance levels and p-values at a fixed sample size meaningless. The latter is proven rigorously in the case of the one-sided Z test. This article could serve as cautionary guidance to scientists and practitioners employing statistical methods in their work.
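Point (d) can be illustrated numerically (this is an illustration, not the paper's rigorous proof): for a one-sided test of a fair coin at fixed n, the Gaussian-approximation p-value and the exact binomial p-value drift apart by orders of magnitude precisely in the small-p region that such tests are supposed to resolve.

```python
import numpy as np
from scipy import stats

# One-sided test of a fair coin (H0: p = 0.5) at fixed n = 100.
n = 100
k = np.arange(60, 95, 5)                  # increasingly extreme head counts
exact = stats.binom.sf(k - 1, n, 0.5)     # exact p-value P(X >= k)
z = (k - 0.5 * n) / np.sqrt(0.25 * n)
approx = stats.norm.sf(z)                 # CLT/Z-test p-value
for kk, e, a in zip(k, exact, approx):
    print(f"k={kk}: exact={e:.3e}  normal={a:.3e}  ratio={a / e:.2f}")
```

Near the center of the distribution the two p-values roughly agree, but deep in the tail the approximation error dwarfs the p-value itself, so reporting a tiny Gaussian p-value at fixed n conveys little.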


2018 ◽  
Vol 29 (08) ◽  
pp. 1850075
Author(s):  
Tingyuan Nie ◽  
Xinling Guo ◽  
Mengda Lin ◽  
Kun Zhao

Quantifying the invulnerability of complex networks is a fundamental problem, in which identifying influential nodes is of theoretical and practical significance. In this paper, we propose a novel definition of centrality named total information (TC), derived from the local sub-graph constructed by a node and its neighbors. The centrality is defined as the sum of the self-information of the node and the mutual information of its neighbor nodes. We use the proposed centrality to identify important nodes through an evaluation of the invulnerability of scale-free networks. The results show that both the efficiency and the effectiveness of the proposed centrality are improved compared with traditional centralities.
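The abstract does not give the exact formula for TC, so the sketch below is a purely illustrative stand-in: it uses Shannon self-information based on degree share for the node's own term and, in place of the paper's mutual-information term (whose form the abstract does not specify), simply sums the neighbors' self-information. Function names and the test network are assumptions.

```python
import math
import networkx as nx

def self_information(G, v):
    # Shannon self-information of drawing node v by degree share: -log2(d_v / 2m)
    two_m = 2 * G.number_of_edges()
    return -math.log2(G.degree(v) / two_m)

def local_information_centrality(G, v):
    # Illustrative stand-in for TC: the node's self-information plus its
    # neighbors' self-information (NOT the paper's mutual-information term).
    return self_information(G, v) + sum(self_information(G, u) for u in G[v])

# Scale-free test network, matching the network class evaluated in the paper.
G = nx.barabasi_albert_graph(200, 2, seed=0)
ranked = sorted(G.nodes, key=lambda v: local_information_centrality(G, v),
                reverse=True)
print(ranked[:5])  # candidate influential nodes
```

Even this simplified local-information score concentrates on hubs, which is the qualitative behavior an invulnerability evaluation (removing top-ranked nodes and measuring network degradation) would then test.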


2014 ◽  
Vol 99 (6) ◽  
pp. 729-733 ◽  
Author(s):  
Tasiopoulos Konstantinos ◽  
Komnos Apostolos ◽  
Paraforos Georgios ◽  
Tepetes Konstantinos

Abstract Studies on surgical patients provide some evidence of prompt detection of enteric ischemia with microdialysis. The purpose of this study was to measure intraperitoneal microdialysis values (glucose, glycerol, pyruvate, and lactate) in patients hospitalized in an intensive care unit (ICU) with an underlying abdominal surgical condition and to correlate these values with patients' outcomes. Twenty-one patients, 10 female, were enrolled in the study. The intraperitoneal metabolite values were measured for 3 consecutive days, starting from the first day of ICU hospitalization. Descriptive and inferential statistics were performed. The t-test, repeated-measures analysis, Holm's test, and a logistic regression model were applied. The level of statistical significance was set at P = 0.05. Mean age of participants was 68.10 ± 8.02 years. Survivors exhibited statistically significantly higher glucose values on day 3 (6.61 ± 2.01 against 3.67 ± 1.62; P = 0.002). Mean lactate/pyruvate (L/P) values were above 20 (35.35 ± 27.11). All non-survivors had a mean three-day L/P value greater than 25.94. Low L/P values were related to increased survival probability. High microdialysis glucose concentration, high L/P ratio and low glucose concentration were the major findings during the first three ICU hospitalization days in non-survivors. Intraperitoneal microdialysis may serve as a useful tool in understanding enteric ischemia pathophysiology.
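The reported day-3 glucose comparison can be reproduced approximately from the summary statistics alone using a two-sample t-test from means and standard deviations. The survivor/non-survivor split of the 21 patients is not stated in the abstract, so a 13/8 split is assumed here purely to make the example runnable.

```python
from scipy import stats

# Day-3 intraperitoneal glucose (mean ± SD): survivors 6.61 ± 2.01,
# non-survivors 3.67 ± 1.62. The 13/8 split is an assumption; the
# abstract reports only the total of 21 patients.
t, p = stats.ttest_ind_from_stats(mean1=6.61, std1=2.01, nobs1=13,
                                  mean2=3.67, std2=1.62, nobs2=8)
print(f"t = {t:.2f}, p = {p:.4f}")  # close to the reported P = 0.002
```

That the recomputed p-value lands near the published one suggests the assumed split is roughly consistent with the reported summary statistics.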


2021 ◽  
Vol 17 (4) ◽  
pp. 664-681
Author(s):  
Yuliya N. STETSYUNICH ◽  
Andrei A. ZAITSEV

Subject. The article discusses the consistency of accounting policies, internal control and the constituents of corporate economic security. Objectives. The study traces how the term Accounting Policy is articulated at the legislative and regulatory levels nationwide and worldwide, compares interpretations of corporate economic security, and examines the impact of accounting policy on areas of internal control and corporate economic security. Methods. The study is based on general methods of research, such as the dialectical method, analysis and synthesis, induction and deduction, and semantic analysis. Results. The article shows the impact of internal and external factors that influence corporate economic security and shape corporate accounting policies. Conclusions and Relevance. Economic security is a goal that every entity pursues, so it is important to thoroughly study how the clauses of accounting policies influence it from the perspective of business entities. Evaluating the impact of the constituents of corporate accounting policies makes it possible to take adverse factors into account and helps prevent the negative consequences that internal and external factors may cause to corporate economic security. The findings contribute to accounting techniques for financial management intended to ensure economic security and are of practical significance for business leaders and financial personnel.


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Sarah E Wetzel-Strong ◽  
Shantel M Weinsheimer ◽  
Jeffrey Nelson ◽  
Ludmila Pawlikowska ◽  
Dewi Clark ◽  
...  

Objective: Circulating plasma protein profiling may aid in the identification of cerebrovascular disease signatures. This study aimed to identify circulating angiogenic and inflammatory proteins that may serve as biomarkers to differentiate sporadic brain arteriovenous malformation (bAVM) patients from patients with other conditions involving brain AVMs, including hereditary hemorrhagic telangiectasia (HHT). Methods: The Quantibody Human Angiogenesis Array 1000 (RayBiotech), an ELISA multiplex panel, was used to assess the levels of 60 proteins related to angiogenesis and inflammation in heparin plasma samples from 13 sporadic unruptured bAVM patients (69% male, mean age 51 years) and 37 patients with HHT (40% male, mean age 47 years, n=19 (51%) with bAVM). The Quantibody Q-Analyzer tool was used to calculate biomarker concentrations from the standard curve for each marker, and log-transformed marker levels were evaluated for associations between disease states using a multivariable interval regression model adjusted for age, sex, ethnicity and collection site. Statistical significance was based on Bonferroni correction for multiple testing of 60 biomarkers (P < 8.3×10^-4). Results: Circulating levels of two plasma proteins differed significantly between sporadic bAVM and HHT patients: PDGF-BB (P = 2.6×10^-4, PI = 3.37, 95% CI: 1.76-6.46) and CCL5 (P = 6.0×10^-6, PI = 3.50, 95% CI: 2.04-6.03). When considering markers with a nominal p-value of less than 0.01, MMP1 and angiostatin levels also differed between patients with sporadic bAVM and HHT. Markers with nominal p-values less than 0.05 in this comparison also included IL2, VEGF, GRO, CXCL16, ITAC, and TGFB3. Among HHT patients, circulating levels of UPAR and IL6 were elevated in patients with documented bAVMs when considering markers with nominal p-values less than 0.05.
Conclusions: This study identified two promising plasma biomarkers that differentiate patients with sporadic bAVM from patients with HHT. Furthermore, it allowed us to evaluate markers associated with the presence of bAVMs in HHT patients, which may offer insight into the mechanisms underlying bAVM pathophysiology.
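The Bonferroni threshold used in this study is simply the family-wise alpha divided by the number of biomarkers tested; a two-line check confirms that both reported hits clear it:

```python
# Bonferroni-corrected threshold: family-wise alpha / number of tests.
alpha, n_biomarkers = 0.05, 60
threshold = alpha / n_biomarkers
print(f"threshold = {threshold:.2e}")   # about 8.3e-04, as reported

# Both reported hits fall below the corrected threshold:
assert 2.6e-4 < threshold   # PDGF-BB
assert 6.0e-6 < threshold   # CCL5
```

Bonferroni controls the family-wise error rate at the cost of power, which is why the study also reports markers at nominal (uncorrected) thresholds of 0.01 and 0.05.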


2013 ◽  
Vol 12 (3) ◽  
pp. 345-351 ◽  
Author(s):  
Jessica Middlemis Maher ◽  
Jonathan C. Markey ◽  
Diane Ebert-May

Statistical significance testing is the cornerstone of quantitative research, but studies that fail to report measures of effect size are potentially missing a robust part of the analysis. We provide a rationale for why effect size measures should be included in quantitative discipline-based education research. Examples from both biological and educational research demonstrate the utility of effect size for evaluating practical significance. We also provide details about some effect size indices that are paired with common statistical significance tests used in educational research and offer general suggestions for interpreting effect size measures. Finally, we discuss some inherent limitations of effect size measures and provide further recommendations about reporting confidence intervals.
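The distinction between statistical and practical significance drawn above is easy to demonstrate: with a large enough sample, a negligible standardized mean difference (a Cohen's d well below the conventional "small" benchmark of 0.2) still yields a tiny p-value. The sketch uses synthetic data; `cohens_d` is an illustrative helper, not a library function.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(0)
# Two groups whose true difference (0.08 SD) is practically negligible.
a = rng.normal(0.08, 1.0, 20000)
b = rng.normal(0.00, 1.0, 20000)
t, p = stats.ttest_ind(a, b)
d = cohens_d(a, b)
print(f"p = {p:.2e}, d = {d:.3f}")   # highly "significant", trivial effect
```

Reporting d (ideally with a confidence interval) alongside p makes the triviality of the effect visible, which a p-value alone cannot.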

