spurious result
Recently Published Documents

TOTAL DOCUMENTS: 20 (FIVE YEARS: 4)
H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽  
Vol 8 (12) ◽  
Author(s):  
David Robert Grimes ◽  
James Heathers

A concerning amount of biomedical research is not reproducible. Unreliable results impede empirical progress in medical science, ultimately putting patients at risk. Many proximal causes of this irreproducibility have been identified, a major one being inappropriate statistical methods and analytical choices by investigators. Within this, we formally quantify the impact of inappropriate redaction beyond a threshold value in biomedical science. This is effectively truncation of a dataset by removing extreme data points, and we elucidate its potential to accidentally or deliberately engineer a spurious result in significance testing. We demonstrate that the removal of a surprisingly small number of data points can dramatically alter a result. It is unknown how often redaction bias occurs in the broader literature, but given the risk of distortion involved, we suggest it must be studiously avoided and mitigated with approaches that counteract any potential malign effects on the research quality of medical science.
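The truncation effect described above is easy to reproduce with a toy example. The sketch below uses hypothetical numbers (not data from the paper) and a hand-rolled pooled two-sample t statistic: dropping a single extreme point moves |t| from well below the two-sided 5% critical value (about 2.26 for df = 9, 2.31 for df = 8) to well above it.

```python
import statistics

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance t-test)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical measurements: one extreme value in the treatment group.
treatment = [1, 2, 3, 4, 5, 100]
control = [5, 6, 7, 8, 9]

t_full = pooled_t(treatment, control)        # outlier retained: |t| far below 2.26
t_trunc = pooled_t(treatment[:-1], control)  # "redacted beyond a threshold": |t| above 2.31

print(f"full: t = {t_full:.2f}, truncated: t = {t_trunc:.2f}")
```

Whether the redaction is defensible outlier handling or result engineering depends entirely on whether the threshold was chosen before or after seeing the data, which is the paper's point.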


Author(s):  
Willliams H. Tega ◽  
Adeyanju O. David

This study provides new evidence supporting the view that fraud has a long history and that fraud in deposit money banks affects performance. Fraud was proxied with data on inappropriate auditing processes, peer-group pressure, computer fraud and management looting, analysed using the Generalized Least Squares (GLS) method and the z-statistic. The data were screened with the Jarque-Bera test of normality to guard against spurious results, since normality of variables is a standard requirement for any linear model, while Cronbach's alpha was used to test the validity, consistency and reliability of the data. The main aim of the study was to investigate whether variables such as computer fraud, management looting, inappropriate auditing and peer-group pressure affect bank performance. To achieve this objective, research questions and hypotheses were formulated, and proxies for fraud and deposit-bank performance were distilled from the related literature. The GLS results were used to test the formulated hypotheses against a standard z-value of 1.96. The regression results revealed negative relationships between bank fraud and performance, while the z-test showed that bank fraud affects deposit money bank performance in Nigeria. The study therefore recommends that an efficient, modern financial technological structure such as Computer-Aided Auditing Tools and Techniques (CAATTs) be adopted to combat fraud in deposit money banks in Nigeria.
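The normality screen mentioned above uses the standard Jarque-Bera statistic, JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is sample skewness and K sample kurtosis. A generic implementation (an illustration of the test, not the study's own code) looks like this:

```python
def jarque_bera(x):
    """Jarque-Bera normality statistic: JB = n/6 * (S**2 + (K - 3)**2 / 4)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n   # third central moment
    m4 = sum((v - mean) ** 4 for v in x) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# Under normality JB is approximately chi-squared with 2 degrees of freedom,
# so JB > 5.99 rejects normality at the 5% level.
print(jarque_bera([1, 2, 3, 4, 5]))
```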


2020 ◽  
Vol 494 (4) ◽  
pp. 6030-6035 ◽  
Author(s):  
Mattia C Sormani ◽  
Zhi Li

ABSTRACT
It has recently been suggested that (i) nuclear rings in barred galaxies (including our own Milky Way) form at the radius where the shear parameter of the rotation curve reaches a minimum; and (ii) the acoustic instability of Montenegro et al. is responsible for driving the turbulence and angular momentum transport in the central regions of barred galaxies. Here, we test these suggestions by running simple hydrodynamical simulations in a logarithmic barred potential. Since the rotation curve of this potential is scale-free, the shear-minimum theory predicts that no ring should form. In contrast to this prediction, we find that a ring does form in the simulation, with morphology consistent with that of nuclear rings in real barred galaxies. This proves that the presence of a shear minimum is not a necessary condition for the formation of a ring. We also find that perturbations predicted to be acoustically unstable wind up and eventually propagate off to infinity, so the system is actually stable. We conclude that (i) the shear-minimum theory is an unlikely mechanism for the formation of nuclear rings in barred galaxies; and (ii) the acoustic instability is a spurious result and may not be able to drive turbulence in the interstellar medium, at least in the case without self-gravity. The question of the role of turbulent viscosity remains open.


2019 ◽  
Author(s):  
Samuel A Mehr

I ran a sensitivity analysis on the original data from "False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant." (Simmons et al., 2011, Psych Science) and found that the intentionally spurious result reported therein was attributable to a single outlier. This is presented as an example of why sensitivity analysis is a useful piece of the analytic toolkit in psychology.
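The kind of check described above can be as simple as a leave-one-out loop over the data. The sketch below uses hypothetical numbers, with the Pearson correlation standing in for whatever test statistic an analysis reports: recomputing the statistic with each point deleted in turn immediately reveals when a result hinges on a single observation.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: five unremarkable points plus one extreme point.
x = [1, 2, 3, 4, 5, 20]
y = [3, 1, 4, 2, 5, 20]

r_full = pearson_r(x, y)
# Leave-one-out sensitivity analysis: delete each point in turn and recompute.
loo = [pearson_r(x[:i] + x[i + 1:], y[:i] + y[i + 1:]) for i in range(len(x))]
print(r_full, min(loo))  # the apparent strong correlation hinges on one point
```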


Author(s):  
Ndubuisi Odoemelam ◽  
Regina Okafor

The study investigates the influence of corporate governance on environmental disclosure of non-financial firms listed in Nigeria Stock Exchange (NSE), anchoring on “trinity theory” (agency, stakeholder and legitimacy theories). 86 firm-year observations across 86 companies listed in Nigeria Stock Exchange (NSE) using content analysis, cross-sectional data, OLS regression techniques were used to analyze the influence of board characteristics on the extent of overall environmental disclosure (OED). The results show that board independence, board meeting and the environmental committee were statistically significant while audit committee independence and board size were insignificant. Among the three company attributes used to mitigate spurious result only Firm size significantly influence the quantity of overall environmental disclosure of the sample companies. Auditor type “big 4” (Ernest Young, Deloitte, KPMG and PWC) and industry membership show insignificant relation to environmental disclosure. The findings indicate that the level of environmental disclosure of nonfinancial companies in Nigeria is quite insufficient at an average of 10.5 %. It is not surprising that environmentally sensitive industry and auditor type had no significant influence on the extent of environmental disclosure. This buttress the point that the environment the companies operate is institutionally and legally weak. Hence it calls for improvement on environmental law and implementation as well as harmonized environmental reporting infrastructure and standard to aid comparison.


2018 ◽  
Vol 9 (2) ◽  
pp. 69
Author(s):  
Dwi Cahyaningdyah

The trade-off theory of capital structure predicts that firms have an optimal target leverage. However, empirical studies provide evidence that firms' capital structures often deviate from the target because of economic shocks. Firms should therefore adjust toward target leverage to maintain an optimal trade-off between the costs and benefits of their financing decisions. Understanding firms' adjustment behavior is key to comprehending capital structure dynamics. Nevertheless, despite being an important issue in corporate finance, speed-of-adjustment estimation still has several problems that cause biased and spurious results. Recent studies have identified several econometric problems in the models used by previous work. This paper revisits these problems and presents several alternative solutions from recent studies.
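The workhorse specification in this literature is the partial-adjustment model, Lev(t) - Lev(t-1) = lambda * (Lev* - Lev(t-1)) + e, where lambda is the speed of adjustment. A minimal noise-free sketch (illustrative; not taken from the paper) shows how lambda is recovered by regressing the leverage change on the distance from target:

```python
LAMBDA, TARGET = 0.4, 0.5   # true speed of adjustment and target leverage ratio

# Simulate one firm's leverage path under partial adjustment (no noise).
lev = [0.1]
for _ in range(20):
    lev.append(lev[-1] + LAMBDA * (TARGET - lev[-1]))

# Regress the leverage change on the distance from target (through the origin).
x = [TARGET - l for l in lev[:-1]]               # distance from target
y = [b - a for a, b in zip(lev[:-1], lev[1:])]   # observed leverage change
lam_hat = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
print(lam_hat)
```

In real short panels with firm fixed effects, the lagged dependent variable makes the analogous OLS estimate biased (the well-known Nickell bias), which is one class of econometric problem the paper revisits.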


2017 ◽  
Vol 12 (S331) ◽  
pp. 248-253
Author(s):  
Sladjana Knežević ◽  
Ronald Läsker ◽  
Glenn van de Ven ◽  
Joan Font ◽  
John C. Raymond ◽  
...  

Abstract
We present wide-field, spatially and highly resolved spectroscopic observations of Balmer filaments in the northeastern rim of Tycho's supernova remnant in order to investigate the signal of cosmic-ray (CR) acceleration. The spectra of Balmer-dominated shocks (BDSs) have characteristic narrow (FWHM ~ 10 km s⁻¹) and broad (FWHM ~ 1000 km s⁻¹) Hα components. CRs affect the Hα-line parameters: heating the cold neutrals in the interstellar medium results in broadening of the narrow Hα-line width beyond 20 km s⁻¹, but also in reduction of the broad Hα-line width due to energy being removed from the protons in the post-shock region. For the first time we show that the width of the narrow Hα line, much larger than 20 km s⁻¹, is not a resolution or geometric effect, nor a spurious result of a neglected intermediate (FWHM ~ 100 km s⁻¹) component resulting from hydrogen atoms undergoing charge exchange with warm protons in the broad-neutral precursor. Moreover, we show that a narrow line width ≫ 20 km s⁻¹ extends across the entire NE rim, implying that CR acceleration is ubiquitous and making it possible to relate its strength to locally varying shock conditions. Finally, we find several locations along the rim where spectra are significantly better explained (based on Bayesian evidence) by inclusion of the intermediate component, with a width of 180 km s⁻¹ on average.
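The ~20 km s⁻¹ benchmark for the narrow component follows from thermal Doppler broadening, FWHM = sqrt(8 ln 2 * kT/m): warm (~10⁴ K) interstellar hydrogen yields roughly that width, so anything much broader requires extra heating of the pre-shock neutrals, for example by a CR precursor. A back-of-envelope check (not the paper's analysis):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # hydrogen atom mass, kg

def thermal_fwhm_kms(T):
    """Thermal Doppler FWHM of a hydrogen line at temperature T, in km/s."""
    sigma = math.sqrt(K_B * T / M_H)             # 1-D velocity dispersion, m/s
    return 2 * math.sqrt(2 * math.log(2)) * sigma / 1e3

print(thermal_fwhm_kms(1e4))  # warm ISM at ~10^4 K: roughly 21 km/s
```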


2016 ◽  
Author(s):  
Michael Powell ◽  
Mahan Hosseini ◽  
John Collins ◽  
Chloe Callahan-Flintoft ◽  
William Jones ◽  
...  

ABSTRACT
Machine learning is a powerful set of techniques that has enhanced the ability of neuroscientists to interpret information collected through EEG, fMRI, and MEG data. With these powerful techniques comes the danger of overfitting of hyper-parameters, which can render results invalid and cause a failure to generalize beyond the data set. We refer to this problem as 'over-hyping' and show that it is pernicious despite commonly used precautions. In particular, over-hyping occurs when an analysis is run repeatedly with slightly different analysis parameters and one set of results is selected on that basis. When this is done, the resulting method is unlikely to generalize to a new dataset, rendering it a partially, or perhaps even completely, spurious result that will not be valid outside of the data used in the original analysis. While it is commonly assumed that cross-validation is an effective protection against such spurious results generated through overfitting or over-hyping, this is not actually true. In this article, we show that both one-shot and iterative optimization of an analysis are prone to over-hyping, despite the use of cross-validation. We demonstrate that non-generalizable results can be obtained even on non-informative (i.e., random) data by modifying hyper-parameters in seemingly innocuous ways. We recommend a number of techniques for limiting over-hyping, such as lock-boxes, blind analyses, pre-registration, and nested cross-validation. These techniques are common in other fields that use machine learning, including computer science and physics. Adopting similar safeguards is critical for ensuring the robustness of machine-learning techniques in the neurosciences.
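The core failure mode can be demonstrated in a few lines. The toy sketch below (all names, numbers, and the nearest-centroid classifier are illustrative choices, not the paper's method) sweeps a hyper-parameter, here the choice of feature subset, over purely random labels and keeps the best cross-validated score. The selected score looks at or above chance by construction of the search, while a lock-box held out from the entire search gives an honest estimate.

```python
import itertools, random

random.seed(0)

# Purely random data: any "signal" found below is noise.
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(60)]
y = [random.randint(0, 1) for _ in range(60)]

def nearest_centroid_acc(train, test, feats):
    """Fit per-class centroids on `train`, return accuracy on `test`, using `feats`."""
    cent = {}
    for c in (0, 1):
        rows = [x for x, lab in train if lab == c]
        cent[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    def dist(x, c):
        return sum((x[f] - m) ** 2 for f, m in zip(feats, cent[c]))
    hits = sum(1 for x, lab in test if (dist(x, 1) < dist(x, 0)) == (lab == 1))
    return hits / len(test)

data = list(zip(X, y))
search, lockbox = data[:40], data[40:]   # the lock-box is never touched during the search

best_cv, best_feats = -1.0, None
for k in (1, 2, 3):                      # hyper-parameter sweep: which features to use
    for feats in itertools.combinations(range(5), k):
        # 2-fold cross-validation, run entirely inside the search set
        acc = (nearest_centroid_acc(search[:20], search[20:], feats)
               + nearest_centroid_acc(search[20:], search[:20], feats)) / 2
        if acc > best_cv:
            best_cv, best_feats = acc, feats

lockbox_acc = nearest_centroid_acc(search, lockbox, best_feats)
print(f"best CV accuracy {best_cv:.2f} vs lock-box accuracy {lockbox_acc:.2f}")
```

Because the winning configuration was chosen by the search, its cross-validated score is an optimistically biased estimate; only the lock-box score reflects generalization.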

