A systematic neglect to disclose critical visual properties in clinical, developmental, and social psychology

2021 ◽  
Author(s):  
Zhicheng Lin ◽  
Qi Ma ◽  
Yang Zhang

To promote research transparency and reproducibility, current journal guidelines focus primarily on the availability of data, code, and research materials. How materials were actually used or presented in a given study has received far less attention. Across major disciplines of psychology (clinical, developmental, and social/personality psychology), we have identified a systematic failure to disclose critical properties of visual displays and stimuli in current practice. This failure of disclosure presents a roadblock to reproducible science, as it makes direct replication difficult if not impossible. It also introduces heterogeneity that can increase measurement error and reduce statistical power. This finding has immediate implications for journal policy: there is a pressing need to explicitly require transparent reporting of how materials were used in the research. To help achieve this goal, reporting templates and accessible definitions of technical terms are provided.

F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 678
Author(s):  
Miranda S. Cumpston ◽  
Joanne E. McKenzie ◽  
James Thomas ◽  
Sue E. Brennan

Introduction: Systematic reviews involve synthesis of research to inform decision making by clinicians, consumers, policy makers and researchers. While guidance for synthesis often focuses on meta-analysis, synthesis begins with specifying the 'PICO for each synthesis' (i.e. the criteria for deciding which populations, interventions, comparators and outcomes are eligible for each analysis). Synthesis may also involve the use of statistical methods other than meta-analysis (e.g. vote counting based on the direction of effect, presenting the range of effects, combining P values), augmented by visual displays, tables and text-based summaries. This study examines these two aspects of synthesis. Objectives: To identify and describe current practice in systematic reviews of health interventions in relation to: (i) approaches to grouping and definition of PICO characteristics for synthesis; and (ii) methods of summary and synthesis when meta-analysis is not used. Methods: We will randomly sample 100 systematic reviews of the quantitative effects of public health and health systems interventions published in 2018 and indexed in the Health Evidence and Health Systems Evidence databases. Two authors will independently screen citations for eligibility. Two authors will confirm eligibility based on full text, then extract data for 20% of reviews on the specification and use of PICO for synthesis, and the presentation and synthesis methods used (e.g. statistical synthesis methods, tabulation, visual displays, structured summary). The remaining reviews will be confirmed as eligible and data extracted by a single author. We will use descriptive statistics to summarise the specification of methods and their use in practice. We will compare how clearly the PICO for synthesis is specified in reviews that primarily use meta-analysis and those that do not. Conclusion: This study will provide an understanding of current practice in two important aspects of the synthesis process, enabling future research to test the feasibility and impact of different approaches.
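
As a concrete illustration of two of the non-meta-analytic synthesis methods the protocol mentions (vote counting based on the direction of effect, and combining P values via Fisher's method), here is a minimal Python sketch; the effect estimates and P values are invented for demonstration, not drawn from any review.

```python
# Sketch of two synthesis methods that do not require meta-analysis:
# vote counting on direction of effect, and Fisher's method for
# combining P values. Example effect estimates and P values are invented.
import math
from scipy import stats

effects = [0.8, 1.3, 0.6, 0.9, 0.7]    # e.g. risk ratios from 5 studies
p_values = [0.04, 0.20, 0.01, 0.08, 0.03]

# Vote counting: how many studies favour the intervention (RR < 1)?
favourable = sum(rr < 1.0 for rr in effects)
# Under H0, favourable ~ Binomial(n, 0.5); a sign test gives a P value.
sign_p = stats.binomtest(favourable, n=len(effects), p=0.5).pvalue
print(f"{favourable}/{len(effects)} studies favourable, sign test p = {sign_p:.3f}")

# Fisher's method: X^2 = -2 * sum(ln p_i) ~ chi-square with 2k df under H0.
x2 = -2.0 * sum(math.log(p) for p in p_values)
fisher_p = stats.chi2.sf(x2, df=2 * len(p_values))
print(f"Fisher combined statistic = {x2:.2f}, p = {fisher_p:.4f}")
```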


2019 ◽  
Vol 24 (5) ◽  
pp. 185-189 ◽  
Author(s):  
Emil Eik Nielsen ◽  
Anders Kehlet Nørskov ◽  
Theis Lange ◽  
Lehana Thabane ◽  
Jørn Wetterslev ◽  
...  

Most statistical methods rest on underlying assumptions that must be validated to ensure the validity of the results of randomised clinical trials and, in some circumstances, to optimise statistical power. The present paper describes how trialists in major medical journals report tests of underlying statistical assumptions when analysing the results of randomised clinical trials. We also consider possible ways to improve current practice through adequate reporting of such tests. We conclude that there is a need to reach consensus on which underlying assumptions should be assessed, how they should be assessed, and what should be done if they are violated.
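
For readers unfamiliar with such checks, here is a minimal sketch of what testing two common underlying assumptions might look like before a two-group comparison; the choice of tests (Shapiro-Wilk, Levene) and the fallback rule are illustrative only, since the paper's point is precisely that no consensus yet exists.

```python
# Illustrative checks of two common underlying assumptions before a
# two-group comparison: approximate normality of residuals and
# homogeneity of variances. Data are simulated for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, size=60)
treated = rng.normal(11.0, 2.0, size=60)

# Normality of residuals (here, group-centred values): Shapiro-Wilk.
residuals = np.concatenate([control - control.mean(), treated - treated.mean()])
w, p_norm = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p_norm:.3f}")

# Homogeneity of variances: Levene's test.
stat, p_var = stats.levene(control, treated)
print(f"Levene: W = {stat:.3f}, p = {p_var:.3f}")

# If an assumption looks violated, a pre-specified fallback (e.g. Welch's
# t-test) should be used; the threshold below is an arbitrary placeholder.
t, p = stats.ttest_ind(control, treated, equal_var=(p_var > 0.05))
print(f"t = {t:.2f}, p = {p:.4f}")
```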


2020 ◽  
Vol 2020 (56) ◽  
pp. 176-187 ◽  
Author(s):  
Ethel S Gilbert ◽  
Mark P Little ◽  
Dale L Preston ◽  
Daniel O Stram

Abstract This article addresses issues relevant to interpreting findings from 26 epidemiologic studies of persons exposed to low-dose radiation. We review the extensive data from both epidemiologic studies of persons exposed at moderate or high doses and from radiobiology that together have firmly established radiation as carcinogenic. We then discuss the use of the linear relative risk model that has been used to describe data from both low- and moderate- or high-dose studies. We consider the effects of dose measurement errors; these can reduce statistical power and lead to underestimation of risks but are very unlikely to bring about a spurious dose response. We estimate statistical power for the low-dose studies under the assumption that true risks of radiation-related cancers are those expected from studies of Japanese atomic bomb survivors. Finally, we discuss the interpretation of confidence intervals and statistical tests and the applicability of the Bradford Hill principles for a causal relationship.
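
As a rough illustration of the linear relative risk model the article refers to, RR(d) = 1 + beta*d, the simulation sketch below estimates statistical power for a low-dose cohort under invented parameter values; the simple trend test used here is a stand-in, not the method of the studies reviewed, and its output illustrates why low-dose studies tend to be underpowered.

```python
# Sketch of the linear relative risk model, RR(d) = 1 + beta*d, and a
# crude simulation of statistical power for a low-dose cohort.
# All parameter values below are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta = 0.5          # excess relative risk per Gy (assumed "true" value)
baseline = 0.02     # baseline cancer probability in the cohort
n = 50_000          # cohort size
doses = rng.uniform(0.0, 0.1, size=n)   # low doses, 0-100 mGy

def simulate_once():
    p = baseline * (1.0 + beta * doses)          # RR(d) = 1 + beta*d
    cases = rng.random(n) < p
    # Simple trend test stand-in: compare mean dose in cases vs non-cases.
    t, pval = stats.ttest_ind(doses[cases], doses[~cases])
    return pval < 0.05

# At these low doses the estimated power is barely above the 5% alpha
# level, which is the point the article makes about low-dose studies.
power = np.mean([simulate_once() for _ in range(200)])
print(f"Estimated power at beta = {beta}/Gy: {power:.2f}")
```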


Author(s):  
Chao Cheng ◽  
Donna Spiegelman ◽  
Zuoheng Wang ◽  
Molin Wang

Abstract Interest in investigating gene-environment (GxE) interactions has increased rapidly over the last decade. Although GxE interactions have been extensively investigated in large studies, few such effects have been identified and replicated, highlighting the need for statistical GxE tests with greater power. The reverse test has been proposed for testing the interaction between a continuous exposure and genetic variants in relation to a binary disease outcome; it leverages the idea of linear discriminant analysis and significantly increases statistical power compared with the standard logistic regression approach. However, this reverse approach did not accommodate adjustment for confounders. Since GxE interaction studies are inherently non-experimental, adjusting for potential confounding effects is critical for valid evaluation of GxE interactions. In this paper, we extend the reverse test to allow for confounders. The proposed reverse test also allows for exposure measurement errors, as typically occur. Extensive simulation experiments demonstrate that the proposed method not only provides greater statistical power under most simulation scenarios but is also substantially more computationally efficient, with a computation time more than sevenfold shorter than that of the standard logistic regression test. In an illustrative example, we applied the proposed approach to the Veterans Aging Cohort Study (VACS) to search for genetic susceptibility loci modifying the smoking-HIV status association.
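
To convey the intuition only (the published estimator is more refined than this), the sketch below contrasts the standard logistic regression interaction test with the "reverse" idea of modelling the continuous exposure given genotype, disease status, and their product; all data and variable names are simulated placeholders.

```python
# Intuition sketch for GxE interaction testing. The "standard" test puts
# the G-by-E product in a logistic model for disease; the "reverse" idea
# models the continuous exposure E given G, D and G*D instead (the
# published estimator is more refined; this is illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000
g = rng.binomial(2, 0.3, size=n)              # genotype: 0/1/2 minor alleles
c = rng.normal(size=n)                        # a confounder
e = 0.3 * c + rng.normal(size=n)              # continuous exposure
logit = -2.0 + 0.4 * e + 0.1 * g + 0.3 * g * e + 0.5 * c
d = rng.random(n) < 1 / (1 + np.exp(-logit))  # binary disease outcome

# Standard approach: logistic regression of D on E, G, G*E (and C).
X1 = sm.add_constant(np.column_stack([e, g, g * e, c]))
fit1 = sm.Logit(d.astype(float), X1).fit(disp=False)
print("logistic G*E p-value:", fit1.pvalues[3])

# "Reverse" intuition: linear model for E given G, D, G*D (and C);
# the G*D coefficient is nonzero exactly when G modifies the E-D link.
X2 = sm.add_constant(np.column_stack([g, d.astype(float), g * d, c]))
fit2 = sm.OLS(e, X2).fit()
print("reverse G*D p-value:", fit2.pvalues[3])
```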


2021 ◽  
Author(s):  
Zhicheng Lin ◽  
Qi Ma ◽  
Yang Zhang

Materials in research studies are often presented to participants on digital screens across many subfields of psychology, including clinical, developmental, and social/personality psychology. What is often neglected in current practice is the reporting of critical visual properties, such as luminance, color, contrast, and gamma, which can dramatically affect the appearance of visual materials. Conventional luminance measurement equipment in vision science is both expensive and onerous for novices to operate. A pressing need, if we are to improve current research practice and education in psychology, is to develop affordable and user-friendly tools to measure and calibrate luminance and color. Here we have developed a software package, PsyCalibrator, that takes advantage of low-cost hardware (SpyderX) and makes luminance and color measurement and calibration accessible and user-friendly. Validation shows that, in addition to excellent accuracy in linear correction, the SpyderX matches professional, high-cost photometers (MKII and PR-670) in measurement accuracy, and its measurement variance is low enough for practical purposes. A tutorial is provided on how to use PsyCalibrator to measure luminance and color and to calibrate displays. Finally, gamma calibration based on visual methods (without photometers) is discussed, together with its own validation and tutorial.
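
The gamma calibration the abstract refers to rests on the standard display model L(v) = L_max * (v/v_max)^gamma. Below is a minimal sketch, not PsyCalibrator's actual code, of fitting gamma from a handful of measured luminances and inverting it into a lookup table; the measurement values are invented.

```python
# Sketch of gamma estimation and correction for a display, the core of
# luminance calibration. Measured luminances below are invented; in
# practice they would come from a photometer or a colorimeter such as
# the SpyderX that PsyCalibrator wraps.
import numpy as np
from scipy.optimize import curve_fit

levels = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
measured = np.array([0.4, 2.1, 8.0, 18.5, 34.0, 55.0, 82.0, 115.0, 155.0])

def gamma_model(v, l_max, gamma):
    return l_max * (v / 255.0) ** gamma

# Skip the black level (v = 0) so it does not distort the power-law fit.
(l_max, gamma), _ = curve_fit(gamma_model, levels[1:], measured[1:], p0=(150.0, 2.2))
print(f"fitted gamma = {gamma:.2f}, max luminance = {l_max:.1f} cd/m^2")

# Inverse lookup table: requested value -> value to send to the display
# so that luminance becomes (approximately) linear in the requested value.
requested = np.arange(256)
lut = np.round(255.0 * (requested / 255.0) ** (1.0 / gamma)).astype(np.uint8)
```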


2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 104-105
Author(s):  
Nick V Serão ◽  
Mike D Tokach ◽  
Neil Paton

Abstract Experimental design and statistical data analysis are fundamental components of animal science research. Proper design of experiments and adequate sampling permit testing the hypotheses raised by researchers and set the stage for collecting the required data and the subsequent statistical analysis. When designing experiments, researchers should respect the rules of randomization of treatments to avoid statistical bias and permit proper inference. The use of sample sizes that provide adequate statistical power to identify the hypothesized differences among the factor levels of interest is key and should be determined through a formal power calculation. Best practices for data collection should be followed to obtain high-quality data by reducing collection errors (e.g., mislabeling, improper technique) and measurement errors. With sound data, appropriate and optimal statistical methods should be used to generate valid results. The statistical method deployed should be chosen based on assumptions about the residuals (e.g., normality, correlation, and homogeneity) and on the type of data (e.g., quantitative continuous or categorical). The statistical model used should also be consistent with the experimental design to validate the respective test statistics. The science of statistics is changing rapidly. With the development of high-throughput technologies, the generation of large datasets, increasingly sophisticated models, and the interest in Big Data, the training of animal science graduate students in data management and rigorous statistical analysis is more important than ever. To meet the demands of current trends, animal science graduate students must be trained in several complex statistical and computational skills to meet the challenges imposed by these complicated, sophisticated, and nuanced analytical methods. The livestock production sector will benefit from improved training, use of advanced and appropriate experimental designs, and collection and analysis of quality data in research.
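
A minimal sketch of the kind of formal power calculation the abstract calls for, using invented placeholder values for the expected treatment difference and residual variation:

```python
# A formal sample-size calculation of the kind the abstract argues should
# drive experimental design. Effect size and SD are invented placeholders
# for, e.g., an expected difference in average daily gain between diets.
from statsmodels.stats.power import TTestIndPower

expected_diff = 0.05   # hypothesised treatment difference (kg/day)
sd = 0.10              # assumed residual standard deviation
effect_size = expected_diff / sd   # Cohen's d

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"approx. {n_per_group:.0f} animals per group needed")
```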


2020 ◽  
Author(s):  
Gabriela Hofer

The sample size in a quantitative study affects not only statistical power but also the precision with which effects can be estimated. Schönbrodt and Perugini (2013) applied Monte Carlo simulations to establish the point of stability of correlation coefficients (i.e., the sample size from which they fluctuate around the true value only within acceptable margins). They reported that the sample size necessary to achieve stability depends on the size of the underlying correlation, the width of the margins within which fluctuations are tolerated (the corridor of stability), and the desired confidence that the correlation stays within these margins. According to their suggestion, a sample size of around 250 is desirable for typical scenarios in personality psychology. The present contribution aimed to replicate these findings and extend them by determining the point of stability for rho = .0 and very narrow corridors of stability. The replication results were virtually identical to those reported by the original authors. In addition, correlations of .0 became stable within a corridor of .1 at around 260 participants. Considerably more data points should be collected if only correlations within a very narrow corridor of stability are to be accepted. Future research could extend these findings by investigating the point of stability in data with different types of non-normal distributions.
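
In the spirit of the simulations described (though not the authors' actual code), a minimal sketch of estimating the point of stability for a given true correlation and corridor width might look as follows; all parameter choices are illustrative.

```python
# Sketch of a point-of-stability simulation in the spirit of
# Schoenbrodt and Perugini (2013): for data with a known true correlation,
# find the sample size after which the running sample correlation never
# leaves a corridor of +/- w around the true value.
import numpy as np

def point_of_stability(rho=0.0, w=0.1, n_max=1000, n_min=20, seed=None):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n_max)
    running_r = np.array([np.corrcoef(xy[:n, 0], xy[:n, 1])[0, 1]
                          for n in range(n_min, n_max + 1)])
    outside = np.abs(running_r - rho) > w
    if not outside.any():
        return n_min
    # First sample size after the last excursion from the corridor
    # (returns n_max + 1 if the corridor is never reached within n_max).
    return n_min + np.max(np.nonzero(outside)) + 1

pos = [point_of_stability(rho=0.0, w=0.1, seed=s) for s in range(100)]
print("80th percentile point of stability:", int(np.percentile(pos, 80)))
```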


Author(s):  
W.J. de Ruijter ◽  
Sharma Renu

Established methods for measuring the lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for the precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of the overall magnification and the periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can be measured unambiguously from HREM images.

Measurement of lattice vectors is a statistical parameter estimation problem similar to the amplitude, phase and frequency estimation of sinusoids in 1-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (2-dimensional) sinusoids. In the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse estimate is obtained using a Fast Fourier Transform on an image section of interest. Prior to Fourier transformation, the image section is multiplied by a window that gradually falls off to zero at the edges. The user interactively indicates the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is then implemented using a bilinear interpolation, which depends on the window function. The estimate can be refined even further using a non-linear least squares estimation; the first two steps provide proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramer-Rao lower bound).

In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, defined as the standard deviation of the distribution of measurement errors, is found to be <0.003Å measured on single-crystal silicon and <0.02Å measured on small (10-30Å) specimen areas. These values are about 10 times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Evidently, the precision is limited by geometric distortion, mainly caused by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also a high dynamic range (>10,000) and high resolution (1024x1024 pixels). The geometric distortion of the projector lenses can be measured and corrected through re-sampling of the digitized image.
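
A toy sketch of the coarse FFT search described above (windowing, peak selection, sub-bin refinement) on a synthetic image; the parabolic interpolation here stands in for the paper's bilinear and least-squares refinement steps and is not the authors' implementation.

```python
# Sketch of the coarse FFT search for a lattice periodicity in an image
# section: apply a window that falls off at the edges, locate a
# diffractogram peak, then refine its position to sub-bin precision.
# The test image is synthetic; the refinement is a simple parabolic fit.
import numpy as np

n = 256
yy, xx = np.mgrid[0:n, 0:n]
true_freq = (0.11, 0.06)   # cycles/pixel, plays the role of a lattice vector
image = np.cos(2 * np.pi * (true_freq[0] * xx + true_freq[1] * yy))
image += 0.3 * np.random.default_rng(0).normal(size=(n, n))  # white noise

window = np.outer(np.hanning(n), np.hanning(n))  # falls to zero at edges
spectrum = np.abs(np.fft.rfft2(image * window))
spectrum[0, :3] = 0  # suppress the DC region

# Coarse search: strongest spot in the diffractogram (assumed to lie in
# the positive-frequency half along axis 0 for this synthetic example).
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)

def parabolic_refine(s, i):
    # Fit a parabola through s[i-1], s[i], s[i+1]; return sub-bin offset.
    return 0.5 * (s[i - 1] - s[i + 1]) / (s[i - 1] - 2 * s[i] + s[i + 1])

fx = (kx + parabolic_refine(spectrum[ky, :], kx)) / n
fy = (ky + parabolic_refine(spectrum[:, kx], ky)) / n
print(f"estimated frequency: ({fx:.4f}, {fy:.4f}), true: {true_freq}")
```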


2008 ◽  
Vol 18 (1) ◽  
pp. 31-40 ◽  
Author(s):  
David J. Zajac

Abstract The purpose of this opinion article is to review the impact of the principles and technology of speech science on clinical practice in the area of craniofacial disorders. Current practice relative to (a) speech aerodynamic assessment, (b) computer-assisted single-word speech intelligibility testing, and (c) behavioral management of hypernasal resonance is reviewed. Future directions and/or refinements of each area are also identified. It is suggested that both challenging and rewarding times are in store for clinical researchers in craniofacial disorders.

