statistical correction
Recently Published Documents


TOTAL DOCUMENTS: 104 (five years: 23)
H-INDEX: 18 (five years: 1)

2021, Vol 13 (1)
Author(s): Siddharth Srivastava, Emma Condy, Erin Carmody, Rajna Filip-Dhima, Kush Kapur, et al.

Abstract

Background: Phelan-McDermid syndrome (PMS) is a neurogenetic condition associated with a high prevalence of intellectual disability (ID) and autism spectrum disorder (ASD). This study provides a more comprehensive and quantitative profile of repetitive behaviors within the context of the ID seen with the condition.

Methods: Individuals aged 3–21 years with a confirmed PMS diagnosis participated in a multicenter observational study evaluating the phenotype and natural history of the disorder. We evaluated data from this study pertaining to repetitive behaviors, collected with the Repetitive Behavior Scales-Revised (RBS-R).

Results: A total of n = 90 participants were included in this analysis. Forty-seven percent (n = 42/90) were female, and the average age at baseline evaluation was 8.88 ± 4.72 years. The mean best-estimate IQ of the cohort was 26.08 ± 17.67 (range = 3.4–88), with n = 8 having mild (or no) ID, n = 20 moderate ID, and n = 62 severe-to-profound ID. The RBS-R total overall score was 16.46 ± 13.9 (compared with 33.14 ± 20.60 reported in previous studies of ASD; Lam and Aman, 2007), and the total number of items endorsed was 10.40 ± 6.81 (range = 0–29). After statistical correction for multiple comparisons, IQ correlated with the RBS-R stereotypic behavior subscale score (rs = −0.33, unadjusted p = 0.0014, adjusted p = 0.01) and with the total number of endorsed RBS-R stereotypic behavior items (rs = −0.32, unadjusted p = 0.0019, adjusted p = 0.01). IQ did not correlate with any other RBS-R subscale score.

Conclusions: The RBS-R total overall score in this PMS cohort appears milder than in individuals with ASD characterized in previous studies. Stereotypic behavior in PMS may reflect cognitive functioning.
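A minimal sketch of the kind of analysis described above: Spearman correlations between IQ and several RBS-R subscale scores, with a multiple-comparison adjustment. The abstract does not name the correction procedure, so Benjamini-Hochberg is assumed here for illustration, and all data and variable names are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n = 90  # cohort size from the abstract
iq = rng.normal(26, 18, n)  # placeholder IQ scores
subscales = {  # placeholder RBS-R subscale scores
    "stereotypic": -0.3 * iq + rng.normal(0, 5, n),
    "self_injurious": rng.normal(0, 5, n),
    "compulsive": rng.normal(0, 5, n),
}

names, pvals = [], []
for name, score in subscales.items():
    rs, p = spearmanr(iq, score)
    names.append(name)
    pvals.append(p)
    print(f"{name}: rs = {rs:.2f}, unadjusted p = {p:.4f}")

# Adjust the unadjusted p-values across all subscales tested
# (Benjamini-Hochberg assumed; the paper's method is not stated).
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, keep in zip(names, p_adj, reject):
    print(f"{name}: adjusted p = {p:.4f}, significant = {keep}")
```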


2021, Vol 13 (18), pp. 3583
Author(s): Qingqing Wang, Wei Zheng, Wenjie Yin, Guohua Kang, Gangqiang Zhang, et al.

The Gravity Recovery and Climate Experiment (GRACE) satellite solutions have been widely applied to assess the reliability of hydrological models on a global scale. However, no single hydrological model is suitable for all regions. Here, a New Statistical Correction Hydrological Model Weighting (NSCHMW) method is developed based on the root mean square error and correlation coefficient between hydrological models and GRACE mass concentration (mascon) data. The NSCHMW method can exploit the strengths of the better-performing models, in contrast to the previous simple-averaging approach. To verify the effect of the NSCHMW method, taking the Haihe River Basin (HRB) as an example, the spatiotemporal patterns of Terrestrial Water Storage Anomalies (TWSA) in the HRB are analyzed through a comprehensive comparison of decadal trends (2003–2014) from GRACE and different hydrological models (Noah from GLDAS-2.1, VIC from GLDAS-2.1, CLSM from GLDAS-2.1, CLSM from GLDAS-2.0, WGHM, PCR-GLOBWB, and CLM-4.5). The NSCHMW method is then applied to estimate TWSA trends in the HRB. Results demonstrate that (1) the NSCHMW method can improve the accuracy of TWSA estimation by hydrological models; (2) TWSA trends decrease throughout the study period at a rate of 15.7 mm/year; (3) WGHM and PCR-GLOBWB agree well with GRACE (r > 0.9), while all the other models underestimate TWSA trends; and (4) by weighting WGHM and PCR-GLOBWB, the NSCHMW method effectively improves RMSE, NSE, and r by 3–96%, 35–282%, and 1–255%, respectively. Indeed, groundwater depletion in the HRB also underscores the necessity of the South-North Water Diversion Project, which has already contributed to groundwater recovery.
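The abstract states that the weighting is based on each model's RMSE and correlation coefficient against GRACE, but not the exact formula. The sketch below illustrates one plausible scheme of this kind: each model's weight is proportional to r/RMSE, so models agreeing more closely with GRACE contribute more to the combined TWSA estimate. All inputs are placeholder arrays, and the weighting rule is an assumption, not the paper's formulation.

```python
import numpy as np

def weighted_tws(models: dict[str, np.ndarray], grace: np.ndarray) -> np.ndarray:
    """Combine model TWSA time series using GRACE-derived skill weights."""
    weights = {}
    for name, tws in models.items():
        rmse = np.sqrt(np.mean((tws - grace) ** 2))
        r = np.corrcoef(tws, grace)[0, 1]
        weights[name] = max(r, 0.0) / rmse  # anti-correlated models get zero weight
    total = sum(weights.values())
    return sum(w / total * models[name] for name, w in weights.items())

# Placeholder monthly TWSA series (mm) over 2003-2014, with the trend
# magnitude taken from the abstract (-15.7 mm/year).
t = np.arange(144)
grace = -15.7 / 12 * t + 20 * np.sin(2 * np.pi * t / 12)
models = {
    "WGHM": grace + np.random.default_rng(1).normal(0, 5, t.size),
    "Noah": 0.4 * grace + np.random.default_rng(2).normal(0, 10, t.size),
}
combined = weighted_tws(models, grace)
```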


Author(s): M van der Bruggen, B Spronck, S Bos, M H G Heusinkveld, S Taddei, et al.

Abstract

BACKGROUND: Conventional measures for assessing arterial stiffness are inherently pressure-dependent. Whereas statistical pressure adjustment is feasible in (larger) populations, it is unsuited to the evaluation of an individual patient. Moreover, statistical "correction" for blood pressure may actually correct for (1) the acute dependence of arterial stiffness on blood pressure at the time of measurement and/or (2) the remodelling effect that blood pressure (hypertension) may have on arterial stiffness, but it cannot distinguish between these processes.

METHODS: Assuming a single-exponential pressure-diameter relationship, we derived three theoretically pressure-independent carotid stiffness measures suited for individual patient evaluation: (1) stiffness index β0, (2) pressure-corrected carotid pulse wave velocity (cPWVcorr), and (3) pressure-corrected Young's modulus (Ecorr). Using linear regression analysis in a sample of the CATOD study cohort, we evaluated changes in mean arterial pressure (∆MAP) against changes in the novel (∆β0, ∆cPWVcorr, ∆Ecorr) as well as the conventional (∆cPWV, ∆E) stiffness measures after a 2.9 ± 1.0-year follow-up.

RESULTS: We found no association between ∆MAP and ∆β0, ∆cPWVcorr, or ∆Ecorr. In contrast, we did find a significant association between ∆MAP and the conventional measures ∆cPWV and ∆E. Additional adjustments for biomechanical confounders and traditional risk factors neither materially changed these associations nor the lack thereof.

CONCLUSIONS: Our newly proposed pressure-independent carotid stiffness measures avoid the need for statistical correction. Hence, these measures (β0, cPWVcorr, Ecorr) can be used in a clinical setting for (1) patient-specific risk assessment and (2) investigation of potential remodelling effects of (changes in) blood pressure on intrinsic arterial stiffness.
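The abstract does not spell out the closed-form expressions. Under the single-exponential model it names, written here as P(A) = Pref · exp(β0 · (A/Aref − 1)) with a fixed reference pressure Pref (100 mmHg assumed below), solving at the systolic and diastolic working points gives a closed form for β0, and the Bramwell-Hill relation evaluated at Pref yields a pressure-corrected PWV. This is a sketch under those stated assumptions, not necessarily the authors' exact formulation.

```python
import math

MMHG_TO_PA = 133.322
RHO_BLOOD = 1050.0  # blood density, kg/m^3

def beta0(p_sys, p_dia, a_sys, a_dia, p_ref=100.0):
    """Pressure-independent stiffness index (pressures in mmHg, areas in mm^2).

    Derived by solving P(A) = Pref * exp(beta0 * (A/Aref - 1)) at the
    systolic and diastolic points and eliminating the unknown Aref.
    """
    return (a_dia * math.log(p_sys / p_ref)
            - a_sys * math.log(p_dia / p_ref)) / (a_sys - a_dia)

def pwv_corr(b0, p_ref=100.0):
    """Pressure-corrected PWV (m/s) via Bramwell-Hill at the reference pressure."""
    return math.sqrt(b0 * p_ref * MMHG_TO_PA / RHO_BLOOD)

# Illustrative carotid values only.
b0 = beta0(p_sys=120.0, p_dia=80.0, a_sys=42.0, a_dia=38.0)
print(f"beta0 = {b0:.2f}, cPWVcorr = {pwv_corr(b0):.2f} m/s")
```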


2021, Vol 120 (3), pp. 173a-174a
Author(s): Barmak Mostofian, Russell McFarland, Aidan Estelle, Jesse J. Howe, Elisar J. Barbar, et al.

2020, Vol 4, pp. 28-42
Author(s): Yu.V. Alferov, E.G. Klimova

The possibility of using a one-dimensional Kalman filter to improve forecasts of surface air temperature on an irregular grid of points is studied. The approach is tested using forecasts obtained from different configurations of two different numerical weather prediction models. An algorithm for the statistical correction of numerical forecasts of surface air temperature based on the one-dimensional Kalman filter is constructed, and two methods are proposed for estimating the variance of the bias noise. A series of experiments demonstrated the effectiveness of the algorithm for bias compensation. The most significant results are achieved for models with a large bias or for long-range forecasts. At the same time, the use of the algorithm has little effect on the root-mean-square error of the forecast.

Keywords: hydrodynamic model of the atmosphere, numerical weather prediction, statistical correction of numerical forecasts, Kalman filter
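A minimal sketch of a one-dimensional Kalman filter for forecast bias compensation, in the spirit of the algorithm described above: the bias at one station is modelled as a random walk and estimated sequentially from observed forecast errors. The paper's exact formulation and its two variance-estimation methods are not given in the abstract, so the noise variances q and r below are assumed constants.

```python
def kalman_bias_correct(forecasts, observations, q=0.01, r=1.0):
    """Sequentially estimate the forecast bias at one station and remove it."""
    b, p = 0.0, 1.0  # initial bias estimate and its error variance
    corrected = []
    for f, y in zip(forecasts, observations):
        corrected.append(f - b)      # correct today's forecast with the current bias
        p += q                       # predict: bias follows a random walk
        k = p / (p + r)              # Kalman gain
        b += k * ((f - y) - b)       # update with today's observed forecast error
        p *= 1.0 - k
    return corrected

fx = [21.0, 22.5, 19.0, 18.5, 20.0]   # placeholder model forecasts, deg C
obs = [19.5, 21.0, 17.8, 17.0, 18.9]  # placeholder station observations
print(kalman_bias_correct(fx, obs))
```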


GigaScience, 2020, Vol 9 (11)
Author(s): Alexandra J Lee, YoSon Park, Georgia Doing, Deborah A Hogan, Casey S Greene

Abstract

Motivation: In the past two decades, scientists in different laboratories have assayed gene expression from millions of samples. These experiments can be combined into compendia and analyzed collectively to extract novel biological patterns. Technical variability, or "batch effects," may result from combining samples collected and processed at different times and in different settings. Such variability may distort our ability to extract true underlying biological patterns. As more integrative analysis methods arise and data collections grow larger, we must determine how technical variability affects our ability to detect desired patterns when many experiments are combined.

Objective: We sought to determine the extent to which an underlying signal was masked by technical variability by simulating compendia comprising data aggregated across multiple experiments.

Method: We developed a generative multi-layer neural network to simulate compendia of gene expression experiments from large-scale microbial and human datasets. We compared simulated compendia before and after introducing varying numbers of sources of undesired variability.

Results: The signal from a baseline compendium was obscured when the number of added sources of variability was small. Applying statistical correction methods rescued the underlying signal in these cases. However, as the number of sources of variability increased, it became easier to detect the original signal even without correction. In fact, statistical correction reduced our power to detect the underlying signal.

Conclusion: When combining a modest number of experiments, it is best to correct for experiment-specific noise. However, when many experiments are combined, statistical correction reduces our ability to extract underlying patterns.
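A toy illustration of the setup described above: expression data with a two-group biological signal is split across experiments ("batches"), each adding its own offset, and a simple per-batch mean-centering stands in for the statistical correction (the paper's actual simulation uses a generative neural network, and its correction methods are not named in the abstract; everything here is a simplified assumption).

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
group = rng.integers(0, 2, n_samples)           # biological signal: two groups
signal = np.outer(group, np.ones(n_genes) * 2)  # group 1 shifted in every gene

def simulate(n_batches):
    """Aggregate samples across n_batches experiments, each with its own offset."""
    batch = rng.integers(0, n_batches, n_samples)
    offsets = rng.normal(0, 3, (n_batches, n_genes))  # experiment-specific noise
    return signal + offsets[batch] + rng.normal(0, 1, (n_samples, n_genes)), batch

def correct(x, batch):
    """Remove each batch's mean profile (a crude batch-effect correction)."""
    out = x.copy()
    for b in np.unique(batch):
        out[batch == b] -= x[batch == b].mean(axis=0)
    return out

def separation(x):
    """Distance between group means: a rough proxy for detectable signal."""
    return np.linalg.norm(x[group == 1].mean(axis=0) - x[group == 0].mean(axis=0))

for n_batches in (2, 50):
    x, batch = simulate(n_batches)
    print(n_batches, separation(x), separation(correct(x, batch)))
```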

