variance estimate
Recently Published Documents

TOTAL DOCUMENTS: 59 (five years: 10)
H-INDEX: 12 (five years: 1)

Author(s): A. D. Pluzhnikov, L. V. Kogteva, E. N. Pribludova, S. B. Sidorov, E. G. Chuzhaykin

Introduction. Conical scanning is applied to optimize hardware resources both in new devices and when upgrading existing systems, which explains the relevance of studying this type of direction-finding system.

Aim. To adjust and complement the known calculation relations for the variance of direction-finding results, an indicator of direction-finding quality (accuracy), and to determine the possibilities for optimizing direction finding and automatic object tracking.

Materials and methods. Factors limiting the accuracy of direction finding via conical scanning were analyzed using spectral analysis. Mathematical modeling followed by statistical processing of the quantitative results makes it possible to determine the conditions under which the influence of certain factors is predominant, as well as the conditions under which adjustment (completion) of the known calculation relations is required. These conditions are expressed in terms of the errors with which the objects of direction finding are tracked. New calculation relations for this adjustment were derived by the methods of statistical radio engineering.

Results. The validity of the derived calculation relations is confirmed by mathematical modeling. Calculations and modeling demonstrate the need to optimize the parameters of automatic object tracking systems.

Conclusion. The study shows that, when choosing parameters for auto-tracking systems with conical scanning, it is important to implement object tracking not with minimal, but with optimized tracking errors in the angular coordinates estimated during direction finding. Moreover, the optimized errors (the values of the static errors and the most probable values of the dynamic tracking errors) require adjustment of the known analytical estimates for the variance of the direction-finding results, the accuracy indicator of the direction finder. The derived analytical relationships allow such an adjustment to be performed, raising the variance estimate by 10 dB.


Author(s): Evgeniia S. Chetvertakova, Ekaterina V. Chimitova

This paper considers the Wiener degradation model with random effects. Random-effect models account for the unit-to-unit variability of the degradation index; here, the random parameter is assumed to follow a truncated normal distribution. Expressions for the maximum likelihood estimates and the reliability function are obtained. Two statistical tests are proposed to detect the presence of random effects in degradation data described by the Wiener degradation model: the first is the well-known likelihood ratio test, and the second is based on the variance estimate of the random parameter. The tests are compared in terms of power using Monte Carlo simulation. The results show that the test based on the variance estimate of the random parameter is more powerful than the likelihood ratio test for the considered pairs of competing hypotheses. The proposed tests are illustrated with an analysis of turbofan engine degradation data comprising measurements from 18 sensors for 100 engines. Before constructing the degradation model, a single degradation index is obtained using the principal component method. The hypothesis that the random effect is insignificant is rejected by both tests, and the random-effect Wiener degradation model is shown to describe the failure time distribution more accurately than its fixed-effect counterpart.
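As a rough illustration of the unit-to-unit variability such tests target, the sketch below simulates Wiener degradation paths whose drift varies across units (the random effect). All names and parameter values are hypothetical, and the paper's truncated-normal specification is approximated by a crude rejection step; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wiener_paths(n_units=100, n_times=20, dt=1.0,
                          mu=1.0, sigma_u=0.3, sigma_w=0.5):
    """Degradation increments Z_ij ~ N(mu_i * dt, sigma_w^2 * dt), where
    each unit's drift mu_i is drawn from a normal distribution truncated
    at zero -- the random effect capturing unit-to-unit variability."""
    mu_i = rng.normal(mu, sigma_u, size=n_units)
    mu_i = np.where(mu_i > 0, mu_i, mu)  # crude truncation for the sketch
    increments = rng.normal(mu_i[:, None] * dt, sigma_w * np.sqrt(dt),
                            size=(n_units, n_times))
    return np.cumsum(increments, axis=1)  # cumulative degradation paths

paths = simulate_wiener_paths()
drift_hat = paths[:, -1] / 20.0  # per-unit drift estimate: level / time
# A variance-based test for random effects compares the between-unit
# spread of drift_hat with the sigma_w^2 / T spread expected under the
# fixed-effect (common-drift) model.
between_var = float(drift_hat.var(ddof=1))
print(paths.shape, round(between_var, 3))
```

With a nonzero `sigma_u`, the between-unit variance of the drift estimates noticeably exceeds the level implied by the diffusion term alone, which is the signal the variance-based test exploits.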


2021, Vol. 111, pp. 611-615
Author(s): Yuehao Bai, Hung Ho, Guillaume A. Pouliot, Joshua Shea

We provide large-sample distribution theory for support vector regression (SVR) with l1-norm regularization, along with error bars for the SVR regression coefficients. Although a classical Wald confidence interval can be obtained from our theory, its implementation inherently depends on the choice of a tuning parameter that scales the variance estimate and hence the width of the error bars. We address this shortcoming by proposing an alternative large-sample inference method based on inverting a novel test statistic that displays competitive power properties and does not depend on the choice of a tuning parameter.
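As an illustrative stand-in (not the paper's estimator), the sketch below fits a linear SVR with epsilon-insensitive loss and an l1 penalty by averaged subgradient descent, then attaches bootstrap error bars to the coefficients. The bootstrap here merely substitutes for the asymptotic theory, and all data and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def svr_l1_fit(X, y, eps=0.1, lam=0.01, lr=0.1, n_iter=800):
    """Minimize mean epsilon-insensitive loss + lam * ||beta||_1 by
    subgradient descent, returning the average of the later iterates."""
    n, p = X.shape
    beta = np.zeros(p)
    avg, count = np.zeros(p), 0
    for t in range(n_iter):
        r = y - X @ beta
        # subgradient of the epsilon-insensitive loss, plus the l1 term
        g = -(X.T @ (np.sign(r) * (np.abs(r) > eps))) / n
        g += lam * np.sign(beta)
        beta -= (lr / np.sqrt(t + 1)) * g
        if t >= n_iter // 2:  # average the tail iterates for stability
            avg += beta
            count += 1
    return avg / count

# Synthetic data with known coefficients (illustrative values only)
n, beta_true = 400, np.array([2.0, -1.0, 0.0])
X = rng.normal(size=(n, 3))
y = X @ beta_true + rng.normal(scale=0.2, size=n)
beta_hat = svr_l1_fit(X, y)

# Nonparametric bootstrap error bars for the coefficients
boots = np.empty((100, 3))
for b in range(100):
    idx = rng.integers(0, n, size=n)
    boots[b] = svr_l1_fit(X[idx], y[idx])
se = boots.std(axis=0, ddof=1)
print(np.round(beta_hat, 2), np.round(se, 3))
```

Note how the bootstrap sidesteps any variance-scaling tuning parameter, which is the flavor of the shortcoming the abstract describes for the Wald interval.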


2021, Vol. 15 (5), pp. 1-52
Author(s): Lorenzo De Stefani, Erisa Terolli, Eli Upfal

We introduce Tiered Sampling, a novel technique for estimating the count of sparse motifs in massive graphs whose edges are observed in a stream. Our technique requires only a single pass over the data and uses a memory of fixed size M, which can be orders of magnitude smaller than the number of edges. Our methods address the challenging task of counting sparse motifs (sub-graph patterns) that have a low probability of appearing in a sample of M edges, the maximum amount of data available to the algorithms at each step. To obtain an unbiased, low-variance estimate of the count, we partition the available memory into tiers (layers) of reservoir samples. While the base layer is a standard reservoir sample of edges, the other layers are reservoir samples of sub-structures of the desired motif. By storing the more frequent sub-structures of the motif, we increase the probability of detecting an occurrence of the sparse motif being counted, thus decreasing the variance and error of the estimate. While we focus on the design and analysis of algorithms for counting 4-cliques, we present a method for generalizing Tiered Sampling to obtain high-quality estimates of the number of occurrences of any sub-graph of interest, with reduced analysis effort thanks to specific properties of the pattern of interest. We present a complete theoretical analysis and an extensive experimental evaluation of the proposed method on both synthetic and real-world data. Our results demonstrate the advantage of our method in obtaining high-quality approximations of the number of 4- and 5-cliques in large graphs using a very limited amount of memory, significantly outperforming the single-edge-sample approach for counting sparse motifs in large-scale graphs.
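The base tier of the scheme is an ordinary fixed-size reservoir sample over the edge stream. A minimal sketch of that building block follows (the higher tiers, which hold motif sub-structures, are omitted, and the toy stream is invented):

```python
import random

def reservoir_sample(stream, M, seed=0):
    """Fixed-size-M reservoir sample over a single-pass stream: after t
    items have arrived, each item seen so far is retained in the sample
    with probability M/t."""
    rng = random.Random(seed)
    sample = []
    for t, item in enumerate(stream, start=1):
        if t <= M:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(t)         # uniform slot in [0, t)
            if j < M:
                sample[j] = item         # evict a uniformly chosen edge
    return sample

edges = ((u, u + 1) for u in range(10_000))  # toy edge stream
s = reservoir_sample(edges, M=100)
print(len(s), len(set(s)))
```

In the full technique, upper tiers apply the same reservoir logic to partial motifs (e.g. triangles on the way to 4-cliques) detected against the lower tiers, which is what boosts the detection probability for sparse patterns.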


Brain, 2021
Author(s): Patricia A Boyle, Tianhao Wang, Lei Yu, Robert S Wilson, Robert Dawe, ...

Abstract The aging brain is vulnerable to a wide array of neuropathologies. Prior work estimated that the three most studied of these, Alzheimer's disease (AD), infarcts, and Lewy bodies, account for about 40% of the variation in late-life cognitive decline. However, that estimate did not incorporate many other diseases now recognized as potent drivers of cognitive decline (e.g. limbic-predominant age-related TDP-43 encephalopathy [LATE-NC], hippocampal sclerosis, other cerebrovascular conditions). We examined the degree to which person-specific cognitive decline in old age is driven by a wide array of neuropathologies. A total of 1,164 deceased participants from two longitudinal clinical-pathologic studies, the Rush Memory and Aging Project and the Religious Orders Study, completed up to 24 annual evaluations including 17 cognitive performance tests and underwent brain autopsy. Neuropathologic examinations provided 11 pathologic indices, including markers of AD, non-AD neurodegenerative diseases (i.e. LATE-NC, hippocampal sclerosis, Lewy bodies), and cerebrovascular conditions (i.e. macroscopic infarcts, microinfarcts, cerebral amyloid angiopathy, atherosclerosis, and arteriolosclerosis). Mixed-effects models examined the linear relation of the pathologic indices with global cognitive decline, and random change point models examined their relation with the onset of terminal decline and the rates of preterminal and terminal decline. Cognition declined an average of about 0.10 unit per year (estimate = -0.101, SE = 0.003, p < 0.001), with considerable heterogeneity in rates of decline (variance estimate for the person-specific slope of decline = 0.0094, p < 0.001). Considered separately, 10 of the 11 pathologic indices were associated with faster decline, individually accounting for between 2% and 34% of the variation in decline. Considered simultaneously, the 11 pathologic indices together accounted for 43% of the variation in decline; AD-related indices accounted for 30–36% of the variation, non-AD neurodegenerative indices for 4–10%, and cerebrovascular indices for 3–8%. Finally, the 11 pathologic indices combined accounted for less than a third of the variation in the onset of terminal decline (28%) and in the rates of preterminal (32%) and terminal (19%) decline. Although age-related neuropathologies account for a large proportion of the variation in late-life cognitive decline, considerable variation remains unexplained even after considering a wide array of neuropathologies. These findings highlight the complexity of cognitive aging and have important implications for the ongoing effort to develop effective therapeutics and identify novel treatment targets.


2021, pp. 105381512198980
Author(s): Bailey J. Sone, Jordan Lee, Megan Y. Roberts

Family involvement is a cornerstone of early intervention (EI), so positive caregiver outcomes are vital, particularly in caregiver-implemented interventions, and caregiver instructional approaches should therefore optimize adult learning. This study investigated the comparative efficacy of coaching versus traditional caregiver instruction on caregiver outcomes across EI disciplines. A systematic search for articles was conducted following PRISMA guidelines. Meta-analytic methods were used to analyze caregiver outcomes, with a robust variance estimation model to control for within-study effect-size correlations. Seven relevant studies were ultimately included in the analysis. A significant, large effect of coaching on caregiver outcomes was observed compared with other models of instruction (g = 0.745, SE = 0.125, p = .0013). These results support the adoption of a coaching framework to optimize caregiver outcomes in EI. Future research should examine how coaching and traditional instruction can be used in tiered intervention models with a variety of populations.
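For orientation only, the toy sketch below shows plain fixed-effect inverse-variance pooling of standardized mean differences. The study itself used a robust variance estimation model to handle correlated effect sizes within studies, which this simple pooling does not attempt; all numbers are invented.

```python
import numpy as np

def pooled_effect(g, se):
    """Fixed-effect inverse-variance pooling of effect sizes (Hedges' g):
    each study is weighted by the reciprocal of its sampling variance."""
    g = np.asarray(g, dtype=float)
    w = 1.0 / np.asarray(se, dtype=float) ** 2   # inverse-variance weights
    g_pooled = np.sum(w * g) / np.sum(w)         # weighted mean effect
    se_pooled = np.sqrt(1.0 / np.sum(w))         # SE of the pooled effect
    return g_pooled, se_pooled

# Three hypothetical studies: effect sizes and their standard errors
g_pooled, se_pooled = pooled_effect([0.9, 0.6, 0.8], [0.20, 0.25, 0.15])
print(round(g_pooled, 3), round(se_pooled, 3))
```

Robust variance estimation generalizes this by clustering multiple effect sizes per study, so that within-study correlation does not understate the pooled standard error.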


2020, Vol. 26 (4), pp. 325-334
Author(s): Ahad Malekzadeh, Seyed Mahdi Mahmoudi

In this paper, to construct confidence intervals (general and shortest) for quantiles of a normal distribution in one population, we present a pivotal quantity that has a non-central t distribution. In the case of two independent normal populations, we propose a confidence interval for the ratio of quantiles based on a generalized pivotal quantity, and we introduce a simple method for extracting its percentiles, from which a shorter confidence interval can be constructed. We also provide general and shorter confidence intervals using the method of variance estimate recovery. The performance of the five proposed methods is examined by simulation and examples.
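The one-population interval can be sketched directly: for the p-th quantile q_p = mu + z_p*sigma, the pivot sqrt(n)*(xbar - q_p)/s follows a non-central t distribution with n-1 degrees of freedom and non-centrality -sqrt(n)*z_p. A minimal sketch of the resulting interval, with an illustrative function name and simulated data (not the authors' code):

```python
import numpy as np
from scipy.stats import norm, nct

def normal_quantile_ci(x, p, alpha=0.05):
    """CI for the p-th quantile q_p = mu + z_p * sigma of a normal
    population, inverting the pivot sqrt(n)*(xbar - q_p)/s, which has a
    non-central t distribution with n-1 df and non-centrality
    -sqrt(n)*z_p."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar, s = x.mean(), x.std(ddof=1)
    delta = -np.sqrt(n) * norm.ppf(p)            # non-centrality parameter
    lo = xbar - nct.ppf(1 - alpha / 2, n - 1, delta) * s / np.sqrt(n)
    hi = xbar - nct.ppf(alpha / 2, n - 1, delta) * s / np.sqrt(n)
    return lo, hi

rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=2.0, size=50)
lo, hi = normal_quantile_ci(x, p=0.95)
print(round(lo, 2), round(hi, 2))
```

For p = 0.5 the non-centrality vanishes and the interval reduces to the familiar symmetric t interval for the mean, a handy sanity check.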


2020, Vol. 189 (12), pp. 1628-1632
Author(s): Mark J Giganti, Bryan E Shepherd

Abstract In observational studies using routinely collected data, a variable with a high level of missingness or misclassification may determine whether an observation is included in the analysis. In settings where inclusion criteria are assessed after imputation, the popular multiple-imputation variance estimator proposed by Rubin (“Rubin’s rules” (RR)) is biased due to incompatibility between the imputation and analysis models. Although alternative approaches exist, most analysts are not familiar with them. Using partially validated data from a human immunodeficiency virus cohort, we illustrate the calculation of the imputation variance estimator proposed by Robins and Wang (RW) in a scenario where the study exclusion criteria are based on a variable that must be imputed. In this motivating example, the imputation variance estimate for the log odds was 29% smaller using the RW estimator than using the RR estimator. We further compared the two variance estimators in a simulation study, which showed that the coverage probabilities of 95% confidence intervals based on the RR estimator were too high and worsened as more observations were imputed and more subjects were excluded from the analysis. The RW imputation variance estimator performed much better and should be employed when there is incompatibility between the imputation and analysis models. We provide analysis code to aid future analysts in implementing this method.
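For context, Rubin's rules combine m completed-data analyses as T = W + (1 + 1/m) * B, where W is the average within-imputation variance and B the between-imputation variance of the point estimates; the RW estimator replaces this with a more involved calculation that remains valid when exclusions are applied after imputation. A toy sketch of the RR computation only, with invented numbers:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m multiply-imputed point estimates and their within-imputation
    variances via Rubin's rules: T = W + (1 + 1/m) * B."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = q.size
    qbar = q.mean()                      # pooled point estimate
    W = u.mean()                         # average within-imputation variance
    B = q.var(ddof=1)                    # between-imputation variance
    T = W + (1 + 1 / m) * B              # total (RR) variance
    return qbar, T

# Toy numbers: m = 5 imputed log-odds estimates with their variances
qbar, T = rubins_rules([0.42, 0.38, 0.45, 0.40, 0.44],
                       [0.010, 0.012, 0.011, 0.010, 0.013])
print(round(qbar, 3), round(T, 4))
```

The abstract's point is that when imputed values drive inclusion or exclusion, this T overstates the variance, which is why the RW alternative gave a 29% smaller estimate in their example.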


Science, 2019, Vol. 366 (6463), pp. 351-356
Author(s): Pejman Mohammadi, Stephane E. Castel, Beryl B. Cummings, Jonah Einson, Christina Sousa, ...

Transcriptome data can facilitate the interpretation of the effects of rare genetic variants. Here, we introduce ANEVA (analysis of expression variation) to quantify genetic variation in gene dosage from allelic expression (AE) data in a population. Application of ANEVA to the Genotype-Tissue Expression (GTEx) data showed that this variance estimate is robust and correlates with selective constraint on a gene. Using these variance estimates in a dosage outlier test (ANEVA-DOT) applied to AE data from 70 Mendelian muscular disease patients showed accuracy in detecting genes with pathogenic variants in previously resolved cases and led to one confirmed and several potential new diagnoses. Using our reference estimates from GTEx data, ANEVA-DOT can be incorporated into rare disease diagnostic pipelines to use RNA-sequencing data more effectively.

