heterogeneity variance
Recently Published Documents


Total documents: 25 (five years: 8)
H-index: 9 (five years: 1)

2021
Author(s): D.A. Pinotsis, S. Fitzgerald, C. See, A. Sementsova, A. S. Widge

Abstract: A major difficulty in treating psychiatric disorders is their heterogeneity: different neural causes can lead to the same phenotype. To address this, we propose describing the underlying pathophysiology in terms of interpretable, biophysical parameters of a neural model derived from the electroencephalogram. We analyzed data from a small cohort of patients with depression and controls. We constructed biophysical models that describe neural dynamics in a cortical network activated during a task used to assess depression state. We show that biophysical model parameters are biomarkers, that is, variables that allow subtyping of depression at a biological level. They yield a low-dimensional, interpretable feature space that allows description of differences between individual patients with depressive symptoms. They capture internal heterogeneity/variance of depression state and achieve significantly better classification than commonly used EEG features. Our work is a proof of concept that a combination of biophysical models and machine learning may outperform earlier approaches based on classical statistics and raw brain data.
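The classification step described above can be illustrated with a toy sketch: the paper's classifiers operate on a low-dimensional space of model-derived parameters, which here are replaced by synthetic features, and a simple leave-one-out nearest-centroid rule stands in for the actual classifiers. The group sizes, feature values, and classifier choice below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical low-dimensional "biophysical parameter" features for
# controls and patients (e.g. synaptic gains, time constants); in the
# paper these come from fitting a neural model to task EEG.
n_per_group, n_params = 30, 4
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_params))
patients = rng.normal(1.2, 1.0, size=(n_per_group, n_params))

def nearest_centroid_accuracy(a, b):
    """Leave-one-out nearest-centroid classification of two groups --
    a minimal stand-in for the classifiers compared in the paper."""
    X = np.vstack([a, b])
    labels = np.array([0] * len(a) + [1] * len(b))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # hold out sample i
        c0 = X[mask & (labels == 0)].mean(axis=0)
        c1 = X[mask & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == labels[i]
    return correct / len(X)

acc = nearest_centroid_accuracy(controls, patients)
```

With well-separated low-dimensional features, even this crude rule classifies well; the paper's point is that model parameters provide such a separable space where raw EEG features do not.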


2021
Author(s): Sepideh Zayandeh, Zahra Yaghoubi, Kosar Hosseini

Abstract Background: Dental caries is the most common chronic untreated disease worldwide. The simplest and most important factor in preventing dental caries is maintaining oral hygiene and removing microbial plaque with a toothbrush. Despite the relationship between toothbrush filament wear and plaque-removal effectiveness being a potentially important factor in maintaining oral health, there is little objective, standardized evidence as to 1) what constitutes a worn-out brush and 2) the degree of loss in plaque-removal effectiveness due to brush wear. Contradictory study results on toothbrush wear and the loss of plaque-removal effectiveness over the period of use have led to conflicting recommendations on how often to replace a toothbrush, and some studies question the relationship between toothbrush age and effectiveness altogether. The lack of comprehensive evidence in this area necessitates a structured review study.
Methods: We will search the electronic databases ISI, Scopus, and PubMed for related articles. Our main inclusion criterion is clinical trials and observational studies investigating manual toothbrush longevity under natural wear with respect to objective indicators of oral health (including plaque-removal and gingival indices ...). All retrieved citations will be entered into EndNote, and the full texts of potentially relevant studies will be obtained. Study selection and data extraction will be performed by two reviewers, and study quality will be assessed. The findings will be displayed using figures, summary tables, and narrative summaries. If the similarity and quality of the studies permit, a meta-analysis will be performed, and we will assess heterogeneity on the basis of the magnitude of the heterogeneity variance parameter. We will also conduct subgroup and sensitivity analyses if needed.
Discussion: The final systematic review will highlight the gaps in the available evidence about the effect of toothbrush longevity on each oral index, to support the best recommendation for toothbrush renewal periods.
Registration: The review has been submitted to the PROSPERO database.
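The protocol plans to judge heterogeneity by the magnitude of the heterogeneity variance parameter but does not name an estimator; the classic moment-based choice is the DerSimonian-Laird estimator of tau^2, sketched here as an assumption about how such an assessment is typically done, not as this protocol's stated method.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimate of the between-study
    variance tau^2.

    y : array of study effect estimates
    v : array of within-study variances
    """
    w = 1.0 / v                        # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)  # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu_fe) ** 2)   # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - df) / c)      # truncated at zero
```

When the study effects agree closely, Q falls below its degrees of freedom and the estimate truncates to zero, i.e. no detectable heterogeneity.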


2021, Vol 4
Author(s): Tejas I. Dhamecha, Soumyadeep Ghosh, Mayank Vatsa, Richa Singh

Cross-view or heterogeneous face matching involves comparing two different views of the face modality, such as two different spectra or resolutions. In this research, we present two heterogeneity-aware subspace techniques, heterogeneous discriminant analysis (HDA) and its kernel version (KHDA), which encode heterogeneity in the objective function and yield a projection space suitable for improved performance. They can be applied to any feature to make it heterogeneity invariant. We next propose a face recognition framework that uses existing facial features along with HDA/KHDA for matching. The effectiveness of HDA and KHDA is demonstrated using both handcrafted and learned representations on three challenging heterogeneous cross-view face recognition scenarios: (i) visible to near-infrared matching, (ii) cross-resolution matching, and (iii) digital photo to composite sketch matching. Consistently across all the case studies, HDA and KHDA help to reduce the heterogeneity variance, as clearly evidenced in the improved results. Comparison with recent heterogeneous matching algorithms shows that HDA- and KHDA-based matching yields state-of-the-art or comparable results in all three case studies. The proposed algorithms yield the best rank-1 accuracy of 99.4% on the CASIA NIR-VIS 2.0 database, up to 100% on CMU Multi-PIE for different resolutions, and a rank-10 accuracy of 95.2% on the e-PRIP database for digital to composite sketch matching.
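The abstract does not give the HDA objective, but the family it belongs to starts from Fisher discriminant analysis, which has a simple closed form in the two-class case: the projection direction w = Sw^-1 (m1 - m0). The sketch below shows only that classical baseline on synthetic features; per the paper, HDA/KHDA modify the objective to additionally suppress between-view (heterogeneity) variance, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_direction(X0, X1):
    """Closed-form two-class Fisher discriminant direction
    w = Sw^{-1} (m1 - m0), where Sw is the within-class scatter.

    HDA/KHDA extend this idea by encoding heterogeneity between
    views in the objective (details per the paper; not shown here).
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw, m1 - m0)

# Toy feature vectors for two identities/classes (illustrative data)
X0 = rng.normal(0.0, 1.0, size=(50, 5))
X1 = rng.normal(1.0, 1.0, size=(50, 5))
w = fisher_direction(X0, X1)
# Projecting onto w separates the two class means
sep = (X1.mean(axis=0) - X0.mean(axis=0)) @ w
```

Because Sw is positive definite, the projected class means are always separated along w (the quadratic form d^T Sw^-1 d is positive), which is the property the heterogeneity-aware variants preserve while adding invariance across views.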


2021, pp. 096228022110130
Author(s): Elena Kulinskaya, David C. Hoaglin, Ilyas Bakbergenuly

Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among data-generation models in estimation of heterogeneity variance and overall log-odds-ratio. This sensitivity to design poses challenges for use of simulation in choosing methods of meta-analysis.
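A minimal version of such a data-generation model can be sketched as follows: study-level true log-odds-ratios are drawn from the customary normal distribution, then binomial arm counts are generated. The fixed per-arm sample size, single control-arm probability, 0.5 continuity correction, and all numeric settings are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_meta(k, mu, tau2, n, p_c, rng):
    """Simulate one random-effects meta-analysis of k two-arm
    binary-outcome studies.

    mu, tau2 : overall log-odds-ratio and heterogeneity variance
    n        : per-arm sample size (fixed across studies here)
    p_c      : control-arm event probability
    """
    theta = rng.normal(mu, np.sqrt(tau2), size=k)   # study-level true LORs
    logit_c = np.log(p_c / (1 - p_c))
    p_t = 1 / (1 + np.exp(-(logit_c + theta)))      # treatment-arm probabilities
    x_c = rng.binomial(n, p_c, size=k)
    x_t = rng.binomial(n, p_t)
    # 0.5 continuity correction guards against zero cells
    a, b = x_t + 0.5, n - x_t + 0.5
    c, d = x_c + 0.5, n - x_c + 0.5
    lor = np.log(a * d / (b * c))                   # estimated log-odds-ratios
    var = 1 / a + 1 / b + 1 / c + 1 / d             # within-study variances
    return lor, var

lor, var = simulate_meta(k=20, mu=0.5, tau2=0.2, n=100, p_c=0.3, rng=rng)
w = 1 / var
mu_hat = np.sum(w * lor) / np.sum(w)  # simple inverse-variance pooled estimate
```

The paper's point is that conclusions from such simulations depend on exactly these choices, e.g. whether p_c and n are fixed or drawn from distributions across studies.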


Author(s): Fahad M. Al Amer, Christopher G. Thompson, Lifeng Lin

Bayesian methods are an important set of tools for performing meta-analyses. They avoid some potentially unrealistic assumptions required by conventional frequentist methods. More importantly, meta-analysts can incorporate prior information from many sources, including experts' opinions and prior meta-analyses. Nevertheless, Bayesian methods are used less frequently than conventional frequentist methods, primarily because of the need for nontrivial statistical coding, whereas frequentist approaches can be implemented via many user-friendly software packages. This article aims to provide a practical review of implementations for Bayesian meta-analyses with various prior distributions. We present Bayesian methods for meta-analyses with a focus on the odds ratio for binary outcomes. We summarize commonly used choices of prior distribution for the between-studies heterogeneity variance, a critical parameter in meta-analyses. They include the inverse-gamma, uniform, and half-normal distributions, as well as evidence-based informative log-normal priors. Five real-world examples are presented to illustrate their performance. We provide all of the statistical code for future use by practitioners. Under certain circumstances, Bayesian methods can produce markedly different results from those of frequentist methods, including a change in decision on statistical significance. When data information is limited, the choice of priors may have a large impact on meta-analytic results, in which case sensitivity analyses are recommended. Moreover, the algorithm for implementing Bayesian analyses may not converge for extremely sparse data, so convergence should be routinely examined and caution is needed in interpreting the corresponding results. When statistical assumptions made by conventional frequentist methods are violated, Bayesian methods provide a reliable alternative for performing a meta-analysis.
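As a concrete sketch of one of the listed priors, the following pure-NumPy random-walk Metropolis sampler fits the standard normal-normal random-effects model with a flat prior on the overall effect mu and a half-normal prior on tau. This is a minimal illustration, not the article's code; the example data, prior scale, and tuning constants are all assumptions, and in practice one would use an established MCMC package and check convergence as the abstract advises.

```python
import numpy as np

def log_post(mu, tau, y, v, scale=1.0):
    """Log posterior for the normal-normal random-effects model.

    Marginal likelihood: y_i ~ N(mu, v_i + tau^2);
    priors: mu flat, tau ~ half-normal(scale) -- one common weakly
    informative choice (the inverse-gamma, uniform, and log-normal
    priors discussed above would slot in here instead).
    """
    if tau < 0:
        return -np.inf
    s2 = v + tau ** 2
    loglik = -0.5 * np.sum(np.log(s2) + (y - mu) ** 2 / s2)
    logprior = -0.5 * (tau / scale) ** 2
    return loglik + logprior

def metropolis(y, v, n_iter=5000, step=0.2, seed=0):
    """Random-walk Metropolis over (mu, tau); returns all draws."""
    rng = np.random.default_rng(seed)
    mu, tau = np.mean(y), 0.1
    lp = log_post(mu, tau, y, v)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        mu_p, tau_p = mu + step * rng.normal(), tau + step * rng.normal()
        lp_p = log_post(mu_p, tau_p, y, v)
        if np.log(rng.uniform()) < lp_p - lp:   # accept/reject
            mu, tau, lp = mu_p, tau_p, lp_p
        draws[i] = mu, tau
    return draws

# Illustrative log-odds-ratios and within-study variances
y = np.array([0.3, 0.5, 0.1, 0.6])
v = np.full(4, 0.04)
draws = metropolis(y, v)
mu_post_mean = draws[1000:, 0].mean()   # discard burn-in
```

With only four small studies, the posterior for tau is driven heavily by the prior, which is exactly the situation in which the article recommends sensitivity analyses across prior choices.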


2021
Author(s): Elena Kulinskaya, Eung Yaw Mah

Cumulative meta-analysis (CMA) is the process of updating the results of an existing meta-analysis to incorporate new study results. It is a popular way to present time-varying evidence. We investigate the properties of CMA, suggest possible improvements, and provide the first in-depth simulation study of the use of CMA and CUSUM methods for detecting temporal trends in random-effects meta-analysis. We use the standardized mean difference (SMD) as the effect measure of interest. For CMA, we compare the standard inverse-variance-weighted estimation of the overall effect using REML-estimated between-study variance $\tau^2$ with sample-size-weighted estimation of the effect combined with Kulinskaya-Dollinger-Bjørkestøl (2011) (KDB) estimation of $\tau^2$. For all methods, we consider type I error under no shift and power under a shift in the mean. To ameliorate the lack of power in CMA, we introduce a two-stage CMA, in which the heterogeneity variance $\tau^2$ is estimated at stage 1 (the first 5-10 studies), and the subsequent CMA monitors a target value of the effect, keeping the $\tau^2$ value fixed. We recommend the use of this two-stage CMA combined with cumulative testing for a positive shift in $\tau^2$.
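A bare-bones CMA loop can be sketched as follows: after each new study arrives, the random-effects pooled estimate is recomputed on all studies so far. For simplicity this sketch re-estimates tau^2 with the DerSimonian-Laird moment estimator at every step, rather than the REML or KDB estimators compared in the paper, and takes generic effect estimates and variances rather than SMDs.

```python
import numpy as np

def cumulative_meta(y, v):
    """Cumulative random-effects meta-analysis: re-pool after each
    new study (starting once two studies are available).

    y, v : study effect estimates and within-study variances, in
           publication order. Returns the running pooled estimates.
    """
    out = []
    for k in range(2, len(y) + 1):
        yk, vk = y[:k], v[:k]
        w = 1 / vk
        mu_fe = np.sum(w * yk) / np.sum(w)          # fixed-effect pool
        Q = np.sum(w * (yk - mu_fe) ** 2)           # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)          # DL estimate, step k
        w_re = 1 / (vk + tau2)                      # random-effects weights
        out.append(np.sum(w_re * yk) / np.sum(w_re))
    return np.array(out)
```

The paper's two-stage variant would instead freeze tau^2 after the first 5-10 studies and monitor the effect against a target value; re-estimating tau^2 at every step, as above, is part of what costs the naive CMA its power.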


2018, Vol 10 (1), pp. 83-98
Author(s): Dean Langan, Julian P.T. Higgins, Dan Jackson, Jack Bowden, Areti Angeliki Veroniki, ...

Author(s): Anantapon Nitidejvisit, Chukiat Viwatwongkasem, Jutatip Sillabutra, Pichitpong Soontornpipit, Pratana Satitvipawee
