Mixture Modelling
Recently Published Documents

TOTAL DOCUMENTS: 217 (FIVE YEARS: 65)
H-INDEX: 28 (FIVE YEARS: 3)

2022 · Vol 22 (1)
Author(s): Lan Hong, Tao Le, Yinping Lu, Xiang Shi, Ludan Xiang, ...

Abstract
Background: Current research on perinatal depression rarely attends to the continuity and volatility of depressive symptoms over time, which is important for the early prediction and prognostic evaluation of perinatal depression. This study investigated the trajectories of perinatal depressive symptoms and aimed to explore the factors related to these trajectories.
Methods: The study recruited 550 women during late pregnancy (32 ± 4 weeks of gestation) and followed them up 1 and 6 weeks postpartum. Depressive symptoms were measured using the Edinburgh Postnatal Depression Scale (EPDS). Latent growth mixture modelling (LGMM) was used to identify trajectories of depressive symptoms across the perinatal period.
Results: Two trajectories of perinatal depressive symptoms were identified: "decreasing" (n = 524, 95.3%) and "increasing" (n = 26, 4.7%). A history of smoking, alcohol use and gestational hypertension increased the likelihood of belonging to the increasing trajectory, whereas a high level of social support was a protective factor for maintaining a decreasing trajectory.
Conclusions: This study identified two trajectories of perinatal depression and the factors associated with each trajectory. Attending to these factors and providing necessary psychological support services during pregnancy could effectively reduce the incidence of perinatal depression and improve patient prognosis.
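LGMM is usually fit in specialised software (e.g., Mplus or dedicated R packages), so the sketch below is only a rough, hypothetical stand-in for the idea of clustering depression-score trajectories: a two-component Gaussian mixture fit to per-participant EPDS scores at the three assessment points. The data, cluster count and "decreasing/increasing" labelling are assumptions for illustration, not the study's data or its actual model.

```python
# Simplified stand-in for latent growth mixture modelling (LGMM):
# cluster per-participant EPDS trajectories with a 2-component Gaussian mixture.
# The data below are simulated placeholders, not the study's data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical EPDS scores at 3 time points: late pregnancy, 1 and 6 weeks postpartum.
decreasing = rng.normal(loc=[12, 9, 6], scale=2.0, size=(520, 3))   # majority pattern
increasing = rng.normal(loc=[10, 13, 16], scale=2.0, size=(30, 3))  # minority pattern
epds = np.vstack([decreasing, increasing])

# Fit a two-component mixture over the raw trajectories.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(epds)

# Inspect component mean trajectories to decide which is "decreasing" vs "increasing".
for k, mean in enumerate(gmm.means_):
    share = np.mean(labels == k)
    trend = "decreasing" if mean[-1] < mean[0] else "increasing"
    print(f"component {k}: mean trajectory {np.round(mean, 1)}, share {share:.1%} -> {trend}")
```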


Author(s): Riko Kelter

Abstract
Testing differences between a treatment and a control group is common practice in biomedical research such as randomized controlled trials (RCTs). The standard two-sample t test relies on null hypothesis significance testing (NHST) via p values, which has several drawbacks. Bayesian alternatives were recently introduced using the Bayes factor, which has its own limitations. This paper introduces an alternative to current Bayesian two-sample t tests by interpreting the underlying model as a two-component Gaussian mixture in which the effect size, the quantity most relevant in clinical research, is the parameter of interest. Unlike p values or the Bayes factor, the proposed method focuses on estimation under uncertainty rather than explicit hypothesis testing. A Gibbs sampler produces the posterior of the effect size, which is then used either for estimation under uncertainty or for explicit hypothesis testing based on the region of practical equivalence (ROPE). An illustrative example, theoretical results and a simulation study show the usefulness of the proposed method, and the test is made available as an R package. In sum, the new Bayesian two-sample t test provides a solution to the Behrens–Fisher problem based on Gaussian mixture modelling.
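The Gibbs sampler itself is beyond a short sketch, but the ROPE step it feeds is simple: given posterior draws of the effect size, report how much posterior mass falls inside a region of practical equivalence. The draws and the ROPE bounds below are illustrative assumptions, not output of the paper's sampler.

```python
# Illustrative ROPE decision from posterior draws of an effect size (delta).
# The draws here are simulated; in the paper they would come from a Gibbs
# sampler for the two-component Gaussian mixture model.
import numpy as np

rng = np.random.default_rng(1)
delta_draws = rng.normal(loc=0.35, scale=0.12, size=10_000)  # assumed posterior sample

rope = (-0.1, 0.1)  # assumed region of practical equivalence for a standardised effect
p_in_rope = np.mean((delta_draws > rope[0]) & (delta_draws < rope[1]))

# 95% credible interval for estimation under uncertainty.
ci_low, ci_high = np.quantile(delta_draws, [0.025, 0.975])
print(f"P(delta in ROPE) = {p_in_rope:.3f}; 95% CrI = [{ci_low:.2f}, {ci_high:.2f}]")

# A common decision rule: reject practical equivalence if the credible interval
# lies entirely outside the ROPE, accept it if the interval lies entirely inside.
if ci_low > rope[1] or ci_high < rope[0]:
    print("Effect size judged practically different from zero.")
elif rope[0] < ci_low and ci_high < rope[1]:
    print("Effect size judged practically equivalent to zero.")
else:
    print("Decision withheld: credible interval overlaps the ROPE boundary.")
```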


Author(s): J. Arbel, G. Kon Kam King, A. Lijoi, L. Nieto‐Barajas, I. Prünster

2021
Author(s): Raj Kiran V, Nabeel P M, Malay Ilesh Shah, Mohanasankar Sivaprakasam, Jayaraj Joseph

2021 · Vol 12 (1)
Author(s): C. Bottomley, M. Otiende, S. Uyoga, K. Gallagher, E. W. Kagucia, ...

Abstract
As countries decide on vaccination strategies and how to ease movement restrictions, estimating the proportion of the population previously infected with SARS-CoV-2 is important for predicting the future burden of COVID-19. This proportion is usually estimated from serosurvey data in two steps: first the proportion above a threshold antibody level is calculated, then the crude estimate is adjusted using external estimates of sensitivity and specificity. A drawback of this approach is that the PCR-confirmed cases used to estimate the sensitivity of the threshold may not be representative of cases in the wider population (e.g., they may be more recently infected and more severely symptomatic). Mixture modelling offers an alternative approach that does not require external data from PCR-confirmed cases. Here we illustrate the bias in the standard threshold-based approach by comparing both approaches using data from several Kenyan serosurveys. We show that the mixture model analysis produces estimates of previous infection that are often substantially higher than those from the standard threshold analysis.
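As a hedged illustration of the mixture idea (not the authors' exact model for the Kenyan data), a two-component Gaussian mixture on log antibody readings yields the seropositive fraction directly as the weight of the higher-mean component, with no external sensitivity/specificity adjustment. The simulated measurements and the assay cut-off below are assumptions.

```python
# Sketch of mixture-model seroprevalence estimation: fit a two-component
# Gaussian mixture to log antibody measurements and read off the weight of
# the higher-mean (seropositive) component. Data are simulated placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n, true_prev = 5_000, 0.20
infected = rng.random(n) < true_prev
log_ab = np.where(infected,
                  rng.normal(2.0, 0.8, n),   # assumed seropositive distribution
                  rng.normal(0.0, 0.5, n))   # assumed seronegative distribution

gmm = GaussianMixture(n_components=2, random_state=0).fit(log_ab.reshape(-1, 1))
positive_component = int(np.argmax(gmm.means_.ravel()))
estimated_prevalence = gmm.weights_[positive_component]
print(f"estimated proportion previously infected: {estimated_prevalence:.3f}")

# Contrast with a threshold rule, whose crude estimate must then be corrected
# with externally estimated sensitivity and specificity.
threshold = 1.0  # hypothetical assay cut-off
print(f"crude threshold-based estimate: {np.mean(log_ab > threshold):.3f}")
```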


Author(s): Patrick Dwyer, Emilio Ferrer, Clifford D. Saron, Susan M. Rivera

Abstract
This study uses factor mixture modelling of the Short Sensory Profile (SSP) at two time points to describe subgroups of young autistic and typically developing children. This approach allows separate SSP subscales to influence overall SSP performance differentially across subgroups. Three subgroups were described, one including almost all typically developing participants plus many autistic participants. SSP performance of a second, largely autistic subgroup was predominantly shaped by a subscale indexing behaviours of low energy/weakness. Finally, the third subgroup, again largely autistic, contained participants with low (or more "atypical") SSP scores across most subscales. In this subgroup, autistic participants exhibited large P1 amplitudes to loud sounds. Autistic participants in subgroups with more atypical SSP scores had higher anxiety and more sleep disturbances.


2021 · pp. 1471082X2110331
Author(s): Giacomo De Nicola, Benjamin Sischka, Göran Kauermann

Mixture models are probabilistic models aimed at uncovering and representing latent subgroups within a population. In network data analysis, latent subgroups of nodes are typically identified by their connectivity behaviour, with similarly behaving nodes belonging to the same community. In this context, mixture modelling is pursued through stochastic blockmodelling. We consider stochastic blockmodels and some of their variants and extensions from a mixture modelling perspective. We also explore some of the main classes of estimation methods available and propose an alternative approach based on reformulating the blockmodel as a graphon. In addition to discussing inferential properties and estimation procedures, we focus on applying the models to several real-world network datasets, showcasing the advantages and pitfalls of different approaches.
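A minimal sketch of the blockmodelling idea, under assumed block sizes and connection probabilities: simulate a two-block stochastic blockmodel with networkx and recover the communities by spectral clustering on the adjacency matrix. This stands in for, rather than reproduces, the estimation methods and the graphon reformulation discussed in the paper.

```python
# Simulate a 2-block stochastic blockmodel and recover the latent communities.
# Block sizes and edge probabilities are illustrative assumptions.
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

sizes = [60, 40]                      # nodes per latent block
probs = [[0.25, 0.03],                # within-/between-block edge probabilities
         [0.03, 0.20]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
A = nx.to_numpy_array(G)

# Spectral clustering on the adjacency matrix as a simple community estimator.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)

truth = np.repeat([0, 1], sizes)
agreement = max(np.mean(labels == truth), np.mean(labels != truth))  # handle label switching
print(f"fraction of nodes assigned to the correct block: {agreement:.2f}")
```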

