Multivariate Generalized Linear Mixed-Effects Models for the Analysis of Clinical Trial–Based Cost-Effectiveness Data

2021 · pp. 0272989X2110038
Author(s): Felix Achana, Daniel Gallacher, Raymond Oppong, Sungwook Kim, Stavros Petrou, et al.

Economic evaluations conducted alongside randomized controlled trials are a popular vehicle for generating high-quality evidence on the incremental cost-effectiveness of competing health care interventions. Typically, in these studies, resource use (and by extension, economic costs) and clinical (or preference-based health) outcomes data are collected prospectively for trial participants to estimate the joint distribution of incremental costs and incremental benefits associated with the intervention. In this article, we extend the generalized linear mixed-model framework to enable simultaneous modeling of multiple outcomes of mixed data types, such as those typically encountered in trial-based economic evaluations, taking into account correlation of outcomes due to repeated measurements on the same individual and other clustering effects. We provide new wrapper functions to estimate the models in Stata and R by maximum and restricted maximum quasi-likelihood and compare the performance of the new routines with alternative implementations across a range of statistical programming packages. Empirical applications using observed and simulated data from clinical trials suggest that the new methods produce broadly similar results as compared with Stata’s merlin and gsem commands and a Bayesian implementation in WinBUGS. We highlight that, although these empirical applications primarily focus on trial-based economic evaluations, the new methods presented can be generalized to other health economic investigations characterized by multivariate hierarchical data structures.
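The joint structure described above can be illustrated with a small simulation: cost and QALY outcomes for the same patient share a patient-level random effect, which induces the cost-effect correlation that a multivariate mixed model must capture. This is a hedged sketch, not the authors' method; all parameter values, arm sizes, and effect magnitudes are illustrative assumptions.

```python
import random

random.seed(42)

def simulate_arm(n, mean_cost, mean_qaly):
    """Simulate one trial arm; cost and QALY share a patient effect u."""
    costs, qalys = [], []
    for _ in range(n):
        u = random.gauss(0.0, 1.0)            # shared patient random effect
        costs.append(mean_cost + 400.0 * u + random.gauss(0.0, 200.0))
        qalys.append(mean_qaly + 0.05 * u + random.gauss(0.0, 0.02))
    return costs, qalys

def corr(x, y):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

c1, q1 = simulate_arm(500, 2500.0, 0.70)      # hypothetical intervention arm
c0, q0 = simulate_arm(500, 2000.0, 0.65)      # hypothetical control arm

d_cost = sum(c1) / 500 - sum(c0) / 500        # incremental cost
d_qaly = sum(q1) / 500 - sum(q0) / 500        # incremental benefit
print(round(d_cost), round(d_qaly, 3), round(corr(c1, q1), 2))
```

Ignoring the within-patient correlation visible in the last printed number is exactly what separate univariate analyses of cost and benefit would do, and is what motivates joint modelling.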

2019 · Vol 11 (24) · pp. 2897
Author(s): Yuhui Zheng, Feiyang Wu, Hiuk Jae Shim, Le Sun

Hyperspectral unmixing is a key preprocessing technique for hyperspectral image analysis. To further improve unmixing performance, in this paper a nonlocal low-rank prior is integrated with spatial smoothness and spectral collaborative sparsity for unmixing the hyperspectral data. The proposed method is based on the fact that hyperspectral images exhibit self-similarity in the nonlocal sense and smoothness in the local sense. To exploit the spatial self-similarity, nonlocal cubic patches are grouped together to compose a low-rank matrix. Then, within the linear mixed model framework, a nuclear-norm constraint is imposed on the abundance matrix of these similar patches to enforce the low-rank property. In addition, local spatial information and spectral characteristics are taken into account by introducing total variation (TV) regularization and collaborative sparse terms, respectively. Finally, experiments on two simulated data sets and two real data sets show that the proposed algorithm outperforms other state-of-the-art algorithms.
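The linear mixing assumption underlying the abundance estimation above can be sketched in a few lines: each pixel spectrum is a convex combination of endmember spectra. The endmember values below are made-up numbers, and the nuclear-norm, TV, and collaborative-sparsity terms of the paper are deliberately not reproduced; with two endmembers and the sum-to-one constraint, the least-squares abundance has a closed form.

```python
# Illustrative endmember spectra over four bands (assumed values).
e1 = [0.10, 0.40, 0.80, 0.60]
e2 = [0.70, 0.30, 0.20, 0.50]

# Synthesize a noiseless pixel as 30% endmember 1, 70% endmember 2.
y = [0.3 * a + 0.7 * b for a, b in zip(e1, e2)]

# Under the sum-to-one constraint, the abundance of e1 is the projection
# of (y - e2) onto (e1 - e2).
d = [a - b for a, b in zip(e1, e2)]
num = sum((yi - bi) * di for yi, bi, di in zip(y, e2, d))
den = sum(di * di for di in d)
abundance_1 = num / den
print(abundance_1)   # recovers 0.3 for noiseless data
```

In real imagery the per-pixel problem is noisy and ill-posed, which is where the low-rank and sparsity priors described above earn their keep.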


2020 · pp. 1471082X2093601
Author(s): Mirko Signorelli, Pietro Spitali, Roula Tsonaka

We present a new modelling approach for longitudinal overdispersed counts that is motivated by the increasing availability of longitudinal RNA-sequencing experiments. The distribution of RNA-seq counts typically exhibits overdispersion, zero-inflation and heavy tails; moreover, in longitudinal designs repeated measurements from the same subject are typically (positively) correlated. We propose a generalized linear mixed model based on the Poisson–Tweedie distribution that can flexibly handle each of the aforementioned features of longitudinal overdispersed counts. We develop a computational approach to accurately evaluate the likelihood of the proposed model and to perform maximum likelihood estimation. Our approach is implemented in the R package ptmixed, which can be freely downloaded from CRAN. We assess the performance of ptmixed on simulated data, and we present an application to a dataset with longitudinal RNA-sequencing measurements from healthy and dystrophic mice. The applicability of the Poisson–Tweedie mixed-effects model is not restricted to longitudinal RNA-seq data, but it extends to any scenario where non-independent measurements of a discrete overdispersed response variable are available.
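The overdispersion that motivates the Poisson–Tweedie model can be demonstrated with a gamma-Poisson (negative binomial) mixture: a subject-level gamma multiplier on the base rate inflates the variance beyond the mean, analogously to how random effects act in the mixed model. This is a stdlib-only sketch with assumed parameter values, not the ptmixed implementation.

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's algorithm; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Gamma "fragility" with mean 1 (shape 0.5, scale 2.0) scales the base
# rate per observation, creating counts far more variable than Poisson.
base_rate = 5.0
counts = [poisson(base_rate * random.gammavariate(0.5, 2.0))
          for _ in range(3000)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
dispersion = var / mean        # = 1 for Poisson; far above 1 here
print(round(mean, 2), round(dispersion, 2))
```

The Poisson–Tweedie family goes further than this sketch by also accommodating zero-inflation and heavy tails within one distribution.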


2005 · Vol 360 (1459) · pp. 1443-1455
Author(s): Karin Meyer, Mark Kirkpatrick

‘Repeated’ measurements for a trait and individual, taken along some continuous scale such as time, can be thought of as representing points on a curve, where both means and covariances along the trajectory can change, gradually and continually. Such traits are commonly referred to as ‘function-valued’ (FV) traits. This review shows that standard quantitative genetic concepts extend readily to FV traits, with individual statistics, such as estimated breeding values and selection response, replaced by corresponding curves, modelled by respective functions. Covariance functions are introduced as the FV equivalent to matrices of covariances. Considering the class of functions represented by a regression on the continuous covariable, FV traits can be analysed within the linear mixed model framework commonly employed in quantitative genetics, giving rise to the so-called random regression model. Estimation of covariance functions, either indirectly from estimated covariances or directly from the data using restricted maximum likelihood or Bayesian analysis, is considered. It is shown that direct estimation of the leading principal components of covariance functions is feasible and advantageous. Extensions to multi-dimensional analyses are discussed.
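The covariance-function idea above can be made concrete: given a basis phi(t) for the regression on the continuous covariable (here simple polynomials 1, t, t²) and a covariance matrix K among the random regression coefficients, the covariance of the trait between ages t1 and t2 is phi(t1)' K phi(t2). The matrix K below is an illustrative assumption, not an estimate from any data.

```python
# Assumed covariance matrix among random regression coefficients.
K = [[4.0, 1.0, 0.0],
     [1.0, 2.0, 0.5],
     [0.0, 0.5, 1.0]]

def phi(t):
    """Polynomial basis for the regression on the continuous scale."""
    return [1.0, t, t * t]

def cov(t1, t2):
    """Covariance function: phi(t1)' K phi(t2)."""
    p1, p2 = phi(t1), phi(t2)
    return sum(p1[i] * K[i][j] * p2[j]
               for i in range(3) for j in range(3))

# Symmetry follows from K being symmetric; cov(t, t) is a variance.
print(cov(0.2, 0.8), cov(0.8, 0.2), cov(0.5, 0.5))
```

Estimating the leading principal components of K directly, as the review advocates, amounts to restricting K to reduced rank while keeping this same quadratic form.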


2020 · Vol 110 (10) · pp. 1623-1631
Author(s): Karyn L. Reeves, Clayton R. Forknall, Alison M. Kelly, Kirsty J. Owen, Joshua Fanning, et al.

The root lesion nematode (RLN) species Pratylenchus thornei and P. neglectus are widely distributed within cropping regions of Australia and have been shown to limit grain production. Field experiments that compare the performance of cultivars in the presence of RLNs investigate management options for growers by identifying cultivars with resistance (limiting nematode reproduction) and tolerance (yielding well in the presence of nematodes). A novel experimental design approach for RLN experiments is proposed in which the observed RLN density, measured prior to sowing, is used to condition the randomization of cultivars to field plots. This approach ensures that all cultivars are exposed to consistent ranges of RLN density, so that valid assessments of relative cultivar tolerance and resistance can be derived. Using data from a field experiment designed with the conditioned randomization approach and conducted in Formartin, Australia, the analysis of tolerance and resistance was undertaken in a linear mixed model framework. Yield response curves were derived using a random regression approach, and curves modeling the change in RLN densities between sowing and harvest were derived using splines to account for nonlinearity. Groups of cultivars sharing similar resistance levels could be identified. Comparing the slopes of the yield response curves of cultivars in the same resistance class identified differing tolerance levels among cultivars with equivalent exposure to both presowing and postharvest RLN densities. As such, the proposed design and analysis approach allowed tolerance to be assessed independently of resistance.
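One plausible reading of the conditioned randomization described above can be sketched as stratified randomization: plots are ranked by their pre-sowing nematode density, cut into density strata, and every cultivar is randomized once within each stratum, so all cultivars face a comparable range of RLN pressure. Plot counts, densities, and cultivar names are invented for illustration; the paper's actual design machinery is richer than this.

```python
import random

random.seed(7)

cultivars = ["A", "B", "C", "D"]
# Hypothetical observed pre-sowing RLN densities for 12 field plots.
plot_density = {p: random.uniform(0, 100) for p in range(12)}

# Rank plots by density, then cut into 3 strata of 4 plots each.
ranked = sorted(plot_density, key=plot_density.get)
strata = [ranked[i:i + 4] for i in range(0, 12, 4)]

layout = {}
for stratum in strata:
    order = cultivars[:]
    random.shuffle(order)               # randomize within the stratum
    for plot, cultivar in zip(stratum, order):
        layout[plot] = cultivar

# Each cultivar appears exactly once in every density stratum.
for stratum in strata:
    print(sorted(layout[p] for p in stratum))
```

Balancing cultivars across density strata is what lets tolerance (yield response to density) be estimated without confounding from unequal nematode exposure.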


2019 · Vol 65 (5) · pp. 593-601
Author(s): James A Westfall, Megan B E Westfall, KaDonna C Randolph

Tree crown ratio is useful in various applications such as prediction of tree mortality probabilities, growth potential, and fire behavior. Crown ratio is commonly assessed in two ways: (1) compacted crown ratio (CCR; lower branches visually moved upwards to fill missing foliage gaps) and (2) uncompacted crown ratio (UNCR; no adjustment for missing foliage). The national forest inventory of the United States measures CCR on all trees, whereas only a subset of trees is also assessed for UNCR. Models for 27 species groups are presented to predict UNCR for the northern United States. The model formulation is consistent with those developed for other US regions while also accounting for the presence of repeated measurements and heterogeneous variance in a mixed-model framework. Ignoring the random-effects parameters, fit index values ranged from 0.43 to 0.78 and root mean squared error spanned 0.08–0.15; considerable improvements in both goodness-of-fit statistics were realized by including the random effects. Comparison of UNCR predictions with models developed for the southern United States showed close agreement, whereas comparisons with models used in Forest Vegetation Simulator variants indicated poor association. The models provide additional analytical flexibility for using the breadth of northern region data in applications where UNCR is the appropriate crown characteristic.


2016 · Vol 113 (27) · pp. 7377-7382
Author(s): David Heckerman, Deepti Gurdasani, Carl Kadie, Cristina Pomilla, Tommy Carstensen, et al.

The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects—one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of “missing heritability” in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
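The environmental covariance construction mentioned above can be sketched directly: a Gaussian radial basis function of the distance between individuals' locations yields a covariance matrix whose entries decay with distance. Coordinates and the bandwidth below are illustrative assumptions; in the paper this matrix enters the LMM as a second random effect alongside the genomic one.

```python
import math

# Hypothetical 2-D locations for four individuals and an assumed bandwidth.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (5.0, 5.0)]
bandwidth = 2.0

def rbf(a, b):
    """Gaussian radial basis function of squared Euclidean distance."""
    d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return math.exp(-d2 / (2.0 * bandwidth ** 2))

K_env = [[rbf(a, b) for b in coords] for a in coords]

# Nearby individuals share more "environment": the diagonal is exactly 1,
# the matrix is symmetric, and entries shrink as distance grows.
for row in K_env:
    print([round(v, 3) for v in row])
```

Because nearby individuals tend to share environments, omitting a term like this lets environmental similarity masquerade as genetic similarity, which is the inflation mechanism the paper targets.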


2021
Author(s): Lukas Roth, María Xosé Rodríguez-Álvarez, Fred van Eeuwijk, Hans-Peter Piepho, Andreas Hund

Decision-making in breeding increasingly depends on the ability to capture and predict crop responses to changing environmental factors. Advances in crop modeling as well as high-throughput field phenotyping (HTFP) hold promise to provide such insights. Processing HTFP data is an interdisciplinary task that requires broad knowledge of experimental design, measurement techniques, feature extraction, dynamic trait modeling, and prediction of genotypic values using statistical models. To get an overview of sources of variation in HTFP, we develop a general plot-level model for repeated measurements. Based on this model, we propose a seamless stage-wise process that allows estimated means and variances to be carried forward from stage to stage and approximates the gold standard of a single-stage analysis. The process builds on the extraction of three intermediate trait categories: (1) timing of key stages, (2) quantities at defined time points or periods, and (3) dose-response curves. In a first stage, these intermediate traits are extracted from time series of low-level traits (e.g., canopy height) using P-splines and the quarter of maximum elongation rate (QMER) method, as well as final height percentiles. In a second and third stage, the extracted traits are further processed using a stage-wise linear mixed model analysis. Using a wheat canopy growth simulation to generate canopy height time series, we demonstrate the suitability of the stage-wise process for traits of the first two categories. Results indicate that, for the first stage, the P-spline/QMER method was more robust than the percentile method. In the subsequent two-stage linear mixed model processing, weighting the second and third stages with error variance estimates from the previous stages improved the root mean squared error. We conclude that processing phenomics data in stages is a feasible approach, provided appropriate weighting is used through all stages. P-splines in combination with the QMER method are suitable tools to extract the timing of key stages and quantities at defined time points from HTFP data.
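The timing-extraction step can be sketched under one plausible reading of QMER: the first time at which the growth rate of the canopy-height curve reaches a quarter of its maximum rate. The logistic growth curve and its parameters are made up for illustration, and the P-spline smoothing of the paper is replaced here by an analytic curve.

```python
import math

def height(t):
    """Hypothetical logistic canopy-height curve (m) over days."""
    return 0.9 / (1.0 + math.exp(-0.15 * (t - 40.0)))

# Numerical growth rate via central differences on a fine time grid.
ts = [t * 0.1 for t in range(0, 801)]          # days 0..80, step 0.1
rates = [(height(t + 0.05) - height(t - 0.05)) / 0.1 for t in ts]

max_rate = max(rates)
# First time the elongation rate reaches 25% of its maximum.
qmer_time = next(t for t, r in zip(ts, rates) if r >= 0.25 * max_rate)
print(round(qmer_time, 1), round(max_rate, 4))
```

On noisy HTFP series the derivative would come from a P-spline fit rather than raw differences, which is what makes the paper's first stage robust.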


2011 · Vol 89 (6) · pp. 529-537
Author(s): J.G.A. Martin, F. Pelletier

Although mixed effects models are widely used in ecology and evolution, their application to standardized traits that change within a season or across ontogeny remains limited. Mixed models offer a robust way to standardize individual quantitative traits to a common condition, such as body mass at a certain point in time (within a year or across ontogeny) or parturition date for a given climatic condition. Currently, however, most researchers use simple linear models to accomplish this task. We use both empirical and simulated data to illustrate the application of mixed models for standardizing trait values to a common environment for each individual. We show that mixed model standardizations provide more accurate estimates of mass parameters than linear models for all sampling regimes, and especially for individuals with few repeated measures. Our simulations and analyses of empirical data both confirm that mixed models provide a better way to standardize trait values for individuals with repeated measurements than classical least squares regression. Linear regression should therefore be avoided when adjusting or standardizing individual measurements.
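The advantage for individuals with few repeated measures comes from shrinkage, which can be sketched directly: with known between-individual and residual variances, the mixed-model (BLUP) estimate of an individual's level pulls its raw mean toward the population mean by a factor that depends on its number of measurements. All variances and data below are assumed values, not the paper's.

```python
import random

random.seed(3)
var_b, var_e = 4.0, 9.0        # between-individual and residual variances
grand_mean = 50.0

def blup(values):
    """Shrunken estimate of one individual's level from its raw mean."""
    n = len(values)
    shrink = n * var_b / (n * var_b + var_e)   # in (0, 1); grows with n
    raw = sum(values) / n
    return grand_mean + shrink * (raw - grand_mean)

true_dev = 3.0                  # one individual's true deviation
few = [grand_mean + true_dev + random.gauss(0, 3) for _ in range(2)]
many = [grand_mean + true_dev + random.gauss(0, 3) for _ in range(20)]

# Few measures: strong pull toward the population mean; many measures:
# the estimate stays close to the individual's own average.
print(round(blup(few), 2), round(blup(many), 2))
```

A per-individual least squares fit corresponds to no shrinkage at all, which is why it performs worst exactly where the paper says: individuals with few repeated measures.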

