Performance of longitudinal item response theory models in shortened or partial assessments

2020, Vol. 47(5), pp. 461-471
Author(s): Leticia Arrington, Sebastian Ueckert, Malidi Ahamadi, Sreeraj Macha, Mats O. Karlsson

Abstract This work evaluates the performance of longitudinal item response (IR) theory models in shortened assessments, using an existing model for Parts II and III of the MDS-UPDRS score. Based on item information content, the assessment was reduced by removing items in multiple increments, and the models' ability to recover the item characteristics of the remaining items at each level was evaluated. This evaluation was done for both simulated and real data, with the item information function as the metric of comparison in both cases. For real data, the impact of shortening on the estimated disease progression and drug effect was also studied. In the simulated-data setting, the item characteristics did not differ between the full and the shortened assessments down to the lowest level of information remaining, indicating considerable independence between items. In contrast, when the assessment was reduced in the real-data setting, a substantial change in item information was observed for some of the items, and disease progression and drug effect estimates also decreased in the reduced assessments. These changes indicate a shift in the measured construct of the shortened assessment and warrant caution when comparing results from a partial assessment with results from the full assessment.
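
The item ranking that drives the removal increments can be reproduced from the item characteristics alone. Below is a minimal sketch, with hypothetical discrimination and threshold values rather than the published MDS-UPDRS model, of the graded-response item information function that the abstract names as the metric of comparison.

```python
# Minimal sketch: Fisher information of a graded-response (ordered categorical) item,
# the quantity used to rank items before shortening. Parameters are hypothetical.
import numpy as np

def grm_item_information(theta, a, b):
    """Item information at latent severity theta; a = discrimination, b = thresholds."""
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P(Y >= k), padded with 1 (>= lowest) and 0 (> highest)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    p_cat = p_star[:-1] - p_star[1:]                 # category probabilities
    d_star = a * p_star * (1.0 - p_star)             # d P*/d theta for the logistic
    d_cat = d_star[:-1] - d_star[1:]                 # d P_k/d theta
    # I(theta) = sum_k (dP_k/dtheta)^2 / P_k
    return np.sum(d_cat**2 / np.clip(p_cat, 1e-12, None))

# Total information of a hypothetical 3-item assessment across the severity range
thetas = np.linspace(-4, 4, 9)
items = [(1.8, [-1.0, 0.0, 1.2]), (0.9, [-0.5, 0.8, 2.0]), (2.3, [0.2, 1.0, 1.8])]
total_info = [sum(grm_item_information(t, a, b) for a, b in items) for t in thetas]
print(np.round(total_info, 2))
```

Removing the item that contributes least to this sum at the severities of interest is the kind of information-based reduction the abstract describes.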

2020
Author(s): Fanny Mollandin, Andrea Rau, Pascal Croiseau

Abstract Technological advances and decreasing costs have led to the rise of increasingly dense genotyping data, making feasible the identification of potential causal markers. Custom genotyping chips, which combine medium-density genotypes with a custom genotype panel, can capitalize on these candidates to potentially yield improved accuracy and interpretability in genomic prediction. A particularly promising model to this end is BayesR, which divides markers into four effect size classes. BayesR has been shown to yield accurate predictions and promise for quantitative trait loci (QTL) mapping in real data applications, but an extensive benchmarking in simulated data is currently lacking. Based on a set of real genotypes, we generated simulated data under a variety of genetic architectures and phenotype heritabilities, and we evaluated the impact of excluding or including causal markers among the genotypes. We define several statistical criteria for QTL mapping, including several based on sliding windows to account for linkage disequilibrium, and compare these statistics and their ability to accurately prioritize known causal markers. Overall, we confirm the strong predictive performance of BayesR in moderately to highly heritable traits, particularly for 50k custom data. In cases of low heritability or weak linkage disequilibrium with the causal marker in 50k genotypes, QTL mapping is a challenge regardless of the criterion used. BayesR is a promising approach to simultaneously obtain accurate predictions and interpretable classifications of SNPs into effect size classes. We illustrate the performance of BayesR in a variety of simulation scenarios and compare the advantages and limitations of each.
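
The kind of simulation the benchmark relies on can be sketched directly from the BayesR assumption of four effect-size classes (relative variances 0, 1e-4, 1e-3, 1e-2). The mixture proportions, marker counts, and heritability below are hypothetical placeholders, not the settings used in the paper.

```python
# Minimal sketch: simulate phenotypes under a four-class (BayesR-style) genetic
# architecture from a genotype matrix, targeting a chosen heritability.
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp, h2 = 2000, 10_000, 0.5
class_var = np.array([0.0, 1e-4, 1e-3, 1e-2])       # relative variance of each class
class_prob = np.array([0.95, 0.03, 0.015, 0.005])   # hypothetical mixture proportions

geno = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)
geno -= geno.mean(axis=0)                           # centre genotype codes

snp_class = rng.choice(4, size=n_snp, p=class_prob)
sigma_g2 = 1.0                                      # total genetic variance (arbitrary scale)
beta = rng.normal(0.0, np.sqrt(class_var[snp_class] * sigma_g2))

g = geno @ beta                                     # true genetic values
var_e = g.var() * (1.0 - h2) / h2                   # residual variance for target h2
y = g + rng.normal(0.0, np.sqrt(var_e), size=n_ind)

print("realised heritability:", round(g.var() / y.var(), 3),
      "| causal SNPs:", int((snp_class > 0).sum()))
```

QTL-mapping criteria can then be scored against the known indices where `snp_class > 0`, which is the advantage of benchmarking in simulated rather than real data.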


2003, Vol. 40(4), pp. 389-405
Author(s): Baohong Sun, Scott A. Neslin, Kannan Srinivasan

Logit choice models have been used extensively to study promotion response. This article examines whether brand-switching elasticities derived from these models are overestimated as a result of rational consumer adjustment of purchase timing to coincide with promotion schedules and whether a dynamic structural model can address this bias. Using simulated data, the authors first show that if the structural model is correct, brand-switching elasticities are overestimated by stand-alone logit models. A nested logit model improves the estimates, but not completely. Second, the authors estimate the models on real data. The results indicate that the structural model fits better and produces sensible coefficient estimates. The authors then observe the same pattern in switching elasticities as they do in the simulation. Third, the authors predict sales assuming a 50% increase in promotion frequency. The reduced-form models predict much higher sales levels than does the dynamic structural model. The authors conclude that reduced-form model estimates of brand-switching elasticities can be overstated and that a dynamic structural model is best for addressing the problem. Reduced-form models that include incidence can partially, though not completely, address the issue. The authors discuss the implications for researchers and managers.
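
For context, the brand-switching elasticities at issue fall out of a stand-alone multinomial logit in closed form. A minimal sketch with hypothetical coefficients and prices (not the article's estimates):

```python
# Minimal sketch: own- and cross-price elasticities of choice probabilities in a
# stand-alone multinomial logit, the reduced-form benchmark discussed in the article.
import numpy as np

beta_price = -2.0                        # price sensitivity
intercepts = np.array([1.0, 0.6, 0.0])   # brand intercepts
prices = np.array([2.49, 2.29, 1.99])

util = intercepts + beta_price * prices
prob = np.exp(util) / np.exp(util).sum()

# MNL elasticities:
#   own:   dlnP_j/dlnp_j = beta_price * p_j * (1 - P_j)
#   cross: dlnP_j/dlnp_k = -beta_price * p_k * P_k   (j != k)
own = beta_price * prices * (1.0 - prob)
cross = -beta_price * prices * prob      # effect of brand k's promotion on every rival

print("choice shares :", np.round(prob, 3))
print("own elasticity:", np.round(own, 2))
print("cross elast.  :", np.round(cross, 2))
```

The article's point is that these cross elasticities absorb purchase-timing adjustments that a dynamic structural model treats separately, so the MNL values overstate true switching.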


2014, Vol. 142(12), pp. 4559-4580
Author(s): Jason A. Sippel, Fuqing Zhang, Yonghui Weng, Lin Tian, Gerald M. Heymsfield, ...

Abstract This study utilizes an ensemble Kalman filter (EnKF) to assess the impact of assimilating observations of Hurricane Karl from the High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a new Doppler radar on board the NASA Global Hawk unmanned airborne system, which has the benefit of a 24–26-h flight duration, or about 2–3 times that of a conventional aircraft. The first HIWRAP observations were taken during NASA’s Genesis and Rapid Intensification Processes (GRIP) experiment in 2010. Observations considered here are Doppler velocity (Vr) and Doppler-derived velocity–azimuth display (VAD) wind profiles (VWPs). Karl is the only hurricane to date for which HIWRAP data are available. Assimilation of either Vr or VWPs has a significant positive impact on the EnKF analyses and forecasts of Hurricane Karl. Analyses are able to accurately estimate Karl’s observed location, maximum intensity, size, precipitation distribution, and vertical structure. In addition, forecasts initialized from the EnKF analyses are much more accurate than a forecast without assimilation. The forecasts initialized from VWP-assimilating analyses perform slightly better than those initialized from Vr-assimilating analyses, and the latter are less accurate than EnKF-initialized forecasts from a recent proof-of-concept study with simulated data. Likely causes for this discrepancy include the quality and coverage of the HIWRAP data collected from Karl and the presence of model error in this real-data situation. The advantages of assimilating VWP data likely include the ability to simultaneously constrain both components of the horizontal wind and to circumvent reliance upon vertical velocity error covariance.
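
The assimilation machinery referenced here is the standard ensemble Kalman filter analysis step. A minimal perturbed-observation sketch with hypothetical dimensions and error statistics follows; the study itself uses a much larger convection-permitting model state, and Vr/VWP forward operators, that are not reproduced here.

```python
# Minimal sketch: one EnKF analysis step in perturbed-observation form.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 8, 30

xf = rng.normal(size=(n_state, n_ens))       # forecast (prior) ensemble
H = np.eye(n_obs, n_state)                   # observe the first few state variables
R = 0.5 * np.eye(n_obs)                      # observation-error covariance
y = rng.normal(size=n_obs)                   # observations (e.g. Doppler velocities)

x_mean = xf.mean(axis=1, keepdims=True)
Xp = (xf - x_mean) / np.sqrt(n_ens - 1)      # normalised ensemble perturbations
Pf_Ht = Xp @ (H @ Xp).T                      # P_f H^T without forming P_f explicitly
K = Pf_Ht @ np.linalg.inv(H @ Pf_Ht + R)     # Kalman gain

# Perturbed observations give each member its own innovation
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
xa = xf + K @ (y_pert - H @ xf)              # analysis ensemble
print("prior spread:", round(xf.std(), 3), "| posterior spread:", round(xa.std(), 3))
```

Assimilating VWP wind profiles rather than raw Vr amounts to changing the observation operator H and error covariance R so that both horizontal wind components are constrained directly.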


Author(s): Dimitri Marques Abramov, Saint-Clair Gomes Junior

Abstract The aim of this study was to develop a realistic network model to predict the relationship between lockdown duration and coverage in controlling the progression of the incidence curve of an epidemic with the characteristics of COVID-19 in two scenarios: (1) a closed, non-immune population, and (2) a real scenario for the State of Rio de Janeiro from May 6th, 2020. The effects of lockdown duration and coverage on the progression of the epidemic incidence curve were simulated in a virtual population of 10,000 subjects. Predictor variables were reproduction numbers taken from the recent literature (R0 = 2.7 and 5.7, and Re = 1.28 for Rio de Janeiro State on May 6th), with no lockdown and with coverages of 25%, 50%, and 90% for 21, 35, 70, and 140 days, giving up to 13 scenarios for each R0/Re; individuals remained infected and infectious for 14 days. Model validity was estimated in the theoretical and real scenarios by (1) applying an exponential model to the no-lockdown incidence curve with the growth-rate coefficient observed in the realistic scenarios, and (2) fitting the real data series from Rio de Janeiro to the simulated data, respectively. For R0 = 5.7, flattening of the curve occurred only with long lockdown periods (70 and 140 days) at 90% coverage. For R0 = 2.7, coverages of 25% and 50% also flattened the curve and reduced total cases, provided they were maintained for a long period (70 days or more). In the realistic scenario for Rio de Janeiro, lockdowns with coverage increased by 25% or more over the May 6th level and maintained for 140 days produced expressive flattening and two to five times fewer COVID-19 cases. If a lockdown with more intense coverage (about 25-50% above the current level) were implemented by June 6th and maintained for at least 70 days, it would still be possible to reduce the impact of the pandemic in the State of Rio de Janeiro by nearly 40-50%. These data corroborate the importance of lockdown duration, regardless of virus transmissibility and, in some cases, of coverage intensity, in both realistic and theoretical scenarios of COVID-19 epidemics. Even when implemented late, an improvement in lockdown coverage can be effective in minimizing the impact of the epidemic.
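
A stripped-down version of this kind of stochastic simulation is sketched below: 10,000 agents, a 14-day infectious period, and a lockdown that removes a fraction ("coverage") of daily contacts during a fixed window. The contact rate and per-contact transmission probability are hypothetical stand-ins roughly calibrated to R0 ≈ 2.7; this is not the authors' network model.

```python
# Minimal sketch: agent-based epidemic with a lockdown window of given coverage.
import numpy as np

rng = np.random.default_rng(42)
N, days, infectious_days = 10_000, 300, 14
contacts_per_day, p_transmit = 10, 0.02          # ~ R0 = 10 * 0.02 * 14 = 2.8
lockdown_start, lockdown_len, coverage = 30, 70, 0.50

state = np.zeros(N, dtype=int)                   # 0 = S, 1 = I, 2 = R
days_left = np.zeros(N, dtype=int)
seed = rng.choice(N, 10, replace=False)
state[seed], days_left[seed] = 1, infectious_days

daily_new = []
for t in range(days):
    in_lockdown = lockdown_start <= t < lockdown_start + lockdown_len
    eff_contacts = int(contacts_per_day * (1 - coverage)) if in_lockdown else contacts_per_day
    infected = np.flatnonzero(state == 1)
    new_cases = 0
    for i in infected:
        partners = rng.integers(0, N, eff_contacts)            # random daily contacts
        hits = np.unique(partners[(state[partners] == 0)
                                  & (rng.random(eff_contacts) < p_transmit)])
        state[hits], days_left[hits] = 1, infectious_days
        new_cases += hits.size
    days_left[infected] -= 1
    state[infected[days_left[infected] == 0]] = 2              # recover after 14 days
    daily_new.append(new_cases)

print("peak daily incidence:", max(daily_new), "| total cases:", int((state > 0).sum()))
```

Sweeping `coverage` and `lockdown_len` over grids like those in the abstract (25/50/90% for 21/35/70/140 days) reproduces the qualitative trade-off between duration and coverage.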


Life, 2021, Vol. 11(7), p. 716
Author(s): Yunhe Liu, Aoshen Wu, Xueqing Peng, Xiaona Liu, Gang Liu, ...

Despite the many scRNA-seq analytic algorithms that have been developed, their performance in cell clustering cannot be quantified because the "true" clusters are unknown. Referencing the transcriptomic heterogeneity of cell clusters, a "true" mRNA count matrix for individual cells was defined as the ground truth. Based on this matrix and the actual data-generation procedure, a simulation program for raw data (SSCRNA) was developed. Subsequently, the consistency between simulated data and real data was evaluated, and the impact of sequencing depth and of the analysis algorithms on cluster accuracy was quantified. The simulation results were highly consistent with the actual data. Among the normalization methods, Gaussian normalization was the most recommended, and among the clustering algorithms, K-means clustering was more stable than K-means plus Louvain clustering. In conclusion, the scRNA simulation algorithm developed here reproduces the actual data-generation process, reveals the impact of parameters on classification, compares normalization and clustering algorithms, and provides novel insight into scRNA analyses.
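
The evaluation loop described here can be illustrated in a few lines (this is a sketch, not the SSCRNA program): start from a known "true" count matrix with labelled clusters, downsample reads to mimic sequencing depth, then score clustering accuracy against the known labels. Cluster profiles and capture rates below are hypothetical.

```python
# Minimal sketch: ground-truth counts -> depth-limited observation -> clustering accuracy.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_cells_per_cluster, n_genes, n_clusters = 200, 500, 3

# Ground truth: each cluster has its own mean expression profile
means = rng.gamma(2.0, 5.0, size=(n_clusters, n_genes))
labels = np.repeat(np.arange(n_clusters), n_cells_per_cluster)
true_counts = rng.poisson(means[labels])                 # "true" mRNA numbers per cell

for capture in (0.05, 0.2, 0.5):                         # sequencing-depth proxy
    observed = rng.binomial(true_counts, capture)        # sampled reads
    logcpm = np.log1p(observed / np.clip(observed.sum(1, keepdims=True), 1, None) * 1e4)
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(logcpm)
    print(f"capture {capture:.2f} -> ARI {adjusted_rand_score(labels, pred):.3f}")
```

Because the labels are known by construction, the adjusted Rand index gives the quantitative clustering accuracy that real data cannot provide.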


2020, Vol. 72(5), pp. 1959-1964
Author(s): E.H. Martins, G. Tarôco, G.A. Rovadoscki, M.H.V. Oliveira, G.B. Mourão, ...

Abstract This study aimed to estimate genetic parameters for simulated data on body weight (BW), abdominal width (AW), abdominal length (AL), and oviposition. The simulation was based on real data collected at apiaries in the region of Campo das Vertentes, Minas Gerais, Brazil. Genetic evaluations were performed using single- and two-trait models, and (co)variance components were estimated by the restricted maximum likelihood method. The heritabilities for BW, AW, AL, and oviposition were 0.54, 0.47, 0.31, and 0.66, respectively. Positive genetic correlations of high magnitude were obtained between BW and AW (0.80), BW and oviposition (0.69), AW and oviposition (0.82), and AL and oviposition (0.96). The genetic correlations between BW and AL (0.11) and between AW and AL (0.26) were considered low to moderate. In contrast, the phenotypic correlations were positive and high between BW and AW (0.97), BW and AL (0.96), and AW and AL (0.98). Phenotypic correlations of low magnitude, close to zero, were obtained for oviposition with AL (0.02), AW (-0.02), and BW (-0.03). New studies involving these traits should be conducted on populations with real biological data in order to evaluate the impact of selection on traits of economic interest.
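
The reported parameters follow directly from the estimated (co)variance components. A minimal sketch with hypothetical additive-genetic and residual matrices (not the study's REML estimates) shows the arithmetic:

```python
# Minimal sketch: heritabilities and genetic/phenotypic correlations from
# (co)variance components for two traits, e.g. BW and AW.
import numpy as np

G = np.array([[4.0, 2.4],        # hypothetical additive-genetic (co)variances
              [2.4, 3.0]])
E = np.array([[3.5, 0.4],        # hypothetical residual (co)variances
              [0.4, 3.2]])
P = G + E                        # phenotypic (co)variance

h2 = np.diag(G) / np.diag(P)                       # heritability of each trait
r_g = G[0, 1] / np.sqrt(G[0, 0] * G[1, 1])         # genetic correlation
r_p = P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])         # phenotypic correlation
print("h2:", np.round(h2, 2), "| r_g:", round(r_g, 2), "| r_p:", round(r_p, 2))
```

The contrast noted in the abstract (high phenotypic but low genetic correlation, or vice versa) arises when the residual and genetic covariances pull in different directions.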


2019
Author(s): B.R. Mâsse, P. Guibord, M.-C. Boily, M. Alary

Abstract Background: The validity of measures used in follow-up studies to estimate the magnitude of the HIV-STD association is the focus of this paper. A recent simulation study by Boily et al. [1], based on a model of HIV and STD transmission, showed that the relative risk (RR), estimated by the hazard rate ratio (HRR) obtained from the Cox model, had poor validity in either the absence or the presence of a real association between HIV and STD. The HRR tends to underestimate the true magnitude of a non-null association. These results were obtained from simulated follow-up studies in which HIV was tested periodically every three months and the STD every month. Aims and Methods: This paper extends the above results by investigating the impact of different periodic testing intervals on the validity of HRR estimates. Issues regarding the definition of exposure to STDs in this context are explored. A stochastic model for the transmission of HIV and other STDs is used to simulate follow-up studies with different periodic testing intervals. HRR estimates obtained with the Cox model with a time-dependent STD exposure covariate are compared to the true magnitude of the HIV-STD association. In addition, real data are reanalysed using the STD exposure definition described in this paper; the data from Laga et al. [2] are used for this purpose. Results: (1) Simulated data: independently of the magnitude of the true association, we observed a greater reduction in bias when increasing the frequency of HIV testing than when increasing that of STD testing. (2) Real data: the STD exposure definition can create substantial differences in the estimation of the HIV-STD association. Laga et al. [2] found an HRR of 2.5 (1.1-6.4) for the association between HIV and genital ulcer disease, compared to an estimate of 3.5 (1.5-8.3) with our improved definition of exposure. Conclusions: The results on the simulated data have an important impact on the design of field studies, for instance when choosing between a design in which both HIV and STD are screened every 3 months and one in which HIV is screened every 3 months and the STD monthly. The latter design is more expensive and involves more complicated logistics, and this additional cost may not be justified given the relatively small gain in validity and variability.
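
The analysis form at the centre of this paper is a Cox model with a time-dependent STD exposure covariate fitted to interval ("counting process") data, the layout that periodic testing naturally produces. The sketch below uses a tiny fabricated data set and the lifelines CoxTimeVaryingFitter as one reasonable tooling choice; neither the column names nor the software are the authors'.

```python
# Minimal sketch: Cox model with a time-dependent STD exposure covariate on
# start/stop interval data (toy, fabricated values for illustration only).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per testing interval per subject; std_exposed may switch over time,
# event flags HIV seroconversion detected at the end of the interval.
long_df = pd.DataFrame({
    "id":          [1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "start":       [0, 3, 6, 0, 3, 0, 3, 0, 3, 0, 3],
    "stop":        [3, 6, 9, 3, 6, 3, 6, 3, 6, 3, 6],
    "std_exposed": [0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1],
    "event":       [0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # HRR for the time-dependent STD exposure
```

Shortening the STD testing interval refines when `std_exposed` switches on within a subject's intervals, which is exactly the exposure-definition issue the paper examines.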


2018, Vol. 43(1), pp. 3-17
Author(s): Katherine G. Jonas, Kristian E. Markon

Responses to survey data are determined not only by item characteristics and respondents’ trait standings but also by response styles. Recently, methods for modeling response style with personality and attitudinal data have turned toward the use of anchoring vignettes, which provide fixed rating targets. Although existing research is promising, a few outstanding questions remain. First, it is not known how many vignettes and vignette ratings are necessary to identify response style parameters. Second, the comparative accuracy of these models is largely unexplored. Third, it remains unclear whether correcting for response style improves criterion validity. Both simulated data and data observed from a population-representative sample responding to a measure of personality pathology (the Personality Inventory for DSM-5 [PID-5]) were modeled using an array of response style models. In simulations, most models estimating response styles outperformed the graded response model (GRM), and in observed data, all response style models were superior to the GRM. Correcting for response style had a small, but in some cases significant, effect on the prediction of self-reported social dysfunction.
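
The distortion being modelled can be illustrated with a graded response model in which response style is represented, for the sake of a sketch, as a person-specific compression of the category thresholds; this is a hypothetical illustration, not one of the specific vignette-based models compared in the paper.

```python
# Minimal sketch: same trait level, different response styles, different category
# probabilities under a graded response model (parameters are hypothetical).
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities of one graded-response item."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds, float))))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]

a, base_thresholds = 1.5, np.array([-1.5, -0.5, 0.5, 1.5])   # 5 response categories
theta = 0.3                                                   # same trait standing

neutral = grm_category_probs(theta, a, base_thresholds)
extreme = grm_category_probs(theta, a, base_thresholds * 0.5)  # compressed thresholds
                                                               # -> more extreme responding
print("neutral style:", np.round(neutral, 2))
print("extreme style:", np.round(extreme, 2))
```

Anchoring vignettes supply fixed rating targets, so person-specific threshold behaviour like this can be separated from genuine trait differences.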


Methodology, 2016, Vol. 12(3), pp. 89-96
Author(s): Tyler Hamby, Robert A. Peterson

Abstract. Using two meta-analytic datasets, we investigated the effect that two scale-item characteristics – number of item response categories and item response-category label format – have on the reliability of multi-item rating scales. The first dataset contained 289 reliability coefficients harvested from 100 samples that measured Big Five traits. The second dataset contained 2,524 reliability coefficients harvested from 381 samples that measured a wide variety of constructs in psychology, marketing, management, and education. We performed moderator analyses on the two datasets with the two item characteristics and their interaction. As expected, as the number of item response categories increased, so did reliability, but more importantly, there was a significant interaction between the number of item response categories and item response-category label format. Increasing the number of response categories increased reliabilities for scale-items with all response categories labeled more so than for other item response-category label formats. We explain that the interaction may be due to both statistical and psychological factors. The present results help to explain why findings on the relationships between the two scale-item characteristics and reliability have been mixed.
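
The moderator analysis described here amounts to a weighted regression of reliability coefficients on the two item characteristics and their interaction. A minimal sketch on fabricated data follows; the statsmodels formula interface is one reasonable tooling choice and is not necessarily what the authors used.

```python
# Minimal sketch: sample-size-weighted moderator analysis of reliability coefficients
# with a number-of-categories x label-format interaction (fabricated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
k = 120                                              # number of reliability coefficients
df = pd.DataFrame({
    "n_categories": rng.choice([3, 5, 7, 9, 11], k),
    "all_labeled":  rng.integers(0, 2, k),           # 1 = every category labelled
    "sample_n":     rng.integers(80, 600, k),
})
# Fabricated reliabilities with a small built-in interaction, just to exercise the model
df["reliability"] = (0.70 + 0.01 * df["n_categories"]
                     + 0.015 * df["n_categories"] * df["all_labeled"]
                     + rng.normal(0, 0.03, k)).clip(0, 0.99)

fit = smf.wls("reliability ~ n_categories * all_labeled",
              data=df, weights=df["sample_n"]).fit()
print(fit.params.round(4))
```

A positive interaction coefficient corresponds to the finding that adding response categories raises reliability most when all categories are labelled.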

