Heterogeneous individual risk modelling of recurrent events

Biometrika ◽  
2020 ◽  
Author(s):  
Huijuan Ma ◽  
Limin Peng ◽  
Chiung-Yu Huang ◽  
Haoda Fu

Summary: Progression of chronic disease is often manifested by repeated occurrences of disease-related events over time. Delineating the heterogeneity in the risk of such recurrent events can provide valuable scientific insight for guiding customized disease management. We propose a new sensible measure of individual risk of recurrent events and present a dynamic modelling framework thereof, which accounts for both observed covariates and unobservable frailty. The proposed modelling requires no distributional specification of the unobservable frailty, while permitting exploration of the dynamic effects of the observed covariates. We develop estimation and inference procedures for the proposed model through a novel adaptation of the principle of conditional score. The asymptotic properties of the proposed estimator, including uniform consistency and weak convergence, are established. Extensive simulation studies demonstrate satisfactory finite-sample performance of the proposed method. We illustrate the practical utility of the new method via an application to a diabetes clinical trial that explores the risk patterns of hypoglycemia in patients with type 2 diabetes.
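
To make the setting concrete, here is a minimal sketch (not the authors' conditional-score estimator) that simulates recurrent-event counts whose subject-specific rates combine an observed covariate with an unobserved multiplicative frailty; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters for illustration only.
n, tau = 500, 2.0                       # subjects, follow-up length
beta, lam0, frailty_var = 0.5, 1.2, 0.8

x = rng.binomial(1, 0.5, n)                      # observed covariate
z = rng.gamma(1 / frailty_var, frailty_var, n)   # unobserved frailty, mean 1

# Subject-specific intensity of a (conditionally) homogeneous Poisson process.
rate = z * lam0 * np.exp(beta * x)
counts = rng.poisson(rate * tau)                 # recurrent-event counts over [0, tau]

print("mean events, x=0:", counts[x == 0].mean())
print("mean events, x=1:", counts[x == 1].mean())
print("overdispersion (var/mean):", counts.var() / counts.mean())
```

The variance-to-mean ratio above 1 reflects the extra between-subject heterogeneity induced by the frailty, which is exactly what a covariate-only model would miss.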

2009 ◽  
Vol 25 (1) ◽  
pp. 117-161 ◽  
Author(s):  
Marcelo C. Medeiros ◽  
Alvaro Veiga

In this paper, a flexible multiple-regime GARCH(1,1)-type model is developed to describe the sign and size asymmetries and intermittent dynamics in financial volatility. The results of the paper also carry over to other nonlinear GARCH models. The proposed model nests some of the previous specifications found in the literature and has the following advantages. First, contrary to most previous models, more than two limiting regimes are possible, and the number of regimes is determined by a simple sequence of tests that circumvents the identification problems usually found in nonlinear time series models. The second advantage is that the novel stationarity restriction on the parameters is relatively weak, thereby allowing for rich dynamics. It is shown that the model may have explosive regimes but can still be strictly stationary and ergodic. A simulation experiment shows that the proposed model can generate series with high kurtosis and low first-order autocorrelation of the squared observations, and can exhibit the so-called Taylor effect even with Gaussian errors. Estimation of the parameters is addressed, and the asymptotic properties of the quasi-maximum likelihood estimator are derived under weak conditions. A Monte Carlo experiment is designed to evaluate the finite-sample properties of the sequence of tests. Empirical examples are also considered.
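
As an illustrative sketch only (a generic two-regime smooth-transition GARCH(1,1), not the paper's exact specification), the following simulation shows how regime mixing with asymmetric responses can produce high kurtosis and modest first-order autocorrelation of squared observations; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000

# Hypothetical two-regime smooth-transition GARCH(1,1): the regime weight
# depends on the sign/size of the lagged shock via a logistic function.
omega = (0.05, 0.10)
alpha = (0.04, 0.15)        # second regime reacts more strongly (size asymmetry)
beta  = (0.90, 0.80)
gamma, c = 5.0, 0.0         # transition slope and location

eps = np.zeros(T)
h = np.full(T, 0.2)
for t in range(1, T):
    w = 1.0 / (1.0 + np.exp(-gamma * (eps[t-1] - c)))   # regime weight in [0, 1]
    om = (1 - w) * omega[0] + w * omega[1]
    al = (1 - w) * alpha[0] + w * alpha[1]
    be = (1 - w) * beta[0]  + w * beta[1]
    h[t] = om + al * eps[t-1]**2 + be * h[t-1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

# Stylized facts: excess kurtosis, low first-order autocorrelation of eps^2.
k = ((eps - eps.mean())**4).mean() / eps.var()**2
s2 = eps**2
r1 = np.corrcoef(s2[:-1], s2[1:])[0, 1]
print(f"kurtosis={k:.2f}, acf1(eps^2)={r1:.3f}")
```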


2020 ◽  
Author(s):  
Huiling Yuan ◽  
Yong Zhou ◽  
Lu Xu ◽  
Yulei Sun ◽  
Xiangyu Cui

Volatility asymmetry is a hot topic in high-frequency financial markets. In this paper, we propose a new econometric model that describes volatility asymmetry based on both high-frequency and low-frequency historical data. After providing quasi-maximum likelihood estimators for the parameters, we establish their asymptotic properties. We also conduct a series of simulation studies to check the finite-sample and volatility-forecasting performance of the proposed methodologies. An empirical application demonstrates that the new model has stronger volatility prediction power than the GARCH-Itô model in the literature.
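
A hedged sketch of the general idea, not the paper's model: a GARCH-X-style recursion in which the daily conditional variance is updated with both the squared daily return (low-frequency) and the realized variance computed from intraday returns (high-frequency); all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
days, m = 500, 78           # trading days, intraday returns per day (e.g., 5-min bars)

# Simulate intraday returns with a slowly varying true daily variance.
true_var = 0.5 + 0.4 * np.sin(np.linspace(0, 8 * np.pi, days))**2
intraday = rng.standard_normal((days, m)) * np.sqrt(true_var[:, None] / m)

rv = (intraday**2).sum(axis=1)       # realized variance: the high-frequency input
daily_ret = intraday.sum(axis=1)     # low-frequency (daily) return

# Hypothetical GARCH-X-style update mixing squared daily returns and RV.
omega, alpha, beta, delta = 0.02, 0.05, 0.70, 0.20
h = np.empty(days)
h[0] = rv[0]
for t in range(1, days):
    h[t] = omega + alpha * daily_ret[t-1]**2 + beta * h[t-1] + delta * rv[t-1]

print("corr(forecast, true variance):", np.corrcoef(h, true_var)[0, 1].round(3))
```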


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Honglong You ◽  
Chuncun Yin

Consider a spectrally negative Lévy process with unknown diffusion coefficient and Lévy measure, and suppose that high-frequency trading data are given. We use the techniques of threshold estimation and regularized Laplace inversion to obtain an estimator of the survival probability for a spectrally negative Lévy process. The asymptotic properties of the proposed estimator are established, and simulation studies illustrate its finite-sample performance.
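
The threshold idea can be illustrated with a toy jump-diffusion stand-in: increments larger than a vanishing threshold are attributed to jumps and discarded, and the diffusion coefficient is estimated from the remaining "small" increments. This is a sketch under hypothetical parameters, not the authors' survival-probability estimator (which additionally uses regularized Laplace inversion).

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 10_000
dt = T / n

# Toy spectrally negative Lévy path: Brownian part plus compound Poisson
# negative jumps (a stand-in for the unknown Lévy measure).
sigma, jump_rate = 0.5, 20.0
dW = sigma * np.sqrt(dt) * rng.standard_normal(n)
n_jumps = rng.poisson(jump_rate * dt, n)
jumps = -np.array([rng.exponential(0.3, k).sum() for k in n_jumps])
dX = dW + jumps

# Threshold estimator of sigma^2: keep only increments small enough to be
# "diffusive"; a common threshold choice is c * dt**b with b in (0, 1/2).
thr = 3 * dt**0.49
sigma2_hat = np.sum(dX[np.abs(dX) <= thr]**2) / T
print(f"true sigma^2={sigma**2:.3f}, threshold estimate={sigma2_hat:.3f}")
```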


2017 ◽  
Vol 27 (10) ◽  
pp. 3092-3103 ◽  
Author(s):  
Jialiang Li ◽  
Qunqiang Feng ◽  
Jason P Fine ◽  
Michael J Pencina ◽  
Ben Van Calster

The polytomous discrimination index is a novel and important diagnostic accuracy measure for multi-category classification. After reconstructing its probabilistic definition, we propose a nonparametric approach to estimating the index from an empirical sample of biomarker values. We provide the finite-sample and asymptotic properties of the proposed estimators; these analytic results facilitate statistical inference. Simulation studies are performed to examine the performance of the nonparametric estimators, and two real data examples are analysed to illustrate the methodology.
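
A Monte Carlo sketch of one common formulation of the index (one subject is drawn from each class, and the subject from class j counts as "identified" if its class-j score is highest in the tuple); this is an assumed formulation for illustration, not necessarily the reconstruction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 3-class data: for subjects truly in class c, scores[c] holds a
# hypothetical model's predicted probability vectors over the 3 classes.
n_per, M = 200, 3
scores = []
for c in range(M):
    logits = rng.normal(2.0 * np.eye(M)[c], 1.0, size=(n_per, M))
    p = np.exp(logits)
    scores.append(p / p.sum(axis=1, keepdims=True))

def pdi_estimate(scores, n_draws=20_000):
    """Monte Carlo PDI: draw one subject per class; the subject from class j
    counts as identified if its class-j score is the largest in the tuple."""
    M = len(scores)
    correct = np.zeros(M)
    for _ in range(n_draws):
        tup = [s[rng.integers(len(s))] for s in scores]   # one subject per class
        for j in range(M):
            if np.argmax([t[j] for t in tup]) == j:
                correct[j] += 1
    return correct.mean() / n_draws

print("PDI estimate:", round(pdi_estimate(scores), 3), "(chance level 1/3)")
```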


2012 ◽  
Vol 461 ◽  
pp. 48-52
Author(s):  
Huan Bin Liu ◽  
Ying Ye

In this paper, the additive-multiplicative hazards model for gap-time data of recurrent events is investigated, and an estimating-equation approach is presented for inference about the regression parameters. Both asymptotic and finite-sample properties of the proposed parameter estimates are established.
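
For concreteness, one widely used additive-multiplicative form (in the spirit of Lin and Ying) adds an additive covariate term to a multiplicative baseline. The sketch below, with hypothetical parameters and a constant baseline hazard, simulates gap times from such a model rather than implementing the paper's estimating equations.

```python
import numpy as np

rng = np.random.default_rng(5)

# One common additive-multiplicative hazard form (hypothetical parameters):
#   lambda(t | Z) = Z_add * gamma + lambda0(t) * exp(Z_mult * beta)
gamma, beta, lam0 = 0.3, 0.6, 0.8   # lambda0(t) taken constant for simplicity

def gap_hazard(z_add, z_mult):
    return z_add * gamma + lam0 * np.exp(z_mult * beta)

# Simulate successive gap times for one subject: with a constant hazard,
# each gap time is exponential with rate lambda(t | Z).
z_add, z_mult = 1.0, 0.5
rate = gap_hazard(z_add, z_mult)
gaps = rng.exponential(1.0 / rate, size=5)
print("hazard:", round(rate, 3), "gap times:", np.round(gaps, 2))
```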


Author(s):  
Guanghao Qi ◽  
Nilanjan Chatterjee

Abstract

Background: Previous studies have often evaluated methods for Mendelian randomization (MR) analysis using simulations that do not adequately reflect the data-generating mechanisms in genome-wide association studies (GWAS), and there are often discrepancies between the performance of MR methods in simulations and in real data sets.

Methods: We use a simulation framework that generates data on full GWAS for two traits under a realistic model for effect-size distribution, coherent with the heritability, co-heritability and polygenicity typically observed for complex traits. We further use recent data from GWAS of 38 biomarkers in the UK Biobank and perform down-sampling to investigate trends in estimates of the causal effects of these biomarkers on the risk of type 2 diabetes (T2D).

Results: Simulation studies show that the weighted mode and MRMix are the only two methods that maintain the correct type I error rate across a diverse set of scenarios. Between the two, MRMix tends to be more powerful for larger GWAS, whereas the opposite is true for smaller sample sizes. Among the other methods, random-effect IVW (inverse-variance weighted method), MR-Robust and MR-RAPS (robust adjusted profile score) tend to perform best in maintaining a low mean-squared error when the InSIDE assumption is satisfied, but can produce large bias when InSIDE is violated. In the real-data analysis, some biomarkers showed major heterogeneity across methods in the estimates of their causal effects on T2D risk, and estimates from many methods trended in one direction with increasing sample size, with patterns similar to those observed in the simulation studies.

Conclusion: The relative performance of different MR methods depends heavily on the sample sizes of the underlying GWAS, the proportion of valid instruments and the validity of the InSIDE assumption. Down-sampling analysis can be used in large GWAS for the possible detection of bias in MR methods.
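
As a concrete reference point for one of the methods compared, here is a minimal sketch of the IVW estimator with a multiplicative random-effects standard error, computed from toy GWAS summary statistics; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy summary statistics for K independent instruments (SNPs):
# bx, by are SNP-exposure and SNP-outcome effect estimates with SEs.
K, true_causal = 50, 0.2
bx = rng.normal(0.1, 0.03, K)
se_y = np.full(K, 0.01)
by = true_causal * bx + rng.normal(0, se_y)

# Inverse-variance weighted (IVW) estimate: weighted regression of by on bx
# through the origin, with weights 1 / se_y^2.
w = 1.0 / se_y**2
beta_ivw = np.sum(w * bx * by) / np.sum(w * bx**2)
se_fixed = np.sqrt(1.0 / np.sum(w * bx**2))

# Multiplicative random-effects SE: inflate by residual dispersion if > 1.
resid = (by - beta_ivw * bx) * np.sqrt(w)
phi = max(1.0, np.sum(resid**2) / (K - 1))
print(f"IVW estimate={beta_ivw:.3f} (truth {true_causal}), "
      f"RE se={se_fixed * np.sqrt(phi):.4f}")
```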


Biometrika ◽  
2020 ◽  
Author(s):  
Zhenhua Lin ◽  
Jane-Ling Wang ◽  
Qixian Zhong

Summary: Estimation of mean and covariance functions is fundamental for functional data analysis. While this topic has been studied extensively in the literature, a key assumption is that there are enough data in the domain of interest to estimate both the mean and covariance functions. In this paper, we investigate mean and covariance estimation for functional snippets, in which observations from a subject are available only on an interval of length strictly (and often much) shorter than the length of the whole interval of interest. For such a sampling plan, no data are available for direct estimation of the off-diagonal region of the covariance function. We tackle this challenge via a basis representation of the covariance function. The proposed estimator enjoys a convergence rate that is adaptive to the smoothness of the underlying covariance function, and has superior finite-sample performance in simulation studies.
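
A toy sketch of the basis-representation idea (not the authors' estimator): raw covariances are observed only within a diagonal band, a symmetric polynomial basis is fitted on the band by least squares, and the fitted surface is evaluated off the band. The band width, basis, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

# True covariance on [0,1]^2 (smooth, for illustration): C(s,t) = exp(-|s-t|).
grid = np.linspace(0, 1, 21)
S, T = np.meshgrid(grid, grid, indexing="ij")
C_true = np.exp(-np.abs(S - T))

# Snippet design: raw covariances are observable only within a band |s-t| <= delta.
delta = 0.25
band = np.abs(S - T) <= delta
C_obs = C_true + rng.normal(0, 0.05, C_true.shape)   # noisy raw covariances

# Fit a symmetric tensor-product polynomial basis by least squares on the band,
# then evaluate everywhere: the basis representation extrapolates off-diagonal.
deg = 3
def design(s, t):
    cols = [s**i * t**j + s**j * t**i for i in range(deg + 1) for j in range(i + 1)]
    return np.column_stack(cols)

X_band = design(S[band], T[band])
coef, *_ = np.linalg.lstsq(X_band, C_obs[band], rcond=None)
C_hat = (design(S.ravel(), T.ravel()) @ coef).reshape(C_true.shape)
print("mean abs error off the band:", round(np.abs(C_hat - C_true)[~band].mean(), 3))
```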


2021 ◽  
pp. 1-47
Author(s):  
Qianqian Zhu ◽  
Guodong Li

Many financial time series have varying structures at different quantile levels and at the same time exhibit conditional heteroskedasticity. However, there is presently no time series model that accommodates both of these features. This paper fills the gap by proposing a novel conditional heteroskedastic model called the "quantile double autoregression". The strict stationarity of the new model is derived, and self-weighted conditional quantile estimation is suggested. Two promising properties of the original double autoregressive model are shown to be preserved. Based on the quantile autocorrelation function and the self-weighting concept, three portmanteau tests are constructed to check the adequacy of the fitted conditional quantiles. The finite-sample performance of the proposed inferential tools is examined in simulation studies, and the need for the new model is further demonstrated by analyzing the S&P 500 Index.
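
To illustrate the flavor of self-weighted conditional quantile estimation for a double-AR-type model, the following sketch fits one conditional quantile by minimizing a self-weighted check loss; the parameterization and weights are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(13)

# Simulate a double AR(1): y_t = phi*y_{t-1} + eta_t * sqrt(1 + alpha*y_{t-1}^2).
phi, alpha, T = 0.3, 0.4, 3000
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t-1] + rng.standard_normal() * np.sqrt(1 + alpha * y[t-1]**2)

def check_loss(u, tau):
    return u * (tau - (u < 0))

def swqr_objective(theta, tau):
    """Self-weighted check loss for a DAR-type conditional quantile model,
    Q_tau(y_t | y_{t-1}) = a*y_{t-1} + b*sqrt(1 + d*y_{t-1}^2)."""
    a, b, d = theta
    if d < 0:
        return np.inf
    q = a * y[:-1] + b * np.sqrt(1 + d * y[:-1]**2)
    w = 1.0 / (1.0 + y[:-1]**2)             # self-weights tame heavy tails
    return np.sum(w * check_loss(y[1:] - q, tau))

tau = 0.9
res = minimize(swqr_objective, x0=[0.1, 1.0, 0.1], args=(tau,),
               method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6})
# With N(0,1) errors, (a, b, d) should be near (0.3, z_0.9 = 1.28, 0.4).
print("tau=0.9 estimates (a, b, d):", np.round(res.x, 3))
```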


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1815
Author(s):  
Diego I. Gallardo ◽  
Mário de Castro ◽  
Héctor W. Gómez

A cure rate model under the competing risks setup is proposed. For the number of competing causes related to the occurrence of the event of interest, we posit the one-parameter Bell distribution, which accommodates overdispersed counts. The model is parameterized in the cure rate, which is linked to covariates. Parameter estimation is based on the maximum likelihood method. Estimates are computed via the EM algorithm. In order to compare different models, a selection criterion for non-nested models is implemented. Results from simulation studies indicate that the estimation method and the model selection criterion have a good performance. A dataset on melanoma is analyzed using the proposed model as well as some models from the literature.
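
For concreteness, the Bell distribution has pmf P(N = n) = θⁿ e^{-(e^θ - 1)} B_n / n! with Bell numbers B_n, so the cure rate is p₀ = P(N = 0) = exp(1 - e^θ) and θ can be recovered from a covariate-linked cure rate. The sketch below uses a logistic link and hypothetical covariates; it is an illustration of the parameterization, not the paper's EM estimation.

```python
import numpy as np
from math import exp, log, factorial

def bell_numbers(nmax):
    """Bell numbers B_0..B_nmax via the Bell triangle."""
    B, row = [1], [1]
    for _ in range(nmax):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
        B.append(row[0])
    return B

def bell_pmf(n, theta, B):
    # P(N = n) for the one-parameter Bell distribution.
    return theta**n * exp(-(exp(theta) - 1)) * B[n] / factorial(n)

# Cure-rate parameterization: p0 = P(N = 0) = exp(1 - e^theta), hence
# theta = log(1 - log p0); link the cure rate to covariates logistically.
x, b = np.array([1.0, 0.5]), np.array([-0.4, 0.8])   # hypothetical covariates/coefs
p0 = 1.0 / (1.0 + np.exp(-(x @ b)))                  # modelled cure rate
theta = log(1 - log(p0))

B = bell_numbers(20)
pmf = [bell_pmf(n, theta, B) for n in range(21)]
mean = sum(n * p for n, p in enumerate(pmf))
var = sum(n**2 * p for n, p in enumerate(pmf)) - mean**2
print(f"cure rate p0={p0:.3f}, pmf(0)={pmf[0]:.3f}  (should match)")
print(f"mean={mean:.3f}, var/mean={var/mean:.3f}  (>1: overdispersed)")
```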


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wilfredo Angulo ◽  
José M. Ramírez ◽  
Dany De Cecchis ◽  
Juan Primera ◽  
Henry Pacheco ◽  
...  

Abstract: COVID-19 is a highly infectious disease that emerged in China at the end of 2019. The COVID-19 pandemic is the first known pandemic caused by a coronavirus, namely the new and emerging SARS-CoV-2 coronavirus. In this work, we present simulations of the initial outbreak of this new coronavirus using an SEIR model with a modified transmission rate that takes into account the impact of government actions and the perception of risk by individuals in reaction to the proportion of fatal cases. The parameters related to these effects were fitted to the number of infected cases in the 33 provinces of China. The data for Hubei Province, the probable site of origin of the current pandemic, were considered as a particular case for the simulation and showed that the theoretical model reproduces the behavior of the data, indicating the importance of combining government actions and individual risk perception when the proportion of fatal cases is greater than 4%. The results show that the adjusted model reproduces the behavior of the data quite well for some provinces, suggesting that the spread of the disease differs when different actions are evaluated. The proposed model could help to predict outbreaks of viruses with a biological and molecular structure similar to that of SARS-CoV-2.
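
A minimal sketch of an SEIR system with a modified transmission rate in this spirit: β is scaled down by a government-action factor and by a risk-perception term driven by the observed proportion of fatal cases. The functional form and all parameter values are assumptions for illustration, not the fitted model.

```python
import numpy as np
from scipy.integrate import odeint

# SEIR with a modified transmission rate (all values hypothetical):
#   beta(t) = beta0 * (1 - a) * (1 - d)**kappa,
# where a is the strength of government action and d is the observed
# proportion of fatal cases among resolved cases (risk perception).
N = 1e7
beta0, inc, rec = 0.9, 1 / 5.2, 1 / 7   # transmission, incubation, removal rates
cfr, a, kappa = 0.05, 0.4, 10.0         # fatality proportion, action, perception

def seir(y, t):
    S, E, I, R, D = y
    resolved = R + D
    d = D / resolved if resolved > 0 else 0.0
    beta = beta0 * (1 - a) * (1 - d) ** kappa
    dS = -beta * S * I / N
    dE = beta * S * I / N - inc * E
    dI = inc * E - rec * I
    dR = (1 - cfr) * rec * I
    dD = cfr * rec * I
    return [dS, dE, dI, dR, dD]

t = np.linspace(0, 300, 301)
sol = odeint(seir, [N - 100, 50, 50, 0, 0], t)
print("peak infected:", int(sol[:, 2].max()), "cumulative deaths:", int(sol[-1, 4]))
```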

