Measuring Recurrent Event Features in Univariate Data

Author(s):  
Don Harding ◽  
Adrian Pagan

This chapter begins with a discussion of why we would expect to find that the time spent in expansions (bull markets, etc.) would be much greater than the time spent in contractions (bear markets, etc.). By focusing on the probabilities of getting particular outcomes for the binary variables summarizing the recurrent events, we can provide an explanation of this long-observed feature. The remainder of the chapter looks at many proposals for summarizing other features of the recurrent events. These involve well-known quantities such as durations and amplitudes, as well as lesser-known ones, such as the sharpness of peaks and troughs.
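The durations and amplitudes mentioned above can be computed directly from the binary phase indicator together with the level of the series. A minimal Python sketch, assuming a hypothetical 0/1 coding of contractions/expansions and a simplified amplitude convention (change in the level from the first to the last period of each phase):

```python
import numpy as np

def phase_features(state, level):
    """Summarize recurrent-event features from a binary phase indicator.

    state : 1-D array of 0/1, 1 = expansion, 0 = contraction (hypothetical coding)
    level : 1-D array of the series level (e.g. log output or a log price index)

    Returns a list of (phase, duration, amplitude) tuples, where duration is the
    number of periods spent in the phase and amplitude is the change in `level`
    from the start to the end of the phase.
    """
    state = np.asarray(state)
    level = np.asarray(level)
    features = []
    start = 0
    for t in range(1, len(state) + 1):
        # a phase ends when the state switches or the sample ends
        if t == len(state) or state[t] != state[start]:
            phase = "expansion" if state[start] == 1 else "contraction"
            features.append((phase, t - start, level[t - 1] - level[start]))
            start = t
    return features

# toy series: long expansions, short contractions
state = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0]
level = np.cumsum(np.where(np.array(state) == 1, 0.01, -0.03))
print(phase_features(state, level))
```

The toy series illustrates the asymmetry discussed above: expansion spells are long and contraction spells short, which the duration column makes immediately visible.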

2020 ◽  
Vol 91 (4) ◽  
pp. 352-357
Author(s):  
Jessica Tedford ◽  
Valerie Skaggs ◽  
Ann Norris ◽  
Farhad Sahiar ◽  
Charles Mathers

INTRODUCTION: Atrial fibrillation (AF) is one of the most common cardiac arrhythmias in the general population and is considered disqualifying aeromedically. This study is a unique examination of significant outcomes in aviators with a previous history of both AF and stroke. METHODS: Pilots examined by the FAA between 2002 and 2012 who had had AF at some point in their medical history were reviewed, and those with an initial stroke or transient ischemic attack (TIA) during that time period were included in this study. All records were individually reviewed to determine stroke and AF history, medical certification history, and recurrent events. Variables collected included medical and behavioral history, stroke type, gender, BMI, medication use, and any cardiovascular or neurological outcomes of interest. Major recurrent events included stroke, TIA, cerebrovascular accident, death, or other major events. These factors were used to calculate CHA2DS2-VASc scores. RESULTS: Of the 141 pilots selected for the study, 17.7% experienced a recurrent event. At 6 mo, the recurrent event rate was 5.0%; at 1 yr, 5.8%; at 3 yr, 6.9%; and at 5 yr, 17.3%. No statistically significant difference in the number of recurrent events was found across CHA2DS2-VASc scores. DISCUSSION: We found no significant factors predicting the risk of a recurrent event, and recurrence rates were lower in pilots than in the general population. This suggests that CHA2DS2-VASc scores are not appropriate risk stratification tools in an aviation population, and more research is necessary to determine the risk of recurrent events in aviators with atrial fibrillation. Tedford J, Skaggs V, Norris A, Sahiar F, Mathers C. Recurrent stroke risk in pilots with atrial fibrillation. Aerosp Med Hum Perform. 2020; 91(4):352–357.
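For context, the CHA2DS2-VASc score referenced above is a simple additive index; here is a minimal Python sketch of the standard scoring rules (the function name and argument names are mine, not from the study):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """Standard CHA2DS2-VASc additive score (range 0-9)."""
    score = 0
    score += 1 if chf else 0                               # C: congestive heart failure
    score += 1 if hypertension else 0                      # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # A2 / A: age bands
    score += 1 if diabetes else 0                          # D: diabetes mellitus
    score += 2 if prior_stroke_tia else 0                  # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                  # V: vascular disease
    score += 1 if female else 0                            # Sc: sex category (female)
    return score

# example: a 68-year-old male pilot with hypertension and a prior TIA -> score 4
print(cha2ds2_vasc(age=68, female=False, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia=True, vascular_disease=False))
```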


Stroke ◽  
2012 ◽  
Vol 43 (suppl_1) ◽  
Author(s):  
LAURA EVENSEN ◽  
Nan Liu ◽  
Yijun Wang ◽  
Bernadette Boden-Albala

Objective: To describe the relationship between sleep problems at baseline, measured by the Medical Outcomes Study (MOS) Sleep Scale, and the likelihood of having a recurrent event, including vascular death, in ischemic stroke and TIA (IS/TIA) patients. Background: Among IS/TIA patients, there is an increased risk of recurrent vascular events, including stroke, MI, and vascular death. While history of stroke is a major predictor of recurrent events, there may be unidentified factors in play. Sleep quality may predict recurrent vascular events, but little is known about the relationship between sleep and recurrent events in IS/TIA patients. Methods: The Stroke Warning Information and Faster Treatment (SWIFT) Study is an NINDS SPOTRIAS-funded randomized trial to study the effect of culturally appropriate, interactive education on stroke knowledge and time to arrival after IS/TIA. Sleep problems and recurrent event information were collected among consenting IS/TIA patients. Cox proportional hazards models were used to describe relationships between sleep and recurrent vascular events in IS/TIA patients. The MOS, a 12-item sleep assessment, measures 6 dimensions of sleep: initiation, maintenance, quantity, adequacy, somnolence, and respiratory impairment. Results: Over 5 years, the SWIFT study cohort of 1198 patients [77% IS; 23% TIA] was prospectively enrolled. This cohort was 50% female; 50% Hispanic, 31% White, and 18% Black, with a mean NIHSS of 3.2 [SD ±3.8]. At baseline, 750 subjects completed the MOS scale. In a multivariate analysis, after adjusting for demographics and vascular risk factors (gender, age, race/ethnicity, NIHSS, stroke history, qualifying event type, hypertension, diabetes, smoking, and family stroke history), longer sleep initiation was associated with the combined outcome of IS/TIA, MI, and vascular death [p=0.1, HR=1.09]. Significant predictors of vascular death included: trouble falling asleep (initiation) [p=0.05, HR=1.15]; not ‘getting enough sleep to feel rested’ and not ‘getting the amount of sleep you need’ (adequacy) [p=0.06, HR=1.18 and p=0.03, HR=1.18, respectively]; shortness of breath or headache upon waking (respiratory impairment) [p=0.003, HR=1.33]; restless sleep [p=0.07, HR=1.15] and waking at night with trouble resuming sleep [p=0.004, HR=1.23] (maintenance); daytime drowsiness [p=0.05, HR=1.18] and trouble staying awake [p=0.01, HR=1.25] (somnolence); and taking naps (quantity) [p=0.03, HR=1.22]. Conclusions: Sleep problems represent diverse, modifiable risk factors for secondary vascular events, particularly vascular death. Exploring sleep dimensions may yield crucial information for the reduction of secondary vascular events in IS/TIA patients. Further investigation is needed to fully understand the effects of sleep on secondary vascular event incidence.
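As a rough illustration of the modeling step described above, a Cox proportional hazards fit on a one-row-per-patient data set can be sketched as follows. This is not the SWIFT analysis code; the synthetic data, column names, and the choice of the lifelines library are all assumptions:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# tiny synthetic stand-in: follow-up in days, event = 1 if a recurrent vascular
# event occurred, plus a hypothetical MOS-style sleep-initiation item score
rng = np.random.default_rng(0)
n = 750
df = pd.DataFrame({
    "followup_days": rng.exponential(900, size=n).round() + 1,
    "event": rng.integers(0, 2, size=n),
    "sleep_initiation": rng.integers(1, 6, size=n),
    "age": rng.integers(40, 90, size=n),
    "nihss": rng.integers(0, 15, size=n),
    "hypertension": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="event")
cph.print_summary()   # hazard ratios and p-values for each covariate
```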


2015 ◽  
Vol 26 (4) ◽  
pp. 1969-1981 ◽  
Author(s):  
Jing Ning ◽  
Mohammad H Rahbar ◽  
Sangbum Choi ◽  
Jin Piao ◽  
Chuan Hong ◽  
...  

In comparative effectiveness studies of multicomponent, sequential interventions, such as blood product transfusion (plasma, platelets, red blood cells) for trauma and critical care patients, the timing and dynamics of treatment relative to the fragility of a patient’s condition are often overlooked and underappreciated. While many hospitals have established massive transfusion protocols to ensure that physiologically optimal combinations of blood products are rapidly available, the period of time required to achieve a specified massive transfusion standard (e.g., a 1:1 or 1:2 ratio of plasma or platelets to red blood cells) has been ignored. To account for the time-varying characteristics of transfusions, we use semiparametric rate models for multivariate recurrent events to estimate blood product ratios. We use latent variables to account for multiple sources of informative censoring (early surgical or endovascular hemorrhage control procedures, or death). The major advantage is that the distributions of the latent variables and the dependence structure between the multivariate recurrent events and informative censoring need not be specified. Thus, our approach is robust to complex model assumptions. We establish asymptotic properties, evaluate finite-sample performance through simulations, and apply the method to data from the PRospective Observational Multicenter Major Trauma Transfusion study.
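To make the time-varying ratio idea concrete, here is a small pandas sketch (with made-up data and hypothetical column names, not the authors' estimator) that tracks the cumulative plasma-to-red-blood-cell ratio for each patient as transfusion events accumulate:

```python
import pandas as pd

# one row per transfusion event: patient id, minutes since admission, product type
events = pd.DataFrame({
    "patient": [1, 1, 1, 1, 2, 2, 2],
    "minutes": [10, 25, 40, 55, 15, 30, 90],
    "product": ["rbc", "rbc", "plasma", "rbc", "rbc", "plasma", "plasma"],
})

events = events.sort_values(["patient", "minutes"])
events["plasma_units"] = (events["product"] == "plasma").astype(int) \
    .groupby(events["patient"]).cumsum()
events["rbc_units"] = (events["product"] == "rbc").astype(int) \
    .groupby(events["patient"]).cumsum()

# running plasma:RBC ratio after each recurrent transfusion event
# (clip avoids division by zero before the first RBC unit -- a simplification)
events["plasma_rbc_ratio"] = events["plasma_units"] / events["rbc_units"].clip(lower=1)
print(events)
```

The running ratio is exactly the kind of time-varying quantity whose trajectory, rather than a single end-of-resuscitation value, motivates the recurrent-event formulation described above.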


2017 ◽  
Author(s):  
Guanglei Yu

Recurrent event data and panel count data are two common types of data that have been studied extensively in the event history literature. By recurrent event data, we mean that subjects are observed continuously during follow-up, so the occurrence times of the recurrent events of interest are available. For panel count data, subjects are monitored periodically at discrete observation times, so only the numbers of recurrent events between successive observations are recorded. In addition, one may face mixed panel count data in practice, which are a mixture of recurrent event data and panel count data. They arise when each study subject may be observed continuously during the whole study period, continuously over some study periods and at discrete time points otherwise, or only at some discrete time points. That is, these mixed data provide complete or incomplete information on the recurrent event process over different time periods for different subjects. It is well known that in panel count data the observation process may carry information on the underlying recurrent event process, and the censoring may also be dependent in practice. Under such circumstances, the first part of this dissertation discusses regression analysis of panel count data with informative observations and dropouts. For this problem, a general mean model is presented that allows both additive and multiplicative effects of covariates on the underlying recurrent event process. In addition, the proportional rates model and the accelerated failure time model are employed to describe the covariate effects on the observation process and the dropout or follow-up process, respectively. For estimation of the regression parameters, estimating equation-based procedures are developed and the asymptotic properties of the proposed estimators are established. In addition, a resampling approach is proposed for estimating the covariance matrix of the proposed estimator, and a model checking procedure is also provided. The results from an extensive simulation study indicate that the proposed methodology works well in practical situations, and it is applied to a motivating set of real data from the Childhood Cancer Survivor Study (CCSS) given in Section 1.1.2.2. The second part of this dissertation considers regression analysis of mixed panel count data. One major problem in statistical inference on the mixed data is how to combine these two different data structures. Since panel count data can be viewed as interval-censored recurrent event data in which the exact occurrence times of the events of interest are unobserved or missing, they may be augmented by imputing those missing times. The mixed data can then be converted to recurrent event data, to which existing statistical inference methods can be readily applied. Motivated by this, a multiple imputation-based estimation approach is proposed. A simulation study is conducted to examine the finite-sample properties of the proposed methodology and shows that the proposed method is more efficient than the existing method. Also, an illustrative example from the CCSS is provided. The third part of this dissertation again considers regression analysis of mixed panel count data, but in the presence of a dependent terminal event, which precludes further occurrence of either the recurrent events of interest or the observations.
For this problem, we present a marginal modeling approach that acknowledges that no further recurrent events can occur after the terminal event and leaves the correlation structure unspecified. To estimate the parameters of interest, an estimating equation-based procedure is developed and the inverse probability of survival weighting technique is used. Asymptotic properties of the proposed estimators are also established, and finite-sample properties are assessed in a simulation study. We again apply the proposed methodology to the CCSS. In the last part of this dissertation, we discuss some directions for future research.
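A minimal sketch of the imputation idea sketched above (my own simplification, not the dissertation's procedure): given a panel count over an observation interval, draw that many event times uniformly within the interval, producing pseudo recurrent-event data that standard recurrent-event methods can then be applied to, repeated over several imputations.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_event_times(intervals, counts, n_imputations=5):
    """Convert one subject's panel counts to imputed recurrent-event times.

    intervals : list of (start, end) observation windows
    counts    : number of events recorded in each window
    Returns a list of n_imputations sorted arrays of pseudo event times.
    """
    imputations = []
    for _ in range(n_imputations):
        times = []
        for (start, end), k in zip(intervals, counts):
            # place the k unobserved event times uniformly within the window
            times.extend(rng.uniform(start, end, size=k))
        imputations.append(np.sort(times))
    return imputations

# subject observed over (0, 2], (2, 5], (5, 6] with 1, 3, 0 events recorded
print(impute_event_times([(0, 2), (2, 5), (5, 6)], [1, 3, 0]))
```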


2017 ◽  
Vol 43 (7) ◽  
pp. 828-838 ◽  
Author(s):  
Marius Popescu ◽  
Zhaojin Xu

Purpose: The purpose of this paper is to explore the motivation behind mutual funds’ risk shifting behavior by examining its impact on fund performance, while jointly considering fund managers’ compensation incentives and career concerns. Design/methodology/approach: The study uses a sample of US actively managed equity funds over the period 1980-2010. A fund’s risk shifting is estimated as the difference between the fund’s intended portfolio risk in the second half of the year and the realized portfolio risk in the first half of the year. Using the state of the market to identify the dominating type of incentive that fund managers face, we examine the relationship between performance and risk shifting in a cross-sectional regression setting, using the Fama and MacBeth (1973) methodology. Findings: The authors find that poorly performing (well performing) funds are likely to increase (decrease) their risk level in bull markets, while reducing (increasing) it during bear markets. Furthermore, we find that funds that increase risk underperform, while those that decrease their portfolio risk do not. In addition, we find that poorly performing funds that increase (or decrease) their risk underperform across bull and bear markets, while well performing funds that reduce risk during bull markets subsequently outperform. Originality/value: The paper contributes to the literature on mutual fund risk shifting by providing evidence that the performance consequence of such behavior is dependent on the state of the market and on the funds’ past performance. The results suggest that loser funds tend to be agency prone or be managed by managers with inferior investment skill, and that winner funds exhibit superior investment ability during bull markets. The authors argue that both the agency and investment ability hypotheses are driving fund managers’ risk shifting behavior.
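The Fama and MacBeth (1973) step mentioned above can be sketched in a few lines (a simplification with toy data and hypothetical column names, not the paper's code): run one cross-sectional OLS regression of subsequent fund performance on the risk-shifting measure per period, then average the coefficients over time and use their time-series variation for inference.

```python
import numpy as np
import pandas as pd

def fama_macbeth(panel, y_col, x_cols):
    """Average cross-sectional OLS slopes over time (Fama-MacBeth style)."""
    coefs = []
    for _, cs in panel.groupby("period"):
        X = np.column_stack([np.ones(len(cs)), cs[x_cols].to_numpy()])
        beta, *_ = np.linalg.lstsq(X, cs[y_col].to_numpy(), rcond=None)  # per-period OLS
        coefs.append(beta)
    coefs = np.array(coefs)
    se = coefs.std(axis=0, ddof=1) / np.sqrt(len(coefs))
    return pd.DataFrame({"coef": coefs.mean(axis=0), "t": coefs.mean(axis=0) / se},
                        index=["const"] + x_cols)

# toy panel: fund-period observations with a risk-shifting measure and next-period return
rng = np.random.default_rng(2)
panel = pd.DataFrame({
    "period": np.repeat(np.arange(40), 100),    # 40 half-year periods, 100 funds each
    "risk_shift": rng.normal(size=4000),
    "next_return": rng.normal(size=4000),
})
print(fama_macbeth(panel, "next_return", ["risk_shift"]))
```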


Circulation ◽  
2018 ◽  
Vol 138 (6) ◽  
pp. 570-577 ◽  
Author(s):  
Brian Claggett ◽  
Stuart Pocock ◽  
L.J. Wei ◽  
Marc A. Pfeffer ◽  
John J.V. McMurray ◽  
...  

Background: Most phase-3 trials feature time-to-first-event end points for their primary and secondary analyses. In chronic diseases, where a clinical event can occur >1 time, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared with conventional time-to-first methods. Methods: To better characterize factors that influence the statistical properties of recurrent-event and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active versus control therapy, with a true patient-level risk reduction of 20% (ie, relative risk=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied the degree of between-patient heterogeneity of risk and the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk increased, both time-to-first and recurrent-event methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-event analyses continued to estimate the true relative risk (0.80) as heterogeneity increased, whereas the Cox model produced attenuated estimates. The power of recurrent-event methods declined as the rate of study drug discontinuation postevent increased. Recurrent-event methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% after a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials of chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that the increased statistical power from recurrent-event methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-event and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with the simulation studies, confirming that the greatest statistical gains from the use of recurrent-event methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
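A stripped-down version of the simulation logic described above could look like the following sketch. The specific choices here (gamma frailty for between-patient heterogeneity, Poisson event counts over a fixed follow-up, no treatment discontinuation) are my assumptions, not the trial-level details of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(n=4000, base_rate=0.5, rr=0.8, heterogeneity=1.0):
    """Simulate per-patient recurrent-event counts over one unit of follow-up.

    heterogeneity : variance of the gamma frailty (0 = homogeneous risk)
    Returns (event counts, treated indicator).
    """
    treated = rng.integers(0, 2, size=n)          # 1:1 randomization
    if heterogeneity > 0:
        frailty = rng.gamma(shape=1 / heterogeneity, scale=heterogeneity, size=n)
    else:
        frailty = np.ones(n)
    rate = base_rate * frailty * np.where(treated == 1, rr, 1.0)
    return rng.poisson(rate), treated

counts, treated = simulate_trial(heterogeneity=2.0)

# naive recurrent-event estimate of the rate ratio (all events)
rate_ratio = counts[treated == 1].mean() / counts[treated == 0].mean()
# naive time-to-first-event style estimate (any event vs none)
risk_ratio_first = (counts[treated == 1] > 0).mean() / (counts[treated == 0] > 0).mean()
print(rate_ratio, risk_ratio_first)
```

In this toy setup the all-events rate ratio stays close to the true 0.80 as heterogeneity grows, while the any-event contrast attenuates toward 1, mirroring the qualitative pattern reported above.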


1965 ◽  
Vol 61 (2) ◽  
pp. 519-526 ◽  
Author(s):  
D. J. Daley

Renewal processes in discrete time (or, as they are commonly termed, recurrent events) are appropriately described by renewal sequences {un} which are generated by discrete distributions {fn} according to the equation u0 = 1, un = f1un−1 + f2un−2 + … + fnu0 (n ≥ 1). Any two renewal sequences {u′n}, {u″n} define another renewal sequence {un} by means of their term-by-term product {un} = {u′nu″n}, for the joint occurrence of two independent recurrent events ℰ′ and ℰ″ is also a recurrent event. Considering a renewal process in continuous time for which we shall suppose a frequency function f(x) of the lifetime distribution exists, so that a renewal density h(x) exists, the analogous property would be that for two renewal density functions h1(x) and h2(x), the function h(x) = h1(x)h2(x) is a renewal density function. A little intuitive reflexion shows that while h(x) dx has a probability interpretation, this is not in general true of h1(x)h2(x) dx. It is not surprising therefore to find in Example 1 a case where the product of two renewal densities is not a renewal density. Example 2, on the other hand, shows that in some cases it is true, and taken together with Example 1, there is suggested the problem of characterizing the class of renewal densities h(x) for which αh(x) is a renewal density for all finite positive α, and not merely for α in 0 < α ≤ A < ∞. In turn this characterization enables us to define a class of renewal densities for which h1(x) and h2(x) belonging to the class imply that h1(x)h2(x) is again a renewal density.
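A small numerical illustration of the discrete-time statement (my own sketch, not from the paper): generate two renewal sequences from given lifetime distributions, form the term-by-term product, and invert the renewal equation to confirm that the product is again generated by a valid (nonnegative, sub-probability) lifetime distribution.

```python
import numpy as np

def renewal_sequence(f, n_terms):
    """u_0 = 1, u_n = sum_{j=1..n} f_j * u_{n-j}; f[j-1] holds f_j."""
    u = [1.0]
    for n in range(1, n_terms + 1):
        u.append(sum(f[j - 1] * u[n - j] for j in range(1, min(n, len(f)) + 1)))
    return np.array(u)

def lifetime_from_renewal(u):
    """Invert the renewal equation: f_n = u_n - sum_{j=1..n-1} f_j * u_{n-j}."""
    f = []
    for n in range(1, len(u)):
        f.append(u[n] - sum(f[j - 1] * u[n - j] for j in range(1, n)))
    return np.array(f)

f1 = [0.3, 0.7]    # lifetime distribution of the first recurrent event
f2 = [0.6, 0.4]    # lifetime distribution of the second
u_prod = renewal_sequence(f1, 12) * renewal_sequence(f2, 12)

f_prod = lifetime_from_renewal(u_prod)
# the implied f_n are nonnegative and their partial sum is at most 1, so the
# term-by-term product is itself a renewal sequence, as claimed above
print(np.all(f_prod >= -1e-12), f_prod.sum())
```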


2014 ◽  
Vol 631-632 ◽  
pp. 27-30
Author(s):  
Huan Bin Liu ◽  
Tong Yin ◽  
Cong Jun Rao

Recurrent event data refer to observations on individuals that contain the occurrence times of a recurrent event of interest. This paper mainly discusses a joint model in which the end (terminal) time follows a multiplicative hazards model and the recurrent event process follows a multiplicative intensity model. Based on the likelihood method, the delta method, the U-statistic method, and the idea of general estimating equations, estimation of the unknown parameters and unknown functions in the model is provided. This gives a new method of parameter estimation for the statistical analysis of recurrent event data.


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Gilma Hernández-Herrera ◽  
David Moriña ◽  
Albert Navarro

Background: When dealing with recurrent events in observational studies, it is common to include subjects who became at risk before follow-up. This phenomenon is known as left censoring, and simply ignoring these prior episodes can lead to biased and inefficient estimates. We aimed to propose a statistical method that performs well in this setting. Methods: Our proposal was based on the use of models with specific baseline hazards. In this approach, the number of prior episodes was imputed when unknown, with stratification according to whether the subject had been at risk of presenting the event before t = 0. A frailty term was also used. Two formulations of this “Specific Hazard Frailty Model Imputed” were used, based on the “counting process” and “gap time” timescales. Performance was then examined in different scenarios through a comprehensive simulation study. Results: The proposed method performed well even when the percentage of subjects at risk before follow-up was very high. Biases were often below 10% and coverages were around 95%, being somewhat conservative. The gap time approach performed better with constant baseline hazards, whereas the counting process approach performed better with non-constant baseline hazards. Conclusions: The use of common baseline methods is not advised when knowledge of the prior episodes experienced by a participant is lacking. The approach in this study performed acceptably in most scenarios in which it was evaluated and should be considered an alternative in this context. It has been made freely available to interested researchers as the R package miRecSurv.
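To illustrate the two timescales named above (a generic illustration of the counting process and gap time concepts, not of the miRecSurv implementation): for a subject with events at times 3, 7, and 12 and censoring at 15, the counting-process formulation keeps calendar-time intervals, while the gap-time formulation resets the clock after each event.

```python
import pandas as pd

def to_counting_process(event_times, censor_time):
    """(start, stop] rows on the calendar timescale, one row per at-risk interval."""
    bounds = [0] + list(event_times) + [censor_time]
    rows = []
    for i in range(len(bounds) - 1):
        rows.append({"start": bounds[i], "stop": bounds[i + 1],
                     "event": 1 if i < len(event_times) else 0,
                     "episode": i + 1})
    return pd.DataFrame(rows)

def to_gap_time(event_times, censor_time):
    """Same intervals expressed as time since the previous event (gap time)."""
    cp = to_counting_process(event_times, censor_time)
    cp["gap"] = cp["stop"] - cp["start"]
    return cp[["episode", "gap", "event"]]

print(to_counting_process([3, 7, 12], 15))
print(to_gap_time([3, 7, 12], 15))
```

The episode column is what an episode-stratified baseline hazard would stratify on; the choice between the two layouts is exactly the counting process versus gap time decision discussed in the abstract.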

