Applications of operant demand to treatment selection II: Covariance of evidence strength and treatment consumption

Author(s):  
Shawn P. Gilroy ◽  
Cassie C. Feck

2014 ◽  
Vol 74 (S 01) ◽  
Author(s):  
F Arnold ◽  
D Margraf ◽  
O Hoffmann ◽  
K von Dehn-Rotfelser ◽  
I Funke ◽  
...  

2021 ◽  
Vol 2 ◽  
pp. 263348952199419
Author(s):  
Cara C Lewis ◽  
Kayne Mettert ◽  
Aaron R Lyon

Background: Despite their inclusion in Rogers’ seminal diffusion of innovations theory, few implementation studies empirically evaluate the role of intervention characteristics. Now, with growing evidence on the role of adaptation in implementation, high-quality measures of characteristics such as adaptability, trialability, and complexity are needed. Only two systematic reviews of implementation measures captured those related to the intervention or innovation, and their assessment of psychometric properties was limited. This manuscript reports on the results of eight systematic reviews of measures of intervention characteristics, with nuanced data regarding a broad range of psychometric properties. Methods: The systematic review proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches. Phase II, data extraction, involved coding psychometric information. Phase III, data analysis, involved two trained specialists independently rating each measure using PAPERS (Psychometric And Pragmatic Evidence Rating Scales). Results: Searches identified 16 measures or scales: zero for intervention source, one for evidence strength and quality, nine for relative advantage, five for adaptability, six for trialability, nine for complexity, and two for design quality and packaging. Information about internal consistency and norms was available for most measures, whereas information about other psychometric properties was most often not available. Ratings for psychometric properties ranged from “poor” to “good.” Conclusion: The results of this review confirm that few implementation scholars are examining the role of intervention characteristics in behavioral health studies. Significant work is needed both to develop new measures (e.g., for intervention source) and to build psychometric evidence for existing measures in this forgotten domain.
Plain Language Summary: Intervention characteristics have long been perceived as critical factors that directly influence the rate at which an innovation is adopted. The extent to which intervention characteristics (relative advantage, complexity, trialability, intervention source, design quality and packaging, evidence strength and quality, adaptability, and cost) impact the implementation of evidence-based practices in behavioral health settings remains unclear. To unpack the differential influence of these factors, high-quality measures are needed. Systematic reviews can identify measures and synthesize the data regarding their quality to identify gaps in the field and inform measure development and testing efforts. Two previous reviews identified measures of intervention characteristics, but they did not provide information about the extent of the existing evidence, nor did they evaluate the breadth of evidence available for identified measures. This manuscript summarizes the results of nine systematic reviews (i.e., one for each of the factors listed above) for which 16 unique measures or scales were identified. The nuanced findings will help direct measure-development work in this forgotten domain.


2021 ◽  
pp. 096228022110028
Author(s):  
Yun Li ◽  
Irina Bondarenko ◽  
Michael R Elliott ◽  
Timothy P Hofer ◽  
Jeremy MG Taylor

With medical tests becoming increasingly available, concerns about over-testing, over-treatment, and health care costs have grown dramatically. It is therefore important to understand the influence of testing on treatment selection in general practice. Most statistical methods focus on the average effects of testing on treatment decisions. However, this focus may be ill-advised, particularly for patient subgroups that tend not to benefit from such tests. Furthermore, missing data are common, representing a large and often unaddressed threat to the validity of most statistical methods. Finally, it is often desirable to conduct analyses that can be interpreted causally. Using the Rubin Causal Model framework, we propose to classify patients into four potential-outcomes subgroups, defined by whether a patient’s treatment selection is changed by the test result and, if so, by the direction of that change. This subgroup classification naturally captures the differential influence of medical testing on treatment selection across patients, which can suggest targets for improving the utilization of medical tests. We can then examine patient characteristics associated with subgroup membership. We used multiple imputation methods to simultaneously impute the missing potential outcomes as well as ordinary missing values. This approach can also provide estimates of many traditional causal quantities of interest. We find that explicitly incorporating causal inference assumptions into the multiple imputation process can improve the precision of some causal estimates of interest. We also find that bias can occur when the potential-outcomes conditional independence assumption is violated; sensitivity analyses are proposed to assess the impact of this violation.
We applied the proposed methods to examine the influence of the 21-gene assay, the most commonly used genomic test in the United States, on chemotherapy selection among breast cancer patients.
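The four-subgroup classification described above can be sketched in a few lines. The sketch below assumes binary treatment decisions and uses illustrative subgroup labels; in the actual analysis, a patient's two potential treatment selections (with and without the test result) are never both observed, which is why the authors multiply impute the missing potential outcome rather than compute it directly.

```python
# Minimal sketch of the potential-outcomes subgroup classification.
# For each patient, consider two potential treatment selections: the one
# made without the test result and the one made with it. All names here
# are illustrative, not taken from the article.

def classify_subgroup(t_no_test: int, t_with_test: int) -> str:
    """Return the subgroup for one patient.

    t_no_test   -- treatment chosen without the test result (0 = no, 1 = yes)
    t_with_test -- treatment chosen given the test result   (0 = no, 1 = yes)
    """
    if t_no_test == t_with_test:
        # The test result does not change the decision.
        if t_with_test == 1:
            return "never-changed (treated)"
        return "never-changed (untreated)"
    # The test result changes the decision, in one of two directions.
    if t_no_test == 1:
        return "test-averts-treatment"
    return "test-prompts-treatment"

# Example: four hypothetical patients, one per subgroup.
patients = [(1, 0), (0, 1), (1, 1), (0, 0)]
groups = [classify_subgroup(a, b) for a, b in patients]
```

In practice only one of the two arguments is observed per patient; the unobserved one is a missing potential outcome, imputed jointly with ordinary missing values.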


Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary: Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we address both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
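The two-model idea, combining a sparse outcome model with a sparse model for treatment selection, can be illustrated with a simplified doubly robust (AIPW) estimator of an average treatment effect on simulated data. This is only a sketch: the article targets conditional effects with uniformly valid intervals via a more delicate bias-corrected construction, and all data-generating values, penalty levels, and the use of scikit-learn below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

# Simulated data (all values illustrative): treatment and outcome both
# depend on the first covariate; the true treatment effect is 2.0.
rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))   # treatment selection mechanism
a = rng.binomial(1, propensity)                # binary treatment indicator
y = 2.0 * a + X[:, 0] + rng.normal(size=n)

# Model 1: sparse outcome regressions, fit separately within each arm.
mu1 = Lasso(alpha=0.05).fit(X[a == 1], y[a == 1]).predict(X)
mu0 = Lasso(alpha=0.05).fit(X[a == 0], y[a == 0]).predict(X)

# Model 2: sparse (l1-penalised) model for treatment selection.
fit = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, a)
ps = fit.predict_proba(X)[:, 1]

# AIPW values: the estimator is consistent if either model is correct.
phi = mu1 - mu0 + a * (y - mu1) / ps - (1 - a) * (y - mu0) / (1 - ps)
ate = phi.mean()
se = phi.std(ddof=1) / np.sqrt(n)
ci = (ate - 1.96 * se, ate + 1.96 * se)
```

The propensity-score term corrects the regularization bias of the lasso outcome fits, which is the same mechanism by which the article's procedure buys robustness to outcome-model misspecification.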

