Personalized treatment selection in routine care: Integrating machine learning and statistical algorithms to recommend cognitive behavioral or psychodynamic therapy

2020 · Vol 31(1) · pp. 33-51
Author(s): Brian Schwartz, Zachary D. Cohen, Julian A. Rubel, Dirk Zimmermann, Werner W. Wittmann, ...
2017
Author(s): Zachary Daniel Cohen, Thomas Kim, Henricus Van, Jack Dekker, Ellen Driessen

Objective: We applied a new variable selection procedure for treatment selection that generates treatment recommendations from pre-treatment characteristics for adults with mild-to-moderate depression deciding between cognitive behavioral therapy (CBT) and psychodynamic therapy (PDT).

Method: Data were drawn from a randomized comparison of CBT versus PDT for depression (N = 167; 71% female; mean age = 39.6). The approach combines four different statistical techniques to identify patient characteristics consistently associated with differential treatment response. The selected variables are then combined to generate predictions indicating each individual's optimal treatment. To estimate the model's utility, average outcomes for patients who received their indicated treatment were compared retrospectively with those who did not.

Results: Of 49 predictors examined, depression severity, anxiety sensitivity, extraversion, and psychological treatment needs were included in the final model. The average post-treatment Hamilton Depression Rating Scale score was 1.6 points lower (95% CI [0.5, 2.8]; d = 0.21) for patients who received their indicated treatment than for those who did not. Among the 60% of patients with the strongest treatment recommendations, that advantage grew to 2.6 points (95% CI [1.4, 3.7]; d = 0.37).

Conclusions: Variable selection procedures differ in how they characterize the importance of predictive variables, and attending to consistently indicated predictors may be sensible when constructing treatment selection models. The small sample and the lack of a separate validation sample indicate a need for prospective tests before this model is used in practice.
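
The sketch below is a minimal, hypothetical illustration of this kind of workflow, not the authors' procedure: it simulates pre-treatment predictors, fits a single linear outcome model with treatment-by-predictor interactions, predicts each patient's post-treatment HAM-D under CBT and under PDT, takes the lower prediction as the indicated treatment, and retrospectively compares observed outcomes for patients who did and did not receive it. The variable names, the synthetic data, and the use of one regression model (in place of the study's four variable selection techniques) are all assumptions.

```python
# Illustrative sketch of a treatment-indication workflow (not the study's code):
# (1) simulate the four retained predictors, (2) model outcome with
# treatment-by-predictor interactions, (3) predict each patient's outcome
# under both treatments, (4) compare observed outcomes for patients who
# did vs did not receive their model-indicated treatment.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 167  # sample size reported in the abstract

# Synthetic stand-ins for the four retained predictors (hypothetical names)
df = pd.DataFrame({
    "severity": rng.normal(size=n),
    "anx_sens": rng.normal(size=n),
    "extraversion": rng.normal(size=n),
    "tx_needs": rng.normal(size=n),
    "treatment": rng.integers(0, 2, size=n),  # 0 = CBT, 1 = PDT
})
predictors = ["severity", "anx_sens", "extraversion", "tx_needs"]

# Simulated post-treatment HAM-D with a treatment-by-severity interaction
df["hamd_post"] = (
    12 + 2 * df["severity"] - 1.5 * df["treatment"] * df["severity"]
    + rng.normal(scale=3, size=n)
)

# Outcome model: treatment main effect plus treatment-by-predictor interactions
X = df[predictors].copy()
X["treatment"] = df["treatment"]
for p in predictors:
    X[f"tx_x_{p}"] = df["treatment"] * df[p]
model = LinearRegression().fit(X, df["hamd_post"])

def predict_under(tx):
    """Counterfactual prediction with treatment set to tx for every patient."""
    Xc = X.copy()
    Xc["treatment"] = tx
    for p in predictors:
        Xc[f"tx_x_{p}"] = tx * Xc[p]
    return model.predict(Xc)

pred_cbt, pred_pdt = predict_under(0), predict_under(1)
df["indicated"] = (pred_pdt < pred_cbt).astype(int)  # lower HAM-D is better
df["got_indicated"] = df["treatment"] == df["indicated"]

# Retrospective comparison: mean HAM-D difference, non-indicated minus indicated
advantage = (df.loc[~df["got_indicated"], "hamd_post"].mean()
             - df.loc[df["got_indicated"], "hamd_post"].mean())
print(f"Mean HAM-D advantage for indicated treatment: {advantage:.2f} points")
```

Because the counterfactual predictions are made on the same data used to fit the model, this in-sample comparison is optimistic, which is one reason the abstract calls for prospective validation before clinical use.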


2021 · Vol 28(1) · e100251
Author(s): Ian Scott, Stacey Carter, Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate, and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, and which do not require clinicians, as non-experts, to master what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to help clinicians assess an algorithm's readiness for routine care and to identify situations where further refinement and evaluation are required prior to large-scale use.
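
Purely as an illustrative sketch, not something proposed in the paper, the ten questions could be captured in a small data structure so that a reviewer's answers for a given algorithm are recorded consistently and unresolved items are easy to surface. The ChecklistReview class and the example algorithm and answers below are hypothetical.

```python
# Hypothetical encoding of the 10-question checklist as a structured review
# form, so an algorithm-readiness assessment can be recorded and audited.
from dataclasses import dataclass, field

CHECKLIST = [
    "What is the purpose and context of the algorithm?",
    "How good were the data used to train the algorithm?",
    "Were there sufficient data to train the algorithm?",
    "How well does the algorithm perform?",
    "Is the algorithm transferable to new clinical settings?",
    "Are the outputs of the algorithm clinically intelligible?",
    "How will this algorithm fit into and complement current workflows?",
    "Has use of the algorithm been shown to improve patient care and outcomes?",
    "Could the algorithm cause patient harm?",
    "Does use of the algorithm raise ethical, legal or social concerns?",
]

@dataclass
class ChecklistReview:
    algorithm: str
    # answers maps question index -> (satisfactory?, reviewer notes)
    answers: dict[int, tuple[bool, str]] = field(default_factory=dict)

    def unresolved(self) -> list[str]:
        """Questions that are unanswered or judged unsatisfactory."""
        return [
            q for i, q in enumerate(CHECKLIST)
            if i not in self.answers or not self.answers[i][0]
        ]

# Example use with a made-up algorithm and answers
review = ChecklistReview("diabetic-retinopathy screening model")
review.answers[0] = (True, "Triage of fundus photographs in primary care")
review.answers[8] = (False, "False negatives could delay referral")
print(f"{len(review.unresolved())} of {len(CHECKLIST)} questions unresolved")
```

Keeping the questions as data rather than prose makes it straightforward to report which of the 10 items remain unresolved before an algorithm is adopted at scale.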


2009 · Vol 50(6) · pp. 517-525
Author(s): Susana Vázquez-Rivera, César González-Blanch, Laura Rodríguez-Moya, Dolores Morón, Sara González-Vives, ...
