Reinforcement learning derived chemotherapeutic schedules for robust patient-specific therapy

2021
Vol 11 (1)
Author(s): Brydon Eastman, Michelle Przedborski, Mohammad Kohandel

Abstract: The in silico development of a chemotherapeutic dosing schedule for treating cancer relies upon a parameterization of a particular tumour growth model to describe the dynamics of the cancer in response to the dose of the drug. In practice, it is often prohibitively difficult to ensure the validity of patient-specific parameterizations of these models for any particular patient. As a result, sensitivity to these parameters can mean that dosing schedules which are optimal in principle do not perform well on particular patients. In this study, we demonstrate that chemotherapeutic dosing strategies learned via reinforcement learning methods are more robust to perturbations in patient-specific parameter values than those learned via classical optimal control methods. By training a reinforcement learning agent on mean-value parameters and allowing the agent periodic access to a more easily measurable metric, relative bone marrow density, for the purpose of optimizing the dose schedule while reducing drug toxicity, we are able to develop drug dosing schedules that outperform schedules learned via classical optimal control methods, even when such methods are allowed to leverage the same bone marrow measurements.
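To make the training setup concrete, here is a minimal sketch in the spirit of the approach described above: a toy tumour/bone-marrow model and a tabular Q-learning agent that chooses among a few discrete dose levels while observing only a binned bone-marrow density. The dynamics, parameter values, reward weighting, and the use of a measurement at every treatment interval (rather than periodic access) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): toy tumour/bone-marrow dynamics and a
# tabular Q-learning agent. All model equations, parameters, and reward weights
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

DOSES = [0.0, 0.5, 1.0]          # discrete dose levels the agent may choose
N_BINS = 10                      # bone-marrow density binned into 10 states


def step(tumour, marrow, dose, dt=1.0):
    """One treatment interval of an assumed tumour/bone-marrow model."""
    growth = 0.1 * tumour * (1.0 - tumour)        # logistic tumour growth
    kill = 0.4 * dose * tumour                    # drug-induced tumour kill
    recovery = 0.05 * (1.0 - marrow)              # marrow recovers toward 1
    toxicity = 0.2 * dose * marrow                # drug damages marrow
    tumour = max(tumour + dt * (growth - kill), 0.0)
    marrow = float(np.clip(marrow + dt * (recovery - toxicity), 0.0, 1.0))
    reward = -tumour - 2.0 * max(0.6 - marrow, 0.0)   # penalise tumour and low marrow
    return tumour, marrow, reward


def marrow_state(marrow):
    """Bin the (only observed) relative bone-marrow density into a discrete state."""
    return min(int(marrow * N_BINS), N_BINS - 1)


def train(episodes=2000, horizon=30, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning; the agent never observes tumour size directly."""
    q = np.zeros((N_BINS, len(DOSES)))
    for _ in range(episodes):
        tumour, marrow = 0.5, 1.0
        s = marrow_state(marrow)
        for _ in range(horizon):
            a = rng.integers(len(DOSES)) if rng.random() < eps else int(q[s].argmax())
            tumour, marrow, r = step(tumour, marrow, DOSES[a])
            s_next = marrow_state(marrow)
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
            s = s_next
    return q


if __name__ == "__main__":
    q_table = train()
    # Greedy dose recommended at each binned marrow level:
    print([DOSES[int(a)] for a in q_table.argmax(axis=1)])
```

Running the script prints the greedy dose for each binned marrow level; a robustness check in the spirit of the paper would then perturb the model parameters at evaluation time while keeping the learned policy fixed.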


1996
Vol 28 (1-2)
pp. 85-92
Author(s): Valentin V. Ostapenko, A. P. Yakovleva, I. S. Voznyuk, V. M. Rogov

Games
2021
Vol 12 (1)
pp. 23
Author(s): Alexander Arguchintsev, Vasilisa Poplevko

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined by controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Normally, general optimal control methods are used for these problems because of the bilinear ordinary differential equations. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classical exact increment formulas for the cost functional. This treatment makes it possible to apply a number of efficient optimal control methods to the problem. An example illustrates the approach.
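For orientation, a schematic form of this problem class is sketched below; the notation is chosen here for illustration and may differ from the paper's exact statement.

```latex
% Schematic problem class (illustrative notation, not the paper's exact statement):
% y(x,t): state of the hyperbolic system, z(t): state of the bilinear ODEs,
% u(t): control entering the ODE coefficients.
\begin{align*}
  &\frac{\partial y}{\partial t} + A(x,t)\,\frac{\partial y}{\partial x}
      = B(x,t)\,y + f\bigl(z(t),x,t\bigr),
      && (x,t)\in[0,\ell]\times[0,T],\\
  &\frac{dz}{dt} = \bigl(C(t) + u(t)\,D(t)\bigr)\,z,
      \qquad z(0)=z_0, && u(t)\in U,\\
  &J(u) = \int_0^{\ell} \varphi\bigl(y(x,T)\bigr)\,dx \;\longrightarrow\; \min_{u}.
\end{align*}
```

Here the ordinary differential equations for z are linear in the state with coefficients controlled through u, matching the bilinear structure described in the abstract.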

