Abstract

Traditional robot programming is often not feasible in small-batch production, as it is time-consuming, inefficient, and expensive. To shorten the time needed to deploy robot tasks, we need appropriate tools that enable efficient reuse of existing robot control policies. Incremental Learning from Demonstration (iLfD) and reversible Dynamic Movement Primitives (DMPs) provide a framework for efficient policy demonstration and adaptation. In this paper, we extend our previously proposed framework with improvements that yield better performance and reduce the algorithm's computational burden. Furthermore, we analyse the learning stability and evaluate the proposed framework in a comprehensive user study. The proposed methods have been evaluated on two popular collaborative robots, the Franka Emika Panda and the Universal Robots UR10.