Recurrent neural circuits overcome partial inactivation by compensation and re-learning
Technical advances in the artificial manipulation of neural activity have precipitated a surge of studies probing the causal contribution of brain circuits to cognition and behavior. However, the complexity of neural circuits complicates the interpretation of experimental results, necessitating theoretical frameworks for systematic exploration. Here, we take a step in this direction, using recurrent neural networks trained to perform a perceptual decision as a testbed. We show that understanding the computations implemented by network dynamics enables prediction of the magnitude of perturbation effects from changes in the network's phase plane. Inactivation effects are weaker for distributed network architectures, are more easily detected with non-discrete behavioral readouts (e.g., reaction times), and vary considerably across multiple tasks implemented by the same circuit. Finally, networks that can "learn" during inactivation recover function quickly, often much faster than the original training time. Our framework explains past empirical observations by clarifying how complex circuits compensate for and adapt to perturbations.
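The sketch below illustrates, under simplifying assumptions, the kind of experiment the abstract describes: a small recurrent network is trained on a two-alternative perceptual decision (integrating noisy evidence and reporting its sign), and a random fraction of units is then silenced at every time step to mimic partial inactivation. The network architecture, task parameters, and inactivation fraction are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a vanilla rate RNN trained on a
# two-alternative evidence-integration task, then evaluated with a random
# 25% of units clamped to zero to mimic partial inactivation.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_UNITS, T_STEPS, BATCH = 64, 50, 128

def make_trials(batch):
    """Noisy evidence with a random signed drift; the target is its sign."""
    coherence = (torch.rand(batch, 1) - 0.5) * 0.4            # signed drift
    evidence = coherence + 0.5 * torch.randn(batch, T_STEPS)  # per-step noise
    target = (coherence.squeeze(1) > 0).long()                # 0 = left, 1 = right
    return evidence.unsqueeze(-1), target                     # (batch, T, 1)

class DecisionRNN(nn.Module):
    def __init__(self, n_units=N_UNITS):
        super().__init__()
        self.w_in = nn.Linear(1, n_units)
        self.w_rec = nn.Linear(n_units, n_units)
        self.readout = nn.Linear(n_units, 2)

    def forward(self, x, mask=None):
        h = torch.zeros(x.shape[0], self.w_rec.out_features)
        for t in range(x.shape[1]):
            h = torch.tanh(self.w_rec(h) + self.w_in(x[:, t]))
            if mask is not None:        # silence inactivated units every step,
                h = h * mask            # so they also drop out of the recurrence
        return self.readout(h)          # decision from the final state

def accuracy(model, mask=None, n_trials=2000):
    with torch.no_grad():
        x, y = make_trials(n_trials)
        return (model(x, mask).argmax(1) == y).float().mean().item()

model = DecisionRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(1500):                # train to integrate evidence
    x, y = make_trials(BATCH)
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Partial inactivation: zero the activity of a random 25% of units.
mask = (torch.rand(N_UNITS) > 0.25).float()
print(f"intact accuracy:      {accuracy(model):.3f}")
print(f"inactivated accuracy: {accuracy(model, mask):.3f}")
```

In this toy setting, the drop in accuracy after masking gives a crude analogue of the perturbation effects discussed above; re-running the training loop with the mask applied would likewise mimic the "learning during inactivation" regime in which function recovers.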