Simple probabilistic reinforcement learning is recognized as a striatum-based learning system but has, in recent years, also been associated with hippocampal involvement. The present study examined whether such involvement may be attributed to observation-based learning processes running in parallel with striatum-based reinforcement learning. A computational model of observation-based learning (OL), mirroring classic models of reinforcement-based learning (RL), was constructed and applied to the neuroimaging dataset of Palombo, Hayes, Reid, and Verfaellie (2019; Hippocampal contributions to value-based learning: Converging evidence from fMRI and amnesia. Cognitive, Affective & Behavioral Neuroscience, 19(3), 523–536). Results suggested that observation-based learning processes may indeed take place alongside reinforcement learning and involve activation of the hippocampus and central orbitofrontal cortex (cOFC). However, rather than indicating independent mechanisms running in parallel, the brain correlates of the OL and RL prediction errors pointed to collaboration between the two systems, directly implicating the hippocampus in computing the discrepancy between the expected and actual reinforcing values of actions. These findings are consistent with previous accounts of a role for the hippocampus in encoding the strength of observed stimulus-outcome associations, with updating of such associations through striatal reinforcement-based computations. Additionally, greater reliance on OL over RL processes was associated with enhanced negative prediction error signaling in the anterior insula. This result may point to an additional mode of collaboration between the OL and RL systems, one implicating the error-monitoring network.
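The abstract does not specify the model equations, but classic RL models of this kind typically use a delta-rule update, in which the prediction error is the gap between the received and expected outcome. A minimal sketch of such an update, together with a hypothetical observation-based analogue in which the learned quantity is the strength of an observed stimulus-outcome association rather than a reward expectation, might look like the following (the function names, learning rate, and OL formulation are illustrative assumptions, not the paper's actual model):

```python
def rl_update(value, reward, alpha=0.1):
    """Rescorla-Wagner-style reinforcement learning update.

    The prediction error (delta) is the discrepancy between the
    actual and expected reinforcing value of an action; the value
    estimate is nudged toward the outcome by a learning rate alpha.
    """
    delta = reward - value          # reward prediction error
    return value + alpha * delta, delta


def ol_update(assoc_strength, observed_outcome, alpha=0.1):
    """Hypothetical observation-based learning (OL) analogue.

    Here the 'outcome' is an observed stimulus-outcome pairing,
    so the association strength is updated toward what was seen,
    independently of any reinforcement received.
    """
    delta = observed_outcome - assoc_strength   # observational prediction error
    return assoc_strength + alpha * delta, delta
```

In such a scheme, both learners produce trial-by-trial prediction errors that can be regressed against neural signals; the abstract's finding is that these error signals appear to reflect collaborating rather than fully independent systems.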