Abstract

Technological innovations in healthcare, perhaps now more than ever, present decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. The use of artificial intelligence and big data processing, in particular, stands to revolutionize healthcare systems as we once knew them. But what effect do these technologies have on human agency and moral responsibility in healthcare? How can patients, practitioners, and the general public best respond to potential obscurities in responsibility? In this paper, I investigate the social and ethical challenges arising from new medical technologies, specifically the ways in which artificially intelligent systems may threaten moral responsibility in the delivery of healthcare. I argue that if our ability to locate responsibility becomes threatened, we are left with a difficult choice of trade-offs. In short, it might seem that we should exercise extreme caution, or even restraint, in our use of state-of-the-art systems, but thereby lose out on benefits such as improved quality of care. Alternatively, we could embrace novel healthcare technologies, but in doing so we might need to loosen our commitment to locating moral responsibility when patients come to harm; for even if harms are fewer – say, as a result of data-driven diagnostics – it may be unclear who or what is responsible when things go wrong. What is clear, at least, is that the shift toward artificial intelligence and big data calls for significant revisions to our expectations about how, if at all, notions of responsibility can be located in emerging models of healthcare.