Robots and other artificial agents are increasingly present in eldercare, education, and the workplace. They are designed to help humans with tasks and to meet labor shortages: a technological fix for our times. In two experiments and a follow-up study, we investigate factors that influence the acceptance of artificial agents in positions of power, using attachment theory and disappointment theory as explanatory models. We find that when the state of the world provokes anxiety, citizens perceive artificial agents as a reliable proxy to replace human leaders. Moreover, people accept artificial agents as decision-makers in politics and security more willingly when they deem their leaders or government untrustworthy, disappointing, or immoral. Finally, we discuss these results with respect to theories of technology acceptance and the delegation of duties and prerogatives.