Setting aside for a moment the military advantages offered by autonomous weapons systems, international debate continues to feature the argument that the use of lethal force by “killer robots” inherently violates human dignity. The purpose of this chapter is to refute this assumption of inherent immorality and to identify situations in which deploying autonomous systems would be strategically, morally, and rationally appropriate. The second part of the chapter rebuts the argument that the use of robots in warfare is inherently offensive to human dignity. Overall, the chapter demonstrates that, contrary to arguments made by some within civil society, the moral employment of force is possible even without proximate human decision-making. As discussion continues to swirl around autonomous weapons systems, it is important not to lose sight of the fact that fire-and-forget weapons are neither morally exceptional nor inherently evil. If an engagement complies with the established ethical framework, it is not morally invalidated simply because no human is present at the point of violence. As this chapter argues, the decision to employ lethal force becomes problematic when a more thorough consideration would have demanded restraint. Assuming a legitimate target, therefore, the distance between human agency in the target-authorization process and the delivery of force is a matter of degree. A morally justifiable decision to engage a target with rifle fire would not be ethically invalidated simply because the lethal force was delivered by a commander-authorized robotic carrier.