In this chapter we raise some of the moral issues involved in the current development of robotic autonomous agents. Starting from the connection between autonomy and responsibility, we distinguish two sorts of problems: those having to do with guaranteeing that the behavior of the artificial cognitive system will fall within the area of the permissible, and those having to do with endowing such systems with whatever abilities are required for engaging in moral interaction. Only in the second case can we speak of full-blown autonomy, or moral autonomy. We illustrate the first type of case with Arkin’s proposal of a hybrid architecture for the control of military robots. As for the second kind of case, that of full-blown autonomy, we argue that a motivational component is needed to ground the self-orientation and the pattern of appraisal required, and we outline how such a motivational component might give rise to interaction in terms of moral emotions. We end by suggesting limits to a straightforward analogy between natural and artificial cognitive systems from this standpoint.