Abstract
This talk proposes alternatives for achieving ethical reasoning in human/AI interactions in medical contexts, especially public health. One of the central ethical problems in AI is the alignment of human values with machine automatisms. This research aims at a system capable of inferring, from rational human activity in a given behavior, how a human acts and how human beings learn and teach ethical values. One approach is mimetic alignment: value-imitation processes based on big-data analysis of preferences, linguistic expressions, and the like. However, this approach makes two mistakes. First, it confuses preferences with values; second, it commits the naturalistic fallacy. From this point of view, the naturalistic fallacy arises when research focuses on the meaning of alignment rather than on the meaning of value, and the resulting answer is based on preference analysis; prescriptions are thereby derived from descriptions. The chain of reasoning that leads to this fallacy begins with the assumption that values and preferences are equivalent. An alternative proposal is anchored-values alignment, which is based on anchoring the normative value processes of a machine whose behavior involves interaction. Through abductive reasoning, this way of thinking tries to capture the idea that a value does not reside in any set of things; rather, it is something that guides action. The relevance of abduction lies in its tentative value for projecting beyond descriptive reasoning, understood as static reasoning; this is precisely why abduction is currently used in work on medical diagnosis, given the characteristics that the clinical eye requires.