This work studies the beneficial properties that an autonomous agent can obtain by imitating a cognitive architecture similar to that of conscious beings. Throughout this document, a cognitive model of an autonomous agent based on a global workspace architecture is presented. We hypothesize that consciousness is an evolutionary advantage, so if our autonomous agent can be potentially conscious, its performance will be enhanced. We explore whether an autonomous agent implementing a cognitive architecture like the one proposed in global workspace theory can be conscious from a philosophy of mind perspective, with special emphasis on functionalism and multiple realizability. The purposes of our proposed model are to create autonomous agents that can navigate within an environment composed of multiple independent magnitudes, adapting to their surroundings to find the best possible position according to their inner preferences, and to test the effectiveness of many of its cognitive mechanisms, such as an attention mechanism for magnitude selection, inner feelings and preferences, a memory system to store beliefs and past experiences, and a consciousness bottleneck in the decision-making process that controls and integrates the information processed by all the subsystems of the model, as in global workspace theory. In a large set of experiments, we show how potentially conscious autonomous agents can benefit from having a cognitive architecture such as the one described.
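The attention bottleneck described above can be sketched as a minimal agent loop. All names (`Percept`, `Agent`, the deviation-based attention rule) are illustrative assumptions for exposition, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    magnitude: str   # e.g. "temperature", "light"
    value: float

class Agent:
    """Hypothetical global-workspace-style agent (illustrative sketch)."""

    def __init__(self, preferences):
        # Inner preferences: target value per magnitude.
        self.preferences = preferences
        self.memory = []  # past broadcasts (beliefs / experiences)

    def attend(self, percepts):
        # Attention mechanism: the percept whose value deviates most from
        # the agent's preference wins the competition for access to the
        # workspace -- only one percept passes the bottleneck per cycle.
        return max(percepts,
                   key=lambda p: abs(p.value - self.preferences[p.magnitude]))

    def step(self, percepts):
        winner = self.attend(percepts)   # competition for workspace access
        self.memory.append(winner)       # the broadcast is stored in memory
        # Decision-making uses only the broadcast content: move toward the
        # preferred value of the attended magnitude.
        error = self.preferences[winner.magnitude] - winner.value
        action = "increase" if error > 0 else "decrease"
        return winner.magnitude, action

agent = Agent({"temperature": 20.0, "light": 0.5})
mag, act = agent.step([Percept("temperature", 25.0), Percept("light", 0.6)])
# temperature deviates by 5.0 vs. light's 0.1, so temperature wins attention
```

The single-winner selection in `attend` is the point of the sketch: downstream decision-making and memory see only the broadcast percept, mirroring the serial bottleneck that global workspace theory imposes on parallel subsystems.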