Adoption of AI-equipped systems, and their societal benefits, depend heavily on humans understanding the rationale behind the systems’ outputs. The widespread inability of such systems to explain their outputs fuels human mistrust and raises doubts about their regulatory compliance. Research in psychology suggests that argumentation is well suited as a paradigm for human reasoning, with some advocating that humans developed reasoning in order to argue. We here overview a number of approaches that use computational argumentation frameworks as the scaffolding for explanations intended for human consumption. These argumentation frameworks are automatically mined from data using data-centric methods. We define explanations as graphs obtained from the argumentation frameworks, customisable by means of properties. We illustrate our methods with various consumer-oriented tasks in the media and entertainment industry, providing reasoning outputs that can be explained to consumers and with which consumers can directly interact to obtain improved recommendations.
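The idea of an explanation as a graph over an argumentation framework can be sketched minimally as follows. This is an illustrative sketch only: the class, the attack/support relations, and the toy movie-recommendation data are assumptions for exposition, not the formalism developed in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's formalism):
# an argumentation framework as a directed graph whose edges are attack ("-")
# or support ("+") relations, with an explanation extracted as the sub-graph
# of arguments transitively reaching a given output.

from collections import defaultdict

class ArgumentationFramework:
    def __init__(self):
        # edges[target] = list of (source, polarity), polarity in {"+", "-"}
        self.edges = defaultdict(list)

    def add_attack(self, source, target):
        self.edges[target].append((source, "-"))

    def add_support(self, source, target):
        self.edges[target].append((source, "+"))

    def explanation(self, output):
        """Return the sub-graph of arguments that (transitively)
        attack or support `output`, as a list of labelled edges."""
        seen, stack, result = set(), [output], []
        while stack:
            node = stack.pop()
            for source, polarity in self.edges[node]:
                result.append((source, polarity, node))
                if source not in seen:
                    seen.add(source)
                    stack.append(source)
        return result

# Toy movie-recommendation example (hypothetical data).
af = ArgumentationFramework()
af.add_support("liked_director", "recommend_movie")
af.add_attack("disliked_genre", "recommend_movie")
af.add_support("rated_similar_movies_highly", "liked_director")

for src, pol, dst in af.explanation("recommend_movie"):
    print(f"{src} {pol}> {dst}")
```

The explanation graph rooted at `recommend_movie` then contains both the attacking and the supporting arguments, which is the kind of structure a consumer could inspect and interact with (e.g. by rejecting an argument) to obtain improved recommendations.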