AI Ethics Needs Good Data

2021 ◽  
pp. 103-121
Author(s):  
Angela Daly ◽  
S. Kate Devitt ◽  
Monique Mann

Arguing that discourses on AI must engage with power and political economy, this chapter makes the case that we must move beyond the depoliticised language of ‘ethics’ currently deployed in determining whether AI is ‘good’, given the limitations of ethics as a frame through which AI issues can be viewed. To circumvent these limits, we instead use the language and conceptualisation of ‘Good Data’, which we view as a more expansive term for elucidating the values, rights and interests at stake in AI’s development and deployment. These considerations include privacy, fairness, transparency and accountability, but extend beyond them to explicit political economy critiques of power.

2021 ◽  
Vol 8 (2) ◽  
Article 205395172110477
Author(s):  
Dieuwertje Luitse ◽  
Wiebke Denkena

In recent years, AI research has become increasingly computationally demanding. In natural language processing (NLP), this tendency is reflected in the emergence of large language models (LLMs) like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks, and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In December 2020, critical research on LLMs led Google to fire Timnit Gebru, co-lead of the company’s AI Ethics team, which sparked a major public controversy around LLMs and the growing corporate influence over AI research. This article explores the role LLMs play in the political economy of AI as infrastructural components for AI research and development. Retracing the technical developments that led to the emergence of LLMs, we point out how they are intertwined with the business model of big tech companies and further shift power relations in their favour. This becomes visible through the Transformer, the underlying architecture of most LLMs today, which started the race for ever bigger models when it was introduced by Google in 2017. Using the example of GPT-3, we shed light on recent corporate efforts to commodify LLMs through paid API access and exclusive licensing, raising questions around monopolisation and dependency in a field that is increasingly divided by access to large-scale computing power.

