Logics and practices of transparency and opacity in real-world applications of public sector machine learning

Author(s):  
Michael Veale

Presented as a talk at the 4th Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML 2017), Halifax, Nova Scotia, Canada.

Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency have been understood by those involved in the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across 5 OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide useful insights for those hoping to develop socio-technical approaches to transparency that might be useful to practitioners on the ground.

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

2018

Author(s):
Michael Veale
Max Van Kleek
Reuben Binns

Cite as: Michael Veale, Max Van Kleek and Reuben Binns (2018) Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. ACM Conference on Human Factors in Computing Systems (CHI'18). doi: 10.1145/3173574.3174014

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning—absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.


2020
Vol 44
pp. 101127

Author(s):
Mikaël J.A. Maes
Kate E. Jones
Mireille B. Toledano
Ben Milligan

Omega
1979
Vol 7 (5)
pp. 379-384

Author(s):
David Pearce
