DSAA 2018 Special Session: Data Science for Social Good

Author(s): Daniela Paolotti, Michele Tizzoni

Author(s): Caelin Bryant, Yesheng Chen, Zhen Chen, Jonathan Gilmour, Shyamala Gumidyala, ...

Author(s): Cat Drew

Data science can offer huge opportunities for government. With the ability to process larger and more complex datasets than ever before, it can provide better insights for policymakers and make services more tailored and efficient. As with any new technology, there is a risk that we do not take up its opportunities and miss out on its enormous potential. We want people to feel confident to innovate with data. So, over the past 18 months, the Government Data Science Partnership has taken an open, evidence-based and user-centred approach to creating an ethical framework. It is a practical document that brings all the legal guidance together in one place, and is written in the context of new data science capabilities. As part of its development, we ran a public dialogue on data science ethics, including deliberative workshops, an experimental conjoint survey and an online engagement tool. The research supported the principles set out in the framework and provided useful insight into how we need to communicate about data science. It found that people had low awareness of the term 'data science', but that showing data science examples can increase broad support for government exploring innovative uses of data. People's support, however, is highly context-driven. They consider acceptability on a case-by-case basis, first thinking about the overall policy goals and likely intended outcome, and then weighing up privacy and unintended consequences. The ethical framework is a crucial start, but it does not solve all the challenges it highlights, particularly as technology is creating new challenges and opportunities every day. Continued research is needed into data minimization and anonymization, robust data models, algorithmic accountability, transparency and data security. It has also revealed the need to set out a renewed deal between the citizen and state on data, to maintain and solidify trust in how we use people's data for social good. This article is part of the themed issue 'The ethical impact of data science'.


Author(s): Caelin Bryant, Jonathan Gilmour, Beatriz Herce-Hagiwara, Anh Thu Pham, Halle Remash, ...

2019, Vol. 10 (1), pp. 44-65
Author(s): Bettina Berendt

Abstract: Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they are not asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.

