Artificial Intelligence and Public Policy

2018 ◽  
Vol 62 ◽  
pp. 729-754 ◽  
Author(s):  
Katja Grace ◽  
John Salvatier ◽  
Allan Dafoe ◽  
Baobao Zhang ◽  
Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI. This article is part of the special track on AI and Society.


Author(s):  
Steven Feldstein

This chapter examines how artificial intelligence (AI) and big-data technology are reshaping repression strategies and why they are a boon for autocratic leaders. It explores two in-depth scenarios that describe how states may deploy AI and big-data techniques to accomplish political objectives. It presents a global index of AI and big-data surveillance that measures the use of these tools in 179 countries. It then presents a detailed explanation of specific types of AI and big-data surveillance: safe cities, facial recognition systems, smart policing, and social media surveillance. Subsequently, it examines China’s role in proliferating AI and big-data surveillance technology, and it reviews public policy considerations regarding the use of this technology by democracies.


Processes ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1374
Author(s):  
Juan M. Sánchez ◽  
Juan P. Rodríguez ◽  
Helbert E. Espitia

The objective of this article is to review how Artificial Intelligence (AI) tools have helped the process of formulating agricultural public policies around the world. To this end, a search was carried out in the main scientific repositories, yielding a set of relevant publications. The findings show that, first, the most commonly used AI tools are agent-based models, cellular automata, and genetic algorithms. Second, these tools have been used to determine land use, water use, and agricultural production. The review concludes that AI tools are highly useful in the process of formulating agricultural public policies.


2020 ◽  
Author(s):  
Thomas Ploug ◽  
Anna Sundby ◽  
Thomas B Moeslund ◽  
Søren Holm

BACKGROUND Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making, yet transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interest in high performance against the interest in transparency/explainability. Such a policy should consider the wider public’s interests in these features of AI. OBJECTIVE This study elicited the public’s preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. METHODS We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents’ views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. RESULTS Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents considered the physician having final responsibility for treatment decisions the most important attribute, carrying 46.8% of the total attribute weight, followed by explainability of the decision (27.3%) and whether the system had been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents’ trust in health and technology, and respondents’ fears and hopes regarding AI, did not play a significant role in the majority of cases. CONCLUSIONS The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to these features and ensure that patients are provided with adequate information.


2021 ◽  
pp. 3-32
Author(s):  
V.N. Leksin

The third and final article in the three-part series "Artificial intelligence in the economy and politics of our time" (the first and second articles of the series were published in the fourth and fifth issues of the journal this year, respectively) presents the results of a study of the goals, motivations, and specifics of the adoption of national strategies to support the development of artificial intelligence in different countries. It is shown that Russia's strategy rests on the idea that artificial intelligence has a crucial role to play in solving the country's most complex economic, social, and military-political problems. The article also identifies differences in the conceptual approaches to developing research on, and practical application of, artificial intelligence in the national strategies of the largest countries of the world: the United States, China, and India.


2017 ◽  
Author(s):  
Adam D. Thierer ◽  
Andrea O'Sullivan ◽  
Raymond Russell

2019 ◽  
pp. 144078331987304
Author(s):  
Robert Holton ◽  
Ross Boyd

This article explores the sociology of artificial intelligence (AI), focusing on interactions between social actors and technological processes. The aim is to locate social actors in the key elements of Bell’s framework for understanding AI, featuring big data, algorithms, machine learning, sensors and rationale/logic. We dispute notions of human autonomy and machine autonomy, seeking alternatives to both anthropocentric and technological determinist accounts of AI. While human actors and technological devices are co-producers of the assemblages around AI, we challenge the argument that their respective contributions are symmetrical. The theoretical problem is to establish quite how human actors are positioned asymmetrically within AI processes. This challenge has strong resonances for issues of inequality, democracy, governance and public policy. The theoretical questions raised do not support the argument that sociology should respond to the rise of big data by becoming a primarily empirical discipline.

