A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence

Author(s):  
Michael Veale

Cite as: Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’ (2020) __ European Journal of Risk Regulation __. doi:10/ggjdjs

The European Commission recently published the policy recommendations of its ‘High-Level Expert Group on Artificial Intelligence’: a heavily anticipated document, particularly in the context of the stated ambition of the new Commission President to regulate in that area. This essay argues that these recommendations have significant deficits in a range of areas. It analyses a selection of the Group’s proposals in the context of the governance of artificial intelligence more broadly, focussing on issues of framing, representation and expertise, and on the lack of acknowledgement of key issues of power and infrastructure underpinning modern information economies and practices of optimisation.



First Monday
2021
Author(s):  
Gry Hasselbalch

This article makes a case for a data interest analysis of artificial intelligence (AI) that explores how different interests in data are empowered or disempowered by design. The article uses the EU High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI as an applied ethics approach to data interests with a human-centric ethical governance framework, and accordingly suggests ethical questions that will help resolve conflicts between data interests in AI design.


2020
pp. 1-15
Author(s):  
Stefan LARSSON

This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, focused on here. Particular emphasis in this article is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on AI, published by the EU Commission in February 2020. The guidelines are reflected against partially overlapping and already-existing legislation as well as the ephemeral concept construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need for moving from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.


Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.


2020
Vol 11 (3)
pp. 683-692
Author(s):  
Giovanni SILENO

This short paper aims to unpack some of the assumptions underlying the “Policy and Investment Recommendation for Trustworthy AI” provided by the High-Level Expert Group on Artificial Intelligence (AI) appointed by the European Commission. It elaborates in particular on three aspects: on the technical-legal dimensions of trustworthy AI; on what we mean by AI; and on the impact of AI. The consequent analysis results in the identification, amongst others, of three recurrent simplifications, respectively concerning the definition of AI (sub-symbolic systems instead of “intelligent” informational processing systems), the interface between AI and institutions (neatly separated instead of continuity) and a plausible technological evolution (expecting a plateau instead of a potentially near-disruptive innovation).


2011
Vol 2 (2)
pp. 191-192
Author(s):  
Kristina Nordlander

Professor Lofstedt’s article makes an important contribution to the growing scholarship on risk regulation. He focuses on one of Europe’s key challenges: how to ensure that European law and regulation is “smart” in the sense that it strikes the right balance between a high level of protection for human health and the environment while not being overly burdensome, enabling us to maintain our competitiveness and standard of living. The EU has taken a leading global role in chemicals regulation, which is welcome. However, as Professor Lofstedt’s study clearly illustrates, much work remains to be done to ensure transparency, legal certainty, and sound science-based regulatory outcomes.

The debate around risk versus hazard is an important one. As Professor Lofstedt notes, these are not contradictory concepts, but rather build on each other. Hazard, the intrinsic capacity of a substance to cause harm, is the starting point. The debate then centers on to what extent a risk assessment (taking actual or potential exposure into account) should be made before the uses of the substance in question are regulated. Put simply, if there is no exposure, there will be no risk, even when the hazard is high. There are many examples of substances that have known intrinsic hazards, but that pose virtually no risk to human health given how and in what quantities they are used. Ordinary table salt is a well-known example.


2021
Vol 27 (1)
Author(s):  
Charlotte Stix

In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of “Actionable Principles for AI”. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission’s “High-Level Expert Group on AI”. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of “Actionable Principles for AI”. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.


2019 ◽  
Vol 50 (4) ◽  
pp. 870-879
Author(s):  
Frank Decker

After the 2019 European election, national political actors and party officials in both the European Parliament and the Council once again clashed over the selection of the Commission’s President, a controversy that also received widespread public attention. Disagreements centered on the so-called Spitzenkandidaten (top candidate) system, which, contrary to its premiere in 2014, failed to be implemented. The manner in which this system functions is frequently misunderstood by both political actors and observers. One example is that the appointment process is interpreted through the lens of parliamentary democracy; another is that the overrepresentation of smaller member states within the European Parliament is depicted as a serious violation of democratic principles. Potential starting points for a thorough democratization of the EU, such as the direct election of the Commission President, a common electoral system with joint European parties, and a greater say by voters and the President of the Commission regarding the appointment of commissioners, are also discussed. [ZParl, vol. 50 (2019), no. 4, pp. 870-879]

