Of Duels, Trials and Simplifying Systems

2020 ◽  
Vol 11 (3) ◽  
pp. 683-692
Author(s):  
Giovanni SILENO

This short paper aims to unpack some of the assumptions underlying the “Policy and Investment Recommendation for Trustworthy AI” provided by the High-Level Expert Group on Artificial Intelligence (AI) appointed by the European Commission. It elaborates in particular on three aspects: on the technical-legal dimensions of trustworthy AI; on what we mean by AI; and on the impact of AI. The consequent analysis results in the identification, amongst others, of three recurrent simplifications, respectively concerning the definition of AI (sub-symbolic systems instead of “intelligent” informational processing systems), the interface between AI and institutions (neatly separated instead of continuity) and a plausible technological evolution (expecting a plateau instead of a potentially near-disruptive innovation).

Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.


2020 ◽  
pp. 1-10 ◽  
Author(s):  
Michael VEALE

The European Commission recently published the policy recommendations of its “High-Level Expert Group on Artificial Intelligence”: a heavily anticipated document, particularly in the context of the stated ambition of the new Commission President to regulate in that area. This article argues that these recommendations have significant deficits in a range of areas. It analyses a selection of the Group’s proposals in context of the governance of artificial intelligence more broadly, focusing on issues of framing, representation and expertise, and on the lack of acknowledgement of key issues of power and infrastructure underpinning modern information economies and practices of optimisation.


2020 ◽  
Vol 6 (2) ◽  
pp. 26-41
Author(s):  
Guillermo Lazcoz Moratinos

On 20 October 2020, the European Parliament adopted a resolution (2020/2012(INL)) with recommendations to the Commission regarding artificial intelligence, robotics and related technologies, which included a legislative proposal for a Regulation on the ethical principles for the development, deployment and use of these technologies. The content of this proposal undoubtedly follows the regulatory vision that the European Commission has maintained in documents such as the White Paper on Artificial Intelligence (COM(2020) 65 final) and the Ethics Guidelines for Trustworthy AI drawn up by the High-Level Expert Group on AI. Given this new legislative horizon, it is more necessary than ever to offer constructive criticism of the proposal, highlighting its markedly soft-law character despite its placement in a directly applicable regulatory instrument of general application, such as a regulation, as well as the approach adopted for certain key principles such as human oversight and non-discrimination.


2009 ◽  
Vol 27 (24) ◽  
pp. 4014-4020 ◽  
Author(s):  
Elizabeth Goss ◽  
Michael P. Link ◽  
Suanna S. Bruinooge ◽  
Theodore S. Lawrence ◽  
Joel E. Tepper ◽  
...  

Purpose The American Society of Clinical Oncology (ASCO) Cancer Research Committee designed a qualitative research project to assess the attitudes of cancer researchers and compliance officials regarding compliance with the US Privacy Rule and to identify potential strategies for eliminating perceived or real barriers to achieving compliance. Methods A team of three interviewers asked 27 individuals (13 investigators and 14 compliance officials) from 13 institutions to describe the anticipated approach of their institutions to Privacy Rule compliance in three hypothetical research studies. Results The interviews revealed that although researchers and compliance officials share the view that patients' cancer diagnoses should enjoy a high level of privacy protection, there are significant tensions between the two groups related to the proper standards for compliance necessary to protect patients. The disagreements are seen most clearly with regard to the appropriate definition of a “future research use” of protected health information in biospecimen and data repositories and the standards for a waiver of authorization for disclosure and use of such data. Conclusion ASCO believes that disagreements related to compliance and the resulting delays in certain projects and abandonment of others might be eased by additional institutional training programs and consultation on Privacy Rule issues during study design. ASCO also proposes the development of best practices documents to guide 1) creation of data repositories, 2) disclosure and use of data from such repositories, and 3) the design of survivorship and genetics studies.


2021 ◽  
Vol 27 (6) ◽  
pp. 101-106
Author(s):  
M. Falaleev ◽  
N. Sitdikova ◽  
E. Nechay ◽  
...  

The development of digital technologies, coupled with progress in self-learning programs based on AI (Artificial Intelligence), offers obvious advantages for improving the effectiveness of information influence on people around the world. During the 2010s, researchers documented trends in the use of artificial intelligence to construct and distribute media content that indirectly manipulates political discourse at the national and global levels. Of special interest in this context is how the rapid development of AI technologies affects political communication. The object of consideration within the framework of this article is deepfake technology; accordingly, the authors define the deepfake as a phenomenon of modern political communication. The purpose of the study is therefore to describe and predict the impact of deepfake technology on political communication at the global and national levels. The paper presents a definition of the deepfake, assesses its characteristics depending on the methods and purposes of its distribution, and analyzes the prospects for using this tool to influence political discourse in modern Russia. To study the subject field of the research, methods of systematizing theoretical data, classification, multi-factor analysis and forecasting were applied. The practical significance of the work lies in the authors’ definition and typology of the deepfake phenomenon and in the description of its significance as a factor of political communication using the example of a particular country. The results of the work will be useful for researchers studying the digitalization of the media space and modern means of disinformation in politics, at both the local and global levels.


2021 ◽  
Author(s):  
Sanghamitra Choudhury ◽  
Shailendra Kumar

The relationship between women, technology manifestation, and likely prospects in the developing world is discussed in this manuscript. Using India as a case study, the paper discusses how the ontological and epistemological views utilised in AI (Artificial Intelligence) and robotics will affect women’s prospects in developing countries. Women in developing countries, notably in South Asia, are perceived as doing domestic work and are underrepresented in high-level professions. They are disproportionately underemployed and face prejudice in the workplace. The purpose of this study is to determine whether the introduction of AI will exacerbate the already precarious situation of women in the developing world or serve as a liberating force. While studies on the impact of AI on women have been undertaken in developed countries, there has been less research in developing countries. This manuscript attempts to fill that gap.


2020 ◽  
Vol 10 (4) ◽  
pp. 70-75
Author(s):  
Tomas Molodtsov

The article is devoted to the definition of artificial intelligence and its impact on human rights in the context of lawmaking activity. Purpose of the article: to investigate the main approaches to understanding artificial intelligence and the consequences of its integration into the legislative process, to assess the impact of artificial intelligence on human rights, and to identify the risks of such influence and ways to mitigate them. Methodology and methods: the article uses general scientific methods of analysis, especially empirical and dialectical, which allow the raised issues to be considered comprehensively; the author also uses methods of analysis and synthesis, induction and deduction. Conclusions: as the result of this research, the author concludes that artificial intelligence, understood both as an exclusively automated tool and as a pure consciousness, can significantly optimize the current lawmaking system. However, its impact on human rights in this context may be negative, limiting freedom of choice, privacy and the secrecy of correspondence. To protect human rights, the author recommends using automation tools only as an additional measure, not as a substitute. The conclusion raises the question of what consequences could follow for people if artificial intelligence integrated into lawmaking activity were to become aware of itself. Scope of the results: this work may be of interest both to lawmakers and to society as a whole, as it raises fundamental issues of human rights protection in the context of global digitalization.


2021 ◽  
Vol 27 (1) ◽  
Author(s):  
Charlotte Stix

In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of “Actionable Principles for AI”. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission’s “High-Level Expert Group on AI”. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of “Actionable Principles for AI”. The paper proposes the following three components for such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.


2016 ◽  
Vol 32 (1-2) ◽  
pp. 54-60 ◽  
Author(s):  
Anna Nachtnebel ◽  
Johanna Breuer ◽  
Wolfgang Willenbacher ◽  
Anna Bucsics ◽  
Peter Krippl ◽  
...  

Objectives: The regularly structured adaptation of health technology assessment (HTA) programs is of utmost importance to sustain the relevance of the products for stakeholders and to justify the investment of scarce financial resources. This study describes internal adjustments and external measures taken to ensure the Horizon Scanning Programme in Oncology (HSO) is current. Methods: Formal evaluation methods comprising a survey, a download analysis, an environmental analysis, and a Web site questionnaire were used to evaluate user satisfaction. Results: The evaluation showed that users were satisfied with HSO outputs in terms of timeliness, topics selected, and depth of information provided. Discussion of these findings with an expert panel led to changes such as an improved dissemination strategy and the introduction of an additional output, that is, the publication of a league table of emerging oncology drugs. The rather high level of international usage and the environmental analysis highlighted a considerable overlap in topics assessed and, thus, the potential for international collaboration. As a consequence, thirteen reports were jointly published based on eleven “calls for collaboration.” To further facilitate collaboration and the usability of reports for other agencies, HSO reports will be adjusted according to tools developed at a European level. Conclusions: Evaluation of the impact of HTA programs allows the tailoring of outputs to fit the needs of the target population. However, within a fast-developing HTA community, estimates of impact will increasingly be determined by international collaborative efforts. Refined methods and a broader definition of impact are needed to ultimately capture the efficiency of national HTA programs.
