Nachweislich eine gute Entscheidung: Qualitätssicherung für künstlich-intelligente Verfahren in der Industrie [Demonstrably a Good Decision: Quality Assurance for AI-Based Processes in Industry]

2021 ◽  
pp. 51-63
Author(s):  
Annelie Pentenrieder ◽  
Ernst A. Hartmann ◽  
Matthias Künzel

Abstract Which types of artificial intelligence (AI) should be introduced and used in European industrial companies? How can European democratic values such as co-determination, transparency, the possibility of objection, and adaptability be safeguarded in the use of AI technologies? These questions are currently being negotiated in the debates on explainable AI and AI certification. Using a concrete case study from industry, this contribution applies the ALTAI criteria formulated by the High-Level Expert Group for the design of "trustworthy AI". Along the three selected criteria of "human agency and oversight", "transparency", and "robustness", it sketches how the explainability and controllability of AI-based processes in industrial work environments can increasingly be implemented and made verifiable. The analysis shows that technical user interfaces must be designed as a team effort, and that the collaboration of different companies in providing data sets and algorithms must be a focus of any examination (software genesis). Auditing procedures are presented as a possible field of action.

Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.


Author(s):  
José Ramón Martínez-Riera ◽  
Raúl Juárez-Vela ◽  
Miguel Ángel Díaz-Herrera ◽  
Raimunda Montejano-Lozoya ◽  
Vicente Doménech-Briz ◽  
...  

Background: A short TOP10 scale based on the Practice Environment Scale-Nursing Work Index questionnaire measures the characteristics of nursing work environments. Positive environments result in better quality of care and health outcomes. Objective: To identify a small number of core elements that would enable nurse managers to intervene more effectively, and to compare them with the essential elements proposed by the TOP10. Method: Qualitative research by a nominal group of eight experts. Content analysis was combined with descriptive data. Results: The ten most important items were selected and analyzed by the expert group. A high level of consensus was reached on four items (2, 15, 20, 31) and an acceptable level on five items (6, 11, 14, 18, 26). The tenth item was selected from the content analysis (19). The expert group agreed with 90% of the elements selected as essential by the TOP10. Conclusion: The expert group achieved a high level of consensus, supporting 90% of the essential elements of primary care settings proposed by the TOP10 questionnaire. Organizational changes implemented by managers to improve working environments should be prioritized in line with these results, so that care delivery and health outcomes can be further improved.


2020 ◽  
Vol 11 (3) ◽  
pp. 683-692
Author(s):  
Giovanni SILENO

This short paper aims to unpack some of the assumptions underlying the “Policy and Investment Recommendation for Trustworthy AI” provided by the High-Level Expert Group on Artificial Intelligence (AI) appointed by the European Commission. It elaborates in particular on three aspects: on the technical-legal dimensions of trustworthy AI; on what we mean by AI; and on the impact of AI. The consequent analysis results in the identification, amongst others, of three recurrent simplifications, respectively concerning the definition of AI (sub-symbolic systems instead of “intelligent” informational processing systems), the interface between AI and institutions (neatly separated instead of continuity) and a plausible technological evolution (expecting a plateau instead of a potentially near-disruptive innovation).


2021 ◽  
Vol 27 (1) ◽  
Author(s):  
Charlotte Stix

Abstract In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on "AI Ethics Principles". However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of 'Actionable Principles for AI'. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's "High-Level Expert Group on AI". Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of 'Actionable Principles for AI'. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.


2019 ◽  
Author(s):  
Michael Veale

Cite as: Michael Veale, 'A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence' (2020) __ European Journal of Risk Regulation __. doi:10/ggjdjs

The European Commission recently published the policy recommendations of its 'High-Level Expert Group on Artificial Intelligence': a heavily anticipated document, particularly in the context of the stated ambition of the new Commission President to regulate in that area. This essay argues that these recommendations have significant deficits in a range of areas. It analyses a selection of the Group's proposals in the context of the governance of artificial intelligence more broadly, focussing on issues of framing, representation and expertise, and on the lack of acknowledgement of key issues of power and infrastructure underpinning modern information economies and practices of optimisation.


10.2196/12678 ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. e12678
Author(s):  
Junho Ko ◽  
Jong Joo Lee ◽  
Seong-Wook Jang ◽  
Yeomin Yun ◽  
Sungchul Kang ◽  
...  

Background Performing high-level surgeries with endoscopy is challenging; hence, an efficient surgical training method or system is required. Serious game–based simulators can provide a trainee-centered educational environment, unlike traditional teacher-centered environments, since serious games provide a high level of interaction (feedback that induces learning). Objective This study aimed to propose an epiduroscopy simulator, EpiduroSIM, based on a serious game for spatial cognitive training. Methods EpiduroSIM was designed as a serious game. For spatial cognitive training, its virtual environment was modeled on a cognitive map. Results EpiduroSIM was developed with attention to user accessibility and provides various functions. The validation experiment focused on psychological fidelity and repetitive training effects. Sixteen specialists were divided into two groups of eight surgeons each, classified as beginner or expert based on their epiduroscopy experience. The psychological fidelity of EpiduroSIM was confirmed through the training results of the expert group rather than the beginner group, and its repetitive training effect was confirmed by the improved training results in the beginner group. Conclusions EpiduroSIM may be useful for training beginner surgeons in epiduroscopy.


First Monday ◽  
2021 ◽  
Author(s):  
Gry Hasselbalch

This article makes a case for a data interest analysis of artificial intelligence (AI) that explores how different interests in data are empowered or disempowered by design. The article uses the EU High-Level Expert Group on AI's Ethics Guidelines for Trustworthy AI as an applied ethics approach to data interests within a human-centric ethical governance framework, and accordingly suggests ethical questions that will help resolve conflicts between data interests in AI design.


2020 ◽  
pp. 1-15
Author(s):  
Stefan LARSSON

Abstract This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, which is the focus here. Particular emphasis is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission's High-Level Expert Group on Artificial Intelligence in April 2019, as well as on the White Paper on AI, published by the EU Commission in February 2020. The guidelines are considered in relation to partially overlapping, already existing legislation, as well as to the ephemeral conceptual construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need to move from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.


2020 ◽  
Vol 26 (5) ◽  
pp. 2749-2767
Author(s):  
Mark Ryan

Abstract One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics Guidelines for Trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions, which are requirements of the affective and normative accounts of trust, respectively. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.

