“What I See Is What You Get” Explorations of Live Artwork Generation, Artificial Intelligence, and Human Interaction in a Pedagogical Environment

Author(s):  
Ana Herruzo ◽  
Nikita Pashenkov


Author(s):
Christopher-John L. Farrell

Objectives Artificial intelligence (AI) models are increasingly being developed for clinical chemistry applications; however, it is not understood whether human interaction with the models, which may occur once they are implemented, improves or worsens their performance. This study examined the effect of human supervision on an artificial neural network trained to identify wrong blood in tube (WBIT) errors. Methods De-identified patient data for current and previous (within seven days) electrolytes, urea and creatinine (EUC) results were used in the computer simulation of WBIT errors at a rate of 50%. Laboratory staff volunteers reviewed the AI model’s predictions, and the EUC results on which they were based, before making a final decision regarding the presence or absence of a WBIT error. The performance of this approach was compared to that of the AI model operating without human supervision. Results Laboratory staff supervised the classification of 510 sets of EUC results. This workflow identified WBIT errors with an accuracy of 81.2%, a sensitivity of 73.7% and a specificity of 88.6%. However, the AI model classifying these samples autonomously was superior on all metrics (p-values < 0.05), including accuracy (92.5%), sensitivity (90.6%) and specificity (94.5%). Conclusions Human interaction with AI models can significantly alter their performance. For computationally complex tasks such as WBIT error identification, best performance may be achieved by autonomously functioning AI models.
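For intuition, here is a minimal Python sketch of the kind of WBIT simulation described in the Methods: corrupting a fraction of records by giving selected patients another patient's current EUC results while their previous results stay their own. The column names, swap mechanism, and `simulate_wbit` function are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch: simulate wrong-blood-in-tube (WBIT) errors at a given
# rate by replacing the *current* EUC results of selected patients with
# another patient's results; previous results remain the patient's own.
import numpy as np
import pandas as pd

CURRENT_COLS = ["Na_curr", "K_curr", "Cl_curr", "urea_curr", "creat_curr"]  # assumed names

def simulate_wbit(df: pd.DataFrame, rate: float = 0.5, seed: int = 0) -> pd.DataFrame:
    """df: one row per patient, with current and previous EUC result columns."""
    rng = np.random.default_rng(seed)
    out = df.reset_index(drop=True).copy()
    victims = rng.choice(len(out), size=int(len(out) * rate), replace=False)
    donors = np.roll(victims, 1)  # shift by one so no victim keeps its own results
    out.loc[victims, CURRENT_COLS] = out.loc[donors, CURRENT_COLS].to_numpy()
    out["wbit"] = 0
    out.loc[victims, "wbit"] = 1  # ground-truth label for training/evaluation
    return out
```

A classifier can then be trained on features such as the deltas between current and previous results, with `wbit` as the target.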


10.2196/17620 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e17620 ◽  
Author(s):  
Rana Abdullah ◽  
Bahjat Fakieh

Background The advancement of health care information technology and the emergence of artificial intelligence have yielded tools to improve the quality of various health care processes. Few studies have investigated employee perceptions of artificial intelligence implementation in Saudi Arabia and the Arab world, and few have investigated the effect of employee knowledge and job title on perceptions of artificial intelligence implementation in the workplace. Objective The aim of this study was to explore health care employee perceptions of, and attitudes toward, the implementation of artificial intelligence technologies in health care institutions in Saudi Arabia. Methods An online questionnaire was published, and responses were collected from 250 employees, including doctors, nurses, and technicians, at 4 of the largest hospitals in Riyadh, Saudi Arabia. Results Respondents feared that artificial intelligence would replace employees (mean score 3.11 of 4) and had a general lack of knowledge regarding artificial intelligence. In addition, most respondents were unaware of the advantages of, and the most common challenges to, artificial intelligence applications in the health sector, indicating a need for training. The results also showed that technicians were the most frequently impacted by artificial intelligence applications, owing to the nature of their jobs, which do not require much direct human interaction. Conclusions The Saudi health care sector presents an advantageous market potential that should be attractive to researchers and developers of artificial intelligence solutions.


Author(s):  
Ladly Patel ◽  
Kumar Abhishek Gaurav

In today's world, a huge amount of data is available, and this data is analyzed to extract information and then used to train machine learning algorithms. Machine learning is a subfield of artificial intelligence in which machines are trained on data and then predict results. It is used in healthcare, image processing, marketing, and many other domains. The aim of machine learning is to reduce the programmer's workload of writing complex code and to decrease human interaction with systems: the machine learns from past data and then predicts the desired output. This chapter describes machine learning in brief, covering different machine learning algorithms with examples, and introduces machine learning frameworks such as TensorFlow and Keras. The limitations of machine learning and various applications of machine learning are discussed. The chapter also describes how to identify features in machine learning data.
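As a concrete illustration of the train-then-predict workflow the chapter describes, here is a minimal Keras sketch; the synthetic data, feature construction, and model layout are assumptions for demonstration, not taken from the chapter.

```python
# Minimal Keras sketch: train a small classifier on synthetic "past data",
# then predict on unseen input. Data and architecture are illustrative only.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # two made-up features
y = (X[:, 0] + X[:, 1] > 0).astype("float32")  # made-up binary label

model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)  # the "training" step

# The trained model now predicts the desired output for new, unseen data.
print(model.predict(np.array([[1.0, 1.0]]), verbose=0))
```

The same pattern scales to the healthcare, image-processing, and marketing applications the chapter mentions: only the data and the network architecture change.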


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 235
Author(s):  
Paulo Garcia ◽  
Francine Darroch ◽  
Leah West ◽  
Lauren Brooks-Cleator

The use of technological solutions to address the production of goods and the offering of services is ubiquitous. Health and social issues, however, have only slowly been permeated by technological solutions. Whilst several advances have been made in health in recent years, the adoption of technology to combat social problems has lagged behind. In this paper, we explore Big Data-driven Artificial Intelligence (AI) applied to social systems, i.e., social computing: the concept of artificial intelligence as an enabler of novel social solutions. Through a critical analysis of the literature, we elaborate on the social and human interaction aspects of technology that must be in place for such enabling to occur, and we address the limitations of the current state of the art in this regard. We review the cultural, political, and other societal impacts of social computing, its impact on vulnerable groups, and the ethically-aligned design of social computing systems. We show that this is not merely an engineering problem, but rather the intersection of engineering with health sciences, social sciences, psychology, policy, and law. We then illustrate the concept of ethically-designed social computing with a use case from our ongoing research, in which social computing is used to support safety and security in home-sharing settings, in an attempt to simultaneously combat youth homelessness and address loneliness in seniors; we identify the risks and potential rewards of such a social computing application.


2020 ◽  
Vol 6 (2) ◽  
Author(s):  
Sarah Myers West

Computer scientists, and artificial intelligence researchers in particular, have a predisposition for adopting precise, fixed definitions to serve as classifiers (Agre, 1997; Broussard, 2018). But classification is an enactment of power; it orders human interaction in ways that produce advantage or suffering (Bowker & Star, 1999). In so doing, it obscures the messiness of human life, masking the work of the people involved in training machine learning systems, and hiding the uneven distribution of its impacts on communities (Taylor, 2018; Gray, 2019; Roberts, 2019). Feminist scholars, and particularly feminist scholars of color, have made powerful critiques of the ways in which artificial intelligence systems formalize, classify, and amplify historical forms of discrimination and act to reify and amplify existing forms of social inequality (Eubanks, 2017; Benjamin, 2019; Noble, 2018). In response, the machine learning community has begun to address claims of algorithmic bias under the rubric of fairness, accountability, and transparency. But in doing so, it has largely dealt with these issues in familiar terms, using statistical methods aimed at achieving parity and deploying fairness ‘toolkits’. Yet actually existing inequality is reflected and amplified in algorithmic systems in ways that exceed the capacity of statistical methods alone. This article outlines a feminist critique of extant methods of dealing with algorithmic discrimination. I outline the ways in which gender discrimination and erasure are built into the field of AI at a foundational level; the product of a community that largely represents a small, privileged, and male segment of the global population (Author, 2019). In so doing, I illustrate how a situated mode of inquiry enables us to more closely examine a feedback loop between discriminatory workplaces and discriminatory systems.


2021 ◽  
Vol 12 ◽  
Author(s):  
Supraja Sankaran ◽  
Chao Zhang ◽  
Henk Aarts ◽  
Panos Markopoulos

Applications using Artificial Intelligence (AI) have become commonplace and embedded in our daily lives. Much of our communication has transitioned from human–human interaction to human–technology or technology-mediated interaction. As control is handed over to technology, which streamlines choices and decision-making in different contexts, people are increasingly concerned about a potential threat to their autonomy. In this paper, we explore the perception of autonomy when interacting with AI-based applications in everyday contexts, using a design-fiction-based survey with 328 participants. We probed whether providing users with explanations of “why” an application made certain choices or decisions influenced their perception of autonomy or their reactance regarding the interaction with the applications. We also looked at changes in perception when users are aware of AI's presence in an application. In the social media context, people perceived greater reactance and a lower sense of autonomy, perhaps owing to the personal and identity-sensitive nature of the application context. In the navigation context, providing explanations of “why” enhanced users' perception of autonomy and reduced reactance, since it informed their subsequent actions based on the recommendation. We discuss our findings and their implications for the future development of everyday AI applications that respect human autonomy.


2021 ◽  
pp. 355-368
Author(s):  
Witold Wyporek

This article presents an overview of jurisprudence cases concerning issues connected with artificial intelligence technology. The judgments selected for the study address concerns associated with the forthcoming technological world: for example, bot software that automates human interaction in various online activities; the use of AI to estimate car repair costs by model; AI in forensic medical radiology, fingerprint scanning, and security enhancement through facial biometric recognition; AI in automated graphics and game design applications; and AI used to filter social networks to identify content inciting terrorism. The main purpose of the study is to identify and assess the need to regulate artificial intelligence technology under a standardized policy, as well as to assess the level of threat that the data analysis functions of AI technology pose to privacy, in the context of the presented jurisprudence.


2021 ◽  
Vol 14 (2) ◽  
pp. 295-306
Author(s):  
Tetiana Sovhyra

The article systematizes and analyzes the existing experience of organizing the creative process in robotic theater. The author explores the phenomenon of robotic theater and the possibilities for artificial intelligence technology to function in the stage space. The article provides a comparative analysis of human and mechanized interaction in the stage space. The methodological basis of the research is a combination of several methods: analytical, for the review of historical and fictional literature; theoretical-conceptual, for analyzing the conceptual and terminological apparatus of the research and identifying the specifics of introducing artificial intelligence technology into the creative process; and comparative-typological, to compare the peculiarities of the functioning of mechanized “actors” with the acting skills of human performers. The article draws on the concepts of threat perception and the uncanny valley to study the audience's perception of a robot actor. The author examines the process of human interaction with a robotic body: from initial interest and interaction to the moment of rejection of the robot by a person (the audience).

