Human-Machine Communication
Latest Publications


TOTAL DOCUMENTS: 25 (FIVE YEARS: 25)

H-INDEX: 2 (FIVE YEARS: 2)

Published by the Nicholson School of Communication, UCF

ISSN: 2638-6038, 2638-602X

2021, Vol. 3, pp. 27-46
Author(s): Sonja Utz, Lara Wolfers, Anja Göritz

In times of the COVID-19 pandemic, difficult decisions such as the distribution of ventilators must be made. For many of these decisions, humans could team up with algorithms; however, people often prefer human decision-makers. We examined the role of situational (morality of the scenario; perspective) and individual factors (need for leadership; conventionalism) for algorithm preference in a preregistered online experiment with German adults (n = 1,127). As expected, algorithm preference was lowest in the most moral-laden scenario. The effect of perspective (i.e., decision-makers vs. decision targets) was only significant in the most moral scenario. Need for leadership predicted a stronger algorithm preference, whereas conventionalism was related to weaker algorithm preference. Exploratory analyses revealed that attitudes and knowledge also mattered, stressing the importance of individual factors.


2021, Vol. 3, pp. 65-82
Author(s): Do Kyun David Kim, Gary Kreps, Rukhsana Ahmed

As humanoid robot technology, anthropomorphized through artificial intelligence (AI), has rapidly advanced to produce human-resembling robots that can communicate, interact, and work like humans, active interaction with Humanoid AI Robots (HAIRs) has come to seem imminent. Alongside this technological development, the COVID-19 pandemic heightened interest in health care robots, whose substantial advantages offset critical human vulnerabilities to the highly infectious virus. Recognizing this potential, this article explores feasible ways to implement HAIRs in health care and patient services and offers recommendations for strategically developing and diffusing autonomous HAIRs in health care facilities. In discussing the integration of HAIRs into health care, the article also identifies important ethical concerns that must be addressed before HAIRs are deployed in health care services.


2021, Vol. 3, pp. 47-64
Author(s): Kerk Kee, Prasad Calyam, Hariharan Regunath

The COVID-19 pandemic is an unprecedented global emergency. Clinicians and medical researchers have suddenly been thrust into a situation where they must keep up with the latest and best evidence to make decisions at work, save lives, and develop solutions for COVID-19 treatment and prevention. A key challenge, however, is the overwhelming number of online publications of widely varying quality. We describe a science gateway platform designed to help users filter this literature efficiently (with speed) and effectively (with quality) to find answers to their scientific questions. The platform is equipped with a chatbot that helps users overcome the infodemic, low usability, and a steep learning curve. We argue that human-machine communication via a chatbot plays a critical role in enabling the diffusion of innovations.


2021, Vol. 3, pp. 11-26
Author(s): Miles Coleman

The rampant misinformation amid the COVID-19 pandemic demonstrates an obvious need for persuasion. This article draws on the fields of digital rhetoric and rhetoric of science, technology, and medicine to explore the persuasive threats and opportunities machine communicators pose to public health. As a specific case, Alexa and the machine’s performative similarities to the Oracle at Delphi are tracked alongside the voice-based assistant’s further resonances with the discourses of expert systems to develop an account of the machine’s rhetorical energies. From here, machine communicators are discussed as optimal deliverers of inoculations against misinformation, because their performances are attended by rhetorical energies that can enliven such persuasion.


2021, Vol. 3, pp. 83-89
Author(s): James Dearing

For billions of people, the threat of the novel coronavirus SARS-CoV-2 and its variants has precipitated the adoption of new behaviors. Pandemics are radical events that disrupt the gradual course of societal change, raising the possibility that some rapidly adopted innovations will remain in use beyond the event itself and thus diffuse more rapidly than they would have otherwise. Human-machine communication encompasses a range of technologies with which many of us have quickly become more familiar due to stay-at-home orders, distancing, workplace closures, remote instruction, home-bound entertainment, fear of contracting COVID-19, and boredom. In this commentary I focus on artificial intelligence (AI) agents, and specifically chatbots, in considering the factors that may affect chatbot diffusion: anthropomorphism and expectancy violations, the characteristics of chatbots, business imperatives, millennials and younger users, and, from the user perspective, uses and gratifications.


2021, Vol. 2, pp. 105-120
Author(s): Jindong Liu

This study critically investigates the construction of gender in Azuma Hikari, a Japanese holographic anime-style social robot. Applying a mixed method that merges visual semiotics with the heterogeneous engineering approach from software studies, it analyzes the signs in Azuma Hikari’s anthropomorphized image and the interactivity enabled by the multimedia interface. The analysis revealed a stereotyped representation of a Japanese “ideal bride” who should be cute, sexy, comforting, good at housework, and subordinate to a “Master”-like husband. Moreover, the device interface disciplines users to play the role of “wage earner” in the simulated marriage and thereby reconstructs gender relations in reality. The findings suggest that the humanization of objects is often accompanied, in reverse, by the dehumanization and objectification of humans.


2021, Vol. 2, pp. 209-234
Author(s): Andrew Prahl, Lyn Van Swol

This study investigates the effects of task demonstrability and of replacing a human advisor with a machine advisor. Outcome measures include advice utilization (trust), perceptions of advisors, and decision-maker emotions. Participants were randomly assigned to make a series of forecasts dealing with either humanitarian planning (low demonstrability) or management (high demonstrability). Participants received advice from a machine advisor only, from a human advisor only, or from an advisor who was replaced by the other type (human/machine) midway through the experiment. Decision-makers rated human advisors as more expert, more useful, and more similar to themselves. Perception effects were strongest when a human advisor was replaced by a machine. Decision-makers also experienced more negative emotions, lower reciprocity, and faulted their advisor more for mistakes when a human was replaced by a machine.


2021, Vol. 2, pp. 191-208
Author(s): Cameron Piercy, Angela Gist-Mackey

This study uses a sample of pharmacists and pharmacy technicians (N = 240) who differ in skill, education, and income to replicate and extend past findings about socioeconomic disparities in perceptions of automation. Specifically, it applies the skills-biased technical change hypothesis, an economic theory holding that low-skill jobs are the most likely to be affected by increased automation (Acemoglu & Restrepo, 2019), to the mental models of pharmacy workers. We formalize the hypothesis that anxiety about automation leads to perceptions that jobs will change in the future and that automation will increase. We also posit that anxiety about overpayment is related to these outcomes. Results largely support the skills-biased hypothesis as a mental model shared by pharmacy workers regardless of position, with few effects for overpayment anxiety.


2021, Vol. 2, pp. 173-190
Author(s): Jacob Johanssen, Xin Wang

Artificial intuition (AI acting intuitively) is one trend in artificial intelligence. This article analyzes how it is discussed in technology journalism on the internet. The journalistic narratives analyzed claim that intuition can make AI more efficient, autonomous, and human. Some commentators also write that intuitive AI could execute tasks better than humans ever could (e.g., in digital games) and could therefore ultimately surpass human intuition. Such views pay too little attention to bias, as well as to the transparency and explainability of AI. We contrast the journalistic narratives with philosophical understandings of intuition and a psychoanalytic view of the human. These perspectives allow for a more complex view that goes beyond tech journalism’s focus on rationality and computational perspectives.


2021, Vol. 2, pp. 57-79
Author(s): Katrin Etzrodt, Sven Engesser

We investigate the nature of doubt regarding voice-based agents by drawing on Piaget’s ontological object–subject classification of “thing” and “person,” its associated equilibration processes, and influential factors of the situation, the user, and the agent. In two online surveys, we asked 853 and 435 participants, ranging from 17 to 65 years of age, to assess Alexa and the Google Assistant. We found that only some people viewed voice-based agents as mere things, whereas the majority classified them as personified things. However, this classification is fragile and depends largely on the imputation of subject-like attributes of agency and mind to the voice-based agents, reinforced by a dyadic usage situation, prior regular interactions, the user’s younger age, and an introverted personality. We discuss these results in a broader context.

