Computer Agents
Recently Published Documents


TOTAL DOCUMENTS: 43 (five years: 5)
H-INDEX: 8 (five years: 1)

2021 ◽  
Vol 16 ◽  
pp. 941-971
Author(s):  
Norina Gasteiger ◽  
Kate Loveys ◽  
Mikaela Law ◽  
Elizabeth Broadbent

Organizacija ◽  
2021 ◽  
Vol 54 (2) ◽  
pp. 162-177
Author(s):  
Anton Ivaschenko ◽  
Alfiya R. Diyazitdinova ◽  
Tatiyana Nikiforova

Abstract
Background and Purpose: The growing role of Artificial Intelligence in modern digital enterprises leads to a considerable reduction of personnel and the reorientation of remaining staff to new responsibilities. In many areas, however, such as services and support, completely eliminating human staff remains impossible. We propose to study the organisational problem of finding the optimal proportion of computer agents and human actors in a mixed collaborative environment.
Methods: Using semantic and statistical analysis, we developed an original model of cooperative interaction between computer agents and human actors, together with an optimization method that is novel in considering the focus of the executors when calculating the compliance indicators.
Results: The problem was studied using the example of service-desk automation. Capturing the semantics of the problem domain in an ontology provides the logic for better distribution and automation of tasks.
Conclusion: In a modern digital enterprise, a rational balance between computer agents and human actors exists and can be estimated; it is a significant indicator of enterprise performance. In general, human actors are preferable for handling unpredictable events in real time, while agents are better at modelling and simulation.
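The abstract does not spell out the optimization itself. As a rough sketch of the idea only, the Python toy below searches for the agent/human proportion that maximizes an overall score; the predictability scores, the routing rule, and the scoring function are all hypothetical assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch: find the share of tasks routed to computer agents
# that maximizes an overall "compliance" score. All numbers and the scoring
# rule are illustrative assumptions, not the paper's actual method.
import random

random.seed(0)

# Each task gets a predictability score in [0, 1]: low = unpredictable,
# real-time events (suited to humans); high = routine, modellable work
# (suited to agents).
tasks = [random.random() for _ in range(200)]

def compliance(agent_share):
    """Score a policy that routes the most predictable tasks to agents."""
    ranked = sorted(tasks, reverse=True)          # most predictable first
    cut = int(agent_share * len(ranked))
    agent_tasks, human_tasks = ranked[:cut], ranked[cut:]
    # Agents score well on predictable tasks, humans on unpredictable ones.
    agent_score = sum(agent_tasks)
    human_score = sum(1.0 - p for p in human_tasks)
    return (agent_score + human_score) / len(ranked)

# Brute-force search over candidate agent proportions.
best = max((compliance(s / 100), s / 100) for s in range(101))
print(f"best compliance {best[0]:.3f} at agent share {best[1]:.2f}")
```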


Author(s):  
William H Sharp ◽  
Marc M. Sebrechts

Computer agents are frequently anthropomorphized, given appearances and responses similar to humans'. Research has demonstrated that users tend to apply social norms and expectations to such agents and interact with them in much the same way as they would with another human. Perceived expertise has been shown to affect trust in human-human relationships, but the literature on how it influences trust in computer agents is limited. The current study investigated the effect of a computer agent's perceived level of expertise and the reliability of its recommendations on subjective (rated) and objective (compliance-based) trust during a pattern recognition task. The reliability of agent recommendations had a strong effect on both subjective and objective trust. Expert agents started with higher subjective trust but showed less trust repair. Agent expertise had little impact on the resiliency or repair of objective trust.
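To make the reliability/expertise distinction concrete, here is a minimal, hypothetical simulation: trust is a running estimate nudged toward each recommendation's outcome, and the "expert" agent simply starts higher. The update rule and all parameter values are assumptions for illustration, not the study's measures or model.

```python
# Illustrative-only sketch (not the study's model): trust as a running
# estimate, updated after each recommendation turns out right or wrong.
import random

random.seed(1)

def simulate_trust(initial_trust, reliability, n_trials=50, lr=0.1):
    """Nudge trust toward each trial's outcome (exponential moving average)."""
    trust, history = initial_trust, []
    for _ in range(n_trials):
        correct = random.random() < reliability   # was the recommendation right?
        outcome = 1.0 if correct else 0.0
        trust += lr * (outcome - trust)           # move trust toward outcome
        history.append(trust)
    return history

# Hypothetical parameters: an "expert" agent starts with higher subjective
# trust, but both converge toward the same reliability-driven level.
novice = simulate_trust(initial_trust=0.5, reliability=0.6)
expert = simulate_trust(initial_trust=0.8, reliability=0.6)
print(f"final trust: novice {novice[-1]:.2f}, expert {expert[-1]:.2f}")
```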


2020 ◽  
Vol 104 ◽  
pp. 105624 ◽  
Author(s):  
Katharina Herborn ◽  
Matthias Stadler ◽  
Maida Mustafić ◽  
Samuel Greiff

2019 ◽  
Vol 15 (1) ◽  
pp. 33-45 ◽  
Author(s):  
Aleksandra Swiderska ◽  
Eva G. Krumhuber ◽  
Arvid Kappas

This article describes how studies of decision-making suggest clear differences in behavioral responses to humans versus computers. The objective here was to investigate decision-making in an economic game played only with computer partners. In Experiment 1, participants played the ultimatum game with computer agents and regular computers while their physiological responses were recorded. In Experiment 2, an identical setup was used, but the ethnicity of the computer agents was manipulated. As expected, almost all equitable monetary splits offered by the computer were accepted, and acceptance rates gradually decreased as the splits became less fair. Although this behavioral pattern implied a reaction to the computer's violation of the fairness norm, no evidence was found of corresponding emotional involvement on the participants' part. The findings contribute to the body of research on human-computer interaction and suggest that the social effects of computers can be attenuated.
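The declining-acceptance pattern the abstract reports can be illustrated with a toy logistic acceptance curve; the functional form and the threshold and steepness values below are assumptions for illustration, not parameters fitted to the study's data.

```python
# Toy model of ultimatum-game responses: acceptance is near-certain for
# fair splits and drops off as the responder's share shrinks. Parameters
# are illustrative assumptions only.
import math

def p_accept(offer_share, threshold=0.3, steepness=20.0):
    """Logistic acceptance curve over the responder's share of the pot."""
    return 1.0 / (1.0 + math.exp(-steepness * (offer_share - threshold)))

for share in (0.5, 0.4, 0.3, 0.2, 0.1):
    print(f"responder share {share:.1f} -> P(accept) = {p_accept(share):.2f}")
```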


2018 ◽  
Author(s):  
Nicholas Hertz ◽  
Eva Wiese

As nonhuman agents become integrated into the workforce, the question becomes whether humans are willing to consider their advice, and to what extent advice-seeking depends on perceived agent-task fit. To examine this, participants performed social and analytical tasks and received advice from human, robot, and computer agents under two conditions: in the Agent First condition, participants first chose advisors and were then told which task to perform; in the Task First condition, they were first told the task and then chose advisors. In the Agent First condition, we expected participants to prefer human to nonhuman advisors and to subsequently trust their advice more if assigned the social rather than the analytical task. In the Task First condition, we expected advisor choices to be guided by stereotypical assumptions about the agents' expertise for the tasks, accompanied by higher trust in their suggestions. The findings indicate that in the Agent First condition the human was chosen significantly more often than the machines, whereas in the Task First condition advisor choices were calibrated to perceived agent-task fit. Trust was higher in the social task, but varied only with the human partner.


2018 ◽  
Author(s):  
Renato F. L. Azevedo ◽  
Daniel G. Morrow ◽  
Kuangxiao Gu ◽  
Thomas S. Huang ◽  
Mark Hasegawa-Johnson ◽  
...  

2017 ◽  
Vol 76 ◽  
pp. 607-616 ◽  
Author(s):  
Arthur C. Graesser ◽  
Zhiqiang Cai ◽  
Brent Morgan ◽  
Lijia Wang
