Invoking Principles of Groupware to Develop and Evaluate Present and Future Human-Agent Teams

Author(s):  
Christopher Flathmann ◽  
Beau Schelble ◽  
Brock Tubre ◽  
Nathan McNeese ◽  
Paige Rodeghero
Author(s):  
Jasper van der Waa ◽  
Jurriaan van Diggelen ◽  
Luciano Cavalcante Siebert ◽  
Mark Neerincx ◽  
Catholijn Jonker

Author(s):  
Huao Li ◽  
Tianwei Ni ◽  
Siddharth Agrawal ◽  
Fan Jia ◽  
Suhas Raja ◽  
...  

Author(s):  
Michael E. Miller ◽  
John M. McGuirl ◽  
Michael F. Schneider ◽  
Thomas C. Ford

2010 ◽  
Vol 25 (5) ◽  
pp. 46-53 ◽  
Author(s):  
Nanja Smets ◽  
Jeffrey M Bradshaw ◽  
Jurriaan van Diggelen ◽  
Catholijn Jonker ◽  
Mark A. Neerincx ◽  
...  

Author(s):  
Huao Li ◽  
Keyang Zheng ◽  
Michael Lewis ◽  
Dana Hughes ◽  
Katia Sycara

The ability to make inferences about others' mental states is referred to as having a Theory of Mind (ToM). This ability is the foundation of many human social interactions such as empathy, teamwork, and communication. As intelligent agents become involved in diverse human-agent teams, they are also expected to be socially intelligent in order to become effective teammates. To provide a feasible baseline for future socially intelligent agents, this paper presents an experimental study of the process of human ToM inference. Human observers' inferences are compared with participants' verbally reported mental states in a simulated search and rescue task. Results show that ToM inference is a challenging task even for experienced human observers.


2022 ◽  
Vol 6 (GROUP) ◽  
pp. 1-29
Author(s):  
Beau G. Schelble ◽  
Christopher Flathmann ◽  
Nathan J. McNeese ◽  
Guo Freeman ◽  
Rohit Mallick

An emerging research agenda in Computer-Supported Cooperative Work focuses on human-agent teaming and AI agents' roles and effects in modern teamwork. In particular, one understudied key question centers on the construct of team cognition within human-agent teams. This study explores the unique nature of team dynamics in human-agent teams compared to human-human teams and the impact of team composition on perceived team cognition, team performance, and trust. A mixed-method approach was used in which teams in three composition conditions (human-human-human, human-human-agent, human-agent-agent) completed the team simulation NeoCITIES along with shared mental model, trust, and perception measures. Results found that human-agent teams are similar to human-only teams in the iterative development of team cognition and the importance of communication to accelerating its development; however, human-agent teams differ in that action-related communication and explicitly shared goals are beneficial to developing team cognition. Additionally, human-agent teams trusted agent teammates less when working with only agents and no other humans, perceived less team cognition with agent teammates than with human ones, and had significantly less consistent team mental model similarity than human-only teams. This study contributes to Computer-Supported Cooperative Work in three significant ways: 1) advancing the existing research on human-agent teaming by shedding light on the relationship between humans and agents operating in collaborative environments; 2) characterizing team cognition development in human-agent teams; and 3) advancing real-world design recommendations that promote human-centered teaming agents and better integrate the two.


2005 ◽  
Author(s):  
Stephen M. Fiore ◽  
Florian Jentsch ◽  
Eduardo Salas ◽  
Neal Finkelstein
