agent teams
Recently Published Documents

Total documents: 119 (five years: 20)
H-index: 14 (five years: 1)

2022 ◽  
Vol 6 (GROUP) ◽  
pp. 1-29
Author(s):  
Beau G. Schelble ◽  
Christopher Flathmann ◽  
Nathan J. McNeese ◽  
Guo Freeman ◽  
Rohit Mallick

An emerging research agenda in Computer-Supported Cooperative Work focuses on human-agent teaming and the roles and effects of AI agents in modern teamwork. In particular, one understudied key question centers on the construct of team cognition within human-agent teams. This study explores the unique nature of team dynamics in human-agent teams compared to human-human teams and the impact of team composition on perceived team cognition, team performance, and trust. In a mixed-methods study, teams in three composition conditions (all-human, human-human-agent, human-agent-agent) completed the team simulation NeoCITIES along with shared mental model, trust, and perception measures. Results show that human-agent teams resemble human-only teams in the iterative development of team cognition and in the importance of communication for accelerating that development; they differ in that action-related communication and explicitly shared goals are what benefit the development of team cognition. Additionally, participants trusted agent teammates less when working with only agents and no other humans, perceived less team cognition with agent teammates than with human ones, and showed significantly more inconsistent levels of team mental model similarity than human-only teams. This study contributes to Computer-Supported Cooperative Work in three significant ways: 1) advancing existing research on human-agent teaming by shedding light on the relationship between humans and agents operating in collaborative environments; 2) characterizing team cognition development in human-agent teams; and 3) advancing real-world design recommendations that promote human-centered teaming agents and better integrate the two.


Algorithms ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 327
Author(s):  
Yifeng Zhou ◽  
Kai Di ◽  
Haokun Xing

Principal–assistant agent teams are often employed to solve tasks in multiagent collaboration systems. Assistant agents attached to the principal agents are more flexible in task execution and can help them complete tasks with complex constraints. However, how to employ principal–assistant agent teams to execute time-critical tasks, considering the dependency between agents and the constraints among tasks, remains a challenge. In this paper, we investigate the principal–assistant collaboration problem with deadlines, which is to allocate tasks to suitable principal–assistant teams and construct routes satisfying the temporal constraints. Two cases are considered: single principal–assistant teams and multiple principal–assistant teams. The former is formulated as an arc-based integer linear programming model, for which we develop a hybrid combination algorithm that adapts to larger scales by finding an optimal combination of partial routes generated by heuristic methods. The latter is defined as a path-based integer linear programming model, for which a branch-and-price-based (BP-based) algorithm is proposed that uses the number of assistant-accessible tasks surrounding a task to guide route construction. Experimental results validate that the hybrid combination algorithm and the BP-based algorithm are superior to the benchmarks in terms of both the number of served tasks and the running time.
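For readers unfamiliar with the problem setting, the deadline-feasibility check at the heart of such route construction can be sketched as a brute-force baseline. This is a toy illustration under assumed semantics (a single team starting from a `depot`, symmetric travel times, and a deadline on arrival at each task), not the paper's hybrid combination or BP-based algorithm:

```python
import itertools

def feasible(route, tasks, travel):
    """Check that every task on the route is reached by its deadline."""
    t, pos = 0, 'depot'
    for nxt in route:
        t += travel[frozenset((pos, nxt))]
        if t > tasks[nxt]:          # arrived after the deadline
            return False
        pos = nxt
    return True

def max_served_tasks(tasks, travel):
    """Largest number of tasks a single team can serve on time
    (exhaustive search over routes, longest routes first)."""
    for r in range(len(tasks), 0, -1):
        if any(feasible(p, tasks, travel)
               for p in itertools.permutations(tasks, r)):
            return r
    return 0

# Toy instance: per-task deadlines and symmetric travel times.
tasks = {'a': 3, 'b': 5, 'c': 4}
travel = {frozenset(p): d for p, d in [
    (('depot', 'a'), 2), (('depot', 'b'), 2), (('depot', 'c'), 5),
    (('a', 'b'), 2), (('a', 'c'), 1), (('b', 'c'), 3),
]}
```

Exhaustive enumeration like this is only viable for a handful of tasks; the paper's contribution lies precisely in avoiding it at larger scales via partial-route combination and branch-and-price.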


2021 ◽  
Vol 6 (4) ◽  
pp. 7065-7072
Author(s):  
Haochen Wu ◽  
Amin Ghadami ◽  
Alparslan Emrah Bayrak ◽  
Jonathon M. Smereka ◽  
Bogdan I. Epureanu

Author(s):  
Huao Li ◽  
Keyang Zheng ◽  
Michael Lewis ◽  
Dana Hughes ◽  
Katia Sycara

The ability to make inferences about others' mental states is referred to as having a Theory of Mind (ToM). This ability is the foundation of many human social interactions, such as empathy, teamwork, and communication. As intelligent agents become involved in diverse human-agent teams, they are also expected to be socially intelligent in order to become effective teammates. To provide a feasible baseline for future socially intelligent agents, this paper presents an experimental study of the process of human ToM inference. Human observers' inferences are compared with participants' verbally reported mental states in a simulated search and rescue task. Results show that ToM inference is a challenging task even for experienced human observers.


2021 ◽  
Vol 35 (2) ◽  
Author(s):  
E. S. Kox ◽  
J. H. Kerstholt ◽  
T. F. Hueting ◽  
P. W. de Vries

Abstract: The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement, and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust, and since repairing damaged trust proves more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies from an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. After this, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both, or neither). Our results indicate that expressing regret was crucial for effective trust repair: after trust declined due to the agent's violation, it only recovered significantly when the apology included an expression of regret, and this effect was stronger when an explanation was added. In this context, the intelligent agent was most effective at rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.


Author(s):  
Michael Schneider ◽  
Michael Miller ◽  
David Jacques ◽  
Gilbert Peterson ◽  
Thomas Ford

Teaming permits cognitively complex work to be rapidly executed by multiple entities. As artificial agents (AAs) participate in increasingly complex cognitive work, they hold the promise of moving beyond tools to becoming effective members of human–agent teams. Coordination has been identified as the critical process that enables effective teams and is required to achieve the vision of tightly coupled teams of humans and AAs. This paper characterizes coordination on the axes of types, content, and cost. This characterization is grounded in the human and AA literature and is evaluated to extract design implications for human–agent teams. These design implications are the mechanisms, moderators, and models employed within human–agent teams, which illuminate potential AA design improvements to support coordination.


Author(s):  
Huao Li ◽  
Tianwei Ni ◽  
Siddharth Agrawal ◽  
Fan Jia ◽  
Suhas Raja ◽  
...  

Author(s):  
Geoff Musick ◽  
Divine Maloney ◽  
Chris Flathmann ◽  
Nathan J. McNeese ◽  
Jamiahus Walton

Teacher-agent teams have the potential to increase instructional effectiveness in diverse classrooms. The agent can be trained on previous student assessment data to create a model for assessing student performance and providing instructional recommendations. We propose a conceptual model that outlines how assessment agents can be trained for and used in classrooms to create effective teacher-agent teams. Furthermore, we show how teacher-agent teams can assist in the implementation of differentiated instruction, a strategy that allows teachers to effectively instruct students of diverse backgrounds and understandings. Differentiated instruction is further realized by having an assessment agent focus on grading student work, providing feedback to students, categorizing students, and giving recommendations for instruction, so that teachers can focus on providing individualized or small-group instruction to diverse learners. This model maximizes the strengths of teachers while minimizing the tedious tasks that teachers routinely perform.
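To make the proposed division of labor concrete, the agent's categorize-and-recommend step might be sketched as follows. The score cutoffs, group labels, and recommendation strings are illustrative assumptions for the sketch, not part of the authors' model:

```python
def categorize(score, cutoffs=(60, 85)):
    """Map a numeric score to an instruction group (hypothetical cutoffs)."""
    if score < cutoffs[0]:
        return 'needs support'
    if score < cutoffs[1]:
        return 'on track'
    return 'ready for enrichment'

# Illustrative mapping from group to an instructional recommendation.
RECOMMENDATIONS = {
    'needs support': 'small-group reteaching with the teacher',
    'on track': 'independent practice with agent feedback',
    'ready for enrichment': 'extension tasks graded by the agent',
}

def assess_class(scores):
    """Group students by assessed level so the teacher can differentiate."""
    groups = {}
    for student, score in scores.items():
        groups.setdefault(categorize(score), []).append(student)
    return groups
```

The point of the sketch is the handoff: the agent handles grading and grouping at scale, while the recommendation tells the teacher where their in-person attention is most useful.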


Author(s):  
Christopher Flathmann ◽  
Beau Schelble ◽  
Brock Tubre ◽  
Nathan McNeese ◽  
Paige Rodeghero
