Synthetic Agents as Full-fledged Teammates

Author(s):  
Christopher W. Myers

An important goal of training systems research is the ability to train teams to criterion while simultaneously minimizing training resources. One promising approach is to develop synthetic agents that act as full-fledged members of a team. Five experts will highlight successes, failures, and continuing challenges associated with the development, validation, and deployment of synthetic agents as full-fledged teammates. The panel will provide an intimate look “under the hood” of synthetic agents, describe what each has found useful for developing a synthetic teammate that “plays well with others,” and discuss the key roadblocks that must be overcome for the further inclusion of synthetic teammates within human training systems. The lessons learned from these panelists will be of value to those interested in cognitive engineering and human performance modeling.

Author(s):  
Diane Kuhl Mitchell
Charneta Samms

For at least a decade, researchers at the Army Research Laboratory (ARL) have predicted mental workload using human performance modeling (HPM) tools, primarily IMPRINT. During this time, their projects have matured from simple models of human behavior to complex analyses of the interaction between system design and human behavior. Through this maturation, the researchers learned: 1) to develop a modeling question that incorporates all aspects of workload, 2) to determine when workload is most likely to affect performance, 3) to build multiple models to represent experimental conditions, 4) to connect performance predictions to an overall mission or system capability, and 5) to present results clearly and concisely. By applying the techniques distilled from these lessons, the researchers have influenced major Army programs with their workload predictions. Specifically, they have changed design requirements for future concept Army vehicles, substantiated manpower requirements for fielded Army vehicles, and made Soldier workload the number one item during the preliminary design review of a major Army future concept vehicle program. The techniques the ARL researchers developed for their IMPRINT projects are applicable to other HPM tools, and can help students and researchers who face similar problems in their own human performance modeling projects.
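To make the workload-prediction idea concrete, the following is a toy sketch of the kind of channel-based workload accounting that task-network tools in the IMPRINT tradition perform. It is not the IMPRINT algorithm: the task names, demand values, and overload threshold are all invented for illustration.

```python
# Toy VACP-style workload sketch (illustrative only; not the IMPRINT algorithm).
# Each task declares demands on Visual, Auditory, Cognitive, and Psychomotor
# channels; concurrent tasks' demands are summed per channel, and any channel
# exceeding an assumed threshold flags a potential overload interval.
from dataclasses import dataclass

CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")
OVERLOAD_THRESHOLD = 7.0   # assumption for this sketch

@dataclass
class Task:
    name: str
    start: float           # seconds
    end: float
    demand: dict           # channel -> demand value (assumed 0-7 scale)

def workload_at(tasks, t):
    """Summed per-channel demand of all tasks active at time t."""
    load = {c: 0.0 for c in CHANNELS}
    for task in tasks:
        if task.start <= t < task.end:
            for c in CHANNELS:
                load[c] += task.demand.get(c, 0.0)
    return load

def overload_intervals(tasks, step=1.0):
    """Scan the timeline and report times where any channel exceeds threshold."""
    if not tasks:
        return []
    horizon = max(t.end for t in tasks)
    hits = []
    t = 0.0
    while t < horizon:
        load = workload_at(tasks, t)
        if any(v > OVERLOAD_THRESHOLD for v in load.values()):
            hits.append((t, load))
        t += step
    return hits

# Invented task timeline for a hypothetical crew station.
tasks = [
    Task("monitor display", 0, 10, {"visual": 5.0, "cognitive": 1.0}),
    Task("radio call",      4, 8,  {"auditory": 4.3, "cognitive": 5.3}),
    Task("steer vehicle",   0, 10, {"visual": 4.0, "psychomotor": 2.6}),
]
for t, load in overload_intervals(tasks):
    print(f"t={t:.0f}s overload: {load}")
```

In this toy timeline the visual channel is overloaded whenever monitoring and steering overlap, which is the kind of design-relevant finding (e.g. "offload the visual channel") that the ARL researchers describe connecting to mission capability.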


Author(s):  
David C. Foyle
Becky L. Hooey
Michael D. Byrne
Kevin M. Corker
Stephen Deutsch
...

Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPMs) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety-related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances, and future directions and challenges for human performance modeling in aviation.


Author(s):  
Holly S. Bautsch
Michael D. McNeese
S. Narayanan

All too often, human-systems integration is not addressed until the final stages of system development. Because of time, cost, and schedule constraints, it is typically too late in the acquisition process to make adaptations that address cognitive engineering and user-centered performance. Traditionally, designers have lacked methods and tools that comprehensively integrate these concerns at the concept exploration stage of design decision-making. Human-systems integration may currently include cognitive engineering or human performance modeling, but rarely combines these methods to comprehensively establish human design requirements. This paper assesses the value of both cognitive engineering and human performance modeling by evaluating pilot-system dynamics in an advanced mission. One model is informed by traditional task analysis, while the other uses cognitive task analysis. An experiment is reported that compares model outcomes against a “benchmark” of pilot-in-the-loop data. The results assess model similarities and differences, and the discussion evaluates how human performance models can enhance cognitive engineering in design decision-making.
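The model-versus-benchmark comparison described above can be sketched as a simple error computation. This is a hypothetical illustration, not the paper's actual analysis: the task-time numbers and the RMSE metric are assumptions made for the example.

```python
# Hypothetical sketch of comparing two model variants against pilot-in-the-loop
# "benchmark" data; all numbers are invented for illustration.
import math

def rmse(predicted, observed):
    """Root-mean-square error between model predictions and benchmark data."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

benchmark = [12.1, 8.4, 15.0, 9.7]             # observed task times (s), invented
task_analysis_model = [11.0, 9.9, 13.2, 9.0]   # model informed by task analysis
cognitive_ta_model = [12.5, 8.1, 14.6, 10.1]   # model informed by cognitive task analysis

for name, preds in [("task analysis", task_analysis_model),
                    ("cognitive task analysis", cognitive_ta_model)]:
    print(f"{name}: RMSE = {rmse(preds, benchmark):.2f} s")
```

A lower RMSE against the pilot-in-the-loop benchmark would indicate the better-calibrated model; any real comparison would, of course, use the experiment's measured data rather than these placeholder values.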


Author(s):  
Wendy J. Reece
Harold S. Blackman

One of the unsolved problems in human factors is the difficulty of sharing lessons learned and data collected by human performance analysts across domains. Several concerns are common to human factors, psychology, and industry analysts of performance and safety: the need to incorporate lessons learned into design, to carefully consider the implementation of new designs and automation, and to reduce human performance-based contributions to risk. Despite these shared concerns, several roadblocks stand in the way of widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible databases exist (Gertman & Blackman, 1994; Kirwan, 1997). There is a need to bring together analysts and analytic methodologies to create a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.
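One way to picture the proposed multi-domain data store is as a generic, source-anonymous record format. The sketch below is purely illustrative: every field name is an assumption, and the hashing step is just one possible way to meet the abstract's anonymity requirement.

```python
# Hypothetical sketch of a generic, source-anonymous performance record that
# could live in a shared multi-domain data store; all field names are assumptions.
import hashlib
from dataclasses import dataclass, asdict

def anonymize(site_name: str) -> str:
    """Replace the contributing organization's name with a stable one-way hash,
    so records can be grouped by source without revealing who the source is."""
    return hashlib.sha256(site_name.encode()).hexdigest()[:12]

@dataclass
class PerformanceRecord:
    domain: str       # e.g. "aviation", "nuclear", "ground vehicles"
    task_type: str    # generic task category, not a site-specific label
    error_type: str   # taxonomy category (omission, commission, ...)
    outcome: str      # "recovered", "near miss", "incident"
    source_id: str    # hashed contributor identity

record = PerformanceRecord(
    domain="aviation",
    task_type="checklist execution",
    error_type="omission",
    outcome="recovered",
    source_id=anonymize("Example Operations Center"),
)
print(asdict(record))
```

Keeping fields generic (task category and error taxonomy rather than site-specific labels) is what would make records comparable across industries, while the hashed `source_id` preserves the anonymity the authors call for.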

