Some Problems in the Measurement of Human Performance in Man–Machine Systems

Author(s):  
Alan D. Swain

Quantification of human performance in man–machine systems is receiving more and more attention in human factors work. Obstacles to such quantification include: (1) complexity and subjectivity of available quantification methods, (2) grossness of assumptions behind these methods, and (3) resistance of some psychologists. Research is needed (1) to develop an improved human performance data bank, (2) to develop improved models and methods, and (3) to validate quantification data, models and methods. Some research is being done in these areas.

Author(s):  
Michael E. Watson ◽  
Christina F. Rusnock ◽  
Michael E. Miller ◽  
John M. Colombi

Humans perform critical functions in nearly every system, making them vital to consider during system development. Human Systems Integration (HSI) would ideally permit the human’s impact on system performance to be effectively accounted for during the systems engineering (SE) process, but effective processes are often not applied, especially in the early design phases. Failure to properly account for human capabilities and limitations during system design may lead to unreasonable expectations of the human. The result is a system design that makes unrealistic assumptions about the human, leading to an overestimation of the human’s performance and thus the system’s performance. This research proposes a method of integrating HSI with SE that allows human factors engineers to apply the Systems Modeling Language (SysML) and human performance simulation to describe and communicate human and system performance. Using these models, systems engineers can more fully understand the system’s performance and make design decisions that account for the human.

A scenario is applied to illustrate the method, in which a system developer seeks to redesign an example system, Vigilant Spirit, by incorporating system automation to improve overall system performance. The example begins with a task analysis based on physical observation and analysis of human subjects’ data from 12 participants employing Vigilant Spirit; this analysis is depicted in SysML Activity and Sequence Diagrams. A human-in-the-loop experiment is used to study the performance and workload effects of humans applying Vigilant Spirit to conduct simulated remotely piloted aircraft surveillance and tracking missions. The results of the task analysis and the human performance data gathered from the experiment are used to build a human performance model in the Improved Performance Research Integration Tool (IMPRINT). IMPRINT allows the analyst to represent a mission in terms of functions and tasks performed by the system and the human, and then to run a discrete-event simulation of the system and human accomplishing the mission to observe the effects of defined variables on performance and workload. The model was validated against performance data from the human-subjects experiment.

In the scenario, six scan algorithms, which varied in scan accuracy and speed, were simulated. These algorithms represented different potential system trades, as factors such as technology choices and hardware architectures could influence algorithm accuracy and speed. These automation trades were incorporated into the system’s block definition diagram (BDD), requirements diagram, and parametric SysML diagrams. These diagrams were modeled from a systems engineer’s perspective; therefore, they originally placed less emphasis on the human. The BDD portrayed the structural aspect of Vigilant Spirit, including the operator, automation, and system software. The requirements diagram levied a minimum system-level performance requirement. The parametric diagram further defined the performance and specification requirements, along with the automation’s scan settings, through the use of constraints. It was unclear from studying the SysML diagrams which automation setting would produce the best results, or whether any could meet the performance requirement. The existing system models were insufficient by themselves to evaluate these trades; thus, IMPRINT was used to perform a trade study to determine the effects of each automation option on overall system performance.
The results of the trade study revealed that all six automation conditions significantly improved performance scores relative to the baseline, but only two significantly improved workload. Once the trade study identified the preferred alternative, the results were integrated into the existing system diagrams: the originally system-focused SysML diagrams were updated to reflect the results of the trade analysis. The result is a set of integrated diagrams that accounts for both the system and the human, which may then be used to better inform system design. By using human performance and workload modeling tools such as IMPRINT to perform tradeoff analyses, human factors engineers can obtain data about the human subsystem early in system design. These data may then be integrated into the existing SysML diagrams applied by systems engineers. In so doing, additional insights into the whole system can be gained that would not be possible if human factors and systems engineers worked independently. Thus, the human is incorporated into the system’s design and total system performance may be predicted, achieving a successful HSI process.
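As a rough illustration of the trade-study idea described above, the sketch below runs a toy Monte Carlo simulation of an operator backed up by an automated scan algorithm and compares three hypothetical accuracy/speed settings. It is not IMPRINT and not the Vigilant Spirit task network; the backup logic, the parameter values, and the scoring rule are all invented for illustration.

```python
# Toy trade study: compare hypothetical automation scan settings by simulating
# a mission many times and scoring the fraction of targets handled in time.
# All numbers below are illustrative assumptions, not data from the study.
import random

def simulate_mission(scan_accuracy, scan_time_s, n_targets=20, mission_s=300,
                     human_detect_p=0.7, human_task_s=8.0, seed=None):
    """Return the fraction of targets successfully handled within the mission time."""
    rng = random.Random(seed)
    t = 0.0
    handled = 0
    for _ in range(n_targets):
        t += scan_time_s                      # automation scans for the next target
        detected = rng.random() < scan_accuracy
        if not detected:                      # the operator backs up the automation
            t += human_task_s
            detected = rng.random() < human_detect_p
        if detected and t <= mission_s:
            handled += 1
    return handled / n_targets

# Hypothetical automation options trading scan accuracy against scan time
options = {"fast, less accurate": (0.80, 5.0),
           "balanced":            (0.90, 10.0),
           "slow, more accurate": (0.97, 20.0)}
for name, (acc, scan_t) in options.items():
    scores = [simulate_mission(acc, scan_t, seed=i) for i in range(500)]
    print(f"{name:22s} mean performance = {sum(scores) / len(scores):.2f}")
```

Even in this simplified form, the simulation shows why the diagrams alone cannot settle the trade: the slowest, most accurate setting can run out of mission time, so the preferred alternative depends on how accuracy and speed interact over the whole mission.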


2017 ◽  
Vol 12 (1) ◽  
pp. 42-49 ◽  
Author(s):  
Greg A. Jamieson ◽  
Gyrd Skraaning

This paper responds to Kaber’s reflections on the empirical grounding and design utility of the levels-of-automation (LOA) framework. We discuss the suitability of the existing human performance data for supporting design decisions in complex work environments. We question why human factors design guidance seems wedded to a model of questionable predictive value. We challenge the belief that LOA frameworks offer useful input to the design and operation of highly automated systems. Finally, we seek to expand the design space for human–automation interaction beyond the familiar human factors constructs. Taken together, our positions paint LOA frameworks as abstractions suffering a crisis of confidence that Kaber’s remedies cannot restore.


1986 ◽  
Vol 30 (8) ◽  
pp. 771-775 ◽  
Author(s):  
Gary A. Klein ◽  
Christopher P. Brezovic

The types of human perception and performance information that training-device designers need when making design decisions were studied. A total of 50 experienced designers participated. For a subset of 39 of these designers, the interviews focused on critical design decisions in which human perception and performance data were needed. The utility of the information sources used in each decision was assessed; the existing technical literature database proved of little value in the designers' problem solving. The data indicated that systematic decision-making strategies were used in only a minority of cases. Instead, designers relied heavily on informal experiments and analogous cases for guidance in resolving design questions. The implication is that human factors specialists can have a stronger influence on design by identifying analogous cases and participating in prototype studies than by citing basic research findings.


Author(s):  
Benjamin A. Clegg ◽  
Jeffrey G. Morrison ◽  
Noelle L. Brown ◽  
Karen M. Feigh ◽  
Harvey S. Smallman ◽  
...  

The emergence of Human Factors as a discipline is often traced to pioneering efforts tackling military issues in World War II. Rapid technological advances raised fundamental questions about human performance. Approaches, solutions, and advances in the science soon spread beyond their original military contexts. Current and emerging technologies, as well as new challenges for human-machine systems, mean that Human Factors remains central to military effectiveness while producing outcomes with broader potential impact. This panel discussion will examine an array of contributions to the Office of Naval Research program on Command Decision Making. The session will explore methods to understand and enhance decision making through: (1) addressing gaps that demand further foundational knowledge to produce empirical generalizations, models, and theories as a basis for future guidelines, principles, specifications, and doctrine for Navy Command Decision Making; and (2) applying existing knowledge within specific contexts to address current and future real-world Navy decision-making challenges.


1982 ◽  
Vol 26 (8) ◽  
pp. 722-726
Author(s):  
David Meister

The major points of this paper are: (1) Human Factors (HF) data have no value unless used; (2) the major use of HF data should be quantitative prediction of human performance; (3) predictive methods exist but the data banks to support them do not; (4) HF research should be directed by data bank needs, rather than by researcher idiosyncrasy.


Author(s):  
Dane A. Morey ◽  
Jesse M. Marquisee ◽  
Ryan C. Gifford ◽  
Morgan C. Fitzgerald ◽  
Michael F. Rayo

With all of the research and investment dedicated to artificial intelligence and other automation technologies, there is a paucity of evaluation methods for how these technologies integrate into effective joint human-machine teams. Current evaluation methods, which were largely designed to measure performance on discrete representative tasks, provide little information about how the system will perform when operating outside the bounds of the evaluation. We are exploring a method of generating Extensibility Plots, which predict the ability of the human-machine system to respond to classes of challenges at intensities both within and outside of what was tested. In this paper we test and explore the method using performance data collected from a healthcare setting in which a machine and nurse jointly detect signs of patient decompensation. We explore the validity and usefulness of these curves for predicting the graceful extensibility of the system.
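As a loose sketch of the underlying idea, and not the authors' Extensibility Plot method itself, the snippet below fits a simple trend to hypothetical performance data over a tested range of challenge intensities and extrapolates it to untested intensities. The data points, the quadratic trend, and the intensity scale are all assumed for illustration.

```python
# Sketch: extrapolate a performance-vs-challenge-intensity curve beyond the
# tested range. Data and model form are illustrative assumptions only.
import numpy as np

# Hypothetical joint human-machine detection rates at the challenge intensities
# that were actually tested (e.g., number of concurrently deteriorating patients).
tested_intensity = np.array([1.0, 2.0, 3.0, 4.0])
detection_rate   = np.array([0.98, 0.95, 0.88, 0.76])

# Fit a simple quadratic trend to the tested region...
trend = np.poly1d(np.polyfit(tested_intensity, detection_rate, deg=2))

# ...then extrapolate to intensities beyond what was tested.
untested = np.array([5.0, 6.0, 7.0])
predicted = np.clip(trend(untested), 0.0, 1.0)
for x, y in zip(untested, predicted):
    print(f"challenge intensity {x:.0f}: predicted detection rate ~ {y:.2f}")
```

The shape of the extrapolated curve, rather than any single tested point, is what conveys how gracefully the joint system might extend to challenges it has not yet faced.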


Author(s):  
Shane T. Mueller ◽  
Lamia Alam ◽  
Gregory J. Funke ◽  
Anne Linja ◽  
Tauseef Ibne Mamun ◽  
...  

In many human performance tasks, researchers assess performance by measuring both accuracy and response time. A number of theoretical and practical approaches have been proposed to obtain a single performance value that combines these measures, with varying degrees of success. In this report, we examine data from a common paradigm used in applied human factors assessment: a go/no-go vigilance task (Smith et al., 2019). We examined whether 12 different measures of performance were sensitive to the vigilance decrement induced by the design, and also examined how the different measures were correlated. Results suggest that most combined measures were slight improvements over accuracy or response time alone, with the most sensitive and representative result coming from the Linear Ballistic Accumulator model. Practical lessons for applying these measures are discussed.
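For readers unfamiliar with such combined measures, the sketch below computes three commonly used examples: the inverse efficiency score, the rate-correct score, and the balanced integration score. These are standard formulations from the literature, not necessarily the exact set of twelve measures examined in the report, and the per-participant numbers are invented.

```python
# Three common ways to combine accuracy and response time into one score.
# Example inputs are hypothetical per-participant summaries.
import numpy as np

def inverse_efficiency(mean_correct_rt, prop_correct):
    """IES: mean correct-trial RT divided by proportion correct (lower is better)."""
    return mean_correct_rt / prop_correct

def rate_correct_score(n_correct, total_rt):
    """RCS: correct responses per second of total responding time (higher is better)."""
    return n_correct / total_rt

def balanced_integration(prop_correct, mean_rt):
    """BIS: standardized accuracy minus standardized RT, across participants (higher is better)."""
    pc = np.asarray(prop_correct, dtype=float)
    rt = np.asarray(mean_rt, dtype=float)
    def z(x):
        return (x - x.mean()) / x.std(ddof=1)
    return z(pc) - z(rt)

# Hypothetical per-participant summaries from a go/no-go vigilance block
prop_correct = [0.95, 0.88, 0.91]
mean_rt      = [0.52, 0.61, 0.47]    # seconds, correct trials only
n_correct    = [190, 176, 182]
total_rt     = [110.0, 123.5, 98.7]  # seconds summed over all responses

print([round(inverse_efficiency(rt, pc), 3) for rt, pc in zip(mean_rt, prop_correct)])
print([round(rate_correct_score(nc, t), 3) for nc, t in zip(n_correct, total_rt)])
print(np.round(balanced_integration(prop_correct, mean_rt), 3))
```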


1980 ◽  
Vol 24 (1) ◽  
pp. 606-607
Author(s):  
Ben B. Morgan

Vigilance is one of the most thoroughly researched areas of human performance. Volumes have been written concerning vigilance performance in both laboratory and real-world settings, and there is a clear trend in the literature toward an increasing emphasis on the study of operational task behavior under environmental conditions that are common to real world jobs. Although a great deal of this research has been designed to test various aspects of the many theories of vigilance, there is a general belief that vigilance research is relevant and applicable to the performances required in real-world monitoring and inspection tasks. Indeed, many of the reported studies are justified on the basis of their apparent relevance to vigilance requirements in modern man-machine systems, industrial inspection tasks, and military jobs. There is a growing body of literature, however, which suggests that many vigilance studies are of limited applicability to operational task performance. For example, Kibler (1965) has argued that technological changes have altered job performance requirements to the extent that laboratory vigilance studies are no longer applicable to real-world jobs. Many others have simply been unable to reproduce the typical “vigilance decrement” in field situations. This has led Teichner (1974) to conclude that “the decremental function itself is more presumed than established.”


Data in Brief ◽  
2017 ◽  
Vol 15 ◽  
pp. 213-215 ◽  
Author(s):  
Mashrura Musharraf ◽  
Jennifer Smith ◽  
Faisal Khan ◽  
Brian Veitch ◽  
Scott MacKinnon
