Baseline Performance Measurement for Human Performance Evaluation

1974, Vol 18 (4), pp. 429-439
Author(s): William A. Spindell, Frederick G. Knirk

The problem of determining baselines for human performance measurement is peculiar neither to people concerned with military system performance nor to those associated with educational systems. It has traditionally been easier to compare performance of, for example, the experimental group to the control group, or system “a” to system “b”, than it has been to determine some base of performance characteristic of a group of people and then to measure the effect of change from there. In education, the question of not only philosophical but very practical consequence is: how do we know when someone is working at his level? Do attempts to standardize presentation methodology and time consider performer variations adequately? In engineering, the human factors specialist is likewise concerned with workload and overload in terms of system performance decrement. If the pilot of a high-performance tactical fighter must perform a precise tracking task, and at the same time navigate and monitor his aircraft systems while subject to intense “g” loadings, and if he fails to do so, the concern is with the increment that resulted in this failure, i.e., which added duty or which increment of psychological or physiological stress was the last straw? Baseline performance measurement is confounded by other problems as well. The largest of these is the tremendous reserve capacity for both continued performance and dramatic performance increase found among humans at all age and ability levels. This is clearly a motivational artifact because, when so motivated, people can program their activities in such a way as to have enormously increased capacities for work or cognition. The overloaded pilot, suddenly faced with a fire-warning indication, in seconds becomes a far more sophisticated analog computer than anything he has on board, rapidly relegating certain tasks to low priority (e.g., navigation or energy management) and others to the highest priority (e.g., fault isolation, logical assessment of spurious indications). The child in the classroom, plodding along at one moment, is, in the next moment, able to take on vast increases in information when his interest is sparked. How can these baselines be measured when they are seemingly made of some super-stretch material? How could capacities be quantified at some level so that one could know whether the addition of some increment would or would not affect system performance, learning, or achievement? Over the years, techniques have developed in response to such engineering questions as: will control system “a” result in a greater workload than system “b”? These were typically performance-based questions, since what was ultimately desired was some statement of how the above would influence mission performance. Similarly, educators have devised systems for measuring learner activity levels, but most dramatically, recent innovations in remote measurement of psychophysiological states may provide some breakthroughs. This paper will trace the development of baseline performance measurement techniques from human factors task-loading studies to those of brain wave and physiological state measurements, and offer several recommendations for further study.

1987, Vol 31 (6), pp. 620-620
Author(s): Edward M. Connelly

Objectives: The objectives of this symposium are to identify fundamental performance measurement problems and to present theory, methods, and application tools for assessing the impact of human performance on system performance. Further, case studies are used to illustrate the methods and tools. Finally, plans by government agencies for the development of computer-based processors implementing the tools are presented. Scientific Importance: Design and analysis of systems involving human operators have been hampered by the lack of performance-based development tools. In order to assess the impact of human performance on system performance, it is first necessary to have a reliable and quantitative means of assessing overall system performance. Second, a means is required for relating human performance to system performance. When these two types of tools are available and are used, systems can be designed to a prescribed performance standard. The papers in this session address fundamental performance measurement issues (including measurement reliability, sensitivity, and discrimination), as well as application methods and procedures.


Author(s): Michael E. Watson, Christina F. Rusnock, Michael E. Miller, John M. Colombi

Humans perform critical functions in nearly every system, making them vital to consider during system development. Human Systems Integration (HSI) would ideally permit the human’s impact on system performance to be effectively accounted for during the systems engineering (SE) process, but effective processes are often not applied, especially in the early design phases. Failure to properly account for human capabilities and limitations during system design may lead to unreasonable expectations of the human. The result is a system design that makes unrealistic assumptions about the human, leading to an overestimation of the human’s performance and thus of the system’s performance. This research proposes a method of integrating HSI with SE that allows human factors engineers to apply the Systems Modeling Language (SysML) and human performance simulation to describe and communicate human and system performance. Using these models, systems engineers can more fully understand the system’s performance to facilitate design decisions that account for the human. A scenario is applied to illustrate the method, in which a system developer seeks to redesign an example system, Vigilant Spirit, by incorporating system automation to improve overall system performance. The example begins by performing a task analysis through physical observation and analysis of human subjects’ data from 12 participants employing Vigilant Spirit. This analysis is depicted in SysML Activity and Sequence Diagrams. A human-in-the-loop experiment is used to study the performance and workload effects of humans applying Vigilant Spirit to conduct simulated remotely piloted aircraft surveillance and tracking missions. The results of the task analysis and the human performance data gathered from the experiment are used to build a human performance model in the Improved Performance Research Integration Tool (IMPRINT). IMPRINT allows the analyst to represent a mission in terms of functions and tasks performed by the system and human, and then run a discrete-event simulation of the system and human accomplishing the mission to observe the effects of defined variables on performance and workload. The model was validated against performance data from the human-subjects experiment. In the scenario, six different scan algorithms, which varied in terms of scan accuracy and speed, were simulated. These algorithms represented different potential system trades, as factors such as various technologies and hardware architectures could influence algorithm accuracy and speed. These automation trades were incorporated into the system’s block definition diagram (BDD), requirements diagram, and parametric SysML diagrams. These diagrams were modeled from a systems engineer’s perspective; therefore, they originally placed less emphasis on the human. The BDD portrayed the structural aspect of Vigilant Spirit, including the operator, automation, and system software. The requirements diagram levied a minimum system-level performance requirement. The parametric diagram further defined the performance and specification requirements, along with the automation’s scan settings, through the use of constraints. It was unclear from studying the SysML diagrams which automation setting would produce the best results, or whether any could meet the performance requirement. Existing system models were insufficient by themselves to evaluate these trades; thus, IMPRINT was used to perform a trade study to determine the effects of each of the automation options on overall system performance.
The results of the trade study revealed that all six automation conditions significantly improved performance scores over the baseline, but only two significantly improved workload. Once the trade study identified the preferred alternative, the results were integrated into the existing system diagrams. The originally system-focused SysML diagrams were updated to reflect the results of the trade analysis. The result is a set of integrated diagrams that accounts for both the system and the human, which may then be used to better inform system design. Using human performance and workload modeling tools such as IMPRINT to perform tradeoff analyses, human factors engineers can obtain data about the human subsystem early in system design. These data may then be integrated into the existing SysML diagrams applied by systems engineers. In so doing, additional insights into the whole system can be gained that would not be possible if human factors and systems engineers worked independently. Thus, the human is incorporated into the system’s design and the total system performance may be predicted, achieving a successful HSI process.
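As a rough illustration of the kind of trade study described above, the following Python sketch compares a handful of hypothetical scan-automation options against a manual baseline using a simple Monte Carlo mission model. The option names, accuracies, scan times, and scoring rule are all invented for illustration; this is not the authors' IMPRINT model, which additionally captures task networks and operator workload.

```python
"""Illustrative sketch only (not the authors' IMPRINT model): a small Monte
Carlo trade study comparing hypothetical scan-automation options against a
manual baseline. Each option is described by a detection accuracy and a scan
time; the mission score rewards detections and penalizes slower scans."""
import random

# Hypothetical automation options: (name, detection accuracy, seconds per scan)
OPTIONS = [
    ("manual_baseline", 0.70, 12.0),
    ("auto_fast",       0.80, 10.0),
    ("auto_balanced",   0.85, 11.0),
    ("auto_accurate",   0.90, 14.0),
]

TARGETS = 40      # targets appearing during one simulated surveillance mission
TRIALS = 2000     # Monte Carlo replications per option


def mission_score(accuracy: float, scan_time: float) -> float:
    """One simulated mission: detection rate minus a simple time penalty."""
    detected = sum(random.random() < accuracy for _ in range(TARGETS))
    return detected / TARGETS - 0.01 * scan_time


def trade_study() -> None:
    for name, acc, scan_time in OPTIONS:
        scores = [mission_score(acc, scan_time) for _ in range(TRIALS)]
        print(f"{name:16s} mean mission score = {sum(scores) / TRIALS:.3f}")


if __name__ == "__main__":
    random.seed(1)
    trade_study()
```

Ranking the options by mean simulated score mirrors, in miniature, how a discrete-event trade study lets the preferred automation alternative be identified before it is folded back into the system diagrams.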


Author(s): Logan T. Trujillo

The field of human factors studies the interaction between humans and technological systems in order to optimize human–system performance, research that has traditionally been focused on observation and experiment in laboratory and real-world settings. However, a multidisciplinary subfield of human factors called human performance modeling has recently emerged that involves the development and application of mathematical models, computer simulation techniques, and computational data analysis methods to the study of human performance. This chapter provides an introduction to the use of computational modeling, simulation, and analysis techniques in human factors research, with an eye toward how such methods may be used to optimize human performance in extreme settings.


2021, pp. 251604352199026
Author(s): Peter Isherwood, Patrick Waterson

Patient safety, staff morale, and system performance are at the heart of healthcare delivery. Investigation of adverse outcomes is one strategy that enables organisations to learn and improve. Healthcare is now understood as a complex, possibly the most complex, socio-technological system. Despite this, the use of a 20th-century linear investigation model is still recommended for the investigation of adverse outcomes. In this review the authors use data gathered from the investigation of a real-life healthcare near-miss incident and apply three different methodologies to the analysis of this data. They compare both the methodologies themselves and the outputs generated. This illustrates how different methodologies generate different system-level recommendations. The authors conclude that system-based models generate the strongest barriers to improve future performance. Healthcare providers and their regulatory bodies need to embrace system-based methodologies if they are to effectively learn from, and reduce, future adverse outcomes.


Author(s): Shane T. Mueller, Lamia Alam, Gregory J. Funke, Anne Linja, Tauseef Ibne Mamun, ...

In many human performance tasks, researchers assess performance by measuring both accuracy and response time. A number of theoretical and practical approaches have been proposed to obtain a single performance value that combines these measures, with varying degrees of success. In this report, we examine data from a common paradigm used in applied human factors assessment: a go/no-go vigilance task (Smith et al., 2019). We examined whether 12 different measures of performance were sensitive to the vigilance decrement induced by the design, and also examined how the different measures were correlated. Results suggest that most combined measures were slight improvements over accuracy or response time alone, with the most sensitive and representative result coming from the Linear Ballistic Accumulator model. Practical lessons for applying these measures are discussed.
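For readers unfamiliar with combined measures, the sketch below computes two widely used examples, the inverse efficiency score (mean correct response time divided by accuracy) and the rate-correct score (correct responses per second of total response time), on a small set of invented go/no-go trials. These are illustrative instances of the class of measures the paper evaluates, not necessarily among the exact twelve measures compared in the study.

```python
"""Illustrative sketch: two standard ways to fold accuracy and response time
into one score. The trial data below are invented for demonstration."""
from statistics import mean

# Hypothetical go/no-go trials: (response_time_seconds, correct)
trials = [(0.42, True), (0.55, True), (0.61, False), (0.48, True),
          (0.73, True), (0.39, False), (0.50, True), (0.58, True)]

rts_correct = [rt for rt, ok in trials if ok]
accuracy = sum(ok for _, ok in trials) / len(trials)

# Inverse efficiency score: mean correct RT divided by accuracy (lower is better)
ies = mean(rts_correct) / accuracy

# Rate-correct score: correct responses per second of total response time (higher is better)
rcs = sum(ok for _, ok in trials) / sum(rt for rt, _ in trials)

print(f"accuracy={accuracy:.2f}  IES={ies:.3f} s  RCS={rcs:.2f} correct/s")
```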


2013, Vol 631-632, pp. 1106-1110
Author(s): Wei Zhao, Qiang Wang, Sheng Li Song

In the chassis dynamometer control system for tyred machinery, a fuzzy PID controller was used to adjust the exciting current of a DC dynamometer in order to change the resistance load torque, so that the roller load required to simulate running resistance from the road surface was satisfied. A fuzzy PID algorithm was designed to control the resistance loads, and simulation showed that the system performance was improved. The software of the detection-line measurement and control system was written in VB, so that the technical parameters of the machinery chassis could be detected automatically.
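The Python sketch below illustrates the fuzzy PID idea in a heavily simplified form, assuming a hypothetical first-order relationship between exciting current and load torque: a PID loop drives the torque toward a target value, with the proportional gain scaled by a crude stand-in for a fuzzy rule base. It is not the authors' controller or plant model, and all gains and constants are invented.

```python
"""Simplified sketch of fuzzy-gain-scheduled PID control (illustrative only):
the exciting current is the manipulated variable, the resistance load torque
is the controlled variable, and a crude rule scales the proportional gain
with the size of the torque error, mimicking a fuzzy rule base."""

def fuzzy_gain(error: float) -> float:
    """Stand-in for a fuzzy rule base: larger error -> more aggressive gain."""
    e = abs(error)
    if e > 50.0:
        return 1.5
    if e > 10.0:
        return 1.0
    return 0.6

def simulate(target_torque: float = 120.0, steps: int = 200, dt: float = 0.01) -> float:
    kp, ki, kd = 0.8, 2.0, 0.02   # invented PID gains
    current = 0.0                  # exciting current (A), manipulated variable
    torque = 0.0                   # resistance load torque (N*m), controlled variable
    integral, prev_err = 0.0, target_torque
    for _ in range(steps):
        err = target_torque - torque
        integral += err * dt
        deriv = (err - prev_err) / dt
        current = fuzzy_gain(err) * kp * err + ki * integral + kd * deriv
        # Hypothetical first-order plant: torque follows exciting current with a lag
        torque += (3.0 * current - torque) * dt / 0.2
        prev_err = err
    return torque

print(f"final load torque ≈ {simulate():.1f} N*m")
```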


Author(s): Salman Ahmed, Mihir Sunil Gawand, Lukman Irshad, H. Onan Demirel

Computational human factors tools are often not fully integrated during the early phases of product design. Conventional ergonomic practices typically require physical prototypes and human subjects, which are costly in terms of finances and time. Ergonomics evaluations executed on physical prototypes have the limitation of increasing overall rework, as more iterations are required to incorporate design changes related to human factors that are found late in the design stage, which increases the overall cost of product development. This paper proposes a design methodology based on a Digital Human Modeling (DHM) approach to inform designers about the ergonomic adequacy of products during the early stages of the design process. This proactive ergonomics approach has the potential to allow designers to identify significant design variables that affect human performance before full-scale prototypes are built. The design method utilizes a surrogate model that represents human-product interaction. Optimizing the surrogate model provides design concepts that optimize human performance. The efficacy of the proposed design method is demonstrated by a cockpit design study.
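A minimal sketch of the surrogate-model idea is shown below, using invented data: a quadratic surrogate is fit to hypothetical DHM evaluations relating a single design variable (here a notional control-reach distance) to a performance score, and the surrogate is then searched for the best setting. The actual method presumably involves more design variables and richer human-product interaction models.

```python
"""Illustrative sketch of surrogate-model optimization (hypothetical data,
not the cockpit study itself): fit a quadratic surrogate to simulated DHM
evaluations, then search it for the best design setting."""
import numpy as np

# Hypothetical DHM evaluations: (reach distance in cm, human-performance score)
reach = np.array([40.0, 45.0, 50.0, 55.0, 60.0, 65.0])
score = np.array([0.72, 0.81, 0.88, 0.86, 0.78, 0.66])

# Quadratic surrogate: score ≈ a*reach^2 + b*reach + c (least-squares fit)
a, b, c = np.polyfit(reach, score, deg=2)

# Optimize the surrogate over the feasible design range
candidates = np.linspace(reach.min(), reach.max(), 501)
predicted = a * candidates**2 + b * candidates + c
best = candidates[np.argmax(predicted)]

print(f"surrogate optimum: reach ≈ {best:.1f} cm, predicted score ≈ {predicted.max():.2f}")
```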

