Development and Evaluation of a Human Performance Assessment Battery
1990
Author(s): John Schrot

1994, Vol 3 (2), pp. 145-157
Author(s): Donald R. Lampton, Bruce W. Knerr, Stephen L. Goldberg, James P. Bliss, J. Michael Moshell, ...

The Virtual Environment Performance Assessment Battery (VEPAB) is a set of tasks developed to support research on training applications of virtual environment (VE) technology. VEPAB measures human performance on vision, locomotion, tracking, object manipulation, and reaction time tasks performed in three-dimensional, interactive VEs. It can be used to provide a general orientation for interacting in VEs and to determine both entry-level performance and skill acquisition of users. In addition, VEPAB allows comparison of task performance, side effects and aftereffects, and subjective reactions across different VE systems. By providing benchmarks of human performance, VEPAB can promote continuity in training research involving different technologies, separate research facilities, and dissimilar subject populations. This paper describes the development of VEPAB and summarizes the results of two experiments, one to test the sensitivity of the tasks to differences between input control devices and the other to examine practice effects.
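To make the battery's benchmarking role concrete, here is a minimal Python sketch of how per-task measurements of the VEPAB kind might be logged and summarized for entry-level performance and practice effects. The record fields, task names, device names, and numbers are illustrative assumptions, not taken from VEPAB's actual specification.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    """One VEPAB-style trial; field names are illustrative assumptions."""
    task: str      # e.g. "tracking", "locomotion", "object manipulation"
    block: int     # practice block number (block 1 = entry level)
    time_s: float  # completion time in seconds

def practice_curve(trials: list[Trial], task: str) -> dict[int, float]:
    """Mean completion time per practice block for one task.

    A falling curve across blocks indicates skill acquisition; block 1
    serves as the entry-level benchmark for cross-system comparison.
    """
    by_block: dict[int, list[float]] = {}
    for t in trials:
        if t.task == task:
            by_block.setdefault(t.block, []).append(t.time_s)
    return {b: mean(v) for b, v in sorted(by_block.items())}

# Comparing two input control devices on the same task (made-up numbers).
joystick  = [Trial("tracking", 1, 41.2), Trial("tracking", 1, 38.9),
             Trial("tracking", 2, 33.5), Trial("tracking", 2, 31.0)]
spaceball = [Trial("tracking", 1, 52.8), Trial("tracking", 2, 47.0)]
print(practice_curve(joystick,  "tracking"))  # {1: 40.05, 2: 32.25}
print(practice_curve(spaceball, "tracking"))  # {1: 52.8, 2: 47.0}
```

The same per-block summaries, computed for each device or VE system, are what would support the cross-system and practice-effect comparisons the abstract describes.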


1987, Vol 31 (6), pp. 629-633
Author(s): Edward M. Connelly

Selecting a measure of effectiveness (MOE), a mathematical function, and using it to evaluate performance demonstrations (or exercises, or experimental trials) without first testing the measure typically produces a disagreement between two ways of assigning effectiveness scores to each demonstration: scores assigned directly by the investigator, and scores assigned by the MOE the investigator selected. The disagreement often persists even when only the rank orderings of the two sets of scored demonstrations are compared. A disagreement between the two methods means that at least one of them, possibly both, is incorrect. Direct assignment of effectiveness scores to each performance demonstration therefore constitutes a test of the MOE. In this paper, we argue that this test is typically not conducted and that, if it were, existing untested MOEs would likely fail it. We also argue that the investigator should not select an MOE, but should instead have an authority (a subject matter expert, SME) score the performance demonstrations and then synthesize an MOE that passes the test. A method for synthesizing the MOE is presented.
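The argument lends itself to a short illustration. Below is a minimal Python sketch, using entirely hypothetical data and an entirely hypothetical candidate MOE, of the test the paper calls for: comparing the rank ordering produced by a candidate MOE against SME-assigned scores, then fitting a function to the SME scores instead. The least-squares fit merely stands in for the paper's own synthesis method, which is not reproduced here.

```python
# Sketch of testing, then synthesizing, an MOE. The data, the candidate
# MOE, and the least-squares synthesis are all illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr

# Performance measures for six demonstrations: completion time (s), errors.
X = np.array([[8.0, 5], [18.0, 0], [12.0, 3], [9.0, 4], [16.0, 1], [11.0, 2]])
# Effectiveness scores assigned directly by the authority (SME).
sme = np.array([0.15, 0.95, 0.45, 0.25, 0.85, 0.65])

# A candidate MOE selected a priori by the investigator; it rewards speed only.
def candidate_moe(x: np.ndarray) -> np.ndarray:
    return 1.0 / x[:, 0]

# The test: does the candidate MOE even rank the demonstrations as the SME does?
rho, _ = spearmanr(candidate_moe(X), sme)
print(f"candidate MOE rank agreement with SME: rho = {rho:+.2f}")
# Here rho is strongly negative, so the candidate fails the test: this SME
# rewards accuracy, which the time-only MOE ignores entirely.

# Synthesis: fit an MOE to the SME scores rather than assuming its form.
A = np.column_stack([np.ones(len(X)), X])       # features: [1, time, errors]
coef, *_ = np.linalg.lstsq(A, sme, rcond=None)  # ordinary least squares
rho2, _ = spearmanr(A @ coef, sme)
print(f"synthesized MOE rank agreement with SME: rho = {rho2:+.2f}")
```

The point of the exercise is the order of operations: the SME scores come first and define the criterion, and any MOE, whether selected or synthesized, is accepted only if it reproduces them.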

