Performance Evaluation Tests for Environmental Research (PETER): Code Substitution Test.

1981 ◽ Author(s): Ross L. Pepper, Robert S. Kennedy, Alvah C. Bittner, Steven F. Wiker

1986 ◽ Vol 63 (2) ◽ pp. 683-708 ◽ Author(s): Alvah C. Bittner, Robert C. Carter, Robert S. Kennedy, Mary M. Harbeson, Michele Krause

The goal of the Performance Evaluation Tests for Environmental Research (PETER) Program was to identify a set of measures of human capabilities for use in the study of environmental and other time-course effects. The 114 measures studied in the PETER Program were evaluated and categorized into four groups based upon task stability and task definition. The Recommended category contained 30 measures that clearly attained total stabilization and had an acceptable level of reliability efficiency. The Acceptable-But-Redundant category contained 15 measures. The 37 measures in the Marginal category, which included an inordinate number of slope and other derived measures, usually had desirable features that were outweighed by faults. The 32 measures in the Unacceptable category showed either differential instability or weak reliability efficiency. In our opinion, the 30 measures in the Recommended category should be given first consideration for environmental research applications. Further, it is recommended that information on preexperimental practice requirements and stabilized reliabilities be used in repeated-measures environmental studies.


1980 ◽ Vol 24 (1) ◽ pp. 320-324 ◽ Author(s): Robert C. Carter, Robert S. Kennedy, Alvah C. Bittner

A battery of Performance Evaluation Tests for Environmental Research (PETER) suitable for use in repeated-measures experiments is being developed at the Naval Biodynamics Laboratory. This paper describes the sources of tasks that have been considered for inclusion in PETER and lists the tests in the source batteries that have and have not yet been considered. The performance content of the tests that have been considered is compared with the content of those that have not. Recommendations are made for selecting additional tests from the source batteries that will not be redundant with tests already considered. This report places PETER in the context of the tests and test batteries that came before it.


1980 ◽ Vol 51 (3_suppl2) ◽ pp. 1023-1031 ◽ Author(s): D. M. Seales, R. S. Kennedy, A. C. Bittner

A paper-and-pencil test of simple arithmetic ability was exceptionally well suited for inclusion in a battery of Performance Evaluation Tests for Environmental Research (PETER). Mean performance stabilized after nine days of baseline testing, and variance was constant throughout the 15 days of baseline testing. “Task definition” was high, and “differential stability” was present from the outset. Subjects apparently came to this test with well-established differential levels of arithmetic ability.
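As an illustration of the stability checks these abstracts refer to, the sketch below computes per-day means, standard deviations, and correlations with the final day for a subjects-by-days score matrix. It is not part of the original report; the sample sizes, learning curve, and data are simulated assumptions.

```python
# Illustrative sketch (simulated data, not the original study's): summarize the
# three baseline-stability checks described in the PETER abstracts -- stabilized
# means, constant variances, and stable between-subject ordering.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_days = 20, 15                           # hypothetical sample sizes
ability = rng.normal(50, 10, size=(n_subjects, 1))    # stable individual differences
practice = 5 * (1 - np.exp(-np.arange(n_days) / 3))   # learning curve that plateaus
scores = ability + practice + rng.normal(0, 3, size=(n_subjects, n_days))

day_means = scores.mean(axis=0)                       # should level off once practice ends
day_sds = scores.std(axis=0, ddof=1)                  # should stay roughly constant
# Correlate each day with the final day as a rough index of differential stability.
r_with_last = [np.corrcoef(scores[:, d], scores[:, -1])[0, 1] for d in range(n_days)]

for d in range(n_days):
    print(f"day {d + 1:2d}  mean={day_means[d]:5.1f}  sd={day_sds[d]:4.1f}  r_last={r_with_last[d]:.2f}")
```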


1983 ◽ Vol 57 (1) ◽ pp. 283-293 ◽ Author(s): M. M. Harbeson, A. C. Bittner, R. S. Kennedy, R. C. Carter, M. Krause

Listed are 90 reports of the Performance Evaluation Tests for Environmental Research (PETER) Program. The program was conducted from 1977 to 1982; its purpose was to develop a test battery for use in repeated-measures investigations of environmental effects on human performance, e.g., vehicle motion, toxic substances, and aging. The battery also has applications to training, selection, and research on equipment design. The PETER Program concentrated on selecting tests which remained stable with repeated measurements, as environmental research usually involves testing before, during, and after exposure. Stability of the means, variances, and intertrial correlations ensures that simple analyses may be applied with minimal complications and without difficulties in attributing effects. Over 80 measures were evaluated: 30% were found suitable for repeated-measures applications, 20% were acceptable for limited use, and 50% could not be recommended. The unsuitability of many tasks brings into question the validity of portions of the literature on environmental effects. The reports describe program rationale, development of statistical methodology, and stable tasks. PETER reports are available from published sources, the authors, or the Naval Biodynamics Laboratory.


1980 ◽ Vol 51 (2) ◽ pp. 655-665 ◽ Author(s): Michael E. McCauley, Robert S. Kennedy, Alvah C. Bittner

A time-estimation task was considered for inclusion in the Performance Evaluation Tests for Environmental Research (PETER) battery. As part of this consideration, the effects of repeated testing on the reliability of time judgments were studied. The method of production was used to estimate eight time intervals. Five trials per day at each interval were administered individually to each of 19 subjects for 15 consecutive workdays. Two scores, constant error and variable error, were reported. The effect of days was not significant for constant error and was moderate for variable error (p < .04). The standard deviations were relatively stable across trials. A pronounced decline in reliability over repeated days of testing was found for both errors. It was concluded that this time-estimation test would be a poor candidate for inclusion in PETER, but further research is warranted because of the potential unique contribution of a time-estimation task to a performance test battery.
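For readers unfamiliar with the two scores, the sketch below shows how constant error and variable error are conventionally computed for a method-of-production trial block. The target interval and the productions are hypothetical values, not data from this study.

```python
# Illustrative sketch (hypothetical values): constant error and variable error
# for one block of a method-of-production time-estimation task.
import numpy as np

rng = np.random.default_rng(1)
target = 10.0                                          # seconds the subject is asked to produce
productions = target + rng.normal(0.8, 1.5, size=5)    # five hypothetical produced intervals

constant_error = productions.mean() - target           # signed bias of the mean production
variable_error = productions.std(ddof=1)               # trial-to-trial variability around the subject's own mean

print(f"constant error = {constant_error:+.2f} s, variable error = {variable_error:.2f} s")
```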


1979 ◽ Vol 23 (1) ◽ pp. 513-517 ◽ Author(s): Michael E. McCauley, Robert S. Kennedy, Alvah C. Bittner

A time-estimation task was considered for inclusion in the Performance Evaluation Tests for Environmental Research (PETER) battery. As part of this consideration, the effects of repeated testing on the reliability of time judgments, using the method of production, were studied. Forty trials per day were administered individually to each of 19 subjects for 15 consecutive weekdays. Descriptive statistics are reported, and the need for knowledge of the reliability coefficient over repeated test administrations in the context of performance testing in exotic environments is discussed.


1980 ◽ Vol 24 (1) ◽ pp. 340-343 ◽ Author(s): Robert C. Carter, Robert S. Kennedy, Alvah C. Bittner, Michele Krause

Item Recognition (Sternberg, 1966) is a task that reflects the operation of human memory. This task was considered as a candidate for use in a battery of Performance Evaluation Tests for Environmental Research (PETER). Environmental research involves comparison of performance in a baseline environment and in a novel environment. It is desirable that scores be stable across occasions in the baseline environment, so that changes due to the novel environment will be clear if they occur. Item-recognition results were similar to those obtained in other investigations, although the traditional item-recognition score (the slope of response time over memory set size) was unreliable across repeated measurements. Response time (RT) for each of the four memory set sizes (1, 2, 3, and 4 items) was stable, from the standpoint of reliability, after the fourth session.
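The traditional Sternberg score mentioned above is the slope of response time against memory set size. A minimal sketch of that computation, using hypothetical mean RTs rather than the study's data:

```python
# Illustrative sketch (hypothetical RTs): the traditional Sternberg item-recognition
# score is the slope of mean response time against memory set size.
import numpy as np

set_sizes = np.array([1, 2, 3, 4])
mean_rt_ms = np.array([430.0, 470.0, 505.0, 545.0])    # hypothetical mean RTs per set size

slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"scanning rate ~ {slope:.0f} ms/item, intercept ~ {intercept:.0f} ms")
# The abstract's point is that this derived slope proved unreliable across sessions,
# whereas the per-set-size RTs themselves stabilized after a few sessions.
```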


1984 ◽ Vol 58 (2) ◽ pp. 567-573 ◽ Author(s): Diane L. Damos, Alvah C. Bittner, Robert S. Kennedy, Mary M. Harbeson, Michele Krause

A critical tracking test was considered for inclusion in the Performance Evaluation Tests for Environmental Research (PETER) battery, which was designed for use in unusual environments. Baseline measures were obtained by testing 18 subjects for 14 consecutive days. Mean performance increased, but standard deviations were constant, over the 14 days. Test-retest reliabilities improved over the first 8 days, after which differential stability was seen. The implications for use of this test in exotic environments are discussed. The critical tracking test is recommended as a good candidate for environmental research when practiced to total stability.
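A rough illustration of the test-retest reliability curve described above: the sketch below simulates a subjects-by-days score matrix (all values are assumptions, not the reported tracking data) and computes adjacent-day correlations, which rise with practice and then level off once differential stability is reached.

```python
# Illustrative sketch (simulated scores): adjacent-day test-retest correlations,
# the kind of curve used to judge when differential stability has been reached.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_days = 18, 14
ability = rng.normal(0, 1, size=(n_subjects, 1))                       # stable individual differences
noise_sd = np.concatenate([np.linspace(1.5, 0.5, 8),                   # noise shrinks over early practice,
                           np.full(n_days - 8, 0.5)])                  # then holds steady
scores = ability + rng.normal(0, 1, size=(n_subjects, n_days)) * noise_sd

adjacent_r = [np.corrcoef(scores[:, d], scores[:, d + 1])[0, 1] for d in range(n_days - 1)]
for d, r in enumerate(adjacent_r, start=1):
    print(f"days {d}-{d + 1}: r = {r:.2f}")                            # rises, then plateaus
```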

