A taxonomy of expert elevator and amusement device inspector knowledge

2021 ◽  
Author(s):  
Andrew Novak

This thesis presents a taxonomy of expert elevator and amusement device inspector knowledge, developed using task analysis and cognitive task analysis. While a literature on quality control inspection exists, very little research has examined safety inspection. A qualitative study captured the knowledge used by elevator and amusement device inspectors. Expert performance in the elevator and amusement device inspection domains was identified, and a taxonomy of expert inspector knowledge was created. The taxonomy is based on a model that distinguishes three types of knowledge: declarative, procedural, and strategic. Further development of this taxonomy, together with continued expert knowledge capture, is expected to improve inspector training and performance and to increase consistency across the inspections performed by all inspectors.
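The three-way knowledge split described in the abstract can be illustrated with a minimal data structure. Only the three category names come from the source; the `InspectorKnowledge` container and the example entries are hypothetical illustrations, not content from the thesis.

```python
from dataclasses import dataclass, field

# Minimal sketch of the three-way knowledge taxonomy from the abstract.
# The knowledge types are from the source; the container and example
# entries are hypothetical.

@dataclass
class InspectorKnowledge:
    declarative: list[str] = field(default_factory=list)  # facts: codes, component specs
    procedural: list[str] = field(default_factory=list)   # how-to: test sequences, steps
    strategic: list[str] = field(default_factory=list)    # when/why: prioritizing, diagnosing

kb = InspectorKnowledge()
kb.declarative.append("Hoistway door interlocks must prevent car movement when open")
kb.procedural.append("Verify the safety-brake test sequence before load testing")
kb.strategic.append("Inspect high-wear components first on older installations")
```

Keeping the three types as separate fields mirrors the taxonomy's claim that each type of knowledge plays a distinct role in expert inspection.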


Author(s):  
Michael J. DeVries ◽  
Sallie E. Gordon

Because an increasing number of systems are being developed to support complex cognitive functioning, task analysis is commonly being augmented with cognitive task analysis, which identifies cognitive processes, knowledge, and mental models relevant to task performance. Cognitive task analysis tends to be lengthy and time-consuming, so designers frequently ask how they might know if it is actually necessary for a specific project. In this paper, we assume that much of the need for cognitive task analysis depends on the inherent “cognitive complexity” of the task. We present a model of cognitive complexity, and show how it was used to develop a computer-based tool for estimating relative cognitive complexity for a set of tasks. The tool, Cog-C, elicits task and subtask hierarchies, then guides the user in making relatively simple estimates on a number of scales. The tool calculates and displays the relative cognitive complexity scores for each task, along with subscores of cognitive complexity for different types of knowledge. Usability and reliability were evaluated in multiple domains, showing that the tool is relatively easy to use, reliable, and well-accepted.
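The abstract's description of Cog-C (elicit a task/subtask hierarchy, collect simple scale ratings, report a relative complexity score per task plus subscores by knowledge type) can be sketched as a small aggregation routine. This is a hedged sketch only: the 1-5 scales, the averaging, and the normalization scheme are assumptions, not the published Cog-C formulas.

```python
# Sketch of a Cog-C-style score aggregation. The flow (ratings per subtask
# -> subscores by knowledge type -> relative score per task) follows the
# abstract; the specific scales and arithmetic are assumptions.

KNOWLEDGE_TYPES = ("declarative", "procedural", "strategic")

def task_subscores(ratings):
    """Average the 1-5 scale ratings for each knowledge type across subtasks."""
    subscores = {}
    for ktype in KNOWLEDGE_TYPES:
        values = [r[ktype] for r in ratings]
        subscores[ktype] = sum(values) / len(values)
    return subscores

def relative_scores(tasks):
    """Normalize each task's total subscore by the highest-scoring task."""
    totals = {name: sum(task_subscores(r).values()) for name, r in tasks.items()}
    top = max(totals.values())
    return {name: total / top for name, total in totals.items()}

# Hypothetical ratings: one dict of 1-5 estimates per subtask.
tasks = {
    "routine inspection": [
        {"declarative": 3, "procedural": 2, "strategic": 1},
        {"declarative": 2, "procedural": 3, "strategic": 2},
    ],
    "incident diagnosis": [
        {"declarative": 4, "procedural": 4, "strategic": 5},
    ],
}

print(relative_scores(tasks))  # the most complex task scores 1.0
```

Reporting scores relative to the highest-scoring task matches the abstract's emphasis on *relative* cognitive complexity: the output ranks tasks against each other rather than claiming an absolute measure.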


Author(s):  
Emilie M. Roth ◽  
Randall J. Mumaw

Cognitive task analysis (CTA) methods have grown out of the need to explicitly consider cognitive processing requirements of complex tasks. A number of approaches to CTA have been developed that vary in goals, the tools they bring to bear, and their data requirements. We present a particular CTA technique that we are utilizing in the design of new person-machine interfaces for first-of-a-kind advanced process control plants. The methodology has its roots in the formal analytic goal-means decomposition method pioneered by Rasmussen (1986). It contrasts with other approaches in that it is intended: (1) for design of first-of-a-kind systems for which there are no close existing analogues, precluding the use of CTA techniques that rely on empirical analysis of expert performance; (2) to define person-machine interface requirements to support operator problem-solving and decision-making in unanticipated situations; and (3) to be a pragmatic, codified, tool that can be used reliably by person-machine interface designers.


Author(s):  
Laura G. Militello ◽  
Robert J. B. Hutton ◽  
Rebecca M. Pliske ◽  
Betsy J. Knight ◽  
Gary Klein ◽  
...  

2001 ◽  
Author(s):  
Richard P. Fahey ◽  
Anna L. Rowe ◽  
Kendra L. Dunlap ◽  
Dan O. deBoom

2000 ◽  
Author(s):  
J. M. C. Schraagen ◽  
N. Graff ◽  
J. Annett ◽  
M. H. Strub ◽  
...  
