The utilization of the Purdue cognitive job analysis methodology

2003 ◽  
Vol 13 (1) ◽  
pp. 59-84 ◽  
Author(s):  
June Wei ◽  
Gavriel Salvendy

2006 ◽  
Vol 27 (4) ◽  
pp. 485-494 ◽  
Author(s):  
Susan M. Jenkins ◽  
Patrick Curtin

1993 ◽  
Vol 22 (4) ◽  
pp. 551-563 ◽  
Author(s):  
William Wooten

Job analytic methodology was used to identify knowledge, skill, and ability (KSA) dimensions of four classes of jobs (secretarial/clerical, managerial/administrative, professional/technical, and service). Each KSA was then classified as either a selection criterion or a training criterion (i.e., critical for the development of selection tests or of training programs). The feasibility of establishing career paths between the secretarial/clerical jobs (source jobs) and the managerial/administrative jobs (target jobs) was evaluated by comparing the selection and training criteria of the source jobs to the critical (important) KSAs of the target jobs. When the critical KSAs for the managerial/administrative positions were rated using job analysis techniques, they correlated significantly with the content identified as part of the secretarial/clerical jobs. Sixty-eight percent of the KSAs identified as important for performance in the managerial/administrative jobs were also identified as important for performance in the secretarial/clerical jobs. Further, 81% of the target jobs' KSAs not found to be source-job selection criteria were found to be source-job training criteria. The implications are that job analysis methodology can be used to identify possible career paths, and that career paths can be established between secretarial/clerical jobs and entry-level administrative/managerial jobs.
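The overlap comparison described in this abstract can be sketched as a simple set operation. The KSA labels below are hypothetical examples, not items from the study's actual inventory; the logic only illustrates the idea of checking what fraction of a target job's critical KSAs also appear among a source job's criteria.

```python
# Hypothetical KSA sets (the study's actual KSA inventory is not reproduced here).
target_critical = {"scheduling", "written communication", "budget tracking",
                   "records management", "negotiation", "staff supervision"}
source_criteria = {"scheduling", "written communication", "budget tracking",
                   "records management", "typing", "phone etiquette"}

# Career-path feasibility indicator: share of target-job critical KSAs
# already covered by the source job's selection/training criteria.
overlap = target_critical & source_criteria
pct_overlap = 100 * len(overlap) / len(target_critical)
print(f"{pct_overlap:.0f}% of target-job critical KSAs also appear in the source job")
```

With these illustrative sets, four of the six target KSAs are covered; the study's reported 68% figure comes from the same kind of comparison over its full KSA inventory.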


2000 ◽  
Vol 44 (30) ◽  
pp. 5-612-5-612
Author(s):  
Nathan P. Rucker ◽  
J. Steven Moore

The Strain Index is a semi-quantitative job analysis methodology, developed on the basis of biomechanics, physiology, and epidemiology, that predicts which jobs place workers at increased risk of developing a musculoskeletal disorder. Thirty jobs were classified based on Strain Index score and on morbidity. Comparing the Strain Index classification against morbidity yielded a sensitivity of 100%, a specificity of 88%, a positive predictive value of 67%, and a negative predictive value of 100%. This suggests that the Strain Index is a valid tool for identifying at-risk jobs in industrial settings.
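The four accuracy statistics above come from a standard 2x2 comparison of predicted versus observed outcomes. The cell counts below are a hypothetical split of the 30 jobs, chosen only because it reproduces the reported percentages; the abstract does not give the actual counts.

```python
# Hypothetical 2x2 confusion matrix over 30 jobs (not the study's actual counts):
# rows = Strain Index prediction, columns = observed morbidity.
tp = 6   # predicted hazardous, morbidity present
fp = 3   # predicted hazardous, no morbidity
fn = 0   # predicted safe, morbidity present
tn = 21  # predicted safe, no morbidity

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%}")
```

Note that sensitivity of 100% requires zero false negatives, which is why the negative predictive value is also 100% in any split consistent with the reported figures.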


2016 ◽  
Vol 21 (6) ◽  
pp. 5-11
Author(s):  
E. Randolph Soo Hoo ◽  
Stephen L. Demeter

Abstract: Referring agents may ask independent medical evaluators whether an examinee can return to work in either a normal or a restricted capacity; similarly, employers may ask external parties to conduct this type of assessment before a hire or after an injury. Functional capacity evaluations (FCEs) are used to measure agility and strength, but they have limitations, and they use technical jargon and concepts that can be confusing. This article clarifies key terms and concepts related to FCEs. The basic approach to a job analysis is to collect information about the job using a variety of methods, analyze the data, and summarize the data to determine the specific factors the job requires. No single, optimal job analysis or validation method is applicable to every work situation or company, but the Equal Employment Opportunity Commission offers technical standards for each type of validity study. FCEs are a systematic method of measuring an individual's ability to perform various activities, and the results are matched to descriptions of specific work-related tasks. Results of physical abilities/agilities tests are reported either as "matching" or "not matching" job demands, or as "pass" or "fail" against job criteria. Individuals who fail an employment physical agility test often challenge the results on the grounds that the test was poorly conducted, that the test protocol did not reflect the job, or that the levels set for successful completion were inappropriate.
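The "matching / not matching" and "pass / fail" reporting described above amounts to comparing measured capacities against job-demand thresholds. The demand names, units, and values below are hypothetical illustrations, not taken from any actual FCE protocol.

```python
# Hypothetical job demands (minimum required level) and measured FCE results.
job_demands = {"lift_floor_to_waist_kg": 20, "carry_30m_kg": 15, "grip_strength_kg": 30}
fce_results = {"lift_floor_to_waist_kg": 22, "carry_30m_kg": 12, "grip_strength_kg": 35}

# Per-demand matching: the measured capacity must meet or exceed the demand.
report = {task: "matching" if fce_results[task] >= required else "not matching"
          for task, required in job_demands.items()}

# Overall pass/fail against the job criteria: every demand must be matched.
overall = "pass" if all(v == "matching" for v in report.values()) else "fail"

for task, status in report.items():
    print(f"{task}: {status}")
print(f"overall: {overall}")
```

In this sketch the examinee exceeds two demands but falls short on the carry task, so the overall result is "fail" even though most items match, which is one reason failed results are often challenged on the grounds that individual threshold levels were set inappropriately.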


2002 ◽  
Vol 18 (1) ◽  
pp. 52-62 ◽  
Author(s):  
Olga F. Voskuijl ◽  
Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability for specific combinations of moderators, such as rater source, rater experience, and type of job descriptive information. The overall mean interrater reliability across the 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) affected only the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels yielded significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job-worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicated that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
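The "two to four raters" figure is the kind of result the Spearman-Brown prophecy formula produces when single-rater reliabilities are aggregated. The paper's exact procedure is not stated in the abstract, so the sketch below is only an illustration using the reported overall single-rater mean of .59.

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the mean of k raters, given single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

# With a single-rater mean reliability of .59, project reliability for 1-4 raters.
for k in (1, 2, 3, 4):
    print(f"{k} rater(s): {spearman_brown(0.59, k):.2f}")
```

At r = .59, three raters already project to about .81, consistent with the abstract's statement that two to four nonprofessional raters are needed to reach .80; lower single-rater values (e.g., the .49 reported for attributes) push the required number toward the upper end of that range.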


1990 ◽  
Vol 35 (10) ◽  
pp. 1008-1008
Author(s):  
No authorship indicated
