Are there cracks in our foundation? An integrative review of diversity issues in job analysis.

Author(s): Nicole Strah, Deborah E. Rupp

2016, Vol 21 (6), pp. 5-11
Author(s): E. Randolph Soo Hoo, Stephen L. Demeter

Abstract: Referring agents may ask independent medical evaluators whether an examinee can return to work in either a normal or a restricted capacity; similarly, employers may ask external parties to conduct this type of assessment before a hire or after an injury. Functional capacity evaluations (FCEs) are used to measure agility and strength, but they have limitations, and their technical jargon and concepts can be confusing. This article clarifies key terms and concepts related to FCEs. The basic approach to a job analysis is to collect information about the job using a variety of methods, analyze the data, and summarize it to identify the specific factors the job requires. No single job analysis or validation method is optimal for every work situation or company, but the Equal Employment Opportunity Commission offers technical standards for each type of validity study. FCEs are a systematic method of measuring an individual's ability to perform various activities, and the results are matched to descriptions of specific work-related tasks. Results of physical abilities/agilities tests are reported either as “matching” or “not matching” the job demands, or as “pass” or “fail” against the job criteria. Individuals who fail an employment physical agility test often challenge the results on the grounds that the test was poorly conducted, that the test protocol did not reflect the job, or that the levels set for successful completion were inappropriate.
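To make the “matching/not matching” and “pass/fail” reporting concrete, here is a minimal sketch that compares hypothetical measured capacities against hypothetical job demands. The task names, units, and thresholds are illustrative assumptions, not values taken from the article or from any standard FCE protocol.

```python
# Minimal sketch of matching FCE measurements to job demands (illustrative values only).
# Each job demand is compared against the examinee's measured capacity; the overall
# result is "pass" only if every demand is met.

from dataclasses import dataclass

@dataclass
class Demand:
    task: str        # work-related task description
    required: float  # minimum capacity the job requires
    unit: str        # e.g., "kg", "minutes"

def evaluate_fce(demands: list[Demand], measured: dict[str, float]) -> dict:
    """Compare measured capacities to job demands and report match/pass status."""
    per_task = {}
    for d in demands:
        capacity = measured.get(d.task, 0.0)
        per_task[d.task] = "matching" if capacity >= d.required else "not matching"
    overall = "pass" if all(v == "matching" for v in per_task.values()) else "fail"
    return {"per_task": per_task, "overall": overall}

# Hypothetical example: two physical demands for a warehouse job.
demands = [
    Demand("floor-to-waist lift", 20.0, "kg"),
    Demand("sustained standing", 60.0, "minutes"),
]
measured = {"floor-to-waist lift": 25.0, "sustained standing": 45.0}
print(evaluate_fce(demands, measured))
# {'per_task': {'floor-to-waist lift': 'matching', 'sustained standing': 'not matching'}, 'overall': 'fail'}
```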


2002, Vol 18 (1), pp. 52-62
Author(s): Olga F. Voskuijl, Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability within specific combinations of moderators, such as rater source, rater experience, and type of job descriptive information. The overall mean interrater reliability across the 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) affected only the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels yielded significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicate that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
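The claim that two to four nonprofessional raters are needed to reach a reliability of .80 can be illustrated with the Spearman-Brown prophecy formula, which gives the reliability of the mean of k raters from a single-rater reliability. Whether the authors used exactly this formula is an assumption, and the single-rater values below are illustrative picks from the range reported in the summary, not figures taken from the paper.

```python
import math

def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k raters, given single-rater reliability r_single."""
    return (k * r_single) / (1 + (k - 1) * r_single)

def raters_needed(r_single: float, r_target: float = 0.80) -> int:
    """Smallest number of raters whose averaged ratings reach the target reliability."""
    k = (r_target * (1 - r_single)) / (r_single * (1 - r_target))
    return math.ceil(k)

# Illustrative single-rater reliabilities in the range reported in the summary.
for r1 in (0.50, 0.59, 0.62):
    k = raters_needed(r1)
    print(f"r1 = {r1:.2f}: need {k} raters (reliability of their mean ≈ {spearman_brown(r1, k):.2f})")
# r1 = 0.50: need 4 raters (≈ 0.80)
# r1 = 0.59: need 3 raters (≈ 0.81)
# r1 = 0.62: need 3 raters (≈ 0.83)
```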


1990, Vol 35 (10), p. 1008
No authorship indicated

1995
Author(s): Dierdre J. Knapp, Teresa L. Russell, John P. Campbell

2012
Author(s): Julie Taylor, Anne Stafford, Diane Jerwood

2012
Author(s): Jeffrey R. Labrador, Kathleen Frye, Michael A. Campion, Jeff Weekley
