Job Analysis and Curriculum Development of Administrative Officials in the Cyber University by DACUM

2008 ◽  
Vol 10 (3) ◽  
pp. 87-116
Author(s):  
Minseok Kang ◽  
Inae Kang
Author(s):  
Sook Hyang Kim ◽  
Kyung Hee Song ◽  
Hyeon Sook Kwun ◽  
Seol Aak Kim ◽  
Jong Hwa Jang ◽  
...  

This study aimed to develop standard items for the Korean Dental Hygienists' Licensing Examination; these items were also earmarked for use in developing the curriculum for dental hygienists and in writing a job description, based on a job analysis using the Developing A Curriculum (DACUM) method. It also aimed to establish the significance and frequency of the task elements that dental hygienists perform. Data were collected by a self-administered mail survey of dental hygienists registered with the Korean Dental Hygienists' Association; in all, 260 responses were analyzed. The work of dental hygienists was divided into 4 categories, 93 tasks, and 494 task elements. On the 4-point significance scale, 281 elements (61%) scored higher than 3.5 and 480 elements (98%) scored higher than 3.0; on the frequency scale, 30 elements (6%) scored higher than 3.5 and 140 elements (29%) scored higher than 3.0. Overall, 130 of the 494 elements (27%) scored higher than 3.0 for both significance and frequency, and those 130 elements should therefore be included as items in the Korean Dental Hygienists' Licensing Examination. The results can also be used for curriculum development and as the basis of a job description for dental hygienists.
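The study's selection rule — keep only task elements scoring above 3.0 on both the significance and frequency scales — can be sketched in a few lines of Python. The element names and scores below are invented for illustration and are not taken from the study's data:

```python
# Hypothetical task-element records: (name, significance, frequency),
# each rated on the study's 4-point scale.
elements = [
    ("scaling", 3.8, 3.4),
    ("patient education", 3.6, 3.1),
    ("radiograph mounting", 3.2, 2.7),
    ("equipment inventory", 2.9, 2.4),
]

# Keep elements that clear 3.0 on BOTH criteria, mirroring the study's
# rule for selecting licensing-examination items.
exam_candidates = [
    name for name, sig, freq in elements if sig > 3.0 and freq > 3.0
]
# Only "scaling" and "patient education" clear both cutoffs here.
```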


Film Studies ◽  
2009 ◽  
No. 42 ◽  
pp. 227-260
Author(s):  
Kim Jin-Mo ◽  
Jaeho Shin ◽  
Jinhwa Lee ◽  
Hyunmi Joo ◽  
Hee-Moon Cho ◽  
...  

2016 ◽  
Vol 21 (6) ◽  
pp. 5-11
Author(s):  
E. Randolph Soo Hoo ◽  
Stephen L. Demeter

Abstract Referring agents may ask independent medical evaluators if the examinee can return to work in either a normal or a restricted capacity; similarly, employers may ask external parties to conduct this type of assessment before a hire or after an injury. Functional capacity evaluations (FCEs) are used to measure agility and strength, but they have limitations and use technical jargon or concepts that can be confusing. This article clarifies key terms and concepts related to FCEs. The basic approach to a job analysis is to collect information about the job using a variety of methods, analyze the data, and summarize the data to determine specific factors required for the job. No single, optimal job analysis or validation method is applicable to every work situation or company, but the Equal Employment Opportunity Commission offers technical standards for each type of validity study. FCEs are a systematic method of measuring an individual's ability to perform various activities, and results are matched to descriptions of specific work-related tasks. Results of physical abilities/agilities tests are reported as “matching” or “not matching” job demands or “pass” or “fail” meeting job criteria. Individuals who fail an employment physical agility test often challenge the results on the basis that the test was poorly conducted, that the test protocol was not reflective of the job, or that levels for successful completion were inappropriate.
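The "matching"/"pass"/"fail" reporting described above can be illustrated with a minimal sketch. The capacity names, units, and demand levels are hypothetical and are not drawn from any FCE protocol or standard:

```python
def fce_result(measured, job_demands):
    """Compare measured physical capacities against job demands.

    Returns "pass" when every measured capacity meets or exceeds the
    corresponding demand, otherwise "fail", along with a per-item
    match/no-match breakdown. Keys and units are illustrative only.
    """
    detail = {
        item: measured.get(item, 0) >= demand
        for item, demand in job_demands.items()
    }
    return ("pass" if all(detail.values()) else "fail"), detail


verdict, detail = fce_result(
    {"lift_kg": 25, "carry_m": 50, "climb_steps": 40},   # measured abilities
    {"lift_kg": 20, "carry_m": 30, "climb_steps": 50},   # job demands
)
# climb_steps falls short of the demand, so the overall verdict is "fail"
# even though the other two capacities match.
```

A per-item breakdown like `detail` matters in practice because, as the article notes, examinees who fail often challenge which specific test levels were set and whether they reflected the job.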


Author(s):  
Geoffrey Howson ◽  
Christine Keitel ◽  
Jeremy Kilpatrick

2002 ◽  
Vol 18 (1) ◽  
pp. 52-62 ◽  
Author(s):  
Olga F. Voskuijl ◽  
Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected level of interrater reliability under specific combinations of moderators, such as rater source, rater experience, and type of job descriptive information. The overall mean of the 91 interrater reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) affected only the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels yielded significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicate that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
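The link between a single-rater reliability coefficient and the number of raters needed to reach .80 follows from the standard Spearman-Brown prophecy formula. Applying it to the coefficients reported in the paper, as below, is an illustration of the formula rather than a calculation reproduced from the paper itself:

```python
import math


def raters_needed(single_rater_r, target_r):
    """Spearman-Brown prophecy: smallest number of parallel raters whose
    averaged ratings reach target_r, given a single-rater reliability.

    Solves target_r = k*r / (1 + (k-1)*r) for k and rounds up.
    """
    k = target_r * (1 - single_rater_r) / (single_rater_r * (1 - target_r))
    return math.ceil(k)


# With the paper's overall mean interrater reliability of .59,
# averaging three raters is enough to reach .80:
raters_needed(0.59, 0.80)  # → 3

# With the much lower task-rating reliability of .29,
# far more raters would be required:
raters_needed(0.29, 0.80)  # → 10
```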

