MP34-05 INTRODUCING OPERATIVE SKILLS TESTING IN UROLOGY BOARD EXAMINATIONS

2020 ◽  
Vol 203 ◽  
pp. e504-e505
Author(s):  
Ashraf Mosharafa* ◽  
Mohamed Abdelrassoul ◽  
Hany Elfayoumy ◽  
Mohamed Elsheikh ◽  
Ismail Saad ◽  
...  
Author(s):  
Joo Hee Kim ◽  
Ju-Yeun Lee ◽  
Young Sook Lee ◽  
Chul-Soon Yong ◽  
Nayoung Han ◽  
...  

Purpose: The survey aimed to obtain opinions about the proposed implementation of a pharmacy skills assessment in the Korean pharmacist licensure examination (KPLE).

Methods: A 16-question survey was distributed electronically to 2,738 people, including 570 pharmacy professors from 35 pharmacy schools, 550 preceptors from 865 practice sites, and 1,618 students who graduated in 2015. The survey solicited responses concerning the adequacy of the current KPLE in assessing pharmacy knowledge, skills, and attitudes; the deficiencies of the current exam in assessing the professional competencies necessary for pharmacists; plans for introducing skills tests into the KPLE; and the subject areas of pharmacy practice to be covered.

Results: A total of 466 surveys were returned. According to 42%–48% of respondents, the current exam is not adequate for assessing skills and attitudes. Sixty percent felt that a skills test is necessary to assess qualifications and professional competencies, and almost two-thirds stated that such testing should be implemented within 5 years. More than 60% agreed that candidates should be graduates and that written and skills test scores could be combined for pass–fail decisions. About 70% of respondents felt that the test should be less than 2 hours in duration, and over half thought that the assessor should be a pharmacy faculty member with at least 5 years of clinical experience. Up to 70% stated that activities related to patient care were appropriate and practical for the scope of a skills test.

Conclusion: The majority of respondents supported adding a pharmacy skills assessment to the KPLE.


2018 ◽  
Vol 36 (2) ◽  
pp. 265-287 ◽  
Author(s):  
Susy Macqueen ◽  
Ute Knoch ◽  
Gillian Wigglesworth ◽  
Rachel Nordlinger ◽  
Ruth Singer ◽  
...  

All educational testing is intended to have consequences, which are assumed to be beneficial, but tests may also have unintended, negative consequences (Messick, 1989). The issue is particularly important in the case of large-scale standardized tests, such as Australia's National Assessment Program – Literacy and Numeracy (NAPLAN), whose intended benefits are increased accountability and improved educational outcomes. NAPLAN's purpose is comparable to that of other state and national 'core skills' testing programs, which evaluate cross-sections of populations in order to compare results between population sub-groupings. Such comparisons underpin 'accountability' in the era of population-level testing. This study investigates the impact of NAPLAN testing on one population grouping that is prominent in NAPLAN results comparisons and public reporting: children in remote Indigenous communities. A series of interviews with principals and teachers documents informants' first-hand experiences of the use and effects of NAPLAN in schools. In the views of most participants, the language and content of the test instruments, the nature of the test engagement, and the test washback have negative impacts on students and staff, with little benefit in terms of the usefulness of the test data. The primary issue is that meaningful participation in the tests depends critically on proficiency in Standard Australian English (SAE) as a first language. This study contributes to the broader discussion of how reform-targeted standardized testing for national populations affects sub-groups who are not treated equitably by the test instrument or by reporting for accountability purposes. It highlights a conflict between consequential validity and the notion of accountability that drives reform-targeted testing.


1988 ◽  
Vol 31 ◽  
pp. 116-126
Author(s):  
W. Jochems ◽  
F. Montens

This article presents and discusses empirical findings concerning the psychometric quality of multiple-choice cloze tests as tests of general language proficiency, with emphasis on their validity and efficiency. The Dutch proficiency of various groups of foreign speakers was measured both by a series of separate proficiency tests in listening, speaking, reading and writing, and by a series of multiple-choice cloze tests. Scores on the multiple-choice cloze tests correlated significantly with those on each of the proficiency tests. In addition, scores on the multiple-choice cloze tests formed a solid basis for predicting the total scores for listening, speaking, reading and writing taken together. Further, a clear relation was found between levels of language proficiency and subjects' scores on the multiple-choice cloze tests. We conclude that the multiple-choice cloze tests under investigation are high-quality instruments for measuring proficiency in Dutch as a second language. Compared with a four-skills test, a multiple-choice cloze test is also very efficient: administration and scoring take little time, and the test can be given to very large groups of subjects. Because of this combination of quality and efficiency, multiple-choice cloze testing is to be preferred to four-skills testing.


1988 ◽  
Vol 38 (1) ◽  
pp. 186-194
Author(s):  
Joy Rosner ◽  
Jerome Rosner ◽  
Malcolm L. Mazow

1992 ◽  
Vol 18 (3) ◽  
pp. 319-353 ◽  
Author(s):  
Chris Kay ◽  
Malcolm Rosier ◽  
Pinchas Tamir

1970 ◽  
Vol 2 (6) ◽  
pp. 252-255
Author(s):  
CR CLEGG ◽  
A JONES
