Using the Critical Thinking Assessment Test (CAT) as a Model for Designing Within-Course Assessments

2015 ◽  
Vol 30 (3) ◽  
pp. 38-48
Author(s):  
Ada Haynes ◽  
Elizabeth Lisic ◽  
Kevin Harris ◽  
Katie Leming ◽  
Kyle Shanks ◽  
...  
2014 ◽  
Vol 4 (1) ◽  
pp. 45-61
Author(s):  
Martina Kosturková

The ability to think critically is considered a key competence of the 21st century. Because empirical findings on this topic are absent in Slovakia, one of the goals of this work was to determine the level of critical thinking among students of education. The statistical data were obtained through analysis of the Watson-Glaser critical thinking assessment test. The sample consisted of 116 students of education at the Faculty of Humanities and Natural Sciences of the University of Presov in Presov.


2016 ◽  
Vol 5 (2) ◽  
pp. 60 ◽  
Author(s):  
Sami Basha ◽  
Denise Drane ◽  
Gregory Light

Critical thinking is a key learning outcome for Palestinian students. However, there are no validated critical thinking tests in Arabic. The suitability of the US-developed Critical Thinking Assessment Test (CAT) for use in Palestine was assessed. The test was piloted with university students in English (n=30), and 4 questions were piloted in Arabic (n=48). Students responded favorably. Scores were comparable with US scores. Only two students found the content problematic. One hundred twelve Palestinian faculty reviewed the skills tested by the CAT. There was moderate agreement that they represent critical thinking. Translation of the CAT into Arabic and further study are warranted.


2019 ◽  
Vol 8 (1) ◽  
pp. 1
Author(s):  
Sherry Fukuzawa ◽  
Michael DeBraga

The Graded Response Method (GRM) is an alternative to multiple-choice testing in which students rank options according to their relevance to the question. GRM requires discrimination and inference between statements and is a cost-effective critical thinking assessment in large courses where open-ended answers are not feasible. This study examined critical thinking assessment using GRM versus open-ended and multiple-choice questions composed from Bloom's taxonomy in an introductory undergraduate course in anthropology and archaeology (N=53 students). Critical thinking was operationalized as the ability to assess a question with evidence to support or evaluate arguments (Ennis, 1993). We predicted that students who performed well on multiple-choice questions from Bloom's taxonomy levels 4-6 and on open-ended questions would perform well on GRM questions involving similar concepts. High-performing students on GRM were predicted to have higher course grades. The null hypothesis was that question type would have no effect on critical thinking assessment. In two quizzes, there was a weak correlation between GRM and open-ended questions (R² = 0.15); however, there was a strong correlation on the exam (R² = 0.56). Correlations were consistently higher between GRM and multiple-choice questions from Bloom's taxonomy levels 4-6 (R² = 0.23, 0.31, 0.21) versus levels 1-3 (R² = 0.13, 0.29, 0.18). GRM is a viable alternative to multiple-choice testing for critical thinking assessment without added resources or grading effort.
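For context on the statistic reported above: each R² value is a coefficient of determination, which for two paired score sets can be computed as the square of the Pearson correlation between students' scores on the two question types. The sketch below (Python, with hypothetical score arrays; this is not the study's code or data) illustrates that computation.

```python
# Minimal sketch of computing R² between two sets of per-student scores,
# as the square of the Pearson correlation coefficient.
# The score arrays are hypothetical, for illustration only.
import numpy as np

def r_squared(scores_a, scores_b):
    """Coefficient of determination as the squared Pearson correlation."""
    r = np.corrcoef(scores_a, scores_b)[0, 1]
    return r ** 2

# Hypothetical per-student scores on GRM and open-ended questions.
grm_scores = np.array([7.0, 5.5, 8.0, 6.0, 9.0, 4.5])
open_ended_scores = np.array([6.5, 5.0, 7.0, 6.5, 8.5, 5.5])

print(f"R² = {r_squared(grm_scores, open_ended_scores):.2f}")
```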

