<p style="text-align: justify;">Computational thinking (CT) is a method for solving complex problems, but also gives people an inventive inspiration to adapt to our smart and changing society. Globally it has been considered as vital abilities for solving genuine issues successfully and efficiently in the 21st century. Recent studies have revealed that the nurture of CT mainly centered on measuring the technical skill. There is a lack of conceptualization and instruments that cogitate on CT disposition and attitudes. This study attends to these limitations by developing an instrument to measure CT concerning dispositions and attitudes. The instruments' validity and reliability testing were performed with the participation from secondary school students in Malaysia. The internal consistency reliability, standardized residual variance, construct validity and composite reliability were examined. The result revealed that the instrument validity was confirmed after removing items. The reliability and validity of the instrument have been verified. The findings established that all constructs are useful for assessing the disposition of computer science students. The implications for psychometric assessment were evident in terms of giving empirical evidence to corroborate theory-based constructs and also validating items' quality to appropriately represent the measurement.</p>
Computational Thinking (CT), entailing both domain-general and domain-specific skills, is a competency fundamental to computing education and beyond. However, for a cross-domain competency, appropriate assessment design and methods remain equivocal. Indeed, the majority of existing assessments focus predominantly on measuring programming proficiency and neglect other contexts in which CT can also be manifested. To broaden the promotion and practice of CT, it is necessary to integrate diverse problem types and item formats using a competency-based assessment method to measure CT. Taking a psychometric approach, this article evaluates a novel computer-based assessment of CT competency, the Computational Thinking Challenge. The assessment was administered to 119 British upper secondary school students (= 1.19) with a range of prior programming experience. Results from several reliability analyses, a convergent validity analysis, and a Rasch analysis provided evidence supporting the quality of the assessment. Taken together, the study demonstrates the feasibility of moving beyond traditional assessment methods to integrate multiple contexts, problem types, and item formats in measuring CT competency in a comprehensive manner.