Development of the Learning Potential Computerised Adaptive Test (LPCAT)

2005 ◽  
Vol 35 (4) ◽  
pp. 717-747 ◽  
Author(s):  
Marié de Beer

An overview of the development of a dynamic test for the measurement of learning potential — the Learning Potential Computerised Adaptive Test (LPCAT) — is provided. The test was developed in South Africa with a view to providing information on the present and potential future level of general non-verbal figural reasoning ability for persons from different backgrounds, in a way that is fair to all concerned. Multicultural samples were used in its development and standardisation. Item response theory principles and computerised adaptive testing technology addressed many of the earlier measurement problems in the dynamic assessment of learning potential and made possible the construction of a psychometrically sound, yet time-efficient and practically useful tool for the measurement of learning potential in multicultural contexts.
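The item-response-theory-based adaptive testing the abstract refers to can be illustrated with a minimal sketch: under the one-parameter (Rasch) model, the next item administered is the unused item carrying the most Fisher information at the examinee's current ability estimate, and the estimate is revised after each response. The function names and the simple gradient-style ability update below are illustrative assumptions, not the LPCAT's actual algorithm.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank, administered):
    """Pick the unadministered item with maximum information at theta.
    `bank` is a list of item difficulties; `administered` is a set of indices."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, bank[i]))

def update_theta(theta, b, correct, step=0.5):
    """One gradient step on the response log-likelihood (illustrative,
    not a full maximum-likelihood ability estimate)."""
    p = rasch_p(theta, b)
    return theta + step * ((1.0 if correct else 0.0) - p)
```

Because Rasch item information peaks where difficulty matches ability, maximum-information selection simply targets items near the current estimate — which is what makes adaptive tests time-efficient relative to fixed-length tests.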

2021 ◽  
pp. 87-99
Author(s):  
Natalie Hasson

Dynamic assessment (DA), or the assessment of learning potential, is becoming recognized as an alternative method that has wide application within the assessment of language. In moving away from comparison to normative data, the assessment enables a wider range of children to be assessed, including all of those for whom the norms do not apply, such as children with autistic spectrum conditions, attention deficit hyperactivity disorder (ADHD), hearing loss, and bi- or multilingual children. In addition to differentiating language difficulties due to lack of experience with the target language from developmental language disorders (DLD), the DA procedure contributes a considerable amount of qualitative information about the learning skills of the test-taker. This chapter reviews the multiple models and methods of DA and the work that has been done to develop tools to assess language skills in first language learners.


2000 ◽  
Vol 1 (1) ◽  
pp. 40-64 ◽  
Author(s):  
David Tzuriel ◽  
Marilyn T. Samuels

The objective of this study was to investigate the reliability of three major domains of individual dynamic assessment (DA): (a) deficient cognitive functions (DCF), (b) types of mediation given during DA, and (c) non-intellective factors (NIF). A sample of 35 young adolescents was administered eight tests from the Learning Potential Assessment Device (LPAD; Feuerstein, Rand, & Hoffman, 1979). The sample was composed of children diagnosed with learning disabilities, children with educable mental handicaps, and normally achieving children. The DA procedure for each case was videotaped for 8 to 15 hours and later rated on the three main domains. Overall, results showed moderate reliability scores for DCF and mediational strategies and lower reliability scores for the non-intellective factors. Separate analyses were carried out for ratings that included a 0 category (the examiner could not observe a behavior) and for ratings without the 0 category. The results showed a general tendency towards higher agreement among raters when the 0 category was removed. For type of mediation, ratings were similar with and without the 0 category only in the training phase, where agreement was higher in approximately 10% of the categories when 0 ratings were included than when they were not. These results were explained with reference to the interaction between type of task and phase of testing (situation).
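The effect of dropping the 0 ("could not observe") category on inter-rater agreement can be sketched with a toy percent-agreement calculation. The function and example data below are hypothetical illustrations, not the study's actual rating scheme or reliability statistic.

```python
def percent_agreement(rater_a, rater_b, drop_zero=False):
    """Percent agreement between two raters' category codes.
    With drop_zero=True, pairs where either rater coded 0
    ('could not observe the behavior') are excluded first."""
    pairs = list(zip(rater_a, rater_b))
    if drop_zero:
        pairs = [(a, b) for a, b in pairs if a != 0 and b != 0]
    if not pairs:
        return float("nan")
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical codes for five observed behaviors:
a = [1, 2, 0, 3, 2]
b = [1, 2, 2, 0, 2]
print(percent_agreement(a, b))                  # agreement over all pairs
print(percent_agreement(a, b, drop_zero=True))  # agreement with 0s removed
```

In this toy example, the two raters disagree only on items where one of them could not observe the behavior, so removing the 0 category raises agreement — the pattern the study reports for most rating domains.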


2010 ◽  
Vol 9 (2) ◽  
pp. 91-115 ◽  
Author(s):  
Tirza Bosma ◽  
Wilma C.M. Resing

This study investigated teachers’ evaluations of reports and recommendations, based on outcomes of dynamic assessment, regarding their second-grade pupils with math difficulties. Thirty-one teachers and 116 pupils, assigned to an experimental or a control condition, participated. Reports for children were based on administered math and memory tasks and either a dynamic test (Seria-Think Instrument) or a standard test (Raven PM). Teachers were observed and interviewed, rated pupils’ learning potential at two points in time, and evaluated specific dynamic assessment information in a follow-up questionnaire. Results showed that teachers valued the dynamic assessment reports and recommendations overall as meaningful, as did teachers reading static reports. Learning potential ratings appeared to be affected by the reports. Dynamic assessment information and recommendations were valued as applicable for constructing individual educational plans; personal factors (seniority and teaching experience) appeared to have an influence. To realize the potential of dynamic assessment, it is recommended that dynamic assessment be made part of the teacher-training curriculum.


2012 ◽  
Vol 93 (7) ◽  
pp. 1153-1160 ◽  
Author(s):  
Andrea L. Cheville ◽  
Kathleen J. Yost ◽  
Dirk R. Larson ◽  
Katiuska Dos Santos ◽  
Megan M. O'Byrne ◽  
...  

2019 ◽  
Vol 28 (1) ◽  
pp. 3-13 ◽  
Author(s):  
Terence J. G. Tracey

Technology holds the promise of greatly altering the conduct of interest assessment. I review five technological advances that currently exist and present how they can be incorporated into our interest measures and procedures: (a) dynamic assessment using item response theory, (b) adapting interpretations to individual users, (c) incorporating response latency, (d) gamification of interest measures, and (e) incorporating big data and machine learning. Using these advances in our assessments and procedures can structurally change what we do and enhance the precision of our measures.
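Incorporating response latency (advance c) is commonly operationalized by treating very fast responses as rapid guesses and excluding them from scoring, a family of approaches known as effort-moderated scoring. The threshold value and function below are illustrative assumptions, not a procedure taken from this review.

```python
def effort_moderated_score(responses, latencies_ms, threshold_ms=2000):
    """Proportion-correct score over effortful responses only.
    Responses faster than threshold_ms are treated as rapid guesses
    and excluded before scoring (an illustrative latency heuristic)."""
    effortful = [r for r, t in zip(responses, latencies_ms) if t >= threshold_ms]
    if not effortful:
        return float("nan")
    return sum(effortful) / len(effortful)

# Hypothetical item scores (1 = correct) and response times in milliseconds:
responses = [1, 0, 1, 1]
latencies = [3000, 500, 4000, 2500]
print(effort_moderated_score(responses, latencies))
```

Here the single sub-threshold response (an incorrect answer given in 500 ms) is excluded, so the score reflects only responses the examinee plausibly engaged with — one concrete way latency data can sharpen an interest or ability measure.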

