assessment context
Recently Published Documents

TOTAL DOCUMENTS: 74 (five years: 19)
H-INDEX: 14 (five years: 0)

2021
Author(s): Julie McDonald

<p>How can Māori culturally preferred pedagogies be implemented in a secondary classroom in a unit standard assessment context? What impact does this implementation have on the emotional engagement, intellectual reasoning, and intrinsic growth of the learners? This research was undertaken by way of “interviews as chat” and journal recording, followed by a collaborative storying session built around emerging themes. Formative data were collected from a question/suggestion box, work samples, attendance data, and my journal. Lastly, summative data were collected through a second round of interviews. This research concludes that a collaborative exploration of ako Māori is of significant benefit to Māori learners, although the Pākehā-centric assessment system restricts a teacher's ability to fully embrace a kaupapa Māori educational paradigm.</p>


Psych
2021
Vol 3 (3), pp. 422-446
Author(s): Nico Andersen, Fabian Zehner

In this paper, we introduce shinyReCoR: a new app that utilizes a cluster-based method for automatically coding open-ended text responses. Reliable coding of text responses from educational or psychological assessments requires substantial organizational and human effort. The coding of natural language in test responses depends on the texts’ complexity, the corresponding coding guides, and the guides’ quality. Manual coding is thus not only expensive but also error-prone. With shinyReCoR, we provide a more efficient alternative. The use of natural language processing makes texts utilizable for statistical methods. shinyReCoR is a Shiny app deployed as an R package that allows users with varying levels of technical expertise to create automatic response classifiers from annotated data through a graphical user interface. The present paper describes the underlying methodology, including the machine learning involved and the peculiarities of language processing in the assessment context. The app guides users through the workflow with steps such as text corpus compilation, semantic space building, preprocessing of the text data, and clustering. Users can adjust each step according to their needs. Finally, users obtain an automatic response classifier, which can be evaluated and tested within the process.
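The cluster-based idea can be illustrated with a dependency-free sketch (shinyReCoR itself is an R app; the toy responses, vocabulary handling, and minimal k-means below are invented for illustration, not its actual code): responses are turned into bag-of-words vectors and clustered, after which a human assigns a code to each cluster rather than to every individual response.

```python
import math

def bow_vector(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def kmeans(vectors, k, iters=20):
    """Minimal k-means with deterministic initialisation (real
    implementations use random restarts); returns one label per vector."""
    centroids = [list(v) for v in vectors[:k]]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        for i, v in enumerate(vectors):
            labels[i] = min(range(k), key=lambda c: math.dist(v, centroids[c]))
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Invented toy responses to "Why does ice float on water?"
responses = [
    "ice is less dense than water",
    "frozen water is less dense",
    "because fish swim under the ice",
    "fish live below the ice",
]
vocab = sorted({w for r in responses for w in r.lower().split()})
labels = kmeans([bow_vector(r, vocab) for r in responses], k=2)

# The two density answers share a cluster, as do the two fish answers;
# a human then codes each cluster once (e.g. correct vs. incorrect).
print(labels)
```

In the real app, the bag-of-words step is replaced by a semantic space built from a text corpus, which is what makes paraphrases land in the same cluster.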


2021
Vol 49, pp. 100543
Author(s): Shelley Stagg Peterson, Alison Altidor, Jen Kerwood

2021
Vol 14 (3), pp. 58
Author(s): Takehiro Hatakeyama

The significance of acknowledging well-being (WB) has increased in local sustainable development (SD) assessment. Meanwhile, scholars and practitioners have paid growing attention to subjective indicators, which rely on a person’s subjective evaluation to measure SD subjects, in response to the frequent critique that the predominant use of objective indicators to assess SD overlooks individuals’ and communities’ WB. Nevertheless, the scope and functions of subjective indicators remain underexamined in the SD assessment context. This study therefore discusses the distinctive characteristics of subjective sustainable development indicators (SDIs) in contrast with objective SDIs, complemented by an examination of WB indicators. To this end, the literature on indicator-based assessment of SD and WB at the community and local level was analysed. The findings highlight that three distinctive approaches to SDIs capture and address associated WB in different ways: objective SDIs most sufficiently capture and address material WB, at the cost of overlooking other dimensions of WB. In contrast, expert-led subjective SDIs optimally capture and address a community’s social WB, with outcomes reflecting the social norms and preferences recognised by a community and by sustainability theories. Likewise, citizen-based subjective SDIs distinctly measure individuals’ life satisfaction levels, with outcomes that explicitly present individuals’ subjective WB while addressing local needs and values. This study finally suggests that the complementary use of the respective SDIs contributes to a thorough local-level SD assessment by optimally addressing associated WB, ultimately helping to meet current and future generations’ WB in achieving local SD.
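The complementary use of the three SDI approaches can be sketched as follows. All indicator names, values, scales, and the min-max normalisation are illustrative assumptions, not data from the study; the point is that the three approaches are reported side by side rather than collapsed into one number, so no dimension of WB disappears.

```python
# Hypothetical sketch of a local assessment profile that keeps objective,
# expert-led subjective, and citizen-based subjective SDIs visible.

def normalise(value, lo, hi):
    """Min-max rescale an indicator to [0, 1]."""
    return (value - lo) / (hi - lo)

# Objective SDIs (e.g. administrative statistics) - invented values.
objective = {
    "recycling_rate_pct": normalise(48.0, 0.0, 100.0),
    "median_income": normalise(41_000, 20_000, 80_000),
}
# Expert-led subjective SDIs (e.g. panel ratings on a 1-5 scale).
expert = {
    "social_cohesion": normalise(3.8, 1.0, 5.0),
}
# Citizen-based subjective SDIs (e.g. survey life satisfaction, 0-10).
citizen = {
    "life_satisfaction": normalise(6.9, 0.0, 10.0),
}

profile = {"objective": objective, "expert": expert, "citizen": citizen}
for approach, indicators in profile.items():
    mean = sum(indicators.values()) / len(indicators)
    print(f"{approach}: {mean:.2f}")
```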


Author(s): Mary Ellen O’Toole

This chapter addresses the fundamentals of threat assessment for professionals new to the field. Threat assessment is a critical thinking analysis that requires a multidisciplinary and peer-review approach. Fundamental concepts discussed in this chapter include the need for a detailed evaluation of the threatener’s background, including patterns of behavior, motivation, and ability to carry out the threat. The use and relevance of self-reported information in a threat assessment context must be evaluated very carefully because of the possibly deceptive motivations of the person providing it. The chapter also discusses adolescents as unique offenders from a threat assessment perspective; their psychological, emotional, and brain development is distinctive, and threat assessors must understand it when evaluating adolescents’ potential to make threats and carry them out. Other key concepts integral to threat assessment discussed here include evidence of escalation, injustice collecting, superficial indicators of normalcy, hatred and other emotions as motivators for carrying out threatened acts of violence, and the reasons why individuals close to or associated with the threatener misinterpret dangerous and violent behavior.


2021
Author(s): Joseph Rios

Low test-taking effort is a common validity threat when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence evaluations of scale property equivalence. Second, even if full scalar invariance is attained, the degree to which differential effort can bias subgroup mean comparisons is unknown. To address these issues, a simulation study was conducted to examine the influence of differential noneffortful responding (NER) on evaluations of measurement invariance and latent mean comparisons. Results showed that as differential rates of NER grew, increased Type I errors of measurement invariance were observed only at the metric invariance level, while no negative effects were apparent for configural or scalar invariance. When full scalar invariance was correctly attained, differential NER led to bias in mean score comparisons as large as 0.18 standard deviations at a differential NER rate of 7%. These findings suggest that test users should evaluate and document potential differential NER before conducting measurement quality analyses and before reporting disaggregated subgroup mean performance.
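The core mechanism — a spurious mean gap created purely by differential noneffortful responding — can be reproduced with a toy Monte Carlo sketch. This is an invented illustration, not the study's actual simulation design (which concerns latent means under an invariance framework): two groups share an identical ability distribution, but 7% of one group guesses at random on a 40-item test, and an apparent gap in observed means emerges.

```python
import random

rng = random.Random(42)
N_ITEMS, N_PER_GROUP, NER_RATE = 40, 5000, 0.07

def effortful_score(ability):
    """Number-correct score when responding with effort (ability in [0, 1])."""
    return sum(rng.random() < ability for _ in range(N_ITEMS))

def random_score():
    """Noneffortful responding: guessing on 4-option items (p = .25)."""
    return sum(rng.random() < 0.25 for _ in range(N_ITEMS))

def group_mean(ner_rate):
    """Observed mean score for a group with the given rate of NER."""
    scores = []
    for _ in range(N_PER_GROUP):
        ability = min(max(rng.gauss(0.6, 0.1), 0.0), 1.0)
        noneffortful = rng.random() < ner_rate
        scores.append(random_score() if noneffortful else effortful_score(ability))
    return sum(scores) / len(scores)

# Both groups have identical true ability distributions; only NER differs,
# yet the observed means diverge by roughly a point on the 40-point scale.
gap = group_mean(0.0) - group_mean(NER_RATE)
print(f"spurious mean gap: {gap:.2f} points")
```

Because guessers score near chance (about 10 of 40) while effortful examinees average about 24, even a 7% NER rate in one group shifts that group's observed mean enough to mimic a real subgroup difference.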


2021
Vol 11 (1), pp. 266
Author(s): Abdullah Alshakhi

This qualitative study used multiple methods to investigate the efficacy and reliability of cross-grading in the assessment of English as a Foreign Language (EFL) tertiary-level learners’ writing. It further explored the perceptions of EFL teachers and learners regarding cross-grading practices to provide a clearer understanding of this relatively unexplored line of research enquiry. It was set to answer the following research question: In what ways does cross-grading practice contribute to assessing EFL writing? The participants were selected by convenience sampling; the sample included four language instructors from different ethnic and cultural backgrounds, as well as four Saudi EFL learners. Semi-structured interviews were conducted individually with all eight participants. In addition, four one-on-one feedback sessions between language instructors and learners were observed to assess feedback effectiveness after the cross-grading sessions. The data analysis revealed that instructors had difficulty explaining the feedback on their learners’ papers because they had not graded those papers themselves. Furthermore, students felt they did not benefit from the feedback sessions because they could not fully understand the external grader’s markings, which inhibited their ability to improve and develop their writing. The study concludes with some pedagogical implications for the EFL writing assessment context.
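When cross-grading reliability is examined quantitatively rather than through interviews, a standard tool is an inter-rater agreement statistic such as Cohen's kappa, which discounts agreement expected by chance. The sketch below is a generic illustration with invented band grades, not data from this study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented band grades (A-D) for ten essays: class teacher vs. external grader.
teacher  = ["A", "B", "B", "C", "C", "C", "D", "B", "A", "C"]
external = ["A", "B", "C", "C", "B", "C", "D", "B", "B", "C"]
print(f"kappa = {cohens_kappa(teacher, external):.2f}")  # → kappa = 0.57
```

A kappa in the 0.4-0.6 range is commonly read as moderate agreement, which is exactly the kind of evidence that would complement the perception data gathered in this study.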

