literacy assessments
Recently Published Documents

TOTAL DOCUMENTS: 51 (five years: 18)
H-INDEX: 5 (five years: 2)

2021 ◽  
Vol 8 (2) ◽  
pp. 156-169
Author(s):  
Titis Angga Rini ◽  
Puri Selfi Cholifah ◽  
Ni Luh Sakinah Nuraini ◽  
Kay Margetts

Minimum competency assessment presents a new challenge for classroom teachers, who must implement and integrate it into learning to maximize its diagnostic and curative functions for the quality of Indonesian education. Against this background, this research analyzed the readiness of teachers to construct literacy and numeracy tests in elementary schools. The study used a content analysis design to examine minimum competency assessment tests prepared by 30 elementary school teachers. Results are presented as percentages of item accuracy (quantitative descriptive) together with example items described qualitatively. The results showed that teachers' competence in constructing literacy and numeracy items was still not optimal in terms of form, content, context, and cognitive level, especially at the reflect-and-evaluate level for literacy assessments and the reasoning level for numeracy assessments. Gaps were particularly evident in literary content for literacy, scientific contexts for both literacy and numeracy, and third-level literacy and numeracy items. It can be concluded that elementary school teachers' competency in implementing minimum competency assessment still needs to be developed to meet the standards and functions of the Indonesian national assessment. The study provides an overview of teacher readiness to carry out minimum competency assessments in elementary schools in relation to their role as learning evaluators.


2021 ◽  
pp. 146879842110420
Author(s):  
Colleen E Whittingham ◽  
Emily Brown Hoffman ◽  
Kathleen A Paciga

The nature of the literacy assessments valued in the persistent accountability climate within U.S. public education, coupled with an increasingly polarized discourse around what counts as the science of reading (SOR), has resulted in instructional gatekeeping that privileges constrained-skill teaching and learning in K-3 settings. The gatekeeper phenomenon is an urgent issue of equity, with children from minoritized populations bearing the brunt of the disparity. By highlighting how commonly enacted assessment and accountability policies and practices withhold unconstrained-skill teaching and learning under pressure to prove student success via constrained-skill mastery, we demonstrate how some students, often the most marginalized, receive insufficient literacy instruction in K-3. To fully actualize an expansive definition of the SOR, an expansive definition of assessment and accountability must also be adopted: one that attends to both constrained and unconstrained skills while using appropriate measures to document learning beginning in the earliest grades.


2021 ◽  
Author(s):  
Emily Wood ◽  
Insiya Bhalloo ◽  
Brittany McCaig ◽  
Cristina Feraru ◽  
Monika Molnar

Previous virtual care literature within the field of speech-language pathology has primarily focused on the implementation of specific intervention programs; recommendations for best practices in virtual assessment, particularly standardized assessment of oral language and literacy abilities, are scarce. Given the recent rapid rise in virtual care and research, clinicians and researchers require guidance on best practices for virtual administration of these tools. We informally reviewed the extant literature and conducted semi-structured interviews with a group of 12 clinicians, students and researchers who had administered standardized language and literacy assessments virtually with typically developing children between the ages of four and eight. Six themes were discussed: candidacy for virtual assessment; communication and collaboration with caregivers; technology and equipment; virtual administration; ethics, consent and confidentiality; and special considerations for bilingual populations. These informed a set of recommendations to guide the use of standardized assessments in a virtual setting. In line with the Guidelines International Network, the recommendations were rated by group members and reviewed by external stakeholders. This paper is one of the first to share recommended practices for virtual assessment in the domain of oral language assessment. As research on the reliability of virtual assessment in this realm is still scarce, we hope the current recommendations will facilitate future clinical research in this area, which in turn will support the development of formal Clinical Practice Guidelines.


2021 ◽  
Vol 9 ◽  
pp. 205031212110505
Author(s):  
Emily Wood ◽  
Insiya Bhalloo ◽  
Brittany McCaig ◽  
Cristina Feraru ◽  
Monika Molnar

Objectives: Previous virtual care literature within the field of speech-language pathology has primarily focused on validating the virtual use of intervention programmes. Fewer articles address the validity of conducting virtual assessments, particularly standardized assessment of oral language and literacy abilities in children. In addition, there is a lack of practical, useful recommendations to support clinicians and researchers on how to conduct these assessment measures virtually. Given the recent rapid rise in virtual care and research as a result of the Coronavirus-19 pandemic, clinicians and researchers urgently require guidance on best practices for virtual administration of these tools. This article seeks to fill this gap in the literature by providing such recommendations.
Methods: We (a) completed a narrative review of the extant literature and (b) conducted semi-structured interviews with a group of 12 clinicians, students and researchers who had administered standardized language and literacy assessments with a variety of monolingual and multilingual school-aged children, with and without speech and language difficulties, in clinical and research settings. Six themes emerged from these two processes: candidacy for virtual assessment; communication and collaboration with caregivers; technology and equipment; virtual administration; ethics, consent and confidentiality; and considerations for bilingual populations. These were used to develop a set of recommendations to guide the use of standardized assessments in a virtual setting. In line with the Guidelines International Network, the recommendations were rated by group members and reviewed by external stakeholders, and a quasi-Delphi consensus procedure was used to reach agreement on the ratings.
Results: We have developed several recommendations, across the six key themes, to guide clinicians' and researchers' use of standardized language and literacy assessments in virtual care.
Conclusions: This article is one of the first to share practical recommendations for virtual assessment in the domain of oral language and literacy assessment for clinicians and researchers. We hope the current recommendations will facilitate future clinical research in this area and, as the body of research in this field grows, will act as a basis for the development of formal Clinical Practice Guidelines.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Radhamoney Govender ◽  
Anna J. Hugo

Background: South African primary school learners have participated in several national and international literacy (reading and writing) studies that measure learners' achievement in different grades and at different intervals. Numerous scholars have analysed the results of these assessments. We extended their analyses by investigating the grade coverage and the aspects of literacy included in these assessments, as well as whether learners' home language impacts their results.
Aim: The authors aimed to determine how reliable the results of these assessments are for improving and informing policies relating to literacy teaching in primary schools, and to provide recommendations to improve the administration of literacy assessments.
Method: Literature on various national and international literacy studies conducted in South African primary schools from 2000 to 2016 was identified and analysed according to grade, province, languages in which the assessments were conducted, aspects of literacy included in the assessments and the accuracy of the results.
Results: The analysis provides evidence suggesting that most literacy assessments target learners in the Intermediate Phase (Grades 4–6) and are not available in all 11 South African official languages. Presently, there are no large-scale literacy assessments for Foundation Phase (Grades 1–3) learners. Moreover, the results of these assessments do not provide reliable information about literacy levels in the country because there are vast discrepancies in assessment scores.
Conclusion: This article highlights the importance of obtaining reliable information in determining literacy levels in the country and in informing decisions regarding literacy-related policies.


2020 ◽  
Vol 27 (1) ◽  
pp. 96-112 ◽  
Author(s):  
Yang Liu ◽  
Senlin Chen

Physical literacy has become a focus in school physical education and youth sports programs. Despite the global attention, physical literacy remains an elusive concept with regard to its definition, assessment, and interventions. In this article, we review the existing scholarship on physical literacy definitions, assessments, and interventions. We first present the historic evolution of physical literacy and its various definitions, followed by a synthesis of its underlying components. We next summarize current physical literacy assessments, along with a commentary on the challenges of assessing the concept accurately and feasibly. Following this, we describe the characteristics and efficacy of existing physical literacy-related interventions in children and adolescents. The article concludes with an interpretive discussion of the findings to stimulate future efforts in research, practice, and policy for fostering physically literate individuals.


BMJ Open ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. e035974
Author(s):  
Melanie Hawkins ◽  
Gerald R Elsworth ◽  
Elizabeth Hoban ◽  
Richard H Osborne

Objective: Validity refers to the extent to which evidence and theory support the adequacy and appropriateness of inferences based on score interpretations. The health sector lacks a theoretically driven framework for the development, testing and use of health assessments. This study used the Standards for Educational and Psychological Testing framework of five sources of validity evidence to assess the types of evidence reported for health literacy assessments, and to identify studies that referred to a theoretical validity testing framework.
Methods: A systematic descriptive literature review investigated methods and results in health literacy assessment development, application and validity testing studies. Electronic searches were conducted in EBSCOhost, Embase, Open Access Theses and Dissertations and ProQuest Dissertations. Data were coded to the Standards' five sources of validity evidence, and for reference to a validity testing framework.
Results: Coding of 46 studies yielded 195 instances of validity evidence across the five sources. Only nine studies directly or indirectly referenced a validity testing framework. Evidence based on relations to other variables was the most frequently reported source.
Conclusions: The health and health equity of individuals and populations are increasingly dependent on decisions based on data collected through health assessments. An evidence-based theoretical framework provides structure and coherence to existing evidence and stipulates where further evidence is required to evaluate the extent to which data are valid for an intended purpose. This review demonstrates the use of the Standards' theoretical validity testing framework to evaluate the sources of evidence reported for health literacy assessments. The findings indicate that theoretical validity testing frameworks are rarely used to collate and evaluate evidence in validation practice for health literacy assessments. Use of the Standards' framework would improve evaluation of the evidence for inferences derived from health assessment data on which public health and health equity decisions are based.


2020 ◽  
Vol 4 (Supplement_2) ◽  
pp. 1329-1329
Author(s):  
Nicholas Marchello ◽  
Christine Daley ◽  
Jinxiang Hu ◽  
Debra Sullivan ◽  
Heather Gibbs

Abstract
Objectives: Nutrition literacy is the capacity to apply nutrition information to dietary choices and is associated with diet quality. Understanding patients' nutrition literacy deficits may help dietitians provide a more patient-centered intervention and improve patient satisfaction with their nutrition care. This pilot study examined the effects of nutrition literacy assessments on patient satisfaction.
Methods: Participants (n = 89) were patients scheduled for an appointment with an outpatient dietitian. All participants completed the validated Nutrition Literacy Assessment Instrument (NLit) prior to their visit with a dietitian. Intervention-arm dietitians accessed patient NLit results to focus interventions on individual nutrition literacy deficits; control-arm dietitians did not access NLit results and provided traditional interventions. All participants returned one month later to retake the NLit and a modified version of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey, a patient-centered satisfaction survey developed by the Agency for Healthcare Research and Quality (AHRQ). Correlations were used to examine relationships between patient satisfaction and baseline NLit scores, change in NLit scores, and randomization. Bootstrapped multiple linear regression models were used to examine relationships between patient satisfaction, changes in NLit score, and sociodemographic variables.
Results: The mean patient satisfaction score for the cohort was 9.01 (10-point scale). Patient satisfaction was correlated with improvements in NLit score (Spearman's r = 0.265, P = 0.012). Partial correlations showed a positive relationship between changes in NLit score and patient satisfaction (r = 0.302, P = 0.006) when controlling for randomization, age, sex, education, income, and ethnicity. Regression models showed a positive association between patient satisfaction and change in NLit score (adjusted r2 = 0.087, P = 0.036).
Conclusions: Improved nutrition literacy may improve patient satisfaction. Nutrition literacy assessments may help dietitians focus nutrition interventions, individualize nutrition education, and improve patient satisfaction.
Funding Sources: This work was supported by a CTSA grant from NCATS and the School of Health Professions.
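The analysis described in this abstract pairs a zero-order Spearman correlation with a partial correlation that controls for covariates. As a rough illustration of that general approach (not the study's actual code or data: the variable names, effect sizes, and sample below are entirely synthetic), one might compute:

```python
import numpy as np
from scipy.stats import spearmanr

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing covariates out of both
    (residual method): fit each variable on the covariates, correlate residuals."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.normal(50, 10, 200)              # hypothetical covariate
nlit_change = rng.normal(0, 1, 200)        # hypothetical change in NLit score
# satisfaction built with a small true association, plus noise
satisfaction = 0.3 * nlit_change + 0.01 * age + rng.normal(0, 1, 200)

rho, p = spearmanr(nlit_change, satisfaction)              # zero-order association
r_partial = partial_corr(satisfaction, nlit_change, age.reshape(-1, 1))
```

The residual method in `partial_corr` is the standard way to obtain a partial (Pearson) correlation without a dedicated library; the study's covariate set (randomization, sex, education, income, ethnicity) would simply add columns to `covariates`.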

