New tools for assessing information literacy

2021
Author(s):
Ellen Nierenberg
Torstein Låg
Tove I. Dahl

There is a need for short, easily administered measures of students' levels of information literacy (IL), as existing measures are long and cumbersome. We have therefore created a suite of tools for IL assessment, the "Tromsø Information Literacy Suite" (TROILS). The suite is freely available on an open platform for others to use, adapt, and supplement. In this presentation, we introduce three TROILS assessment tools:

1. a test to assess students' knowledge of key aspects of IL
2. a source evaluation measure to assess students' abilities to select and critically evaluate sources
3. a source use measure to assess students' abilities to use sources correctly when writing

Together, these tools measure what students know and do regarding key facets of IL. We will discuss the tools' development and present results of our research with students at different levels of higher education. The IL test was developed using procedures intended to ensure acceptable psychometric measurement properties, including expert consultation for content validity, student think-aloud protocols for readability, item selection based on a pilot sample, exploratory factor analysis, and measures of reliability and validity. The test was deployed during the fall semester of 2019. In addition to assessing students' IL levels, test results were used to explore the dimensionality of the IL construct. Results indicate that IL is a heterogeneous construct, and we will discuss important implications of this finding for how IL is measured. Results from the source evaluation and source use measures were compared with test results to see whether what students actually do in their coursework correlates with what they know, based on the test. Results indicate weak to moderate, but statistically significant, correlations. All three measures will be used longitudinally to measure students' progress over three years.
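The psychometric steps described here lend themselves to a short illustration. Below is a minimal Python sketch, on simulated data, of two of the reported analyses: an internal-consistency estimate (Cronbach's alpha) and the knowing-versus-doing correlation. The helper function and all data are hypothetical, not part of TROILS:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 200 students x 20 dichotomous IL test items of varying
# difficulty, with correct answers driven by a latent ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=(1, 20))
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((200, 20)) < p_correct).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")

# Correlating test scores ("knowing") with source-use scores ("doing").
test_scores = responses.sum(axis=1)
source_use = 0.3 * test_scores + rng.normal(0, 3, 200)  # hypothetical skill scores
r, p = stats.pearsonr(test_scores, source_use)
print(f"r = {r:.2f}, p = {p:.3f}")
```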

2020
Author(s):
Ellen Nierenberg
Torstein Låg
Tove I. Dahl

There is a need for short, easily administered measures of students' levels of information literacy, as existing measures are long and cumbersome. We have therefore created a suite of tools for information literacy assessment, the "Tromsø Information Literacy Suite" (TROILS). The suite is freely available on an open platform for others to use, adapt, and supplement. In this presentation, we introduce four TROILS assessment tools:

1. a survey for assessing students' knowledge of key aspects of information literacy
2. a survey for measuring how interested students are in being or becoming information-literate individuals
3. an annotated bibliography for assessing students' abilities to critically evaluate information sources
4. a rubric for assessing students' use of sources in their written work

Together, these tools measure what students know, feel, and do regarding key facets of information literacy. We will discuss the tools' development and present preliminary results of tests with students in higher education in Norway. Both surveys were developed using procedures intended to ensure acceptable psychometric measurement properties, including expert consultation for content validity, student think-aloud protocols for readability, item selection based on a pilot sample, exploratory factor analysis, and estimates of reliability and criterion validity. The final surveys were deployed during the fall semester and will be used longitudinally to measure students' progress over three years. Results from the annotated bibliography (source evaluation) and the rubric (source documentation) were compared with survey results to see whether what students actually do in their coursework correlates with what they know, based on the survey.
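Both abstracts cite exploratory factor analysis as one step in scale development. As a rough illustration of that step only, here is a sketch using scikit-learn's FactorAnalysis on simulated survey responses; the authors' actual software, factor counts, and rotation choices are not stated in the abstract:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 300 students x 12 Likert-style survey items generated
# from two underlying factors plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, 12))
items = latent @ loadings + rng.normal(0, 0.5, size=(300, 12))

# Fit a two-factor model with varimax rotation and inspect the loadings.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_.T, 2))  # item-by-factor loading matrix
```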


2021
Vol 15 (2)
pp. 78
Author(s):
Ellen Nierenberg
Torstein Låg
Tove Irene Dahl

This study touches upon three major themes in the field of information literacy (IL): the assessment of IL, the association between IL knowledge and skills, and the dimensionality of the IL construct. Three quantitative measures were developed and tested with several samples of university students to assess knowledge and skills for core facets of IL. These measures are freely available, applicable across disciplines, and easy to administer. Results indicate they are likely to be reliable and to support valid interpretations. By measuring both knowledge and practice, the tools revealed low to moderate correlations between what students know about IL and what they actually do when evaluating and using sources in authentic, graded assignments. The study is unique in using actual coursework to compare knowing and doing regarding students' evaluation and use of sources, and it provides one of the most thorough documentations of the development and testing of IL assessment measures to date. Results also urge us to ask whether the source-focused components of IL – information seeking, source evaluation, and source use – can be considered unidimensional constructs or sets of disparate and more loosely related components, and findings support their heterogeneity.
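The dimensionality question raised at the end of this abstract is often probed with Horn's parallel analysis, in which factors are retained only if their eigenvalues exceed those obtained from comparable random data. Below is a minimal sketch with simulated data; the abstract does not specify that this exact procedure was used:

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_iter: int = 100, seed: int = 0) -> int:
    """Horn's parallel analysis: count eigenvalues of the observed correlation
    matrix that exceed the 95th percentile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, k))
    for i in range(n_iter):
        noise = rng.normal(size=(n, k))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    return int((obs > np.percentile(rand, 95, axis=0)).sum())

# Hypothetical: 250 students x 15 items with two underlying dimensions;
# retaining more than one factor is evidence against unidimensionality.
rng = np.random.default_rng(2)
latent = rng.normal(size=(250, 2))
items = latent @ rng.uniform(0.4, 0.9, size=(2, 15)) + rng.normal(0, 0.6, size=(250, 15))
print("factors to retain:", parallel_analysis(items))
```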


2015
Vol 95 (1)
pp. 25-38
Author(s):
Michael G. O'Grady
Stacey C. Dusing

Background: Play is vital for development; infants and children learn through play. Traditional standardized developmental tests measure whether a child performs individual skills within controlled environments, whereas play-based assessments can measure skill performance during natural, child-driven play.
Purpose: The purpose of this study was to systematically review the reliability, validity, and responsiveness of all play-based assessments that quantify motor and cognitive skills in children from birth to 36 months of age.
Data Sources: Studies were identified from a literature search using the PubMed, ERIC, CINAHL, and PsycINFO databases and the reference lists of included papers.
Study Selection: Included studies investigated the reliability, validity, or responsiveness of play-based assessments that measured motor and cognitive skills in children from birth to 36 months of age.
Data Extraction: Two reviewers independently screened 40 studies for eligibility and inclusion, independently extracted reliability, validity, and responsiveness data, and examined the measurement properties and methodological quality of the included studies.
Data Synthesis: Four current play-based assessment tools were identified in the 8 included studies, each measuring motor and cognitive skills in a different way during play. Interrater reliability correlations ranged from .86 to .98 for motor development and from .23 to .90 for cognitive development. Test-retest reliability correlations ranged from .88 to .95 for motor development and from .45 to .91 for cognitive development. Structural validity correlations ranged from .62 to .90 for motor development and from .42 to .93 for cognitive development. One study assessed responsiveness to change in motor development.
Limitations: Most studies had small and poorly described samples, and a lack of transparency in data management and statistical analysis was common.
Conclusions: Play-based assessments have the potential to be reliable and valid tools for assessing cognitive and motor skills, but higher-quality research is needed. The psychometric properties of each play-based assessment should be considered before it is used in clinical and research practice.
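The interrater and test-retest figures quoted above are typically intraclass correlation coefficients (ICCs). As a rough sketch of how such a coefficient is computed from a ratings matrix, here is a two-way random-effects, absolute-agreement ICC(2,1) on hypothetical data; the reviewed studies' exact ICC models are not specified in this abstract:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, n_raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters mean square
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # error mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical: two raters scoring a motor scale for 30 infants during play.
rng = np.random.default_rng(3)
true_skill = rng.normal(50, 10, 30)
ratings = np.column_stack([true_skill + rng.normal(0, 3, 30) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```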


2021
pp. 1-14
Author(s):
Erin M. Lally
Hayley Ericksen
Jennifer Earl-Boehm

Context: Poor lower-extremity biomechanics are predictive of increased injury risk. Clinicians analyze the single-leg squat (SLS) and step-down (SD) with rubrics and 2D assessments to identify poor lower-extremity biomechanics. However, the evidence on the measurement properties of these movement assessment tools has not been clearly synthesized, and measurement properties must be established before such tools can be recommended for clinical use.
Objective: The purpose of this study was to systematically review the evidence on the measurement properties of rubrics and 2D assessments used to analyze the SLS and SD.
Evidence Acquisition: The search strategy was developed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The search was performed in the PubMed, SPORTDiscus, and Web of Science databases. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) multiphase procedure was used to extract relevant data, evaluate the methodological quality of each study, score the results of each movement assessment, and synthesize the evidence.
Evidence Synthesis: A total of 44 studies were included after applying the eligibility criteria. The reliability and construct validity of the knee frontal plane projection angle were acceptable, but its criterion validity was unacceptable. The reliability of the Chmielewski rubric was unacceptable. The content validity of the knee-medial-foot and pelvic drop rubrics was acceptable. The remaining rubrics and 2D measurements had inconclusive or conflicting results regarding reliability and validity.
Conclusions: The knee frontal plane projection angle is reliable for analyzing the SLS and SD; however, it does not substitute for 3D motion analysis. The Chmielewski rubric is not recommended for assessing the SLS or SD, as it may be unreliable. Most movement assessment tools yield indeterminate results, and standardized names, procedures, and reporting of movement assessment tool reliability and validity are inconsistent within the literature.
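The knee frontal plane projection angle (FPPA) singled out in the conclusions is itself a simple 2D computation: the angle between the thigh and shank segments as projected onto the frontal plane. A minimal sketch with made-up marker coordinates; sign conventions and reference angles vary across the reviewed studies:

```python
import numpy as np

def fppa(hip, knee, ankle) -> float:
    """Knee FPPA from 2D frontal-plane marker coordinates: the angle between
    the thigh (knee->hip) and shank (knee->ankle) vectors, reported as the
    deviation from a straight (180 degree) limb."""
    hip, knee, ankle = map(np.asarray, (hip, knee, ankle))
    thigh, shank = hip - knee, ankle - knee
    cos_a = thigh @ shank / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical markers digitized from one frame of a single-leg squat video.
print(f"FPPA = {fppa((0.52, 1.00), (0.50, 0.55), (0.45, 0.10)):.1f} deg")
```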


Author(s):
Beatriz Sánchez-Sánchez
Beatriz Arranz-Martín
Beatriz Navarro-Brazález
Fernando Vergara-Pérez
Javier Bailón-Cerezo
...

Therapeutic patient education programs must assess the competences that patients achieve; evaluation in the pedagogical domain ensures that learning has taken place. The Prolapse and Incontinence Knowledge Questionnaire (PIKQ) is a tool for assessing patient knowledge about urinary incontinence (UI) and pelvic organ prolapse (POP). The aim of this study was to translate the PIKQ into Spanish and test its measurement properties, as well as to propose real practical cases as a competence assessment tool. The cross-cultural adaptation was conducted using a standardized translation/back-translation method, and the measurement properties analysis assessed validity, reliability, responsiveness, and interpretability. A total of 275 women were recruited. Discriminant validity showed statistically significant differences in PIKQ scores between the patient and expert groups. Cronbach's alpha revealed good internal consistency, and test-retest reliability showed excellent correlation for both the UI and POP scales. Regarding responsiveness, the effect size and standardized response mean demonstrated excellent values, and no floor or ceiling effects were observed. In addition, three "real practical cases" evaluating skills in identifying and analyzing, decision-making, and problem-solving were developed and tested. The Spanish PIKQ is a comprehensible, valid, reliable, and responsive tool for the Spanish population. Real practical cases are useful competence assessment tools that are well accepted by women with pelvic floor disorders (PFD), improving their understanding and decision-making regarding PFD.
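The two responsiveness statistics reported here have simple definitions: the effect size divides the mean pre-to-post change by the baseline standard deviation, while the standardized response mean (SRM) divides it by the standard deviation of the change scores. A minimal sketch with hypothetical scores, not the study's data:

```python
import numpy as np

# Hypothetical pre/post knowledge scores around a patient education program.
rng = np.random.default_rng(4)
pre = rng.normal(12, 3, 80)
post = pre + rng.normal(4, 2, 80)   # knowledge gain after the program
change = post - pre

effect_size = change.mean() / pre.std(ddof=1)  # mean change / baseline SD
srm = change.mean() / change.std(ddof=1)       # mean change / SD of change
print(f"ES = {effect_size:.2f}, SRM = {srm:.2f}")  # values > 0.8 are usually called large
```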


2021
Vol 11 (8)
pp. 402
Author(s):
Linda Helene Sillat
Kairit Tammets
Mart Laanpere

The rapid increase in recent years in the number of digital competency frameworks, models, and strategies has strengthened the argument for evaluating and assessing digital competence. To support the process of digital competence assessment, it is necessary to understand the different approaches and methods. This paper carries out a systematic literature review and analyzes existing proposals and conceptions of digital competence assessment processes and methods in higher education, with the aim of better understanding the field of research. The review follows three objectives: (i) describe the characteristics of digital competence assessment processes and methods in higher education; (ii) provide an overview of current trends; and (iii) identify challenges and issues in digital competence assessment in higher education, with a focus on the reliability and validity of the proposed methods. The findings indicate that, particularly in light of the COVID-19 pandemic, digital competence assessment in higher education requires more attention, with a specific focus on instrument validity and reliability. It will also be important to further investigate the use of assessment tools to support systematic digital competence assessment processes. The analysis includes possible opportunities and ideas for future lines of work in digital competence evaluation in higher education.


2015
Vol 7 (3)
pp. 77
Author(s):
Nosiba Ali Al-Mousa

This study aimed to identify the classroom instructional practices of Islamic education teachers at the basic stage in Al-Mafraq and to examine the relationship between these practices and the gender and experience variables, as well as the interaction between them. The researcher prepared a scale, whose reliability and validity were checked, to measure the level of classroom instructional practices. The study sample, which consisted of 64 teachers (male and female), was chosen randomly. Data were collected and analyzed statistically using means, standard deviations, t-tests, and the Tukey test. Results revealed that the classroom instructional practices of Islamic education teachers at the basic stage in Al-Mafraq met the acceptable educational and social standard (80%), with the percentage of classroom instructional practices falling between 81% and 89%. The results also revealed no statistically significant differences in the classroom instructional practices of Islamic education teachers at the basic stage in Al-Mafraq due to the gender variable, but there were statistically significant differences (α = 0.05) in classroom instructional practices due to the experience variable, in favor of respondents with 4 years of experience or less and those with 10 years or more. Additionally, there were no statistically significant differences in the classroom instructional practices of Islamic education teachers due to the interaction between the gender and experience variables. The researcher recommended conducting further studies with different variables.


2017
Vol 7 (4)
pp. 1
Author(s):
Hassan Saleh Mahdi

Video captioning is a useful tool for language learning. Video captioning has been investigated by many studies, and the results indicate that it may foster vocabulary learning. Most previous studies have investigated the effect of full captions on vocabulary learning. One key aspect of vocabulary learning is pronunciation; however, the use of mobile devices for teaching pronunciation has not been investigated conclusively. This paper therefore examines the effect of keyword video captioning on L2 pronunciation using mobile devices. Thirty-four Arab EFL university learners participated in this experimental study and were randomly assigned to two groups (keyword-captioned video and full-captioned video), with pre- and post-tests administered to both groups. The results indicate that keyword captioning is a useful mode for improving learners' pronunciation. The post-test results showed no statistically significant difference between the two modes of captioning on vocabulary learning; however, learners in the keyword video captioning group performed better than those in the full video captioning group.


2016
Vol 8 (4)
pp. 592-596
Author(s):
Mary Ellen J. Goldhamer
Keith Baker
Amy P. Cohen
Debra F. Weinstein

Background: Multi-source evaluation has demonstrated value for trainees but is not generally provided to residency or fellowship program directors (PDs).
Objective: To develop, implement, and evaluate a PD multi-source evaluation process.
Methods: Tools were developed for PD evaluation by trainees, department chairs, and graduate medical education (GME) leadership. Evaluation questions were based on PD responsibilities, including Accreditation Council for Graduate Medical Education (ACGME) requirements. A follow-up survey assessed the process.
Results: Evaluation completion rates were as follows: trainees in academic year 2012-2013, 53% (958 of 1824), and in academic year 2013-2014, 42% (800 of 1898); GME directors in 2013-2014, 100% (95 of 95); and chairs/chiefs in 2013-2014, 92% (109 of 118). Results of a follow-up survey of PDs (66%, 59 of 90) and chairs (74%, 48 of 65) support the evaluations' value, with 45% of responding PDs (25 of 56) and 50% of responding chairs (21 of 42) characterizing them as "extremely" or "quite" useful. Most indicated this was the first written evaluation they had received (PDs: 78%, 46 of 59) or provided (chairs: 69%, 33 of 48) regarding the PD role. More than 60% of PD respondents (30 of 49) and chair respondents (24 of 40) indicated trainee feedback was "extremely" or "quite" useful, and nearly 50% of PDs (29 of 59) and 21% of chairs (10 of 48) planned changes based on the results. Trainee response rates improved in 2014-2015 (52%, 971 of 1872) and 2015-2016 (69%, 1276 of 1837).
Conclusions: In our institution, multi-source evaluation of PDs was sustained over 4 years with acceptable and improving evaluation completion rates. The process and assessment tools are potentially transferable to other institutions.

