New tools for assessing information literacy

2020 ◽  
Author(s):  
Ellen Nierenberg ◽  
Torstein Låg ◽  
Tove I. Dahl

There is a need for short, easily administered measures of students’ information literacy, as existing measures are long and cumbersome. We have therefore created the “Tromsø Information Literacy Suite” (TROILS), a suite of tools for information literacy assessment. The suite is freely available on an open platform for others to use, adapt, and supplement.

In this presentation, we introduce four TROILS assessment tools:

1. a survey for assessing students’ knowledge of key aspects of information literacy
2. a survey for measuring how interested students are in being/becoming information-literate individuals
3. an annotated bibliography for assessing students’ abilities to critically evaluate information sources
4. a rubric for assessing students’ use of sources in their written work

Together, these tools measure what students know, feel, and do regarding key facets of information literacy. We will discuss the tools’ development and present preliminary results of tests with students in higher education in Norway.

Both surveys were developed using procedures intended to ensure acceptable psychometric measurement properties: expert consultation for content validity, student think-aloud protocols for readability, item selection based on a pilot sample, exploratory factor analysis, and estimates of reliability and criterion validity. The final surveys were deployed during the fall semester and will be used longitudinally to measure students’ progress over three years.

Results from the annotated bibliography (source evaluation) and the rubric (source documentation) were compared with survey results to see whether what students actually do in their coursework correlates with what they know, based on the survey.
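As an illustration of the exploratory factor analysis step mentioned in this abstract, the sketch below runs an EFA over a hypothetical matrix of Likert-scale survey responses. The data, item count, and factor count are illustrative assumptions, not the TROILS analysis itself.

```python
# Minimal EFA sketch with hypothetical survey data (not the TROILS analysis).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
# Assume 200 pilot respondents answering 12 Likert items (scored 1-5).
responses = rng.integers(1, 6, size=(200, 12)).astype(float)

# Fit a two-factor model with varimax rotation (the factor count is a guess;
# in practice it is chosen via scree plots, parallel analysis, etc.).
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)

# Loadings: one row per factor, one column per item. Items that load weakly
# on every factor are candidates for removal during item selection.
print(np.round(fa.components_, 2))
```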

2021 ◽  
Author(s):  
Ellen Nierenberg ◽  
Torstein Låg ◽  
Tove I. Dahl

There is a need for short, easily administered measures of students’ information literacy (IL), as existing measures are long and cumbersome. We have therefore created the “Tromsø Information Literacy Suite” (TROILS), a suite of tools for IL assessment. The suite is freely available on an open platform for others to use, adapt, and supplement.

In this presentation, we introduce three TROILS assessment tools:

1. a test to assess students’ knowledge of key aspects of IL
2. a source evaluation measure to assess students’ abilities to select and critically evaluate sources
3. a source use measure to assess students’ abilities to use sources correctly when writing

Together, these tools measure what students know and do regarding key facets of IL. We will discuss the tools’ development and present results of our research with students at different levels of higher education.

The IL test was developed using procedures intended to ensure acceptable psychometric measurement properties: expert consultation for content validity, student think-aloud protocols for readability, item selection based on a pilot sample, exploratory factor analysis, and measures of reliability and validity. The test was deployed during the fall semester of 2019. In addition to assessing students’ IL levels, test results were used to explore the dimensionality of the IL construct. Results indicate that IL is a heterogeneous construct, and we will discuss important implications of this finding for how IL is measured.

Results from the source evaluation and source use measures were compared with test results to see whether what students actually do in their coursework correlates with what they know, based on the test. Results indicate weak to moderate, but statistically significant, correlations. All three measures will be used longitudinally to measure students’ progress over three years.
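The weak-to-moderate correlations reported here are ordinary bivariate correlations between test scores and coursework measures. Below is a minimal sketch of such a computation; the variable names and values are made up for illustration, not taken from the study.

```python
# Minimal sketch: correlating IL test scores with a coursework measure
# (hypothetical values, not the study's data).
from scipy.stats import pearsonr

il_test = [22, 18, 25, 30, 15, 27, 19, 24, 21, 28]               # knowledge test scores
source_use = [3.0, 2.5, 3.5, 4.0, 2.0, 3.0, 2.5, 3.5, 2.5, 4.5]  # coursework rubric scores

r, p = pearsonr(il_test, source_use)
print(f"r = {r:.2f}, p = {p:.3f}")  # r around 0.3-0.5 would read as weak to moderate
```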


2018 ◽  
Vol 15 (5) ◽  
pp. 5-16
Author(s):  
Kirsten Hostetler ◽  
Tian Luo ◽  
Jill E. Stefaniak ◽  
...  

Despite the popularity of metacognitive research and the inclusion of similar concepts in professional guidelines, librarians have not incorporated metacognitive tools into their assessment strategies. This systematic literature review found that (1) metacognitive assessments can act as a learning aid by encouraging higher-order thinking; (2) metacognitive assessments can be effective measurements under proper conditions with experienced learners; and (3) librarians have limited options when selecting assessment tools, even as the demand for demonstrating the library’s value to stakeholders is increasing. The paper concludes with gaps in the literature and directions for future research.


2008 ◽  
Vol 17 (1) ◽  
pp. 13-19 ◽  
Author(s):  
Lisa A. Proctor ◽  
Jill Oswalt

The purpose of this article is to review augmentative and alternative communication (AAC) assessment issues in the schools. Initially, the article discusses the role and responsibilities of school-based speech-language pathologists in the assessment of children with complex communication needs. Next, the article briefly reflects on the importance of teaming in device selection for children with AAC needs. The main portion of the article provides information on tools and resources for comprehensive assessment of children with complex communication needs, including assessment of speech production and its relevance to AAC assessment. This is followed by tools and resources for receptive and expressive language assessment, along with information on tools that examine academic and social participation. Finally, information on literacy assessment for students with complex communication needs is provided. The intent of the article is to give the reader a brief overview of assessment tools and resources for children with complex communication needs.


Author(s):  
Beatriz Sánchez-Sánchez ◽  
Beatriz Arranz-Martín ◽  
Beatriz Navarro-Brazález ◽  
Fernando Vergara-Pérez ◽  
Javier Bailón-Cerezo ◽  
...  

Therapeutic patient education programs must assess the competences that patients achieve; evaluation in the pedagogical domain ensures that learning has taken place. The Prolapse and Incontinence Knowledge Questionnaire (PIKQ) is a tool for assessing patient knowledge about urinary incontinence (UI) and pelvic organ prolapse (POP). The aim of this study was to translate the PIKQ into Spanish and test its measurement properties, as well as to propose real practical cases as a competence assessment tool. The cross-cultural adaptation was conducted using a standardized translation/back-translation method. Measurement properties were analyzed by assessing validity, reliability, responsiveness, and interpretability. A total of 275 women were recruited. Discriminant validity showed statistically significant differences in PIKQ scores between the patient and expert groups. Cronbach’s alpha revealed good internal consistency. Test–retest reliability showed excellent correlation for both the UI and POP scales. Regarding responsiveness, the effect size and standardized response mean demonstrated excellent values. No floor or ceiling effects were observed. In addition, three “real practical cases” evaluating skills in identifying and analyzing, decision-making, and problem-solving were developed and tested. The Spanish PIKQ is a comprehensible, valid, reliable, and responsive tool for the Spanish population. Real practical cases are useful competence assessment tools that are well accepted by women with pelvic floor disorders (PFD), improving their understanding and decision-making regarding PFD.
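For readers less familiar with the statistics named above, here is a minimal sketch of how Cronbach’s alpha (internal consistency) and the standardized response mean (responsiveness) are conventionally computed. The data and shapes are hypothetical; this is not the authors’ analysis code.

```python
# Minimal sketch of two statistics used in the study (hypothetical data).
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_response_mean(pre_totals, post_totals):
    """SRM = mean change / SD of change; larger values = more responsive."""
    change = np.asarray(post_totals, dtype=float) - np.asarray(pre_totals, dtype=float)
    return change.mean() / change.std(ddof=1)

# Six hypothetical respondents answering four scored knowledge items.
pre = np.array([[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 0],
                [0, 1, 0, 0], [0, 0, 0, 0], [1, 0, 1, 1]])
post = pre.copy()
post[:, 3] = 1  # everyone answers item 4 correctly after education

print(round(cronbach_alpha(pre), 2))                                        # ~0.66
print(round(standardized_response_mean(pre.sum(axis=1), post.sum(axis=1)), 2))
```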


2007 ◽  
Vol 35 (1) ◽  
pp. 41-70 ◽  
Author(s):  
Valerie Sonley ◽  
Denise Turner ◽  
Sue Myer ◽  
Yvonne Cotton

Purpose: The purpose of this paper is to report the results of a case study evaluating the revision of the assessment methods of an information literacy module. The revised assessment method took the form of a portfolio.

Design/methodology/approach: During 2004, all six-credit modules at the University of Teesside had to be reviewed and restructured into ten-credit modules. Following Biggs’ principles of constructive alignment, the tutors looked at the existing module aims and learning outcomes. A review of the literature and previous experience informed the selection of the new assessment method by portfolio. An evaluation of the assessment method was undertaken after the module had run.

Findings: The paper finds that the assessment method had real strengths, especially in terms of validity. It was also economical and efficient. Students knew what they were expected to do and where they needed to put in effort.

Research limitations/implications: The portfolio assessment method has been carried out once with a relatively small cohort of students, so the findings can only be regarded as interim.

Practical implications: The tutors believe that they have created a very useful module with an aligned assessment method which would benefit a much greater number of students.

Originality/value: There is a shortage of publications that report the results of the use of portfolios for the assessment of information literacy.


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rabia S. Allari ◽  
Khaldoun Hamdan ◽  
Maha Alkaid Albqoor ◽  
Abeer Shaheen

Purpose: To describe the perceived level of information competency among nursing students in Jordan.

Design/methodology/approach: A cross-sectional correlational design was utilized. Data were collected using an electronic self-administered questionnaire from graduate and undergraduate nursing students in Jordan.

Findings: Nursing students showed a moderate mean total score of information competency (184.11 out of 280, SD = 22.92). Among the information competency subscales, using information technologies had the highest mean score, while information from the mass media had the lowest. Information competency was positively correlated with students’ age. Significant differences were found in information competency according to academic level, coverage of scientific research and database searching in the course of study, frequency of meeting the supervisor to discuss research, and university sector.

Originality/value: Although numerous studies worldwide have assessed nursing students’ information literacy, this paper represents the first study of information literacy competencies among nursing students in Jordan. While the content supports conclusions drawn from other studies, this study is novel in terms of the student population it addressed. Information competency among nursing students can be improved by integrating standard research and information competency courses at the undergraduate level and involving mass media platforms in the nursing education curricula.
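Group comparisons like those reported here (differences by academic level, university sector, and so on) are commonly tested with ANOVA or t-tests; the abstract does not name the specific test, so the sketch below is only an assumed illustration with made-up subgroup scores.

```python
# Minimal sketch: testing for differences in competency scores across
# academic levels (hypothetical scores, not the study's data).
from scipy.stats import f_oneway

undergrad_y1 = [170, 165, 180, 175, 168]
undergrad_y4 = [185, 190, 178, 192, 188]
graduate = [200, 195, 205, 198, 210]

f_stat, p = f_oneway(undergrad_y1, undergrad_y4, graduate)
print(f"F = {f_stat:.2f}, p = {p:.4f}")  # p < 0.05 suggests a level effect
```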


Author(s):  
Monica D. T. Rysavy ◽  
Russell Michalak ◽  
Kevin Hunt

This chapter describes how researchers at a small, private, master’s-level college examined how different delivery modes (face-to-face [F2F], hybrid, and online instruction) may impact first-year students’ perceptions of their information literacy (IL) skills compared to their test-assessed IL skills, using the Students’ Perception of Information Literacy Questionnaire (SPIL-Q) and Information Literacy Assessment (ILA) instruments. These instruments were developed and deployed with international graduate business students in two previous studies: Michalak and Rysavy, and Michalak, Rysavy, and Wessel. The students (n = 161) in this study were enrolled in a first-year English composition course in the Spring 2017 semester. This iteration achieved an overall response rate of 87.04% (n = 141). Overall, the greatest achievement was demonstrated by students in hybrid course sections.


2020 ◽  
Vol 36 (4) ◽  
pp. 356-362
Author(s):  
Esther Z. Barsom ◽  
Ewout van Hees ◽  
Willem A. Bemelman ◽  
Marlies P. Schijven

Background: Video consultation (VC) is considered promising for delivering healthcare closer to the patient and improving patient satisfaction. Indeed, providing care at a distance via VC is believed to be promising for some situations and patients, serving their needs without the associated concomitant costs. To assess implementation and perceived benefits, patient satisfaction is frequently measured, often using both quantitative and qualitative outcome analysis. As studies employ different surveys, pooling of data on the topic is troublesome. This systematic review critically appraises, summarizes, and compares the available questionnaires in order to identify the most suitable questionnaire for outcome research on VC in clinical outpatient care.

Methods: PubMed, Embase, and Cochrane were searched for relevant articles using predefined inclusion criteria. Methodological quality appraisal of the yielded questionnaires was performed using the validated COSMIN guideline.

Results: The systematic search identified twelve studies that used ten different patient satisfaction questionnaires. The overall quality of nine questionnaires was rated as “inadequate” to “doubtful” according to the COSMIN criteria. None of the questionnaires retrieved had completed a robust validation process for their purpose of use.

Conclusion and recommendations: Although high-quality studies on the measurement properties of these questionnaires are scarce, the questionnaire developed by Mekhjian has the highest methodological quality, achieving validity on internal consistency with a large sample size. Moreover, this questionnaire can be used across healthcare settings. This finding may be instrumental in further studies measuring patient satisfaction with VC.

