Information literacy self-efficacy: The effect of juggling work and study

2013 ◽  
Vol 35 (4) ◽  
pp. 279-287 ◽  
Author(s):  
Mitchell Ross ◽  
Helen Perkins ◽  
Kelli Bodey

2019 ◽
Vol 14 (2) ◽  
pp. 128-130
Author(s):  
Brittany Richardson

A Review of: De Meulemeester, A., Buysse, H., & Peleman, R. (2018). Development and validation of an Information Literacy Self-Efficacy Scale for medical students. Journal of Information Literacy, 12(1), 27-47. Retrieved from https://ojs.lboro.ac.uk/JIL/article/view/PRA-V12-I1-2

Abstract

Objective – To create and validate a scale evaluating the information literacy (IL) self-efficacy beliefs of medical students.

Design – Scale development.

Setting – Large, public research university in Belgium.

Subjects – 1,252 medical students enrolled in a six-year medical program in the 2013-2014 academic year.

Methods – Ten medicine-specific IL self-efficacy questions were developed to expand the 28-item Information Literacy Self-Efficacy Scale (ILSES) (Kurbanoglu, Akkoyunlu, & Umay, 2006). Medical students in Years 1-5 completed the questionnaire (in English) in the first two weeks of the academic year; students in Year 6 completed it after final exams. Respondents rated their confidence with each item from 0 (‘I do not feel confident at all’) to 100 (‘I feel 100% confident’). Principal Axis Factoring was conducted on all 38 items to identify subscales. Responses were found suitable for factor analysis using Bartlett’s Test of Sphericity and the Kaiser-Meyer-Olkin (KMO) measure. Factors were extracted using the Kaiser-Guttman rule with Varimax rotation applied. Cronbach’s alpha was used to test the internal consistency of each identified subscale. Following a one-way ANOVA testing for significant differences, a Tamhane T2 post-hoc test provided pairwise comparisons between mean responses for each student year.

Main Results – Five subscales with a total of 35 items were validated for inclusion in the Information Literacy Self-Efficacy Scale for Medicine (ILSES-M) and found to have high reliability (Cronbach’s alpha scores greater than .70).
Subscales were labelled by concept: “Evaluating and Processing Information” (11 items), “Medical Information Literacy Skills” (10 items), “Searching and Finding Information” (6 items), “Using the Library” (4 items), and “Bibliography” (4 items). The factor loadings of the non-medical subscales closely reflected studies validating the original ILSES (Kurbanoglu, Akkoyunlu, & Umay, 2006; Usluel, 2007), suggesting consistency across contexts and over time. Although overall subscale means were relatively low, initial findings among medical students at Ghent University showed that IL self-efficacy increases as students advance through the six-year medical program. Students reported the least confidence in “Using the Library.”

Conclusions – Individuals’ self-efficacy in approaching IL tasks affects self-motivation and lifelong learning. The authors developed the ILSES-M as part of a longitudinal study protocol appraising the IL self-efficacy beliefs of students in a six-year medical curriculum (De Meulemeester, Peleman, & Buysse, 2018). The ILSES-M “…could give a clear idea about the evolution of perceived IL and the related need for support and training” (p. 43). Further research could evaluate the scale’s impact on curriculum and, conversely, the impact of curricular changes on IL self-efficacy. Qualitative research may provide additional context for interpreting the scale. The scale may also allow assessment of incoming students’ confidence levels over time. The authors suggested that further research apply the ILSES-M in diverse cultural and curricular settings.
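The reliability criterion used above (Cronbach’s alpha greater than .70) is a standard formula over item-level responses. As a minimal illustration of how that statistic is computed — using invented toy data, not the study’s actual responses — a pure-Python sketch:

```python
def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents, each a list of item scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items. Sample variance (n - 1 denominator) is used.
    """
    k = len(responses[0])          # number of items on the scale
    def var(xs):                   # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in responses]) for i in range(k)]
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy example: three respondents answering a two-item subscale.
# Perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

A subscale would be retained under the authors’ criterion when this value exceeds .70; real scales compute it over all respondents and all items in the subscale.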


2017 ◽  
Vol 35 (5) ◽  
pp. 1035-1051 ◽  
Author(s):  
Khalid Mahmood

Purpose – This paper systematically reviews the evidence of reliability and validity of scales used in survey studies assessing students’ perceived self-efficacy in information literacy (IL) skills.

Design/methodology/approach – Searches were conducted in two subject and two general databases, followed by screening of titles, abstracts, and full texts of the retrieved documents.

Findings – In total, 45 studies met the eligibility criteria. A large number of studies did not report any psychometric characteristics of the data collection instruments they used. The selected studies provided information on 22 scales. The instruments were heterogeneous in the number of items and the type of scale options. The most commonly reported reliability measure was internal consistency (with high values of Cronbach’s alpha), and the most common validity evidence was face/content validity assessed by experts.

Practical implications – The culture of using good-quality scales needs to be promoted by IL practitioners, authors, and journal editors.

Originality/value – This paper is the first review of its kind and is useful for IL stakeholders.


2006 ◽  
Vol 62 (6) ◽  
pp. 730-743 ◽  
Author(s):  
S. Serap Kurbanoglu ◽  
Buket Akkoyunlu ◽  
Aysun Umay

2020 ◽  
Vol 108 (2) ◽  
Author(s):  
Lorie A. Kloda ◽  
Jill T. Boruff ◽  
Alexandre Soares Cavalcante

Objective: In educating students in the health professions about evidence-based practice, instructors and librarians typically use the patient, intervention, comparison, outcome (PICO) framework for asking clinical questions. A recent study proposed an alternative framework for the rehabilitation professions. The present study investigated the effectiveness of teaching the alternative framework in an educational setting.

Methods: A randomized controlled trial was conducted with students in occupational therapy (OT) and physical therapy (PT) to determine whether the alternative framework for asking clinical questions was effective for identifying information needs and searching the literature. Participants were randomly allocated to a control or experimental group and received ninety minutes of information literacy instruction from a librarian on formulating clinical questions and searching the literature using MEDLINE. The control group received instruction based on the PICO question framework, and the experimental group received instruction based on the alternative framework.

Results: There were no significant differences in search performance or search skills (strategy and clinical question formulation) between the two groups. Both the control and experimental groups demonstrated a modest but significant increase in information literacy self-efficacy after the instruction; however, there was no difference between the two groups.

Conclusion: When taught in an information literacy session, the new, alternative framework is as effective as PICO with respect to OT and PT students’ searching skills. Librarian-led workshops using either question formulation framework led to an increase in information literacy self-efficacy post-instruction.

