Purpose
Though we live in a digital era, libraries continue to offer substantial hours of in-person reference services alongside their online reference services. Moreover, requests for in-person, individualized research consultations (IRCs) have increased over the last few years. IRCs between librarians and students are common practice in academic institutions. While these sessions are useful to patrons because they are tailored to their specific needs, they can also be time-consuming for librarians. It is therefore important to evaluate this service and assess its impact to ensure that users are getting the most out of their sessions. The purpose of this paper is to gather information on the evaluation and assessment tools that Canadian institutions use to obtain feedback, measure impact and improve their consultation services.
Design/methodology/approach
A bilingual (French and English) web-based questionnaire, including a generic definition of IRCs, was developed. The questionnaire gathered general demographics and background information on IRC practices among Canadian academic librarians, followed by reflective questions on how such practices are assessed. It was distributed to Canadian academic librarians by e-mail through professional librarian associations’ listservs and was also disseminated via Twitter.
Findings
The survey found that the disciplines of health sciences and medicine, as well as the arts and humanities, are the heaviest users of the IRC service model. On average, these sessions are one hour in length, are provided by librarians who often require advance preparation time to adequately help the user, and rarely involve follow-up appointments. Unsurprisingly, a lack of assessment methods for IRCs was identified among Canadian academic libraries. Most libraries either have no assessment in place for IRCs or rely heavily on informal feedback from users, comments from faculty members and so on. A small proportion of libraries use usage statistics to assess their IRC service, but other means of assessment are practically non-existent.
Research limitations/implications
The survey was distributed only to Canadian academic libraries. Institutions in the USA and other countries that also offer IRCs may have methods for evaluating and assessing these sessions that the authors did not capture; the evidence is therefore biased. In addition, each discipline approaches IRCs very differently, which makes it challenging to compare evaluation and assessment methods across disciplines. Furthermore, the study’s population is unknown, as the authors did not know the exact number of librarians or library staff providing IRCs by appointment in Canadian academic institutions. While the response rate was reasonably good, it is impossible to know whether the sample is representative of the population. It should also be acknowledged that the study is exploratory in nature, as it is the first study solely dedicated to examining academic librarians’ IRC practices. Because further research is needed to evaluate and assess IRCs with an evidence-based approach, the authors will be conducting a pre-test and post-test to assess the impact of IRCs on students’ search techniques.
Originality/value
Evidence-based practice for IRCs is limited. Very few studies have examined the evaluation and assessment methods for these sessions; therefore, a “lay of the land” was needed. The study is exploratory in nature, as it is the first study solely dedicated to examining the evaluation and assessment methods of academic librarians’ IRC practices.