Ask Live! UK public libraries and virtual collaboration

2009 ◽  
Vol 27 (86) ◽  
pp. 43-50 ◽  
Author(s):  
Linda Berube

Ask A Librarian, the UK public library digital reference service, has been piloting LSSI's Virtual Reference Toolkit. The pilot, managed by Ask administrator Co-East, went live to public users in May 2003 and will continue through September 2003. The pilot's objectives include not only an evaluation of the software and support offered by LSSI, but also consideration of the eventual integration of the chat component with the main web-form service, and of the implications for uptake and sustainability. This article combines a report of the largely positive initial findings of the pilot with an overview of digital reference service in UK public libraries.

2009 ◽  
Vol 61 (4) ◽  
pp. 152
Author(s):  
Susan Herzog

Recently I completed answering a two-page list of questions from a Virginia library that was planning a digital reference service. Their concerns reminded me of where the Public Library of Charlotte and Mecklenburg County (PLCMC) was about two years ago, when we began to consider virtual reference.


2021 ◽  
Author(s):  
Suzanne Alison Barnaby

This study looks at digital reference services in New Zealand public libraries to find out what types of services are being provided and what impact they are having on traditional reference services. A survey was sent to twenty-seven selected public libraries, with a further sixteen selected libraries added after a low response rate from the first group, to collect information on their digital reference services. The libraries, selected from all areas of New Zealand, included large, medium and small institutions. A questionnaire was used to collect the information and the data was statistically analysed. All of the large libraries and the majority of the medium libraries selected are providing a digital reference service in the form of email or a web form. Four of the large libraries are participating in AnyQuestions, a virtual reference service for New Zealand school children, and one large library has its own virtual service. The low response rate and deficiencies in the survey design have resulted in inconclusive results for this study. We know libraries are providing digital reference services, and we know something about how the services are provided, but it is still unclear whether these services are having an impact on traditional reference services.


2008 ◽  
Vol 3 (1) ◽  
pp. 72 ◽  
Author(s):  
Stephanie Hall

A review of: Kwon, Nahyun. "Public Library Patrons' Use of Collaborative Chat Reference Service: The Effectiveness of Question Answering by Question Type." Library & Information Science Research 29.1 (Mar. 2007): 70-91.

Objective – To assess the effectiveness of a collaborative chat reference service in answering different types of questions. Specifically, the study compares the degree of answer completion and the level of user satisfaction for simple factual questions vs. more in-depth subject-based reference questions, and for 'local' (pertaining to a particular library) and non-local questions.

Design – Content analysis of 415 transcripts of reference transactions, which were also compared to corresponding user satisfaction survey results.

Setting – An online collaborative reference service offered by a large public library system (33 branch and regional locations). This service is part of the Metropolitan Co-operative Library System: a virtual reference consortium of U.S. libraries (public, academic, special, and corporate) that provides 24/7 service.

Subjects – Reference librarians from around the U.S. (49 different libraries), and users logging into the service via the public library system's portal (primarily patrons of the 49 libraries).

Method – Content analysis was used to evaluate virtual reference transcripts recorded between January and June 2004. Reliability was enhanced through triangulation, with researchers comparing the content analysis of each transcript against the results of a voluntary exit survey. Of the 1,387 transactions that occurred during the period of study, 420 users completed the survey; these transactions formed the basis of the study, apart from five that were omitted because the questions were incomprehensible. Questions were examined and assigned to five categories: "simple, factual questions; subject-based research questions; resource access questions; circulation-related questions; and local library information inquiries" (80-81). Answers were classed as "completely answered, partially answered or unanswered, referred, and problematic endings" (82). Lastly, user satisfaction was surveyed on three measures: satisfaction with the answer, perceived staff quality, and willingness to return. In general, the methods used were clearly described and appeared reliable.

Main results – Distribution of question types: by far the largest group of questions were circulation-related (48.9%), with subject-based research questions coming next (25.8%), then simple factual questions (9.6%), resource access questions (8.9%), and local library information inquiries (6.8%). Effectiveness of chat reference service by question type: no statistically significant difference was found between simple factual questions and subject-based research questions in terms of answer completeness and user satisfaction. However, a statistically significant difference was found when comparing 'local' questions (circulation and local library information questions) with 'non-local' questions (simple factual and subject-based research questions), with both satisfaction and answer completeness being lower for local questions.

Conclusions – The suggestion that chat reference may not be as appropriate for in-depth, subject-based research questions as it is for simple factual questions is not supported by this research. In fact, the author notes that "subject-based research questions, when answered, were answered as completely as factual questions and found to be the question type that gives the greatest satisfaction to the patrons among all question types" (86). Lower satisfaction and answer completion were found for local vs. non-local queries. Additionally, there appeared to be some confusion among patrons about the nature of the collaborative service: they often assumed that the librarian answering their question was from their local library. The author suggests some form of triage to direct local questions to the appropriate venue from the outset, thus avoiding confusion and unnecessary referrals. The emergence of repetitive questions also signalled the need to develop FAQs for chat reference staff and to incorporate such questions into chat reference training.
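The review reports tests of statistical significance across question categories but does not name the test used. For categorical data of this kind (question type vs. answer outcome), a chi-square test of independence is one standard choice; the sketch below illustrates that sort of comparison. All counts are invented for illustration and do not come from Kwon's data.

```python
# Hypothetical illustration of the categorical comparison described above:
# is answer completeness independent of question type? The review does not
# name the test Kwon used; a chi-square test of independence is one
# standard option. None of these counts come from the study.
from scipy.stats import chi2_contingency

# Rows: 'local' vs. 'non-local' questions.
# Columns: completely answered, partially answered/unanswered, referred.
observed = [
    [60, 45, 25],   # invented counts for 'local' questions
    [120, 30, 15],  # invented counts for 'non-local' questions
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would indicate that answer completeness
# varies with question type, as the study found for local questions.
```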


2004 ◽  
Vol 5 (3) ◽  
pp. 96-100 ◽  
Author(s):  
Bill Macnaught

The paper was presented as a response to Curtis's keynote address, published immediately before it. Bill Macnaught is Head of Cultural Development at Gateshead Council, UK, with responsibility for public libraries. He contextualised Curtis's statements with reference to the Gateshead experience.


2011 ◽  
Vol 35 (109) ◽  
pp. 3-39 ◽  
Author(s):  
Christine Rooney-Browne

This paper summarises the findings of a report commissioned by the Chartered Institute of Library and Information Professionals' (CILIP) Library and Information Research Group (LIRG) to produce a comprehensive review of existing quantitative and qualitative evaluation methodologies for demonstrating the value of public libraries in the United Kingdom (UK). A thorough literature review of existing research was carried out and an investigation into best practices for evaluating impact was conducted. A wide range of journals and books published within the fields of library and information science and social research were consulted. Relevant White Papers and Reviews, such as those published by CILIP, the Scottish Library and Information Council (SLIC), the Museums, Libraries and Archives Council (MLA), the Department for Culture, Media and Sport (DCMS), and the American Library Association (ALA), were analysed. Additional online searches helped to identify models of best practice and the most up-to-date methods currently in use for measuring value outside of the UK.

During the early stages of the literature review it became clear that a limited amount of research has been carried out in the UK field of public library valuation. Although academic researchers at the Universities of Loughborough, Sheffield and Strathclyde have published various journal articles and reports on this topic, there is a lack of evidence that local authorities have been implementing the methodologies that the academics have recommended. Although it is possible that some local authorities may be working in isolation to implement bespoke evaluation methodologies, it has been difficult to uncover examples of best practice in the UK. Therefore, as the literature review progressed, the author expanded beyond the UK public library sector into the broader areas of economics, sociology and psychology. This enabled a more thorough understanding of the increase in evaluations, incentives, benchmarking, objective setting, accountability, and social and economic auditing.

It is anticipated that the findings of this research will help the sector to develop more appropriate models for demonstrating the value of public libraries in the 21st century. The original report was compiled in June 2010.


2019 ◽  
Vol 52 (3) ◽  
pp. 713-725 ◽  
Author(s):  
Elaine Robinson ◽  
David McMenemy

Acceptable Use Policies (AUPs) are documents stating the limitations users must agree to when first accessing information and communications technologies (ICTs) in organisations such as employers, educational institutions and public libraries. AUPs lay out the parameters of acceptable use expected of someone accessing the ICT services provided, and should state in clear and understandable terms what behaviours will attract sanctions, both legal and in terms of restricting future access. Utilising a range of standard readability tests designed to measure how understandable documents are, the paper investigates how readable the AUPs presented to public library patrons in the UK are in practice. Of the 206 AUPs in use across the local government departments that manage public library services, 200 were obtained and subjected to a range of readability testing procedures. Four readability tests were used for analysis: the Flesch Reading Ease, the Coleman-Liau Index, the Gunning Fog Index and the SMOG Grade. Results for all four readability tests administered on all AUPs raise significant questions. For the Flesch Reading Ease score, only 5.5% of AUPs scored at the standard readability level or higher (60+), and 8% scored at a very high level of difficulty, akin to a piece of scientific writing. Similarly, for SMOG, only 7.5% of the 200 AUPs scored at the recommended grade level of 10. Likewise, very few AUPs scored at levels recommended for a general audience with either the Gunning Fog Index (11.5%) or the Coleman-Liau Index (2%). With such variability in readability, the fitness for purpose of the average AUP as a contract patrons must agree to can be called into question. This paper presents the first analysis of the readability of library AUPs in the literature. Recommendations are made as to how public library services may improve this aspect of practice.
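All four tests named above are computed from simple surface statistics of a text: counts of sentences, words, letters and syllables. The sketch below implements the standard published formulas, so the thresholds quoted in the abstract (a Flesch score of 60+, a SMOG grade of 10) can be tried against sample policy text. The naive vowel-group syllable counter is an assumption for illustration only; the tools used in the study will have applied more careful syllable rules.

```python
# Sketch of the four readability tests named in the abstract.
# Formulas are the standard published ones; the syllable counter is a
# naive vowel-group heuristic (an assumption for illustration only --
# real tools use dictionaries or more careful rules).
import math
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent, n_words = len(sentences), len(words)
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    return {
        # Flesch Reading Ease: higher = easier; 60+ is 'standard' prose.
        "flesch_reading_ease": 206.835 - 1.015 * (n_words / n_sent)
                               - 84.6 * (syllables / n_words),
        # Coleman-Liau Index: a US grade level, based on letters per word.
        "coleman_liau": 0.0588 * (100 * letters / n_words)
                        - 0.296 * (100 * n_sent / n_words) - 15.8,
        # Gunning Fog Index: years of schooling needed; lower = more accessible.
        "gunning_fog": 0.4 * (n_words / n_sent + 100 * complex_words / n_words),
        # SMOG Grade: grade level estimated from polysyllabic words.
        "smog_grade": 1.0430 * math.sqrt(complex_words * 30 / n_sent) + 3.1291,
    }

sample = ("Users must not attempt to circumvent the filtering software. "
          "Access may be withdrawn if these conditions are breached.")
print(readability_scores(sample))
```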


2016 ◽  
Vol 50 (2) ◽  
pp. 168-185 ◽  
Author(s):  
Simon Wakeling ◽  
Sophie Rutter ◽  
Briony Birdi ◽  
Stephen Pinfield

This paper presents the results of a mixed methods study of interlending and resource sharing in UK public libraries, based on the results of a survey distributed to both senior library managers and interlending staff, and on in-depth follow-up interviews with 20 respondents. We present an analysis of perspectives towards rates of interlending, the rationales and strategies for providing the service, the perceived value for money offered by various interlending schemes, the impact of the current digital environment, and views on the future of interlending in the UK. Our findings suggest that while interlending services are undoubtedly threatened by the drastic cuts to public library funding, and demand for the service is more generally in decline, resource sharing is viewed by some as a potential means of mitigating the effects of increasingly limited acquisitions budgets and of ensuring that the public library system continues to provide access to a wide range of resources for its users.

