Chat Reference Service in Medical Libraries

2003 ◽  
Vol 22 (2) ◽  
pp. 1-13 ◽  
Author(s):  
Cheryl R. Dee

2006 ◽  
Vol 2 (2-3) ◽  
pp. 107-125 ◽  
Author(s):  
Clista C. Clanton ◽  
Geneva B. Staggs ◽  
Thomas L. Williams

2018 ◽  
Vol 35 (8) ◽  
pp. 10-14
Author(s):  
Nove E. Variant Anna

Purpose This paper aims to examine the websites of provincial public libraries in Indonesia and to offer recommendations for a knowledge portal website that can support the creation and invention of knowledge. Design/methodology/approach Data and information were gathered by observing library websites at the provincial level to assess their digital services and collections. The survey covered 34 provincial public library websites between August 1 and 15, 2017, and focused on the availability of online digital collections; of digital services such as chat reference, through which users can converse with a librarian; of trusted external information sources; and of user forums for discussion. Findings The results show that public library websites in Indonesia are still static (minimally interactive) and provide only standard information about library services, opening hours, contact numbers and collections. Based on these results, it is recommended that every public library transform its website into a knowledge portal that delivers a real and direct benefit to users, especially in fostering innovation. Originality/value This paper also recommends a framework for a knowledge portal that includes e-resources, user needs, partnership, internet resources, integrated OPAC and collaboration. Surveys of library websites are rarely conducted in Indonesia; therefore, these results will be useful for developing library websites.
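As an aside on method, the observation protocol described above amounts to a feature checklist tallied across the 34 websites. A minimal sketch of such a tally in Python, with illustrative feature names and values that are not taken from the paper:

```python
import pandas as pd

# Hypothetical coding sheet: one row per provincial library website,
# one boolean column per feature checked in the survey (names are
# illustrative, not the paper's instrument).
observations = pd.DataFrame({
    "digital_collection": [True, False, True, True],
    "chat_reference":     [False, False, True, False],
    "external_sources":   [True, True, False, True],
    "user_forum":         [False, False, False, True],
})

# Percentage of surveyed websites offering each feature.
availability = observations.mean().mul(100).round(1)
print(availability)
```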


2008 ◽  
Vol 3 (1) ◽  
pp. 72 ◽  
Author(s):  
Stephanie Hall

A review of: Kwon, Nahyun. "Public Library Patrons' Use of Collaborative Chat Reference Service: The Effectiveness of Question Answering by Question Type." Library & Information Science Research 29.1 (Mar. 2007): 70-91. Objective – To assess the effectiveness of a collaborative chat reference service in answering different types of question. Specifically, the study compares the degree of answer completion and the level of user satisfaction for simple factual questions vs. more in-depth subject-based reference questions, and for ‘local’ (pertaining to a particular library) and non-local questions. Design – Content analysis of 415 transcripts of reference transactions, which were also compared to corresponding user satisfaction survey results. Setting – An online collaborative reference service offered by a large public library system (33 branch and regional locations). This service is part of the Metropolitan Co-operative Library System: a virtual reference consortium of U.S. libraries (public, academic, special, and corporate) that provides 24/7 service. Subjects – Reference librarians from around the U.S. (49 different libraries), and users logging into the service via the public library system’s portal (primarily patrons of the 49 libraries). Method – Content analysis was used to evaluate virtual reference transcripts recorded between January and June 2004. Reliability was enhanced through triangulation, with researchers comparing the content analysis of each transcript against the results of a voluntary exit survey. Of the 1,387 transactions that occurred during the period of study, 420 users completed the exit survey; 5 transactions were omitted because the questions were incomprehensible, leaving 415 transcripts as the basis of the study. Questions were examined and assigned to five categories: “simple, factual questions; subject-based research questions; resource access questions; circulation-related questions; and local library information inquiries” (80-81). Answers were classed as “completely answered, partially answered or unanswered, referred, and problematic endings” (82). Lastly, user satisfaction was surveyed on three measures: satisfaction with the answer, perceived staff quality, and willingness to return. In general, the methods used were clearly described and appeared reliable. Main results – Distribution of question types: By far the largest group of questions were circulation-related (48.9%), followed by subject-based research questions (25.8%), simple factual questions (9.6%), resource access questions (8.9%), and local library information inquiries (6.8%). Effectiveness of chat reference service by question type: No statistically significant difference was found between simple factual questions and subject-based research questions in terms of answer completeness or user satisfaction. However, a statistically significant difference was found when comparing ‘local’ questions (circulation and local library information questions) with ‘non-local’ questions (simple factual and subject-based research questions): both satisfaction and answer completeness were lower for local questions. Conclusions – The suggestion that chat reference may not be as appropriate for in-depth, subject-based research questions as it is for simple factual questions is not supported by this research. 
In fact, the author notes that “subject-based research questions, when answered, were answered as completely as factual questions and found to be the question type that gives the greatest satisfaction to the patrons among all question types” (86). Satisfaction and answer completion were lower for local than for non-local queries. Additionally, there appeared to be some confusion among patrons about the nature of the collaborative service: they often assumed that the librarian answering their question was from their local library. The author suggests some form of triage to direct local questions to the appropriate venue from the outset, thus avoiding confusion and unnecessary referrals. The emergence of repetitive questions also signalled the need to develop FAQs for chat reference staff and to incorporate such questions into chat reference training.
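The review does not name the statistical test behind the local vs. non-local comparison, but comparing answer completeness across question groups is naturally run as a contingency-table test. A minimal sketch with scipy, using invented counts rather than Kwon's data:

```python
from scipy.stats import chi2_contingency

# Invented counts for illustration only: rows are question locality,
# columns are answer outcome (completely answered vs. not).
table = [
    [110, 121],  # local: circulation + local library information questions
    [112,  72],  # non-local: simple factual + subject-based questions
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value would indicate that answer completeness differs
# between local and non-local questions.
```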


2017 ◽  
Vol 12 (2) ◽  
pp. 172
Author(s):  
Sue F. Phelps

A Review of: Maloney, K., & Kemp, J. H. (2015). Changes in reference question complexity following the implementation of a proactive chat system: Implications for practice. College & Research Libraries, 76(7), 959-974. http://dx.doi.org/10.5860/crl.76.7.959 Abstract Objective – To determine whether the complexity of reference questions has changed over time; whether chat reference questions are more complex than those asked at the reference desk; and whether proactive chat increases the number and complexity of questions. Design – Literature review and library data analysis. Setting – Library of a doctoral degree-granting university in the United States of America. Methods – The study was carried out in two parts. The first was a meta-analysis of published data with empirical findings about the complexity of questions received at library service points in relation to staffing levels. The authors used seven studies published between 1977 and 2012 from their literature review to create a matrix comparing reference questions by the staffing level required to answer them (e.g., a nonprofessional, a generalist, or a librarian). They present these articles in chronological order to illustrate how questions have changed over time, and sorted questions by the service point at which they were asked, either through a chat service or at a reference desk. In the second part of the study, the authors used the READ scale to categorize the complexity of questions asked at the reference desk and via proactive chat reference. They collected chat reference data for six one-week periods over the course of eight months to provide a representative sample, and recorded reference desk questions for three of those same weeks. Both evaluators scored the data for a single week to norm their results, while the remaining data was coded independently. Main Results – The complexity of questions in the seven articles studied indicated change over time, shown in tables for desk and chat reference. One outlier, a study published in 1977 before reference tools and resources moved online, reported that 62% of questions asked could be answered by nonprofessionals, 32% by a trained generalist, and only 6% required a librarian. The six other studies were published after 2001, when most resources had moved online. Of the questions from these six, the authors found that 74-90% could be answered by a nonprofessional, 12-16% by a generalist, and 0-11% required a librarian. Once chat reference was added, there was more variation between studies, with generalist questions at 30-47% of those reported and 10-23% requiring a librarian. Though the underlying differences in the study designs do not allow for formal analysis, the seven studies indicate that more complex questions are asked via chat service than at the reference desk. Each staffing level was grouped and averaged for comparison. The 1977 study shows nonprofessional questions at 62%, generalist questions at 32%, and librarian questions at 6%. Reference desk questions in the post-2001 articles indicated 81% nonprofessional, 13% generalist, and 5% librarian questions. Post-2001 chat questions were 49% nonprofessional, 36% generalist, and 15% librarian level. In the second part of the study, the data coded using the READ scale and collected from the proactive chat system showed an increased number and complexity of questions. 
The authors found that 4% of questions were rated at level 1 (e.g., directional questions, library hours), 30% at level 2 (e.g., known-item searching), 39% at level 3 (e.g., reference questions), and 27% at level 4, requiring advanced expertise (e.g., using specialized databases or data sets). The authors combined questions at levels 5 and 6 due to low numbers and did not describe them when reporting their study. In comparison, 15% of reference desk questions were at level 3 on the READ scale, and 1% were at level 4. Conclusion – Proactive chat reference service increased the number and the complexity of questions over those received at the reference desk. The frequency of complex questions was too high for nonprofessional staff to refer questions to librarians, prompting a reevaluation of the tiered service model. Further, this study demonstrates that users still have questions about research, but for users to access services for these questions “reference service must be proactive, convenient, and expert to meet user expectations and research needs” (p. 972).
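Categorizing each question on the READ scale and comparing chat with desk reduces to counting codes per level. A minimal sketch of that computation, with invented codes rather than the authors' data:

```python
from collections import Counter

def read_distribution(codes, levels=range(1, 7)):
    """Percentage of questions at each READ scale level (1-6)."""
    counts = Counter(codes)
    total = len(codes)
    return {level: round(100 * counts.get(level, 0) / total, 1)
            for level in levels}

# Hypothetical codes assigned by the evaluators (1 = directional,
# ... 6 = most complex); the values are invented for illustration.
chat_codes = [1, 2, 2, 3, 3, 3, 4, 4, 2, 3, 4, 2]
desk_codes = [1, 1, 2, 2, 2, 1, 3, 2, 1, 2, 1, 2]

print("chat:", read_distribution(chat_codes))
print("desk:", read_distribution(desk_codes))
```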


2020 ◽  
Vol 15 (2) ◽  
pp. 156-158
Author(s):  
Heather MacDonald

A Review of: Meert-Williston, D., & Sandieson, R. (2019). Online Chat Reference: Question Type and the Implication for Staffing in a Large Academic Library. The Reference Librarian, 60(1), 51-61. http://www.tandfonline.com/doi/full/10.1080/02763877.2018.1515688 Abstract Objective – To determine the types of questions received via online chat in order to inform staffing decisions for the chat reference service, in light of the library’s service mandate. Design – Content analysis of consortial online chat questions. Setting – A large academic library in Canada. Subjects – Analysis included 2,734 chat question transcripts. Methods – The authors analyzed chat question transcripts from patrons at the institution for the period from September 2013 to August 2014. They coded transcripts by question type using a coding tool they created; for transcripts that fit more than one question type, they chose the most prominent type. Main Results – The chat questions were coded as follows: service (51%), reference (25%), citation (9%), technology (7%), and miscellaneous (8%). The majority of service questions were informational, followed by account-related questions. Most of the reference chat questions were ready reference, with only 16% (4% of the total number of chat questions) being in-depth. After removing miscellaneous questions, those that required a high level of expertise (in-depth reference, instructional, copyright, or citation) equaled 19%. Conclusion – At this institution, one in five chat questions needed a high level of expertise. Library assistants with sufficient expertise could effectively answer circulation and general reference questions, and with training they could triage complex questions.
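The 19% figure follows from excluding miscellaneous questions from the denominator before summing the high-expertise categories. A minimal sketch of that arithmetic, with invented counts (the category labels follow the review; the numbers do not come from the paper):

```python
# Invented per-category counts; the category labels follow the review,
# but the numbers do not come from the paper.
counts = {
    "service": 1394, "ready_reference": 574, "in_depth_reference": 109,
    "citation": 246, "technology": 191, "instructional": 120,
    "copyright": 25, "miscellaneous": 120,
}

HIGH_EXPERTISE = {"in_depth_reference", "instructional", "copyright", "citation"}

# Drop miscellaneous questions before computing the share, as the authors did.
classifiable = sum(v for k, v in counts.items() if k != "miscellaneous")
share = 100 * sum(counts[k] for k in HIGH_EXPERTISE) / classifiable
print(f"{share:.0f}% of classifiable questions required a high level of expertise")
```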


2019 ◽  
Vol 14 (4) ◽  
pp. 2-20
Author(s):  
Kathryn Barrett ◽  
Sabina Pagotto

Abstract Objective – Researchers at an academic library consortium examined whether the service model, staffing choices, and policies of its chat reference service were associated with user dissatisfaction, aiming to identify areas where the collaboration is successful and areas that could be improved. Methods – The researchers examined transcripts, metadata, and survey results from 473 chat interactions originating from 13 universities between June and December 2016. Transcripts were coded for user, operator, and question type; mismatches between the chat operator’s and the user’s institutions, and reveals of such a mismatch; how busy the shift was; proximity to the end of a shift or to service closure; and reveals of such aspects of scheduling. Chi-square tests and a binary logistic regression were performed to test the association between these variables and user dissatisfaction. Results – There were no significant relationships between user dissatisfaction and user type, question type, institutional mismatch, busy shifts, chats initiated near the end of a shift or service closure time, or reveals about aspects of scheduling. However, revealing an institutional mismatch was correlated with user dissatisfaction. Operator type was also a significant variable; users expressed less dissatisfaction with graduate student staff hired by the consortium. Conclusions – The study largely reaffirmed the consortium’s service model, staffing practices, and policies. Users are not dissatisfied with the service received from chat operators at partner institutions, or with service provided by non-librarians. Current policies for scheduling, handling shift changes, and service closure are appropriate, but best practices for disclosing institutional mismatches may need to change. This exercise demonstrates that institutions can trust the consortium with their local users’ needs, and it underscores the need for periodic service review.
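The regression step described in the Methods could be run along these lines; a minimal sketch using statsmodels on simulated data, where the variable names mirror the study's coding but the values and the built-in effect are invented:

```python
import numpy as np
import statsmodels.api as sm

# Simulated coded-transcript data: variable names mirror the study's
# coding scheme, but the values and the effect size are invented.
rng = np.random.default_rng(42)
n = 400
mismatch_revealed = rng.integers(0, 2, n)
busy_shift = rng.integers(0, 2, n)
near_closing = rng.integers(0, 2, n)

# Make dissatisfaction more likely when a mismatch is revealed,
# echoing the direction of the study's reported correlation.
logit_p = -2.0 + 1.2 * mismatch_revealed
dissatisfied = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([mismatch_revealed, busy_shift, near_closing]))
result = sm.Logit(dissatisfied, X).fit(disp=0)
print(result.summary(xname=["const", "mismatch_revealed",
                            "busy_shift", "near_closing"]))
```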

