Analysis of Question Type Can Help Inform Chat Staffing Decisions

2020 ◽  
Vol 15 (2) ◽  
pp. 156-158
Author(s):  
Heather MacDonald

A Review of: Meert-Williston, D., & Sandieson, R. (2019). Online Chat Reference: Question Type and the Implication for Staffing in a Large Academic Library. The Reference Librarian, 60(1), 51-61. http://www.tandfonline.com/doi/full/10.1080/02763877.2018.1515688 Abstract Objective – To determine the types of online chat questions asked, in order to inform staffing decisions for the chat reference service in light of the library’s service mandate. Design – Content analysis of consortial online chat questions. Setting – Large academic library in Canada. Subjects – Analysis included 2,734 chat question transcripts. Methods – The authors analyzed chat question transcripts from patrons at the institution for the period September 2013 to August 2014. The authors coded transcripts by question type using a coding tool they created; for transcripts that fit more than one question type, they chose the most prominent type. Main Results – The authors coded the chat questions as follows: service (51%), reference (25%), citation (9%), technology (7%), and miscellaneous (8%). The majority of service questions were informational, followed by account-related questions. Most of the reference chat questions were ready reference, with only 16% (4% of the total number of chat questions) being in-depth. After removing miscellaneous questions, those that required a high level of expertise (in-depth reference, instructional, copyright, or citation) equaled 19%. Conclusion – At this institution, one in five chat questions needed a high level of expertise. Library assistants with sufficient expertise could effectively answer circulation and general reference questions, and with training they could triage complex questions.
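As a rough illustration of how such a breakdown is produced, the sketch below (Python, with made-up labels mirroring the percentages above) tallies question-type codes from a content analysis and shows how a nested proportion such as "16% of reference questions, or 4% of all questions" is derived; the labels and data structure are illustrative assumptions, not the authors' coding tool.

```python
# Minimal sketch with hypothetical labels: tally coded chat transcripts by
# question type and report each type's share of the total.
import pandas as pd

# Assumed coding output: one question-type label per transcript.
labels = (["service"] * 51 + ["reference"] * 25 + ["citation"] * 9
          + ["technology"] * 7 + ["miscellaneous"] * 8)
coded = pd.Series(labels, name="question_type")

# Percentage of all transcripts falling into each category.
shares = coded.value_counts(normalize=True).mul(100).round(1)
print(shares)

# Nested proportion: if 16% of reference questions are in-depth, their share
# of all questions is 0.16 * 25% = 4%.
print(round(0.16 * shares["reference"], 1))
```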

2008 ◽  
Vol 3 (1) ◽  
pp. 72 ◽  
Author(s):  
Stephanie Hall

A review of: Kwon, Nahyun. "Public Library Patrons' Use of Collaborative Chat Reference Service: The Effectiveness of Question Answering by Question Type." Library & Information Science Research 29.1 (Mar. 2007): 70-91. Objective – To assess the effectiveness of a collaborative chat reference service in answering different types of question. Specifically, the study compares the degree of answer completion and the level of user satisfaction for simple factual questions vs. more in-depth subject-based reference questions, and for ‘local’ (pertaining to a particular library) and non-local questions. Design – Content analysis of 415 transcripts of reference transactions, which were also compared to corresponding user satisfaction survey results. Setting – An online collaborative reference service offered by a large public library system (33 branch and regional locations). This service is part of the Metropolitan Co-operative Library System: a virtual reference consortium of U.S. libraries (public, academic, special, and corporate) that provides 24/7 service. Subjects – Reference librarians from around the U.S. (49 different libraries), and users logging into the service via the public library system’s portal (primarily patrons of the 49 libraries). Method – Content analysis was used to evaluate virtual reference transcripts recorded between January and June 2004. Reliability was enhanced through triangulation, with researchers comparing the content analysis of each transcript against the results of a voluntary exit survey. Of the 1,387 transactions that occurred during the period of study, 420 users completed the exit survey; these transactions formed the basis of the study, apart from five that were omitted because the questions were incomprehensible. Questions were examined and assigned to five categories: “simple, factual questions; subject-based research questions; resource access questions; circulation-related questions; and local library information inquiries” (80-81). Answers were classed as “completely answered, partially answered or unanswered, referred, and problematic endings” (82). Lastly, user satisfaction was surveyed on three measures: satisfaction with the answer, perceived staff quality, and willingness to return. In general, the methods used were clearly described and appeared reliable. Main Results – Distribution of question types: By far the largest group of questions was circulation-related (48.9%), followed by subject-based research questions (25.8%), simple factual questions (9.6%), resource access questions (8.9%), and local library information inquiries (6.8%). Effectiveness of chat reference service by question type: No statistically significant difference was found between simple factual questions and subject-based research questions in terms of answer completeness and user satisfaction. However, a statistically significant difference was found when comparing ‘local’ questions (circulation and local library information questions) with ‘non-local’ questions (simple factual and subject-based research questions): both satisfaction and answer completeness were lower for local questions. Conclusions – The suggestion that chat reference may not be as appropriate for in-depth, subject-based research questions as it is for simple factual questions is not supported by this research. In fact, the author notes that “subject-based research questions, when answered, were answered as completely as factual questions and found to be the question type that gives the greatest satisfaction to the patrons among all question types” (86). Lower satisfaction and answer completion were found for local than for non-local queries. Additionally, there appeared to be some confusion among patrons about the nature of the collaborative service: they often assumed that the librarian answering their question was from their local library. The author suggests some form of triage to direct local questions to the appropriate venue from the outset, thus avoiding confusion and unnecessary referrals. The emergence of repetitive questions also signalled the need to develop FAQs for chat reference staff and to incorporate such questions into chat reference training.
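As a rough analogue of the completeness comparison described in the review, the sketch below cross-tabulates answer outcomes against local versus non-local question scope; the rows of data and the category labels are hypothetical, not Kwon's.

```python
# Minimal sketch (hypothetical data): proportion of each answer outcome within
# local and non-local questions, via a row-normalised cross-tabulation.
import pandas as pd

transactions = pd.DataFrame({
    "scope":   ["local", "local", "non-local", "non-local", "local", "non-local"],
    "outcome": ["partially answered", "completely answered", "completely answered",
                "completely answered", "referred", "partially answered"],
})

completeness = pd.crosstab(transactions["scope"], transactions["outcome"],
                           normalize="index").round(2)
print(completeness)
```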


2019 ◽  
Vol 14 (4) ◽  
pp. 2-20
Author(s):  
Kathryn Barrett ◽  
Sabina Pagotto

Abstract Objective – Researchers at an academic library consortium examined whether the service model, staffing choices, and policies of its chat reference service were associated with user dissatisfaction, aiming to identify areas where the collaboration is successful and areas that could be improved. Methods – The researchers examined transcripts, metadata, and survey results from 473 chat interactions originating from 13 universities between June and December 2016. Transcripts were coded for user, operator, and question type; mismatches between the chat operator’s and user’s institutions, and reveals of such a mismatch; how busy the shift was; proximity to the end of a shift or to service closure; and reveals of these scheduling details. Chi-square tests and a binary logistic regression were performed to test the association between these variables and user dissatisfaction. Results – There were no significant relationships between user dissatisfaction and user type, question type, institutional mismatch, busy shifts, chats initiated near the end of a shift or close to service closure time, or reveals about aspects of scheduling. However, revealing an institutional mismatch was associated with user dissatisfaction. Operator type was also a significant variable; users expressed less dissatisfaction with graduate student staff hired by the consortium. Conclusions – The study largely reaffirmed the consortium’s service model, staffing practices, and policies. Users are not dissatisfied with the service received from chat operators at partner institutions, or with service provided by non-librarians. Current policies for scheduling, handling shift changes, and service closure are appropriate, but best practices for disclosing institutional mismatches may need to be changed. This exercise demonstrates that institutions can trust the consortium with their local users’ needs and underscores the need for periodic service review.
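The abstract does not reproduce the model itself, but a binary logistic regression of the kind described can be sketched as follows; the data are simulated and the variable names (operator_type, mismatch_revealed, busy_shift, dissatisfied) are assumptions standing in for the coded transcript variables.

```python
# Minimal sketch (simulated data): binary logistic regression of user
# dissatisfaction on coded chat variables, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
chats = pd.DataFrame({
    "operator_type":     rng.choice(["librarian", "grad_student"], size=n),
    "mismatch_revealed": rng.integers(0, 2, size=n),
    "busy_shift":        rng.integers(0, 2, size=n),
})
# Simulated outcome: dissatisfaction is made more likely when a mismatch is revealed.
logit_p = -2.0 + 1.2 * chats["mismatch_revealed"] + 0.1 * chats["busy_shift"]
chats["dissatisfied"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Each fitted coefficient is a change in the log-odds of dissatisfaction.
model = smf.logit(
    "dissatisfied ~ C(operator_type) + mismatch_revealed + busy_shift",
    data=chats,
).fit(disp=False)
print(model.summary())
```

Exponentiating a coefficient gives an odds ratio, which is usually the easier quantity to report for a categorical predictor such as operator type.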


2017 ◽  
Vol 12 (1) ◽  
pp. 50 ◽  
Author(s):  
John Carey ◽  
Ajatshatru Pathak

Abstract Objective – The purpose of this study was to examine the reference service mode preferences of community college (two-year) and four-year college students. Methods – The researchers administered a paper-based, face-to-face questionnaire at two institutions within the City University of New York system: Hunter College, a senior college, and Queensborough Community College, a two-year institution. During the summer of 2015, the researchers surveyed 79 participants, asking them to identify their most and least preferred modes of accessing library reference services. Results – Nearly 75% of respondents expressed a preference for face-to-face reference, while only about 18% preferred remote reference services (online chat, e-mail, text message, and telephone). Close to 84% of the participants cited remote reference services as their least preferred modes, and slightly more than 10% said this of face-to-face reference. The data reveal the widespread popularity of face-to-face reference service among all types of participants, regardless of institutional affiliation, age, gender, academic level, field of study, and race or ethnicity. Conclusion – This study suggests that, given the opportunity, academic library users will use face-to-face reference service for assistance with research assignments. Academic libraries at both two-year and four-year institutions might consider assessing user views on reference modes and targeting support toward the services that align with patron preferences.


2018 ◽  
Author(s):  
Adina Mulliken

Eighteen academic library users who are blind were interviewed about their experiences with academic libraries and the libraries’ websites using an open-ended questionnaire and recorded telephone interviews. The study approaches these topics from a user-centered perspective, with the idea that blind users themselves can provide particularly reliable insights into the issues and potential solutions that are most critical to them. Most participants used reference librarians’ assistance, and most had positive experiences. High-level screen reader users requested help with specific needs. A larger number of participants reported contacting a librarian because of feeling overwhelmed by the library website. In some cases, blind users and librarians worked verbally without the screen reader. Users were appreciative of librarians’ help but outcomes were not entirely positive. Other times, librarians worked with users to navigate with a screen reader, which sometimes led to greater independence. Some users expressed satisfaction with working with librarians verbally, particularly if websites did not seem screen reader user friendly, but many users preferred independence. Participants agreed it would be helpful if librarians knew how to use screen readers, or at least if librarians were familiar enough with screen readers to provide relevant verbal cues. Many users liked and used chat reference and many preferred Purdue Online Writing Lab (OWL) to learn citation style, though learning citation style was challenging. Questions such as reference librarians’ role when e-resources are not equally accessible deserve wider discussion in the library literature and in practice. Given the challenges described by the research participants and legal requirements for equally effective electronic and information technologies, libraries and librarians should approach reference services for blind users more proactively. Recommendations are provided. This paper was originally published in Reference and User Services Quarterly at https://journals.ala.org/index.php/rusq/article/view/6528


2005 ◽  
Vol 66 (5) ◽  
pp. 436-455 ◽  
Author(s):  
Sandra L. De Groote ◽  
Josephine L. Dorsch ◽  
Scott Collard ◽  
Carol Scherrer

The purpose of this study was to determine how successfully a large academic library with multiple reference departments and subject specialties could combine virtually to create one digital reference service. Questions were coded to determine who the users of the service were, the types of questions being asked, and the subject expertise of the librarian answering the question. The study found that the majority of questions were submitted by persons affiliated with the university, that ready reference and directional questions predominated, and that the librarians were able to successfully share the duty of answering the general reference questions while ensuring that questions requiring subject expertise were answered by the appropriate subject specialists. Analysis of the types of questions will inform future decisions regarding webpage redesign, online instruction needs, and more appropriate FAQs (frequently asked questions).


2017 ◽  
Vol 30 (2) ◽  
pp. 163-184
Author(s):  
Jiebei Luo

Purpose This paper aims to evaluate the performance of a chat reference service implemented at an academic library in a private liberal arts college by gauging its impact on other forms of reference service in terms of usage volume, with a focus on research-related face-to-face reference questions. Design/methodology/approach Two statistical methods are used, namely the difference-in-differences method and a simple moving average time series analysis, to analyze both the short-term and the long-term impact of chat reference. Findings This study finds that the usage volume of traditional face-to-face reference was significantly affected by chat reference in its first service year. The long-term analysis suggests that chat reference volume has displayed a significant declining trend (−2.06 per cent per academic month) since its implementation, yet its usage volume relative to other reference services has remained stable over time. Originality/value The findings of this case study will be of value to libraries of similar scale and with similar institutional features that are also interested in assessing their chat reference service. In addition, this paper is the first to apply the difference-in-differences approach in the field of library science, and the two statistical methods adopted here can be readily adapted and applied to other volume-based library assessment projects.
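The abstract does not include the data or the model specification, so the sketch below only illustrates the two named techniques on simulated monthly counts: an OLS difference-in-differences estimate (the group × post interaction term) for the short-term effect, and a three-month simple moving average for the long-term trend. The comparison series, the window length, and all counts are assumptions for illustration.

```python
# Minimal sketch (simulated data): difference-in-differences plus a simple
# moving average, applied to monthly reference-desk counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = pd.period_range("2012-09", periods=24, freq="M")

def monthly_counts(base, post_shift):
    # 12 pre-chat months, then 12 post-chat months with an assumed level shift.
    pre = base + rng.normal(0, 5, 12)
    post = base + post_shift + rng.normal(0, 5, 12)
    return np.concatenate([pre, post])

frames = []
for group, base, shift in [("f2f_research", 120, -30), ("comparison", 80, 0)]:
    frames.append(pd.DataFrame({
        "month": months,
        "volume": monthly_counts(base, shift),
        "group": group,
        "post": [0] * 12 + [1] * 12,   # 1 once chat reference is available
    }))
panel = pd.concat(frames, ignore_index=True)

# DiD: the group x post interaction estimates chat's effect on the treated series.
did = smf.ols("volume ~ C(group) * post", data=panel).fit()
print(did.params["C(group)[T.f2f_research]:post"])

# Three-month moving average of the treated series, to read the longer-run trend.
f2f = panel[panel["group"] == "f2f_research"].set_index("month")["volume"]
print(f2f.rolling(window=3).mean().tail())
```

The difference-in-differences estimate is only meaningful if the comparison series would have followed the same trend in the absence of chat reference, which is the assumption any such design has to defend.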


2014 ◽  
Vol 9 (2) ◽  
pp. 31 ◽  
Author(s):  
Annie M. Hughes

A Review of: Bishop, B. W., & Bartlett, J. A. (2013). Where do we go from here? Informing academic library staffing through reference transaction analysis. College & Research Libraries, 74(5), 489-500. Objective – To identify the quantity of location-based and subject-based questions, and to determine the locations where those questions are asked, in order to inform decision-making regarding the optimal placement of staff. Design – Content analysis of location-based and subject-based reference transactions collected using LibStats at 15 face-to-face (f2f) service points and via virtual services. Setting – Virtual and f2f service points at University of Kentucky (UK) campus libraries. Subjects – 1,852 location-based and subject-based reference transactions gathered via a systematic sample of every 70th transaction out of 129,572 transactions collected. Methods – Using LibStats, the researchers collected data on location-based and subject-based questions at all service points at UK Libraries between 2008 and 2011. The researchers eliminated transcripts that did not include complete data or that had question fields left blank. If all question fields were properly completed, the question was identified and coded as location-based or subject-based. Usable transcripts included 1,333 questions that contained sufficient data. For this content analysis, only the question type, reference mode, and location of the question were used from the data collected. Unusable transactions were removed prior to content analysis, and reliability testing was conducted to determine interrater and intrarater reliability. Interrater reliability was high (Krippendorff’s alpha = .87) and intrarater reliability was acceptable (Cohen's kappa = .880). Main Results – Of the usable transcripts, 83.7% contained location-based questions and 16.3% contained subject-based questions; a little over 80% of location-based questions and 77.2% of subject-based questions were asked f2f. Of the location-based questions, 11.5% were directional, and many of these related to finding places inside the libraries. “Attribute of location” questions related to library services and resources, such as finding an item, printing, circulation, desk supplies, and computer problems, made up 72.8% of total question transactions. The researchers found that subject-based questions were difficult to categorize and noted that other methods would be needed to analyze the content of these questions. Professional librarians and library staff are better equipped to answer these questions, and the location where the question is asked is irrelevant. The researchers addressed the issue of where questions were asked by recording the reference mode (chat, e-mail, phone, or f2f) and the service point location at UK Libraries. Overall, 79% of questions were asked f2f rather than via chat or e-mail. The researchers attribute this to a lack of marketing of those services, noting that most questions were asked in the system’s large main library, which also receives the most subject-based questions. Conclusion – This study can inform the UK Libraries system as to where its resources are most needed and allow for more strategic decision-making regarding staffing. The study could also prompt development of a mobile application to answer location-based questions, though more investigation is needed before moving forward with such an app. Based on the findings of this study, UK Libraries will deploy professional library staff to the locations where subject-based questions were most frequently asked. Because staffing of libraries is one of the “most expensive and valuable resources,” academic libraries can use this method to validate their current staffing strategies or justify the allocation of staff throughout their systems (p. 499).
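The systematic sample and the reliability check lend themselves to a short sketch; only the every-70th interval and the 129,572-transaction total come from the review above, while the simulated codes and the use of Cohen's kappa between two hypothetical raters (the study reports Krippendorff's alpha for interrater and Cohen's kappa for intrarater reliability) are illustrative assumptions.

```python
# Minimal sketch: systematic sample of every 70th transaction, plus an
# agreement check between two sets of hypothetical codes.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
log = pd.DataFrame({"transaction_id": np.arange(1, 129_573)})  # 129,572 rows

# Systematic sample: every 70th record, which yields 1,852 transactions.
sample = log.iloc[::70]
print(len(sample))

# Hypothetical codes from two raters for the sampled transactions.
codes = ["location-based", "subject-based"]
rater_a = rng.choice(codes, size=len(sample), p=[0.84, 0.16])
# Rater B mostly agrees; a small share of codes is flipped at random.
flip = rng.random(len(sample)) < 0.05
rater_b = np.where(flip,
                   np.where(rater_a == "location-based", "subject-based", "location-based"),
                   rater_a)

print(cohen_kappa_score(rater_a, rater_b))
```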


2017 ◽  
Vol 57 (2) ◽  
pp. 115 ◽  
Author(s):  
Adina Mulliken

Eighteen academic library users who are blind were interviewed about their experiences with academic libraries and the libraries’ websites using an open-ended questionnaire and recorded telephone interviews. The study approaches these topics from a user-centered perspective, with the idea that blind users themselves can provide particularly reliable insights into the issues and potential solutions that are most critical to them. Most participants used reference librarians’ assistance, and most had positive experiences. High-level screen reader users requested help with specific needs. A larger number of participants reported contacting a librarian because of feeling overwhelmed by the library website. In some cases, blind users and librarians worked verbally without the screen reader. Users were appreciative of librarians’ help but outcomes were not entirely positive. Other times, librarians worked with users to navigate with a screen reader, which sometimes led to greater independence. Some users expressed satisfaction with working with librarians verbally, particularly if websites did not seem screen reader user friendly, but many users preferred independence. Participants agreed it would be helpful if librarians knew how to use screen readers, or at least if librarians were familiar enough with screen readers to provide relevant verbal cues. Many users liked and used chat reference and many preferred Purdue Online Writing Lab (OWL) to learn citation style, though learning citation style was challenging. Questions such as reference librarians’ role when e-resources are not equally accessible deserve wider discussion in the library literature and in practice. Given the challenges described by the research participants and legal requirements for equally effective electronic and information technologies, libraries and librarians should approach reference services for blind users more proactively. Recommendations are provided.


2018 ◽  
Vol 13 (2) ◽  
pp. 112-114 ◽  
Author(s):  
Heather MacDonald

A Review of: Keyes, K., & Dworak, E. (2017). Staffing chat reference with undergraduate student assistants at an academic library: A standards-based assessment. Journal of Academic Librarianship, 43(6), 469–478. https://doi.org/10.1016/j.acalib.2017.09.001 Abstract Objective – To determine whether undergraduate students can provide quality chat reference service. Design – Content analysis of undergraduate student, professional librarian, and paraprofessional staff responses in chat reference transcripts. Setting – Academic library. Subjects – 451 chat reference transcripts. Methods – Chat reference transcripts from May 2014–September 2016 were collected. Five categories of answerer were coded: librarian in the reference department (LibR), librarian from another department (LibNR), staff without a Master of Library Science (staff), staff with a Master of Library Science (+staff), and student employee (student). A random sample of 15% of each category of answerer was selected for analysis, and the answerer categories were collapsed to librarians, staff, and students for the results. Four criteria were used to code chat reference transcripts: difficulty of query, answerer behaviour, problems with the transcript answer, and comments from coders. Coding for difficulty was based on the READ (Reference Effort Assessment Data) scale, and answerer behaviour was based on the RUSA (Reference and User Services Association) guidelines. Behaviours assessed included clarity, courtesy, grammar, greeting, instruction, referral, searching, sign off, sources, and whether patrons were asked if their question was answered. All coding was done independently by the two researchers, with very good interrater reliability; data for variables with disagreement were removed from the analysis. The chi-square test was used to analyze the association between variables. The analysis also included patrons’ ratings and comments about their chat experience, with content and tone assessed for each patron comment. Main Results – Answerer behaviours showed a significant difference between groups for 3 of the 10 behaviours assessed: courtesy (p=0.031), grammar (p=0.001), and sources (p=0.041). The difference between groups for courtesy was: staff (88%), librarians (76%), and students (73%). Grammar was correct in most transcripts, but there was a significant difference between the answerer groups: librarians (98%), staff (90%), and students (73%). There was a significant difference between groups in the proportion of transcripts offering sources: librarians (63.8%), staff (62.5%), and students (43.8%). There was no significant difference between the answerer groups for the other seven behaviours. Overall, 31% of transcripts showed that answerers asked if a patron’s query was answered or if they needed further help. The analysis showed that 79% of transcripts were coded as clear or free of jargon. Greetings were found in 65% of transcripts. Instruction was indicated in 59% of transcripts. Referrals were offered in 27% of all transcripts. Of the transcripts where searching was deemed necessary, 82% showed evidence of searching. A sign off was present in 56% of all transcripts. Transcripts with noted problems were deemed so because of lack of effort, incomplete or incorrect answers, the absence of a reference interview, or an answerer who should have asked for help. There was no significant difference between answerer groups with respect to problem questions. Of the 24% of patrons who rated their chat experience, 90% rated it as good or great, and no significant difference was found between answerer groups. Question difficulty was coded as level 0-2 (easier) for 50% of questions, level 3 (medium difficulty) for 39%, and level 4-5 (more difficult) for 11%. Conclusion – Undergraduate students are capable of providing chat reference that is similar in quality to that provided by librarians and staff. However, increased training is needed for students in the areas of referrals, providing sources, and signing off. Students do better than librarians and staff with greetings and are more courteous than librarians. There is room for improvement for staff and librarians offering chat services. Tiered chat reference service using undergraduates is a viable option.
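As an illustration of the chi-square comparisons reported above, the sketch below tests whether offering sources is associated with answerer group; the counts are invented to approximate the reported percentages and are not the study's data.

```python
# Minimal sketch (hypothetical counts): chi-square test of association between
# answerer group and whether a transcript offered sources.
import pandas as pd
from scipy.stats import chi2_contingency

# Rows: answerer group; columns: transcripts that did / did not offer sources.
counts = pd.DataFrame(
    {"offered_sources": [51, 25, 21], "no_sources": [29, 15, 27]},
    index=["librarians", "staff", "students"],
)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```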

