Work effort, readability and quality of pharmacy transcription of patient directions from electronic prescriptions: a retrospective observational cohort analysis

2020
pp. bmjqs-2019-010405
Author(s):  
Yifan Zheng ◽  
Yun Jiang ◽  
Michael P Dorsch ◽  
Yuting Ding ◽  
V G Vinod Vydiswaran ◽  
...  

Background: Free-text directions generated by prescribers in electronic prescriptions can be difficult for patients to understand due to their variability, complexity and ambiguity. Pharmacy staff are responsible for transcribing these directions so that patients can take their medication as prescribed. However, little is known about the quality of the transcribed directions that patients receive. Methods: A retrospective observational analysis of 529 990 e-prescription directions processed at a mail-order pharmacy in the USA. We measured pharmacy staff editing of directions using string edit distance and estimated execution time using the Keystroke-Level Model. Using the New Dale-Chall (NDC) readability formula, we calculated NDC cloze scores of the patient directions before and after transcription. We also evaluated the quality of directions (eg, whether they included a dose, dose unit and frequency of administration) before and after transcription in a random sample of 966 patient directions. Results: Pharmacy staff edited 83.8% of all e-prescription directions received, with a median edit distance of 18 per e-prescription. We estimated a median of 6.64 s spent transcribing each e-prescription. The median NDC score increased by 68.6% after transcription (26.12 vs 44.03, p<0.001), indicating a significant improvement in readability. In our sample, 51.4% of patient directions on e-prescriptions contained at least one pre-defined direction quality issue. Pharmacy staff corrected 79.5% of the quality issues. Conclusion: Pharmacy staff put significant effort into transcribing e-prescription directions. Manual transcription removed the majority of quality issues; however, pharmacy staff still missed quality issues or introduced new ones during their manual transcription process. The development of tools and techniques such as a comprehensive set of structured direction components or machine learning-based natural language processing may help produce clearer directions.
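As a minimal sketch of the edit-distance measurement described above, the snippet below computes a plain Levenshtein distance between a prescriber's direction and a transcribed version; the example strings and the per-keystroke timing constant are illustrative placeholders in the spirit of the Keystroke-Level Model, not values or code from the study.

```python
# Sketch: transcription effort as string edit distance between the prescriber's
# free-text direction and the pharmacy-transcribed version (Levenshtein distance
# assumed). The strings and the 0.2 s per-keystroke constant are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two direction strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,       # insertion
                            prev[j - 1] + cost))   # substitution
        prev = curr
    return prev[-1]

original = "take 1 tablet by oral route every day"    # hypothetical e-prescription Sig
transcribed = "Take 1 tablet by mouth once daily."    # hypothetical pharmacy label

edits = edit_distance(original, transcribed)
print(edits, "edits; rough typing-time estimate:", round(edits * 0.2, 2), "s")
```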

2020
Vol 38 (1)
pp. 44-64
Author(s):  
Nikola Nikolić ◽  
Olivera Grljević ◽  
Aleksandar Kovačević

Purpose: Student recruitment and retention are important issues for all higher education institutions. Constant monitoring of student satisfaction levels is therefore crucial. Traditionally, students voice their opinions through official surveys organized by the universities. In addition, social media and review websites such as “Rate my professors” are nowadays rich sources of opinions that should not be ignored. Automated mining of students’ opinions can be realized via aspect-based sentiment analysis (ABSA). ABSA is a sub-discipline of natural language processing (NLP) that focusses on the identification of sentiments (negative, neutral, positive) and aspects (sentiment targets) in a sentence. The purpose of this paper is to introduce a system for ABSA of free-text reviews expressed in student opinion surveys in the Serbian language. Sentiment analysis was carried out at the finest level of text granularity – the level of the sentence segment (phrase and clause). Design/methodology/approach: The presented system relies on NLP techniques, machine learning models, rules and dictionaries. The corpora collected and annotated for system development and evaluation comprise students’ reviews of teaching staff at the Faculty of Technical Sciences, University of Novi Sad, Serbia, and a corpus of publicly available reviews from the Serbian equivalent of the “Rate my professors” website. Findings: The results indicate that positive sentiment can be identified with an F-measure of 0.83 and negative sentiment with an F-measure of 0.94, while the F-measure for aspects ranges between 0.49 and 0.89, depending on their frequency in the corpus. Furthermore, the authors conclude that the quality of ABSA depends on the source of the reviews (official student surveys vs review websites). Practical implications: The system for ABSA presented in this paper could improve the quality of service provided by Serbian higher education institutions through more effective search and summary of students’ opinions. For example, a particular educational institution could very easily find out which aspects of its service students are not satisfied with and which aspects deserve more attention. Originality/value: To the best of the authors’ knowledge, this is the first study of ABSA carried out at the level of the sentence segment for the Serbian language. The methodology and findings presented in this paper provide a much-needed basis for further work on sentiment analysis for Serbian, which is under-resourced and under-researched in this area.
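As a rough illustration of segment-level ABSA built from rules and dictionaries, the sketch below splits a review sentence into clause-like segments and looks up aspects and sentiment words; the English example and both lexicons are hypothetical placeholders, not the Serbian resources developed in the paper.

```python
# Sketch: dictionary- and rule-based ABSA at the sentence-segment level.
# The aspect and sentiment lexicons are illustrative placeholders.
import re

ASPECT_LEXICON = {"lectures": "teaching", "exam": "assessment", "materials": "materials"}
SENTIMENT_LEXICON = {"clear": 1, "helpful": 1, "great": 1,
                     "boring": -1, "unfair": -1, "confusing": -1}

def split_segments(sentence: str):
    """Split a sentence into clause-like segments on commas and 'but'."""
    return [s.strip() for s in re.split(r",|\bbut\b", sentence) if s.strip()]

def analyze(sentence: str):
    results = []
    for segment in split_segments(sentence):
        tokens = re.findall(r"\w+", segment.lower())
        aspects = [ASPECT_LEXICON[t] for t in tokens if t in ASPECT_LEXICON]
        score = sum(SENTIMENT_LEXICON.get(t, 0) for t in tokens)
        polarity = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        results.append((segment, aspects, polarity))
    return results

print(analyze("The lectures were clear and helpful, but the exam was unfair"))
```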


Author(s):  
Mario Jojoa ◽  
Gema Castillo-Sánchez ◽  
Begonya Garcia-Zapirain ◽  
Isabel De la Torre Diez ◽  
Manuel Franco-Martín

The aim of this study was to build a tool to analyze, using artificial intelligence, the sentiment perception of users who answered two questions from the CSQ-8 questionnaire in raw Spanish free text. Their responses relate to mindfulness, a novel technique used to control the stress and anxiety caused by different factors in daily life. We proposed an online course where this method was applied in order to improve the quality of life of health care professionals during the COVID-19 pandemic. We also carried out an evaluation of the satisfaction level of the participants involved, with a view to establishing strategies to improve future experiences. To perform this task automatically, we used Natural Language Processing (NLP) models such as Swivel embedding, neural networks and transfer learning to classify the inputs into 3 categories: negative, neutral and positive. Because of the limited amount of data available (86 registers for the first question and 68 for the second), transfer learning techniques were required. The length of the text had no limit from the user's standpoint, and our approach attained a maximum accuracy of 93.02% and 90.53%, respectively, based on ground truth labeled by 3 experts. Finally, we proposed a complementary analysis, using computer graphic text representation based on word frequency, to help researchers identify relevant information about the opinions with an objective approach to sentiment. The main conclusion drawn from this work is that applying NLP techniques to small amounts of data using transfer learning can achieve sufficient accuracy in the sentiment analysis and text classification stages.
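A minimal sketch of the kind of transfer-learning setup described above (a frozen pre-trained text embedding feeding a small classifier), which is one way to cope with only ~70-90 labelled responses; the embedding module, example texts and labels are illustrative assumptions, not the study's exact Swivel and neural-network configuration.

```python
# Sketch: 3-class sentiment classification with a frozen pre-trained text
# embedding (transfer learning). The module below is a small public English
# Swivel-based embedding used purely for illustration; a Spanish or
# multilingual module would be substituted in practice.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

EMBED_URL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"

# Hypothetical labelled responses (0 = negative, 1 = neutral, 2 = positive).
texts = np.array(["the course helped me a lot", "it was fine", "I did not like it"])
labels = np.array([2, 1, 0])

model = tf.keras.Sequential([
    hub.KerasLayer(EMBED_URL, input_shape=[], dtype=tf.string, trainable=False),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(texts, labels, epochs=10, verbose=0)
print(model.predict(np.array(["very useful during the pandemic"])))
```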


1998
Vol 37 (04/05)
pp. 334-344
Author(s):  
G. Hripcsak ◽  
C. Friedman

Evaluating natural language processing (NLP) systems in the clinical domain is a difficult task that is important for advancement of the field. A number of NLP systems have been reported that extract information from free-text clinical reports, but not many of them have been evaluated. Those that were evaluated reported good performance measures, but the results were often weakened by ineffective evaluation methods. In this paper we describe a set of criteria aimed at improving the quality of NLP evaluation studies. We present an overview of NLP evaluations in the clinical domain and also discuss the Message Understanding Conferences (MUC) [1-4]. Although these conferences constitute a series of NLP evaluation studies performed outside of the clinical domain, some of the results are relevant within medicine. In addition, we discuss a number of factors that contribute to the complexity inherent in the task of evaluating natural language systems.
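Most of the extraction-system evaluations discussed here rest on comparing system output against a reference standard using precision, recall and the F-measure; a minimal sketch of that calculation follows, with hypothetical clinical findings.

```python
# Sketch: precision/recall/F-measure of an extraction system against a
# reference standard; the findings below are hypothetical examples.

def evaluate(system: set, reference: set):
    tp = len(system & reference)   # extracted and present in the reference
    fp = len(system - reference)   # extracted but absent from the reference
    fn = len(reference - system)   # present in the reference but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

reference = {"pneumonia", "pleural effusion", "cardiomegaly"}
system = {"pneumonia", "pleural effusion", "atelectasis"}
print(evaluate(system, reference))
```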


2014
Vol 05 (03)
pp. 689-707
Author(s):  
S.T. Corley ◽  
M.T. Rupp ◽  
J. Ruiz ◽  
J. Smith ◽  
R. Gill ◽  
...  

Background: Prescribers' inappropriate use of the free-text Notes field in new electronic prescriptions can create confusion and workflow disruptions at receiving pharmacies that often necessitate contact with prescribers for clarification. The inclusion of inappropriate patient direction (Sig) information in the Notes field is particularly problematic. Objective: We evaluated the effect of a targeted watermark (embedded overlay) reminder statement in the Notes field of an EHR-based e-prescribing application on the incidence of inappropriate patient direction (Sig) information in that field. Methods: E-prescriptions issued by the same cohort of 97 prescribers were collected over three time periods: baseline, three months after implementation of the reminder, and 15 months post-implementation. Three certified and experienced pharmacy technicians independently reviewed all e-prescriptions for inappropriate Sig-related information in the Notes field. A physician reviewer served as the final adjudicator for e-prescriptions on which the three reviewers could not reach a consensus. ANOVA and post hoc Tukey HSD tests were performed on group comparisons, with statistical significance evaluated at p<0.05. Results: The incidence of inappropriate Sig-related information in the Notes field decreased from a baseline of 2.8% to 1.8% three months post-implementation and remained stable after 15 months. In addition, prescribers' use of the Notes field decreased by 22% after 3 months and had stabilized at 18.7% below baseline after 15 months. Conclusion: Insertion of a targeted watermark reminder statement in the Notes field of an e-prescribing application significantly reduced the incidence of inappropriate Sig-related information in Notes and decreased prescribers' use of this field. Citation: Dhavle AA, Corley ST, Rupp MT, Ruiz J, Smith J, Gill R, Sow M. Evaluation of a user guidance reminder to improve the quality of electronic prescription messages. Appl Clin Inf 2014; 5: 699–707. http://dx.doi.org/10.4338/ACI-2014-03-CR-0022
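A minimal sketch of the group comparison named above (one-way ANOVA followed by a post hoc Tukey HSD test), applied to hypothetical per-prescriber rates of inappropriate Notes use across the three periods; all numbers are simulated for illustration and are not the study's data.

```python
# Sketch: one-way ANOVA plus post hoc Tukey HSD on per-prescriber rates of
# inappropriate Notes use in three periods. Data are simulated placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
baseline = rng.normal(2.8, 0.8, 97)   # hypothetical % per prescriber
month3 = rng.normal(1.8, 0.8, 97)
month15 = rng.normal(1.8, 0.8, 97)

print(f_oneway(baseline, month3, month15))   # overall group difference

rates = np.concatenate([baseline, month3, month15])
groups = ["baseline"] * 97 + ["3 months"] * 97 + ["15 months"] * 97
print(pairwise_tukeyhsd(rates, groups, alpha=0.05))   # pairwise comparisons
```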


Author(s):  
Philip M. Newton ◽  
Melisa J. Wallace ◽  
Judy McKimm

Facilitating the provision of detailed, deep and useful feedback is an important design feature of any educational programme. Here we evaluate feedback provided to medical students completing short transferable skills projects. Feedback quantity and depth were evaluated before and after a simple intervention that changed the structure of the feedback-provision form from a blank free-text form to a structured proforma asking a pair of short questions for each of the six domains being assessed. Each pair of questions asked the marker “What was done well?” and “What changes would improve the assignment?”. Changing the form was associated with a significant increase in the quantity and quality of feedback provided to students. We also observed that, for these double-marked projects, the marker designated “Marker 1” consistently wrote more feedback than the marker designated “Marker 2”.


Author(s):  
Nadir Belhaj ◽  
Abdemounaime Hamdane ◽  
Nour El Houda Chaoui ◽  
Habiba Chaoui ◽  
Moulhime El Bekkali

The use of chatbots, or conversational agents, is becoming common among companies in many fields as a way to hold smart conversations with users. Backed by artificial intelligence and natural language processing, they provide a strong platform for engaging users. These positive aspects of chatbots can be beneficial in the educational sector, especially for conducting online surveys. This study explores the feasibility of a chatbot-based survey as a new survey method in Moroccan universities, aimed at overcoming the common response-quality problems of web surveys. Indeed, having student feedback before and after graduation is essential for university assessment. This approach keeps students engaged, supportive, and even excited to offer feedback without getting bored and dropping the conversation, which matters particularly in Moroccan universities, where overcrowding makes it difficult to collect student feedback. This feedback feeds into the university's databases for further reporting and decision making to improve the quality of educational content and student-oriented services. Finally, we show the effectiveness of our approach through a comparative study of data from the traditional online survey and from the chatbot.


2021
Vol 319
pp. 01064
Author(s):  
Issam Aattouchi ◽  
Saida Elmendili ◽  
Fatna Elmendili

Twitter is a microblogging service where users can send and read short messages of 140 characters called “tweets”. Many healthcare-related, unstructured, free-text tweets are shared on Twitter, which is becoming a popular domain for medical research. Sentiment analysis is a type of data mining that estimates the direction (polarity) of the sentiment expressed in natural language. By analyzing text, computational linguistics is used to infer and analyze subjective knowledge from the web, social media and related sources. The analyzed data quantify the attitudes or feelings of the global community towards specific goods, people or ideas and expose the contextual polarity of the information. Sentiment analysis is used in various sectors, such as health care. There is an enormous amount of healthcare information available online, on social media and on websites focused on rating medical problems, that is not accessed in a methodical way. Sentiment analysis has many benefits, such as using this medical information to achieve the best possible patient outcomes and improve the quality of health care. This review paper focuses on the sentiment analysis methods that are used in the medical field.
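As a toy illustration of the polarity estimation the reviewed methods build on, the sketch below scores tweets against a tiny sentiment word list; the lexicon and tweets are made up and are not drawn from any of the surveyed studies.

```python
# Sketch: minimal lexicon-based polarity scoring of healthcare tweets.
# The word list and example tweets are illustrative placeholders.

LEXICON = {"relieved": 1, "effective": 1, "grateful": 1,
           "worried": -1, "pain": -1, "exhausted": -1}

def polarity(tweet: str) -> str:
    score = sum(LEXICON.get(word, 0) for word in tweet.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for tweet in ["Grateful the new treatment was effective",
              "Worried about the pain after surgery"]:
    print(tweet, "->", polarity(tweet))
```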


2017
Author(s):  
Nathaniel R. Greenbaum ◽  
Yacine Jernite ◽  
Yoni Halpern ◽  
Shelley Calder ◽  
Larry A. Nathanson ◽  
...  

Objective: To determine the effect of contextual autocomplete, a user interface that uses machine learning, on the efficiency and quality of documentation of presenting problems (chief complaints) in the emergency department (ED). Materials and Methods: We used contextual autocomplete, a user interface that ranks concepts by their predicted probability, to help nurses enter data about a patient's reason for visiting the ED. Predicted probabilities were calculated using a previously derived model based on triage vital signs and a brief free-text note. We evaluated the percentage and quality of structured data captured using a prospective before-and-after study design. Results: A total of 279,231 patient encounters were analyzed. Structured data capture improved from 26.2% to 97.2% (p<0.0001). During the post-implementation period, presenting problems were more complete (3.35 vs 3.66; p=0.0004), as precise (3.59 vs 3.74; p=0.1), and higher in overall quality (3.38 vs 3.72; p=0.0002). Our system reduced the mean number of keystrokes required to document a presenting problem from 11.6 to 0.6 (p<0.0001), a 95% improvement. Discussion: We have demonstrated a technique that captures structured data on nearly all patients. We estimate that our system reduces the number of man-hours required annually to type presenting problems at our institution from 92.5 hours to 4.8 hours. Conclusion: Implementation of a contextual autocomplete system resulted in improved structured data capture, ontology usage compliance, and data quality.
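A minimal sketch of the contextual-autocomplete idea: candidate presenting problems are ranked by a context-conditioned probability and filtered by the typed prefix, so a likely concept surfaces after very few keystrokes. The concepts and probabilities below are illustrative placeholders, not the study's model.

```python
# Sketch: contextual autocomplete - rank candidate presenting problems by a
# context-conditioned probability, filtered by the typed prefix.
# Concepts and probabilities are hypothetical.

def suggest(prefix: str, context_scores: dict, k: int = 3):
    """Return the k most likely concepts whose name starts with the prefix."""
    matches = [(c, p) for c, p in context_scores.items()
               if c.lower().startswith(prefix.lower())]
    return sorted(matches, key=lambda cp: cp[1], reverse=True)[:k]

# Hypothetical probabilities for a patient presenting with fever and tachycardia.
context_scores = {
    "Fever": 0.41, "Flank pain": 0.03, "Fall": 0.02,
    "Chest pain": 0.10, "Cough": 0.18,
}

print(suggest("f", context_scores))   # one keystroke already surfaces 'Fever'
```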

