Partners eCare Research Core for Clinical Research: Pilot to Build and Test Silent Best Practice Alert Notifications for Recruiting Inpatient Study Participants

Iproceedings ◽  
2017 ◽  
Vol 3 (1) ◽  
pp. e16
Author(s):  
Emily Caplan ◽  
Nina Schussler ◽  
Harriett Gabbidon ◽  
Lauren Cortese ◽  
Erick Maclean ◽  
...  
2018 ◽  
Author(s):  
Connor Devoe ◽  
Harriett Gabbidon ◽  
Nina Schussler ◽  
Lauren Cortese ◽  
Emily Caplan ◽  
...  

BACKGROUND Participant recruitment, especially of frail, elderly, hospitalized patients, remains one of the greatest challenges for many research groups. Traditional recruitment methods such as chart review are often inefficient, low-yielding, time consuming, and expensive. Best Practice Alert (BPA) systems have previously been used to improve clinical care and inform provider decision making, but such systems have not been widely used in clinical research. OBJECTIVE The primary objective of this quality-improvement initiative was to develop, implement, and refine a silent Best Practice Alert (sBPA) system that could maximize recruitment efficiency. METHODS Research coordinators estimated cost-efficiency by combining the recorded duration of screening sessions for both methods with the research coordinator hours allotted in the Emerald-COPD (chronic obstructive pulmonary disease) study budget. RESULTS Prior to implementation, the sBPA system underwent three primary stages of development. The final iteration produced a system whose results were comparable to those of the manual Epic Reporting Workbench method of screening. A total of 559 potential participants who met the basic prescreen criteria were identified through the two screening methods: 418 were identified by both methods simultaneously, 99 only by the Epic Reporting Workbench method, and 42 only by the sBPA method. Of those identified only by the Epic Reporting Workbench, 12 (of 99, 12.12%) were considered eligible; of those identified only by the sBPA method, 30 (of 42, 71.43%) were considered eligible. In a side-by-side comparison of the sBPA and the traditional Epic Reporting Workbench methods, sBPA screening was approximately four times faster than the previous screening method, with projected savings of 442.5 hours over the duration of the study. Additionally, since implementation, the sBPA system has identified the equivalent of three additional potential participants per week. CONCLUSIONS Automating the recruitment process allowed us to identify potential participants in real time and to find more potential participants who meet basic eligibility criteria. sBPA screening is a considerably faster method that allows for more efficient use of resources. This functionality can be adapted to the needs of other research studies aiming to use the electronic medical record system for participant recruitment.
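The yield figures reported above can be checked directly from the counts in the abstract. The short Python sketch below does so; the variable names are illustrative and not taken from the study materials.

    # Screening counts as reported in the abstract above.
    total_identified = 559   # met basic prescreen criteria via either method
    both_methods = 418       # flagged by both screening methods
    epic_only = 99           # flagged only by the Epic Reporting Workbench
    sbpa_only = 42           # flagged only by the silent BPA (sBPA)
    assert both_methods + epic_only + sbpa_only == total_identified

    # Eligibility yield among the method-unique candidates.
    epic_eligible, sbpa_eligible = 12, 30
    print(f"Epic-only yield: {epic_eligible / epic_only:.2%}")  # 12.12%
    print(f"sBPA-only yield: {sbpa_eligible / sbpa_only:.2%}")  # 71.43%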


2014 ◽  
Vol 58 (3) ◽  
pp. 262-277
Author(s):  
Jeanne Maree Allen ◽  
Julie Rimes

This article reports on ways in which one Australian independent school seeks to develop and sustain best practice and academic integrity in its programs through a system of ongoing program evaluation, involving a systematic, cyclical appraisal of the school’s suite of six faculties. A number of different evaluation methods have been, and continue to be, used, each developed to best suit the particular program under evaluation. To gain an understanding of the effectiveness of this process, we conducted a study of participants’ perceptions of the strengths and weaknesses of the four program evaluations undertaken between 2009 and 2011. Drawing on documentary analysis of the evaluation reports and on questionnaire data from the study participants, we generated a number of findings. These findings are presented and discussed, together with suggestions about ways in which the conceptualisation and conduct of school program evaluations might be enhanced.


2018 ◽  
Vol 122 (9) ◽  
pp. 1574-1577 ◽  
Author(s):  
Adam Fleddermann ◽  
Steve Jones ◽  
Sarah James ◽  
Kevin F. Kennedy ◽  
Michael L. Main ◽  
...  

BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e037994
Author(s):  
Lydia O'Sullivan ◽  
Prasanth Sukumar ◽  
Rachel Crowley ◽  
Eilish McAuliffe ◽  
Peter Doran

Objectives: The first aim of this study was to quantify the difficulty level of clinical research Patient Information Leaflets/Informed Consent Forms (PILs/ICFs) using validated and widely used readability criteria that provide a broad assessment of written communication. The second aim was to compare these findings with best practice guidelines. Design: Retrospective, quantitative analysis of clinical research PILs/ICFs provided by academic institutions, pharmaceutical companies and investigators. Setting: PILs/ICFs which had received Research Ethics Committee approval in the last 5 years were collected from Ireland and the UK. Intervention: Not applicable. Main outcome measures: PILs/ICFs were evaluated against seven validated readability criteria (Flesch Reading Ease, Flesch-Kincaid Grade Level, Simplified Measure of Gobbledegook, Gunning Fog, Fry, Raygor and New Dale-Chall). The documents were also scored according to two health literacy-based criteria: the Clear Communication Index (CCI) and the Suitability Assessment of Materials tool. Finally, the documents were assessed for compliance with six best practice metrics from literacy agencies. Results: A total of 176 PILs were collected, of which 154 were evaluable. None of the PILs/ICFs had the mean reading age of <12 years recommended by the American Medical Association. 7.1% of PILs/ICFs were evaluated as ‘Plain English’, 40.3% as ‘Fairly Difficult’, 51.3% as ‘Difficult’ and 1.3% as ‘Very Difficult’. No PILs/ICFs achieved a CCI >90. Only two documents complied with all six best practice literacy metrics. Conclusions: When assessed against both traditional readability criteria and health literacy-based tools, the PILs/ICFs in this study are inappropriately complex. There is also evidence of poor compliance with guidelines produced by literacy agencies. These data clearly demonstrate the need for improved documentation to underpin the consent process.
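As background for the readability criteria named above, the two Flesch measures are simple functions of average sentence length and average syllables per word. The self-contained Python sketch below applies the standard formulas; the syllable counter is a rough heuristic included only for illustration (it is not the tooling used in the study), so scores are approximate.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count vowel groups, drop a silent trailing 'e'.
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def flesch_scores(text: str) -> tuple[float, float]:
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / sentences    # average words per sentence
        spw = syllables / len(words)    # average syllables per word
        reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
        grade_level = 0.39 * wps + 11.8 * spw - 15.59
        return reading_ease, grade_level

    ease, grade = flesch_scores(
        "You will be asked to give a small blood sample. "
        "The sample will be stored securely.")
    print(f"Flesch Reading Ease: {ease:.1f}, Flesch-Kincaid Grade: {grade:.1f}")

Higher Reading Ease scores indicate easier text (scores of 60 or above are often treated as plain English), while the grade level approximates the US school grade needed to understand the passage.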


2020 ◽  
Vol 17 (6) ◽  
pp. 703-711
Author(s):  
Anita Walden ◽  
Lynsi Garvin ◽  
Michelle Smerek ◽  
Constance Johnson

Background: Increasing and sustaining the engagement of participants in clinical research studies is a goal for clinical investigators, especially for studies that require long-term or frequent involvement of participants. Technology can reduce barriers to participation by providing multiple options for clinical data entry and form submission. However, electronic systems used in clinical research studies should be user-friendly while also ensuring data quality. Directly involving study participants in evaluating the effectiveness and usability of electronic tools may promote wider adoption, maintain involvement, and increase user satisfaction with the technology. While developers of healthcare applications have incorporated user-centered design, these methods remain uncommon in the design of clinical study tools such as patient-reported outcome surveys or electronic data capture tools. Methods: Our study evaluated whether the clinical research setting may benefit from implementing user-centered design principles. Study participants were recruited to test the web-based form for the Measurement to Understand the Reclassification of Disease of Cabarrus/Kannapolis (MURDOCK) Study Community Translational Population Health Registry and Biorepository, which enables participants to complete their study forms electronically. The study enrollment form collects disease history, conditions, smoking status, medications, and other information. The system was initially evaluated by the data management team through traditional user-acceptance testing; during the tool evaluation phase, a small-scale usability study was incorporated to test the system directly with participants. Results: A majority of participants found the system easy to use. Of the eight required tasks, 75% were completed successfully. Of the 72 heuristic violations identified, language was the most frequent category. Conclusion: Our study showed that user-centered usability methods can identify important issues and capture information that can enhance the participant’s experience and may improve the quality of study tools.
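A small-scale usability study of this kind typically produces two simple quantitative summaries: per-task completion rates and a tally of heuristic violations by category. The Python sketch below illustrates both with hypothetical session data; the task names and violation categories are invented for the example, not taken from the MURDOCK study.

    from collections import Counter

    # Hypothetical results: one boolean per participant per task (True = completed).
    task_results = {
        "open enrollment form": [True, True, True, True],
        "enter medication list": [True, True, False, True],
    }

    # Hypothetical heuristic-violation log, one category label per violation found.
    violations = ["language", "language", "navigation", "consistency", "language"]

    for task, outcomes in task_results.items():
        print(f"{task}: {sum(outcomes) / len(outcomes):.0%} completion")

    category, count = Counter(violations).most_common(1)[0]
    print(f"Most frequent violation category: {category} ({count} of {len(violations)})")

In the study itself, the analogous summaries were the 75% completion rate across the eight required tasks and language as the most frequent of the 72 heuristic violations.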

