Workflow Integration
Recently Published Documents


TOTAL DOCUMENTS: 74 (last five years: 29)
H-INDEX: 10 (last five years: 3)

2021, Vol 5 (Supplement_1), pp. 84-85
Author(s): Keith Anderson, Lisa Peters-Beumer, Laurie Duff

Abstract Recently, there has been a resounding call for standardized outcome data collection in adult day services (ADS). Outcome data have the potential to demonstrate the effectiveness of ADS, aid in program development, and help leverage funding opportunities. Unfortunately, many ADS centers do not collect outcome data for several reasons, including the cost of data collection software and systems. In this presentation, we describe one effort to utilize an existing multiuse, off-the-shelf software solution to collect ADS outcome data for a network of ADS providers. The researchers collaborated with software developers and ADS providers to adapt the software to incorporate outcome measures and reporting functionality at both the individual and program levels. Adaptation and adoption required attention to HIPAA compliance, workflow integration, measurement fidelity, and data management processes. Despite these challenges, adapting existing software systems may be a cost-effective way to enable expanded outcome data collection in ADS.


2021, Vol 97, pp. 103498
Author(s): Megan E. Salwei, Pascale Carayon, Peter L.T. Hoonakker, Ann Schoofs Hundt, Douglas Wiegmann, et al.

2021, Vol 3 (4)
Author(s): Quentin Ferré, Cécile Capponi, Denis Puthier

Abstract Most epigenetic marks, such as Transcriptional Regulators or histone marks, are biological objects known to work together in n-wise complexes. A suitable way to infer such functional associations between them is to study the overlaps of the corresponding genomic regions. However, the problem of the statistical significance of n-wise overlaps of genomic features is seldom tackled, which prevents rigorous studies of n-wise interactions. We introduce OLOGRAM-MODL, which considers overlaps between n ≥ 2 sets of genomic regions and computes their statistical mutual enrichment by Monte Carlo fitting of a Negative Binomial distribution, resulting in higher-resolution P-values. An optional machine learning method is proposed to find complexes of interest, using a new itemset mining algorithm based on dictionary learning that is resistant to the noise inherent in biological assays. The overall approach is implemented through an easy-to-use CLI for workflow integration, and a visual tree-based representation of the results suited for explainability. The viability of the method is experimentally studied using both artificial and biological data. This approach is accessible through the command line interface of the pygtftk toolkit, available on Bioconda and from https://github.com/dputhier/pygtftk.
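To make the statistical idea concrete, here is a minimal Python sketch, not the OLOGRAM-MODL implementation itself: null overlap counts (drawn synthetically here in place of a real region-shuffling step) are fitted with a Negative Binomial by the method of moments, and the observed overlap is assigned an upper-tail P-value under that fitted null.

```python
# Illustrative sketch (not the OLOGRAM-MODL implementation): estimate the
# significance of an observed n-wise overlap by fitting a Negative Binomial
# to null overlap counts and computing an upper-tail P-value.
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)

def shuffle_overlap_counts(n_shuffles=1000):
    """Placeholder for a real shuffling step: draw synthetic null overlap
    counts so the example stays self-contained."""
    return rng.poisson(lam=20, size=n_shuffles)

def nbinom_pvalue(observed, null_counts):
    """Fit a Negative Binomial by the method of moments and return
    P(X >= observed) under the fitted null."""
    mean, var = null_counts.mean(), null_counts.var(ddof=1)
    var = max(var, mean + 1e-9)           # NB requires var > mean
    p = mean / var                        # success probability
    r = mean * p / (1.0 - p)              # number of successes
    return nbinom.sf(observed - 1, r, p)  # upper-tail P-value

null = shuffle_overlap_counts()
print(nbinom_pvalue(observed=45, null_counts=null))
```

Fitting a parametric null rather than relying on the raw shuffle counts alone is what allows P-values smaller than one over the number of shuffles to be resolved.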


2021, Vol 12 (05), pp. 1120-1134
Author(s): Jacqueline M. Soegaard Ballester, Geoffrey D. Bass, Richard Urbani, Glenn Fala, Rutvij Patel, et al.

Abstract Background Clinical workflows require the ability to synthesize and act on existing and emerging patient information. While offering multiple benefits, in many circumstances electronic health records (EHRs) do not adequately support these needs. Objectives We sought to design, build, and implement an EHR-connected rounding and handoff tool with real-time data that supports care plan organization and team-based care. This article first describes our process, from ideation and development through implementation; and second, the research findings of objective use, efficacy, and efficiency, along with qualitative assessments of user experience. Methods Guided by user-centered design and Agile development methodologies, our interdisciplinary team designed and built Carelign as a responsive web application, accessible from any mobile or desktop device, that gathers and integrates data from a health care institution's information systems. Implementation and iterative improvements spanned January to July 2016. We assessed acceptance via usage metrics, user observations, time–motion studies, and user surveys. Results By July 2016, Carelign was implemented on 152 of 169 total inpatient services across three hospitals staffing 1,616 hospital beds. Acceptance was near-immediate: in July 2016, 3,275 average unique weekly users generated 26,981 average weekly access sessions; these metrics remained steady over the following 4 years. In 2016 and 2018 surveys, users positively rated Carelign's workflow integration, support of clinical activities, and overall impact on work life. Conclusion User-focused design, multidisciplinary development teams, and rapid iteration enabled creation, adoption, and sustained use of a patient-centered digital workflow tool that supports diverse users' and teams' evolving care plan organization needs.
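As a rough illustration of the kind of data aggregation the abstract describes, rather than Carelign's actual architecture, the following Python sketch assembles a per-patient handoff record from several hypothetical institutional data feeds; the source names and fields are placeholders.

```python
# Minimal sketch (not Carelign's architecture): build a rounding/handoff view
# for one patient by pulling from several hypothetical institutional feeds.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HandoffRecord:
    mrn: str
    vitals: dict = field(default_factory=dict)
    labs: dict = field(default_factory=dict)
    tasks: list = field(default_factory=list)

def build_handoff(mrn: str, sources: dict[str, Callable[[str], object]]) -> HandoffRecord:
    """Each source is a callable returning the latest data for the patient;
    in a real system these would be interfaces to the EHR, lab, and task systems."""
    record = HandoffRecord(mrn=mrn)
    record.vitals = sources["vitals"](mrn)
    record.labs = sources["labs"](mrn)
    record.tasks = sources["tasks"](mrn)
    return record

# Toy usage with made-up feeds and values
sources = {
    "vitals": lambda mrn: {"HR": 88, "BP": "132/80"},
    "labs": lambda mrn: {"Na": 138},
    "tasks": lambda mrn: ["follow up chest X-ray"],
}
print(build_handoff("0001", sources))
```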


2021
Author(s): Hugh G. Pemberton, Lara A. M. Zaki, Olivia Goodkin, Ravi K. Das, Rebecca M. E. Steketee, et al.

Abstract Developments in neuroradiological MRI analysis offer promise in enhancing objectivity and consistency in dementia diagnosis through the use of quantitative volumetric reporting tools (QReports). Translation into clinical settings should follow a structured framework of development, including technical and clinical validation steps. However, published technical and clinical validation of the available commercial/proprietary tools is not always easy to find, and pathways for successful integration into the clinical workflow are varied. The quantitative neuroradiology initiative (QNI) framework highlights six necessary steps for the development, validation and integration of quantitative tools in the clinic. In this paper, we reviewed the published evidence regarding regulatory-approved QReports for use in the memory clinic and assessed to what extent this evidence fulfils the steps of the QNI framework. We summarize unbiased technical details of available products in order to increase the transparency of evidence and present the range of reporting tools on the market. Our intention is to assist neuroradiologists in making informed decisions regarding the adoption of these methods in the clinic. Of the 17 products identified, 11 companies have published some form of technical validation of their methods, but only 4 have published clinical validation of their QReports in a dementia population. Upon systematically reviewing the published evidence for regulatory-approved QReports in dementia, we conclude that there is a significant evidence gap in the literature regarding clinical validation, workflow integration and in-use evaluation of these tools in dementia MRI diagnosis.


Author(s): Eleftherios Bandis, Nikolaos Polatidis, Maria Diapouli, Stelios Kapetanakis

Transport infrastructure relies heavily on extended multi-sensor networks and data streams to support its advanced real-time monitoring and decision making. All relevant stakeholders are highly concerned with how travel patterns, infrastructure capacity and other internal/external factors (such as weather) affect, deteriorate or improve performance. New network infrastructure is usually remarkably expensive to build, so the focus is constantly on improving existing workflows, reducing overheads and enforcing lean processes. We propose suitable graph-based workflow monitoring methods for developing efficient performance measures for the rail industry, using extensive business process workflow pattern analysis based on Case-based Reasoning (CBR) combined with standard Data Mining methods. The approach focuses on data preparation, cleaning and workflow integration of real network data. Preliminary results of this work are promising: workflow integration copes with data complexity and domain peculiarities, scales on demand, and demonstrates good accuracy. A number of modelling experiments are presented that show that the approach proposed here can provide a sound basis for the effective and useful analysis of operational sensor data from train journeys.
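The retrieval step of case-based reasoning can be illustrated with a short Python sketch; this is a generic nearest-neighbour formulation over made-up workflow features, not the authors' implementation.

```python
# Illustrative sketch of the CBR "retrieve" step (not the authors' implementation):
# represent each historical workflow as a feature vector and fetch the most
# similar past cases for a new journey so their recorded outcomes can be reused.
import numpy as np

def retrieve_similar_cases(query, case_features, case_outcomes, k=3):
    """query: 1-D feature vector for the new workflow.
    case_features: 2-D array, one row per historical case.
    Returns the outcomes of the k nearest cases by Euclidean distance."""
    distances = np.linalg.norm(case_features - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return [case_outcomes[i] for i in nearest]

# Toy usage with made-up features: [mean delay (min), passenger load %, rain flag]
cases = np.array([[4.0, 60.0, 0.0], [12.0, 85.0, 1.0], [6.0, 70.0, 0.0]])
outcomes = ["on-time", "knock-on delays", "minor delay"]
print(retrieve_similar_cases(np.array([5.0, 65.0, 0.0]), cases, outcomes, k=2))
```

In a full CBR cycle, the retrieved outcomes would then be adapted to the new journey and the resolved case stored back into the case base.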


2021, Vol 11 (6), pp. 480
Author(s): Thomas M. Schneider, Michael T. Eadon, Rhonda M. Cooper-DeHoff, Kerri L. Cavanaugh, Khoa A. Nguyen, et al.

(1) Background: Clinical decision support (CDS) is a vitally important adjunct to the implementation of pharmacogenomic-guided prescribing in clinical practice. A novel CDS was sought for the APOL1, NAT2, and YEATS4 genes to guide optimal selection of antihypertensive medications among the African American population cared for at multiple participating institutions in a clinical trial. (2) Methods: The CDS committee, made up of clinical content and CDS experts, developed a framework and contributed to the creation of the CDS using the following guiding principles: 1. medical algorithm consensus; 2. actionability; 3. context-sensitive triggers; 4. workflow integration; 5. feasibility; 6. interpretability; 7. portability; and 8. discrete reporting of lab results. (3) Results: Utilizing the principle of discrete patient laboratory and vital information, a novel CDS for APOL1, NAT2, and YEATS4 was created for use in a multi-institutional trial based on a medical algorithm consensus. The alerts are actionable and easily interpretable, clearly displaying the purpose and recommendations with pertinent laboratory results, vitals and links to ordersets with suggested antihypertensive dosages. Alerts were either triggered immediately once a provider starts to order relevant antihypertensive agents or strategically placed in workflow-appropriate general CDS sections in the electronic health record (EHR). Detailed implementation instructions were shared across institutions to achieve maximum portability. (4) Conclusions: Using sound principles, the created genetic algorithms were applied across multiple institutions. The framework outlined in this study should apply to other disease-gene and pharmacogenomic projects employing CDS.
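A minimal sketch of what a context-sensitive, actionable trigger of this kind might look like is shown below; the gene, drug list, thresholds, and recommendation text are placeholders for illustration, not the trial's actual algorithm.

```python
# Hypothetical sketch of a context-sensitive CDS trigger (not the trial's actual
# algorithm): fire an actionable alert when a relevant antihypertensive order is
# placed for a patient with a high-risk genotype on file, and display the discrete
# lab/vital values the recommendation relies on.
from dataclasses import dataclass

@dataclass
class Patient:
    genotypes: dict   # e.g. {"APOL1": "high-risk"}
    labs: dict        # e.g. {"eGFR": 55}
    vitals: dict      # e.g. {"SBP": 152}

TRIGGERING_DRUGS = {"hydralazine", "lisinopril"}   # placeholder drug list

def cds_alert(patient: Patient, ordered_drug: str) -> str | None:
    """Return alert text if the order should trigger, otherwise None."""
    if ordered_drug.lower() not in TRIGGERING_DRUGS:
        return None                                  # context filter: only relevant orders
    if patient.genotypes.get("APOL1") != "high-risk":
        return None                                  # genotype filter
    return (f"Pharmacogenomic alert for {ordered_drug}: APOL1 high-risk genotype. "
            f"eGFR {patient.labs.get('eGFR')} mL/min/1.73m2, "
            f"SBP {patient.vitals.get('SBP')} mmHg. "
            "See study order set for suggested dosing (placeholder text).")

print(cds_alert(Patient({"APOL1": "high-risk"}, {"eGFR": 55}, {"SBP": 152}), "Hydralazine"))
```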


DOI 10.2196/22973, 2021, Vol 23 (5), pp. e22973
Author(s): Theresa L Rager, Cristian Koepfli, Wasif A Khan, Sabeena Ahmed, Zahid Hayat Mahmud, et al.

Background Cholera poses a significant global health burden. In Bangladesh, cholera is endemic and causes more than 100,000 cases each year. Established environmental reservoirs leave millions at risk of infection through the consumption of contaminated water. The Global Task Force for Cholera Control has called for increased environmental surveillance to detect contaminated water sources prior to human infection in an effort to reduce cases and deaths. The OmniVis rapid cholera detection device uses loop-mediated isothermal amplification and particle diffusometry detection methods integrated into a handheld hardware device that attaches to an iPhone 6 to identify and map contaminated water sources. Objective The aim of this study was to evaluate the usability of the OmniVis device with targeted end users to advance the iterative prototyping process and ultimately design a device that easily integrates into users’ workflow. Methods Water quality workers were trained to use the device and subsequently completed an independent device trial and usability questionnaire. Pretraining and posttraining knowledge assessments were administered to ensure training quality did not confound trial and questionnaire results. Results Device trials identified common user errors and device malfunctions, including incorrect test kit insertion and device powering issues. We did not observe meaningful differences in user errors or device malfunctions accumulated per participant across demographic groups. Over 25 trials, the mean time to complete a test was 47 minutes, a significant reduction compared with laboratory protocols, which take approximately 3 days. Overall, participants found the device easy to use and expressed confidence and comfort in using the device independently. Conclusions These results are used to advance the iterative prototyping process of the OmniVis rapid cholera detection device so it can achieve user uptake, workflow integration, and scale to ultimately impact cholera control and elimination strategies. We hope this methodology will promote robust usability evaluations of rapid pathogen detection technologies in device development.
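For context on the detection readout, here is a back-of-the-envelope Python sketch of the particle-diffusometry calculation as we understand it from the abstract (not the OmniVis algorithm): a diffusion coefficient is estimated from 2-D bead tracks via the mean squared displacement, with slower diffusion after amplification taken as the indicator of a positive sample.

```python
# Back-of-the-envelope sketch of particle diffusometry (assumed from the abstract,
# not the OmniVis algorithm): estimate a diffusion coefficient from 2-D bead tracks
# via mean squared displacement.
import numpy as np

def diffusion_coefficient(track_xy: np.ndarray, dt: float) -> float:
    """track_xy: (n_frames, 2) positions in micrometres; dt: frame interval in s.
    For 2-D Brownian motion, MSD per step = 4*D*dt, so D = MSD / (4*dt)."""
    steps = np.diff(track_xy, axis=0)
    msd = np.mean(np.sum(steps**2, axis=1))
    return msd / (4.0 * dt)

# Synthetic track: Brownian motion with D = 0.5 um^2/s sampled at 30 fps
rng = np.random.default_rng(1)
D_true, dt, n = 0.5, 1 / 30, 2000
track = np.cumsum(rng.normal(0, np.sqrt(2 * D_true * dt), size=(n, 2)), axis=0)
print(diffusion_coefficient(track, dt))  # should be close to 0.5
```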


2021
Author(s): Kea Turner, Margarita Bobonis Babilonia, Cristina Naso, Oliver Nguyen, Brian D. Gonzalez, et al.

BACKGROUND Rapid implementation of telemedicine for cancer care during COVID-19 required innovative and adaptive solutions among healthcare workers. OBJECTIVE The objective of this qualitative study was to explore healthcare workers’ experiences with telemedicine implementation during COVID-19. METHODS We conducted semi-structured interviews with 40 oncology healthcare workers who implemented telemedicine during COVID-19. The interviews were recorded, transcribed verbatim, and analyzed for themes using Dedoose software (Version 4.12). RESULTS Approximately half of the participants were physicians (55%) and one quarter were advanced practice providers (APPs; 25%). Other participants included social workers (n=3), psychologists (n=2), dieticians (n=2), and a pharmacist. Five key themes were identified: 1) establishing and maintaining patient-provider relationships, 2) coordinating care with other providers and informal caregivers, 3) adapting in-person assessments for telemedicine, 4) developing workflows and allocating resources, and 5) future recommendations. Participants described innovative strategies for implementing telemedicine, such as coordinating interdisciplinary visits with multiple providers. Healthcare workers discussed key challenges, such as workflow integration, lack of physical exam and biometric data, and overcoming the digital divide. Participants recommended policy advocacy to support telemedicine (e.g., medical licensure policies) and monitoring how telemedicine affects patient outcomes and healthcare delivery. CONCLUSIONS To support the growth of telemedicine, implementation strategies are needed to ensure providers and patients have the tools necessary to effectively engage in telemedicine. At the same time, cancer care organizations will need to engage in advocacy to ensure policies are supportive of oncology telemedicine and develop systems to monitor the impact of telemedicine on patient outcomes, healthcare quality, costs, and equity.


2021
Author(s): Lennart Martens, Robbin Bouwmeester, Ralf Gabriels, Niels Hulstaert, Sven Degroeve

Abstract The inclusion of peptide retention time prediction promises to remove peptide identification ambiguity in complex LC-MS identification workflows. However, due to the way peptides are encoded in current prediction models, accurate retention times cannot be predicted for modified peptides. This is especially problematic for fledgling open modification searches, which will benefit from accurate retention time prediction for modified peptides to reduce identification ambiguity. We therefore present DeepLC, a novel deep learning peptide retention time predictor utilizing a new peptide encoding based on atomic composition that allows the retention times of (previously unseen) modified peptides to be predicted accurately. We show that DeepLC performs similarly to current state-of-the-art approaches for unmodified peptides and, more importantly, accurately predicts retention times for modifications not seen during training. Moreover, we show that DeepLC’s ability to predict retention times for any modification enables potentially incorrect identifications to be flagged in an open modification search of CD8-positive T-cell proteome data. DeepLC is available under the permissive Apache 2.0 open source license and comes with a user-friendly graphical user interface, as well as a Python package on PyPI, Bioconda, and BioContainers for effortless workflow integration.
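To illustrate the general idea of an atomic-composition encoding, though not DeepLC's exact feature layout, the following Python sketch sums residue atom counts and treats a modification as an extra atomic delta, which is why unseen modifications need no dedicated training examples.

```python
# Illustrative sketch of an atomic-composition peptide encoding (the general idea,
# not DeepLC's exact feature layout): each residue contributes its atom counts,
# and a modification is just an additional atomic-composition delta.
from collections import Counter

# Residue (amino acid minus water) compositions for a few amino acids.
RESIDUE_ATOMS = {
    "G": {"C": 2, "H": 3, "N": 1, "O": 1},
    "A": {"C": 3, "H": 5, "N": 1, "O": 1},
    "S": {"C": 3, "H": 5, "N": 1, "O": 2},
    "K": {"C": 6, "H": 12, "N": 2, "O": 1},
}

def encode(peptide: str, modifications: list[dict] | None = None) -> Counter:
    """Sum atom counts over residues, add one water for the termini, then add
    the atomic deltas of any modifications (e.g. acetylation = +C2H2O)."""
    atoms = Counter({"H": 2, "O": 1})  # terminal H and OH
    for aa in peptide:
        atoms.update(RESIDUE_ATOMS[aa])
    for mod in modifications or []:
        atoms.update(mod)
    return atoms

print(encode("GASK", modifications=[{"C": 2, "H": 2, "O": 1}]))  # acetylated GASK
```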

