BMC Medical Informatics and Decision Making
Latest Publications


TOTAL DOCUMENTS

2542
(FIVE YEARS 1012)

H-INDEX

69
(FIVE YEARS 13)

Published by Springer (BioMed Central Ltd.)

1472-6947

2022, Vol 22 (1)
Author(s): Rebekah Pratt, Daniel M. Saman, Clayton Allen, Benjamin Crabtree, Kris Ohnsorg, ...

Abstract Background In this paper we describe the use of the Consolidated Framework for Implementation Research (CFIR) to study implementation of a web-based, point-of-care, EHR-linked clinical decision support (CDS) tool designed to identify and provide care recommendations for adults with prediabetes (Pre-D CDS). Methods As part of a large NIH-funded clinic-randomized trial, we identified a convenience sample of interview participants from 22 primary care clinics in Minnesota, North Dakota, and Wisconsin that were randomly allocated to receive or not receive a web-based, EHR-integrated prediabetes CDS intervention. Participants included 11 clinicians, 6 rooming staff, and 7 nurse or clinic managers recruited by study staff to participate in telephone interviews conducted by an expert in qualitative methods. Interviews were recorded and transcribed, and data analysis was conducted using a constructivist version of grounded theory. Results The prediabetes CDS tool was useful and well received when implemented in primary care clinics. The intervention was integrated with clinic workflows and supported primary care clinicians in clearly communicating prediabetes risk and management options to patients and in identifying actionable care opportunities. The main barriers to CDS use were time and competing priorities. Finally, while the implementation process worked well, opportunities remain for engaging the care team more broadly in CDS use. Conclusions The use of CDS tools for engaging patients and providers in care improvement opportunities for prediabetes is a promising and potentially effective strategy in primary care settings. A workflow that incorporates the whole care team in the use of such tools may optimize the implementation of CDS tools like these in primary care settings. Trial registration Name of the registry: ClinicalTrials.gov. Trial registration number: NCT02759055. Date of registration: 05/03/2016. URL of trial registry record: https://clinicaltrials.gov/ct2/show/NCT02759055. Prospectively registered.


2022, Vol 22 (1)
Author(s): Guanglei Yu, Linlin Zhang, Ying Zhang, Jiaqi Zhou, Tao Zhang, ...

Abstract Background The rapid development of information technology has made risk stratification easy to adopt, benefiting both patients and clinicians by supporting accurate, individualized prevention and therapeutic decision making. Hospital discharge records (HDRs) routinely include accurate diagnostic conclusions for each patient. For this reason, in this paper we propose an improved supervised model for risk stratification that exploits HDRs of patients with coronary heart disease (CHD). Methods We introduce an improved four-layer supervised latent Dirichlet allocation (sLDA) approach, the Hierarchical sLDA model, which encodes patient features in HDRs as one-hot patient feature-value pairs according to clinical guidelines for CHD laboratory tests. To address missing data and class imbalance, random forests (RFs) and SMOTE were used, respectively. After TF-IDF processing of the datasets, a variational Bayes expectation-maximization method and a generalized linear model were used to recognize the latent clinical state of a patient, i.e., the risk stratum, and to predict CHD. Accuracy, macro-F1, and training and testing time were used to evaluate model performance. Results Reflecting the structure of our datasets, i.e., patient feature-value pairs, we constructed a supervised topic model by adding one more Dirichlet distribution hyperparameter to sLDA. Compared with the established multi-class sLDA model, our approach reduced training time by 59.74% and testing time by 25.58% with almost no loss of average prediction accuracy on our datasets. Conclusions A model for risk stratification and prediction of CHD based on sLDA was proposed. Experimental results show that the proposed Hierarchical sLDA model is competitive in both time performance and accuracy. Hierarchical processing of patient features substantially mitigates the low efficiency and time-consuming Gibbs sampling of the sLDA model.
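
A minimal preprocessing sketch of the pipeline this abstract describes (one-hot feature-value pairs, random-forest-based imputation, SMOTE for class imbalance, TF-IDF weighting), using scikit-learn and imbalanced-learn. The toy lab features, binning rule, and library choices are assumptions for illustration; the Hierarchical sLDA inference itself is not shown here.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfTransformer
from imblearn.over_sampling import SMOTE

# Toy HDR-like table: two lab features with missing values plus a CHD risk label.
df = pd.DataFrame({
    "ldl": [3.1, np.nan, 4.8, 2.2, 5.6, np.nan, 4.1, 3.9],
    "sbp": [128, 145, np.nan, 118, 160, 150, np.nan, 135],
    "risk": [0, 0, 1, 0, 1, 1, 1, 1],
})
X, y = df.drop(columns="risk"), df["risk"]

# 1) Random-forest-based imputation of the missing lab values (stand-in for "RFs").
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0), random_state=0)
X_imp = imputer.fit_transform(X)

# 2) Discretize each lab value into guideline-style bins and one-hot encode the
#    resulting feature-value pairs (e.g., "ldl_high").
binned = pd.DataFrame({
    col: np.where(X_imp[:, i] > np.median(X_imp[:, i]), "high", "normal")
    for i, col in enumerate(X.columns)
})
pairs = pd.get_dummies(binned).astype(int)

# 3) SMOTE to balance the risk classes, then TF-IDF weighting of the pair indicators.
X_bal, y_bal = SMOTE(k_neighbors=2, random_state=0).fit_resample(pairs, y)
X_tfidf = TfidfTransformer().fit_transform(X_bal)
print(X_tfidf.shape, np.bincount(y_bal))
```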


2022, Vol 22 (1)
Author(s): Huimin Wang, Jianxiang Tang, Mengyao Wu, Xiaoyu Wang, Tao Zhang

Abstract Background Medical data often contain many missing values, which directly affect the accuracy of clinical decision making. Discharge assessment is an important part of clinical decision making. Taking the discharge assessment of patients with spontaneous supratentorial intracerebral hemorrhage as an example, this study adopted evaluation criteria for missing data processing that are well suited to clinical decision making, with the aim of systematically exploring the performance and applicability of single machine learning algorithms and ensemble learning (EL) under different missing data scenarios, and whether they offer advantages over traditional methods, so as to provide a basis and reference for selecting a suitable missing data processing method in practical clinical decision making. Methods The process consisted of four main steps: (1) Based on the original complete dataset, missing data were generated by simulation under different missing scenarios (missing mechanisms, missing proportions, and ratios of missing proportions in each group). (2) Machine learning and traditional methods (eight methods in total) were applied to impute the missing values. (3) The performance of the imputation techniques was evaluated and compared by estimating the sensitivity, AUC, and Kappa values of the prediction models. (4) Statistical tests were used to assess whether the observed performance differences were statistically significant. Results The performance of the missing data processing methods differed to some extent across missing scenarios. On the whole, machine learning methods achieved better imputation performance than traditional methods, especially in scenarios with high missing proportions. Compared with single machine learning algorithms, EL performed best, followed by neural networks. EL was most suitable for imputation under the MAR mechanism (ratio of missing proportions 2:1), where its average sensitivity, AUC, and Kappa values reached 0.908, 0.924, and 0.596, respectively. Conclusions In clinical decision making, the characteristics of missing data should be actively explored before formulating a missing data processing strategy. The outstanding imputation performance of machine learning methods, especially EL, sheds light on the development of missing data processing technology and provides methodological support for clinical decision making in the presence of incomplete data.
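
A small sketch of the simulate-impute-evaluate loop the Methods describe: generate MAR missingness on a complete dataset, impute with several methods, and compare downstream prediction AUC. The MAR mechanism, the particular imputers (mean, KNN, iterative), and the random-forest classifier are illustrative assumptions rather than the study's exact eight methods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=8, random_state=0)

# MAR: the probability that feature 1 is missing depends on the observed feature 0.
p_miss = 1 / (1 + np.exp(-(X[:, 0] - 0.5)))          # higher feature 0 -> more missingness
X_miss = X.copy()
X_miss[rng.random(len(X)) < 0.3 * p_miss / p_miss.mean(), 1] = np.nan

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "knn": KNNImputer(n_neighbors=5),
    "iterative": IterativeImputer(random_state=0),
}
for name, imp in imputers.items():
    X_imp = imp.fit_transform(X_miss)
    auc = cross_val_score(RandomForestClassifier(random_state=0), X_imp, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name:10s} AUC = {auc:.3f}")
```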


2022, Vol 22 (1)
Author(s): Tanatorn Tanantong, Warut Pannakkong, Nittaya Chemkomnerd

Abstract Background Overcrowding, which causes long waiting times in public hospitals, is a significant problem that affects patient satisfaction with the hospital. In particular, the bottleneck usually occurs at front-end departments (e.g., the triage and medical record department), as every patient is first required to visit these departments. The problem is mainly caused by ineffective resource management. To support decision making in resource management at front-end departments, this paper proposes a framework using simulation and multi-objective optimization techniques that considers both operating cost and patient satisfaction. Methods To develop the framework, the timestamps of patient arrival at each station were first collected at the triage and medical record department of Thammasat University Hospital in Thailand. A patient satisfaction assessment method was used to convert the time spent into a satisfaction score. A simulation model was then built from the current situation of the hospital, and scenario analyses were applied to improve the model. The models were verified and validated. Weighted max-min fuzzy multi-objective optimization was performed by minimizing the operating cost and maximizing the patient satisfaction score. The operating costs and patient satisfaction scores from the various scenarios were statistically compared. Finally, a decision-making guideline was proposed to support suitable resource management at the front-end departments of the hospital. Results Three scenarios of the simulation model were built (a real situation, a one-stop service, and partially shared resources), verified, and validated. The optimized results were compared and grouped into three situations: (1) the satisfaction score remains the same and the cost decreases (by 2.8%); (2) the satisfaction score remains the same but the cost increases (by up to 80%); and (3) both the satisfaction score and the cost decrease (satisfaction by up to 82% and cost by up to 59%). According to the guideline, situations 1 and 3 were recommended for use in the improvement, and situation 2 was rejected. Conclusion This research demonstrates a resource management framework for the front-end departments of a hospital. The experimental results imply that the framework can be used to support decision making in resource management and to reduce the risk of applying a non-improving model in a real situation.
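
To make the weighted max-min step concrete, here is a toy linear-programming sketch in which staffing levels at two front-end stations trade off operating cost against a satisfaction score. The cost and satisfaction coefficients, staffing bounds, and weights are hypothetical; in the paper these quantities come from the validated simulation model.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x1 = triage staff, x2 = medical-record staff, lam = attainment level.
cost = np.array([500.0, 300.0])      # operating cost per staff member per day (assumed)
sat = np.array([4.0, 4.0])           # satisfaction-score contribution per staff member (assumed)
lo, hi = np.array([2.0, 2.0]), np.array([10.0, 10.0])

cost_min, cost_max = cost @ lo, cost @ hi
sat_min, sat_max = sat @ lo, sat @ hi
w_cost, w_sat = 0.5, 0.5             # objective weights (assumed to sum to 1)

# Linear membership functions:
#   mu_cost(x) = (cost_max - cost.x) / (cost_max - cost_min)
#   mu_sat(x)  = (sat.x - sat_min)   / (sat_max  - sat_min)
# Weighted max-min: maximize lam subject to w_i * lam <= mu_i(x).
c = [0.0, 0.0, -1.0]                                   # linprog minimizes, so minimize -lam
A_ub = [
    list(cost / (cost_max - cost_min)) + [w_cost],     # cost.x/dc + w_c*lam <= cost_max/dc
    list(-sat / (sat_max - sat_min)) + [w_sat],        # -sat.x/ds + w_s*lam <= -sat_min/ds
]
b_ub = [cost_max / (cost_max - cost_min), -sat_min / (sat_max - sat_min)]
bounds = [(lo[0], hi[0]), (lo[1], hi[1]), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x1, x2, lam = res.x
mu_c = (cost_max - cost @ [x1, x2]) / (cost_max - cost_min)
mu_s = (sat @ [x1, x2] - sat_min) / (sat_max - sat_min)
print(f"staffing = ({x1:.1f}, {x2:.1f}), memberships: cost {mu_c:.2f}, satisfaction {mu_s:.2f}")
```

At the optimum the achieved membership degrees of the two objectives are roughly proportional to their weights (equal here), which is the defining property of the weighted max-min formulation.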


2022, Vol 22 (1)
Author(s): Maria DeYoreo, Carolyn M. Rutter, Jonathan Ozik, Nicholson Collier

Abstract Background Microsimulation models are mathematical models that simulate event histories for individual members of a population. They are useful for policy decisions because they simulate a large number of individuals from an idealized population, with features that change over time, and the resulting event histories can be summarized to describe key population-level outcomes. Model calibration is the process of incorporating evidence into the model. Calibrated models can be used to make predictions about population trends in disease outcomes and effectiveness of interventions, but calibration can be challenging and computationally expensive. Methods This paper develops a technique for sequentially updating models to take full advantage of earlier calibration results, to ultimately speed up the calibration process. A Bayesian approach to calibration is used because it combines different sources of evidence and enables uncertainty quantification which is appealing for decision-making. We develop this method in order to re-calibrate a microsimulation model for the natural history of colorectal cancer to include new targets that better inform the time from initiation of preclinical cancer to presentation with clinical cancer (sojourn time), because model exploration and validation revealed that more information was needed on sojourn time, and that the predicted percentage of patients with cancers detected via colonoscopy screening was too low. Results The sequential approach to calibration was more efficient than recalibrating the model from scratch. Incorporating new information on the percentage of patients with cancers detected upon screening changed the estimated sojourn time parameters significantly, increasing the estimated mean sojourn time for cancers in the colon and rectum, providing results with more validity. Conclusions A sequential approach to recalibration can be used to efficiently recalibrate a microsimulation model when new information becomes available that requires the original targets to be supplemented with additional targets.
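
The sequential-updating idea can be illustrated with a simple importance-resampling sketch: draws from the earlier calibration's posterior serve as the prior, and they are reweighted by the likelihood of a new screening-detection target. The one-parameter "sojourn time" model and the binomial target below are stand-ins invented for illustration; the actual colorectal-cancer microsimulation and its calibration targets are far richer.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Posterior draws of a mean sojourn-time parameter from the earlier calibration (assumed).
prior_draws = rng.gamma(shape=20, scale=0.2, size=5000)        # mean around 4 years

def predicted_detection_rate(sojourn_mean):
    # Toy link between sojourn time and screen-detection probability:
    # a longer sojourn gives a larger window for detection by colonoscopy.
    return 1 - np.exp(-sojourn_mean / 6.0)

# New target (hypothetical): 30 of 40 screen-detectable cancers were screen-detected.
detected, n = 30, 40
log_w = stats.binom.logpmf(detected, n, predicted_detection_rate(prior_draws))
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Updated posterior via sampling-importance-resampling of the earlier draws.
posterior_draws = rng.choice(prior_draws, size=5000, replace=True, p=w)
print(f"prior mean sojourn {prior_draws.mean():.2f}y -> updated {posterior_draws.mean():.2f}y")
```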


2022, Vol 22 (1)
Author(s): Zhao Shuai, Diao Xiaolin, Yuan Jing, Huo Yanni, Cui Meng, ...

Abstract Background Automated ICD coding of medical texts via machine learning has been a hot topic. Related studies in the medical field rely heavily on the conventional bag-of-words (BoW) feature extraction method and do not commonly use more sophisticated methods such as word2vec (W2V) and large pretrained models like BERT. This study aimed to uncover the most effective feature extraction methods for coding models by comparing BoW, W2V, and BERT variants. Methods We experimented with a Chinese dataset from Fuwai Hospital, which contains 6947 records and 1532 unique ICD codes, and a public Spanish dataset, which contains 1000 records and 2557 unique ICD codes. We designed coding tasks with different code frequency thresholds (denoted as $f_s$), with a lower threshold indicating a more complex task. Using traditional classifiers, we compared BoW, W2V, and BERT variants on these coding tasks. Results When $f_s$ was equal to or greater than 140 for the Fuwai dataset and 60 for the Spanish dataset, the BERT variant with the whole network fine-tuned was the best method, achieving a Micro-F1 of 93.9% on the Fuwai data when $f_s = 200$ and a Micro-F1 of 85.41% on the Spanish dataset when $f_s = 180$. When $f_s$ fell below 140 for the Fuwai dataset and 60 for the Spanish dataset, BoW turned out to be the best, achieving a Micro-F1 of 83% on the Fuwai dataset when $f_s = 20$ and a Micro-F1 of 39.1% on the Spanish dataset when $f_s = 20$. Our experiments also showed that both the BERT variants and BoW possessed good interpretability, which is important for medical applications of coding models. Conclusions This study sheds light on building promising machine learning models for automated ICD coding by revealing the most effective feature extraction methods. Concretely, our results indicate that fine-tuning the whole network of a BERT variant is the optimal method for tasks covering only frequent codes, especially codes that represent unspecified diseases, while BoW is the best for tasks involving both frequent and infrequent codes. The frequency threshold at which the best-performing method changes differed between datasets due to factors such as language and code set.
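
A minimal sketch of the BoW baseline with a code-frequency threshold $f_s$: codes with fewer than $f_s$ training records are dropped, diagnosis texts become bag-of-words features, and a conventional classifier predicts the code. The toy records and the choice of a linear SVM are assumptions for illustration.

```python
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy (diagnosis text, ICD code) records; real datasets hold thousands of records.
records = [
    ("chest pain with ST elevation", "I21"),
    ("acute myocardial infarction anterior wall", "I21"),
    ("type 2 diabetes poorly controlled", "E11"),
    ("diabetes mellitus type 2 with neuropathy", "E11"),
    ("essential hypertension follow-up", "I10"),
]
f_s = 2  # keep only ICD codes with at least f_s training records

counts = Counter(code for _, code in records)
kept = [(text, code) for text, code in records if counts[code] >= f_s]
texts, codes = zip(*kept)

# Bag-of-words features feeding a traditional linear classifier.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(texts, codes)
print(model.predict(["myocardial infarction suspected"]))
```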


2022, Vol 22 (1)
Author(s): Jiang Luo, Yan Wang, Yongze Zhang, Xiaofang Yan, Xiaoting Huang, ...

Abstract Background This study was designed for the research and development (R&D) and application of a storage inflow and outflow management system enabling departments to perform efficient, scientific, and information-based consumable management. Methods In the endocrinology department of a hospital, expert and R&D teams in consumable management were set up, and an information-based storage inflow and outflow management system for consumables was designed and developed. The system runs on a personal computer and is divided into three modules: public consumables, bed consumables, and quality control management. Its functions include storage inflow and outflow, early warnings, responses to user queries, and consumable statistics. Data were derived from the hospital information system (HIS, ZHIY SOFTWARE HIS VERSION 4.0) and a questionnaire survey. Economic indicators, work efficiency of consumable management, nurse burnout, consumable stockroom management, and staff satisfaction were compared under manual management, Excel-based management, and the consumable storage inflow and outflow management system. The questionnaire results were analysed using R software, version 4.1.0. Results Data were obtained under manual management, Excel-based management, and the consumable storage inflow and outflow management system. Under these three methods, the daily cost of department consumables per bed was 53.43 ± 10.27 yuan, 38.65 ± 8.56 yuan, and 31.98 ± 7.36 yuan, respectively, indicating that the new management system reduced costs for the department. The time spent daily on consumable management was shortened from 119.5 (106.75, 123.5) min to 56.5 (48.5, 60.75) min to 20 (17.25, 24.25) min. Nurses’ emotional fatigue and job indifference scores, respectively, decreased from 22.90 ± 1.65 and 8.75 ± 1.25 under manual management to 19.70 ± 1.72 and 6.90 ± 1.37 under Excel-based management and to 17.20 ± 2.04 and 6.00 ± 1.30 under the novel system; the satisfaction of the warehouse keeper and collection staff, respectively, increased from 76.62% and 80.78% to 91.6% and 90.5% to 98.8% and 98.5% under the three successive systems. Conclusions The storage inflow and outflow management system produced good results in the storage and classification of consumables.
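
A bare-bones sketch of the inflow/outflow and early-warning behaviour the system is described as providing. The field names, warning rule, and console output are assumptions for illustration; the actual system runs on a PC and pulls its data from the hospital information system (HIS).

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Consumable:
    name: str
    stock: int = 0
    warning_level: int = 10                      # threshold for the early-warning function
    log: List[Tuple[str, int, Optional[str]]] = field(default_factory=list)

    def inflow(self, qty: int, source: Optional[str] = None) -> None:
        """Record stock received into the department stockroom."""
        self.stock += qty
        self.log.append(("in", qty, source))

    def outflow(self, qty: int, bed: Optional[str] = None) -> None:
        """Record consumables issued (e.g., to a bed) and warn when stock runs low."""
        if qty > self.stock:
            raise ValueError(f"insufficient stock of {self.name}")
        self.stock -= qty
        self.log.append(("out", qty, bed))
        if self.stock <= self.warning_level:
            print(f"EARLY WARNING: {self.name} stock down to {self.stock}")

strips = Consumable("glucose test strips", stock=50, warning_level=20)
strips.outflow(35, bed="bed 12")                 # triggers the early warning (stock = 15)
```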


2022, Vol 22 (1)
Author(s): Jurandir Barreto Galdino Junior, Hélio Roberto Hékis, José Alfredo Ferreira Costa, Íon Garcia Mascarenhas de Andrade, Eric Lucas dos Santos Cabral, ...

Abstract Background In Brazil, many public hospitals face constant problems related to high demand vis-à-vis an overall scarcity of resources, which hinders the operations of different sectors such as the surgical centre, considered one of the most relevant pillars of proper hospital functioning owing to its complexity, criticality, and economic and social importance. Proper asset management based on well-founded decisions is therefore a sine qua non for addressing such demands. However, subjectivity and other difficulties present in decisions make the management of hospital resources a constant challenge. Methods The present work therefore proposes the application of a hybrid approach combining QFD, fuzzy logic, and SERVQUAL as a decision support tool for quality planning in the surgical centre of the Onofre Lopes Teaching Hospital (Hospital Universitário Onofre Lopes—HUOL). To accomplish this objective, the main needs of the medical team working in the operating room were identified and analysed through the application of the SERVQUAL questionnaire, associated with fuzzy logic. Results The most relevant deficiencies were then transformed into inputs for the QFD-fuzzy matrix, where they were translated into project requirements. The relationships between these inputs and the requirements were then analysed, generating a ranking of the actions with the greatest impact on improving the overall quality of the surgical centre. Conclusions The results show that the proposed methodology can optimize the decision process to which hospital managers are subjected, improving the operating efficiency of the surgical centre.
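
An illustrative sketch of the fuzzy SERVQUAL step: linguistic ratings are mapped to triangular fuzzy numbers, averaged across respondents for expectations and perceptions, and the defuzzified gap (perception minus expectation) ranks attributes for the subsequent QFD-fuzzy stage. The 5-point scale values and the two example attributes are assumptions for illustration.

```python
import numpy as np

# Triangular fuzzy numbers (l, m, u) for a 5-point linguistic scale (assumed mapping).
SCALE = {1: (1, 1, 3), 2: (1, 3, 5), 3: (3, 5, 7), 4: (5, 7, 9), 5: (7, 9, 9)}

def fuzzy_mean(ratings):
    # Element-wise average of the respondents' triangular fuzzy numbers.
    return np.mean([SCALE[r] for r in ratings], axis=0)

def defuzzify(tfn):
    # Graded mean integration of a triangular fuzzy number.
    l, m, u = tfn
    return (l + 4 * m + u) / 6

attributes = {
    "equipment availability": {"expectation": [5, 5, 4], "perception": [3, 2, 3]},
    "turnaround time":        {"expectation": [4, 5, 5], "perception": [4, 4, 3]},
}
gaps = {a: defuzzify(fuzzy_mean(v["perception"])) - defuzzify(fuzzy_mean(v["expectation"]))
        for a, v in attributes.items()}
for attr, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{attr:25s} gap = {gap:+.2f}")   # most negative gap = highest priority for QFD
```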


2022, Vol 22 (1)
Author(s): Jacques Balayla

Abstract Background Bayes’ theorem confers inherent limitations on the accuracy of screening tests as a function of disease prevalence. Herein, we establish a mathematical model to determine whether sequential testing with a single test overcomes these Bayesian limitations and thus improves the reliability of screening tests. Methods We use Bayes’ theorem to derive the positive predictive value (PPV) equation and apply the Bayesian updating method to obtain the equation for the PPV following repeated testing. We likewise derive the equation that determines the number of iterations of a positive test needed to obtain a desired positive predictive value, represented graphically by the tablecloth function. Results For a given PPV ($\rho$) approaching k, the number of positive test iterations needed given a disease prevalence ($\phi$) is: $$n_i = \lim_{\rho \rightarrow k}\left\lceil \frac{\ln\left[\frac{\rho(\phi - 1)}{\phi(\rho - 1)}\right]}{\ln\left[\frac{a}{1 - b}\right]}\right\rceil \qquad (1)$$ where $n_i$ = number of testing iterations necessary to achieve $\rho$, the desired positive predictive value, $\ln$ = the natural logarithm, a = sensitivity, b = specificity, $\phi$ = disease prevalence/pre-test probability, and k = constant. Conclusions Based on this derivation, we provide reference tables for the number of test iterations needed to obtain a $\rho(\phi)$ of 50, 75, 95 and 99% as a function of various levels of sensitivity, specificity and disease prevalence/pre-test probability. Clinical validation of these concepts needs to be obtained prior to their widespread application.
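
A direct implementation of Eq. (1) above, together with the Bayesian-updating PPV it targets, assuming serially independent test results. The numerical example (90% sensitivity, 90% specificity, 1% prevalence, target PPV 95%) is chosen for illustration.

```python
from math import ceil, log

def iterations_needed(rho: float, a: float, b: float, phi: float) -> int:
    """Smallest n with PPV >= rho after n positive results of the same test (Eq. 1)."""
    return ceil(log((rho * (phi - 1)) / (phi * (rho - 1))) / log(a / (1 - b)))

def ppv_after(n: int, a: float, b: float, phi: float) -> float:
    """Posterior probability of disease after n consecutive positives (Bayesian updating)."""
    lr_pos = a / (1 - b)                      # positive likelihood ratio
    odds = phi / (1 - phi) * lr_pos ** n      # posterior odds after n positives
    return odds / (1 + odds)

# Example: sensitivity 90%, specificity 90%, prevalence 1%, target PPV 95%.
n = iterations_needed(0.95, a=0.90, b=0.90, phi=0.01)
print(n, round(ppv_after(n, 0.90, 0.90, 0.01), 3))
```

With these inputs the log-ratio in Eq. (1) is ln(1881)/ln(9) ≈ 3.43, so n = 4, and the PPV after four consecutive positives is about 0.985 (three positives give only about 0.88).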


2022, Vol 22 (1)
Author(s): Josephus F. M. van den Heuvel, Marije Hogeveen, Margo Lutke Holzik, Arno F. J. van Heijst, Mireille N. Bekker, ...

Abstract Background In the case of extreme premature delivery at 24 weeks of gestation, both early intensive care and palliative comfort care for the neonate are considered treatment options. Prenatal counseling, preferably using shared decision making, is needed to agree on the treatment option should labor progress. This article describes the development of a digital decision aid (DA) to support pregnant women, partners and clinicians in prenatal counseling for imminent extreme premature labor. Methods The DA was developed following the International Patient Decision Aid Standards. The Dutch treatment guideline and the Dutch recommendations for prenatal counseling in extreme prematurity were used as the basis. The first prototype was developed by expert clinicians and patients; further improvements were made after alpha testing with involved clinicians, patients and other experts (n = 12) and beta testing with non-involved clinicians and patients (n = 15). Results The final version includes information, probabilities and figures depending on users’ preferences. Furthermore, it elicits patient values and provides guidance to aid parents and professionals in making a decision for either early intensive care or palliative comfort care in threatening extreme premature delivery. Conclusion A decision aid was developed to support prenatal counseling regarding the decision on early intensive care versus palliative comfort care in case of extreme premature delivery at 24 weeks of gestation. It was well accepted by parents and healthcare professionals. Our multimedia, digital DA is openly available online to support prenatal counseling and personalized, shared decision making in imminent extreme premature labor.

