Machine learning approaches for the prediction of postoperative complication risk in liver resection patients

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Siyu Zeng ◽  
Lele Li ◽  
Yanjie Hu ◽  
Li Luo ◽  
Yuanchen Fang

Abstract Background For liver cancer patients, the occurrence of postoperative complications increases the difficulty of perioperative nursing, prolongs hospitalization, and leads to large increases in hospitalization costs. The ability to identify influencing factors and to predict the risk of complications in patients with liver cancer after surgery could help doctors make better clinical decisions. Objective The aim of the study was to develop a postoperative complication risk prediction model based on machine learning algorithms, which uses variables obtained before or during liver cancer surgery, to predict the risk of complications before they present with clinical symptoms and to suggest ways of reducing that risk. Methods The study subjects were liver cancer patients who had undergone liver resection. There were 175 individuals, and 13 variables were recorded. 70% of the data were used for the training set and 30% for the test set. The performance of five machine learning models (logistic regression, decision tree C5.0, decision tree CART, support vector machine, and random forest) for predicting postoperative complication risk in liver resection patients was compared. The significant influencing factors were selected by combining the results of multiple methods, and the prediction model of postoperative complication risk was created based on them. The results were analyzed to give suggestions on how to reduce the risk of complications. Results Random forest gave the best performance in the decision curve analysis. The decision tree C5.0 algorithm performed best of the five machine learning algorithms when accuracy (ACC) and AUC were used as evaluation indicators, producing an area under the receiver operating characteristic curve of 0.91 (95% CI 0.77–1), an accuracy of 92.45% (95% CI 85–100%), a sensitivity of 87.5%, and a specificity of 94.59%.
The duration of the operation, the patient's BMI, and the length of the incision were significant influencing factors of postoperative complication risk in liver resection patients. Conclusions To reduce the risk of complications, it appears to be important that the patient's BMI be above 22.96 before the operation and that the duration of the operation be minimized.
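The metrics quoted above (accuracy, sensitivity, specificity) all derive from a 2×2 confusion matrix on the test set. A minimal sketch of that bookkeeping in Python, with invented labels for illustration (this is not the authors' code or data):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from paired lists of
    0/1 labels (1 = complication occurred)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative labels only (not the study data):
y_true = [1, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]
acc, sens, spec = binary_metrics(y_true, y_pred)
```

The same three numbers would be computed for each of the five models on the 30% held-out test set.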

2020 ◽  
Vol 10 (11) ◽  
pp. 3854
Author(s):  
Seongkeun Park ◽  
Jieun Byun ◽  
Ji young Woo

Background: Approximately 20–50% of prostate cancer patients experience biochemical recurrence (BCR) after radical prostatectomy (RP). Among them, cancer recurrence occurs in about 20–30%. Thus, we aim to reveal the utility of machine learning algorithms for the prediction of early BCR after RP. Methods: A total of 104 prostate cancer patients who underwent magnetic resonance imaging and RP were evaluated. Four well-known machine learning algorithms (k-nearest neighbors (KNN), multilayer perceptron (MLP), decision tree (DT), and auto-encoder) were applied to build a prediction model for early BCR using preoperative clinical and imaging data and postoperative pathologic data. The sensitivity, specificity, and accuracy of each algorithm for detection of early BCR were evaluated. Area under the receiver operating characteristic curve (AUROC) analyses were conducted. Results: A prediction model using an auto-encoder showed the highest prediction ability for early BCR after RP using all data as input (AUC = 0.638) and using only preoperative clinical and imaging data (AUC = 0.656), followed by MLP (AUC = 0.607 and 0.598), KNN (AUC = 0.596 and 0.571), and DT (AUC = 0.534 and 0.495). Conclusion: The auto-encoder-based prediction system has the potential for accurate detection of early BCR and could be useful for long-term follow-up planning in prostate cancer patients after RP.
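AUROC, the metric used above to rank the four algorithms, has a useful closed form: it equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one, with ties counting half. A small sketch of that identity (toy scores, not the study's pipeline):

```python
def auroc(y_true, scores):
    """Area under the ROC curve via the pairwise-comparison identity:
    P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy recurrence scores (illustrative only):
auc = auroc([1, 0, 1, 0, 0], [0.9, 0.4, 0.6, 0.6, 0.2])
```

An AUC of 0.5 corresponds to chance-level ranking, which is why a value such as DT's 0.495 indicates essentially no discriminative ability on that input set.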


Author(s):  
Sheela Rani P ◽  
Dhivya S ◽  
Dharshini Priya M ◽  
Dharmila Chowdary A

Machine learning is a data analysis discipline that uses data to improve learning, optimizing the training process and the environment in which learning happens. There are two types of machine learning approaches, supervised and unsupervised, which are used to extract knowledge that helps decision-makers take the correct interventions in the future. This paper introduces a model for predicting the factors that influence students' academic performance, using supervised machine learning algorithms: support vector machine (SVM), k-nearest neighbors (KNN), Naïve Bayes, and logistic regression. The results of the various algorithms are compared, and it is shown that the support vector machine and Naïve Bayes perform well, achieving improved accuracy compared to the other algorithms. The final prediction model in this paper achieves fairly high prediction accuracy. The objective is not just to predict the future performance of students but also to provide the best technique for finding the most impactful features that influence students while studying.
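Of the supervised algorithms listed, k-nearest neighbors is the simplest to illustrate: a new student is classified by the majority label among the k training examples closest in feature space. A toy sketch, where the (attendance, prior-grade) features and pass/fail labels are invented for illustration and are not from the paper's dataset:

```python
import math
from collections import Counter

def knn_predict(train, k, query):
    """train: list of (feature_vector, label) pairs. Classify `query`
    by majority vote among its k nearest neighbors (Euclidean distance)."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (attendance fraction, prior grade) training examples:
train = [((0.9, 85), "pass"), ((0.8, 78), "pass"),
         ((0.3, 40), "fail"), ((0.4, 52), "fail")]
pred = knn_predict(train, k=3, query=(0.85, 80))
```

In practice the features would be standardized first, since Euclidean distance is dominated by whichever attribute has the largest numeric range.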


2021 ◽  
Author(s):  
Sunil Saha ◽  
Amiya Gayen ◽  
Kaustuv Mukherjee ◽  
Hamid Reza Pourghasemi ◽  
M. Santosh

Abstract Machine learning techniques offer powerful tools for the assessment and management of groundwater resources. Here, we evaluated groundwater potential maps (GWPMs) in Md. Bazar Block of Birbhum District, India using four GIS-based machine-learning algorithms (MLAs): predictive neural network (PNN), decision tree (DT), Naïve Bayes classifier (NBC), and random forest (RF). We used a database of 85 dug wells and one piezometer location identified through extensive field study, and employed 12 influencing factors (elevation, slope, drainage density (DD), topographical wetness index, geomorphology, lineament density, rainfall, geology, pond density, land use/land cover (LULC), stream junction frequency, and soil texture) for evaluation through GIS. The 85 dug well and 1 piezometer locations were divided 70:30 into training and validation sets. The DT, RF, PNN, and NBC MLAs were implemented to analyse the relationship between the dug well locations and the groundwater influencing factors to generate GWPMs. The DT, RF, PNN, and NBC models predicted excellent groundwater potential areas (GPA) covering 17.38%, 14.69%, 20.43%, and 13.97% of the study area, respectively. The prediction accuracy of each GWPM was determined using a receiver operating characteristic (ROC) curve. Using the 30% validation data, accuracies of 80.1%, 78.30%, 75.20%, and 69.2% were obtained for the PNN, RF, DT, and NBC models, respectively. The ROC values show that the four implemented models provide satisfactory and suitable results for GWP mapping in this region. In addition, the well-known mean decrease Gini (MDG) from the RF MLA was used to determine the relative importance of the variables for groundwater potentiality assessment. The MDG revealed that drainage density, lineament density, geomorphology, pond density, elevation, and stream junction frequency were the most useful determinants of the GWPM.
Our approach to delineate the GWPM can aid in the effective planning and management of groundwater resources in this region.
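Mean decrease Gini, used above to rank the influencing factors, accumulates over all trees in the forest the reduction in Gini impurity that each variable achieves at the splits where it is used. The impurity bookkeeping for a single split can be sketched as follows (toy label counts, not the study's well data):

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_c^2)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_decrease(parent, left, right):
    """Impurity reduction achieved by splitting `parent` into `left`
    and `right`; this is the quantity a random forest accumulates per
    variable to form its mean-decrease-Gini importance score."""
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted

# Toy split on some factor threshold (labels: 1 = dug well present):
parent = [1, 1, 1, 0, 0, 0, 1, 0]
drop = gini_decrease(parent, left=[1, 1, 1, 1], right=[0, 0, 0, 0])
```

A variable that repeatedly produces large impurity drops, as drainage density and lineament density apparently did here, ends up with a high MDG score.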


2019 ◽  
Vol 8 (2) ◽  
pp. 4499-4504

Heart diseases are responsible for the greatest number of deaths all over the world. These diseases are usually not detected in early stages because the cost of medical diagnostics is not affordable for a majority of people. Research has shown that machine learning methods have a great capability to extract valuable information from medical data. This information is used to build prediction models that provide cost-effective technological aid for a medical practitioner to detect heart disease in early stages. However, the presence of irrelevant and redundant features in medical data deteriorates the competence of the prediction system. This research aimed to improve the accuracy of existing methods by removing such features. In this study, a brute force-based feature selection algorithm was used to determine the relevant significant features. After experimenting rigorously with 7528 possible combinations of features and 5 machine learning algorithms, 8 important features were identified. A prediction model was developed using these significant features. The accuracy of this model was experimentally calculated to be 86.4%, which is higher than the results of existing studies. The prediction model proposed in this study will help predict heart disease efficiently.
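The brute-force search described above enumerates candidate feature subsets and scores a model on each. The enumeration itself is straightforward with `itertools`; a sketch under the assumption that each subset would then be passed to a train-and-score step (the feature names below are illustrative placeholders, not the study's attribute list):

```python
from itertools import combinations

def all_feature_subsets(features, min_size=1):
    """Yield every subset of `features` with at least `min_size`
    members: the candidate pool a brute-force selector scores
    exhaustively before keeping the best-performing subset."""
    for k in range(min_size, len(features) + 1):
        yield from combinations(features, k)

# Placeholder feature names for illustration:
features = ["age", "sex", "chest_pain", "resting_bp", "cholesterol"]
subsets = list(all_feature_subsets(features))
# For n features there are 2**n - 1 non-empty candidate subsets;
# a full pipeline would train and evaluate a model on each one.
```

The exponential growth of the candidate pool is why brute-force selection is only practical for modest feature counts, as in the dataset used here.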


Nanoscale ◽  
2022 ◽  
Author(s):  
Xiaojie Zhang ◽  
Changsheng Zhou ◽  
Fanghua Wu ◽  
Chang Gao ◽  
Qianqian Liu ◽  
...  

Abstract Long-term unsolved health problems from pre-/intra-/postoperative complications and thermal ablation complications pose threats to liver cancer patients. To reduce the threats, we propose a multimodal-imaging guided surgical navigation system...


Author(s):  
Ruchika Malhotra ◽  
Anuradha Chug

Software maintenance is an expensive activity that consumes a major portion of the total project cost. The activities carried out during maintenance include the addition of new features, deletion of obsolete code, correction of errors, etc. Software maintainability is the ease with which these operations can be carried out. If maintainability can be measured in the early phases of software development, it helps in better planning and optimum resource utilization. Measurement of design properties such as coupling and cohesion in the early phases of development often allows the corresponding maintainability to be derived with the help of prediction models. In this paper, we performed a systematic review of the existing studies related to software maintainability from January 1991 to October 2015. In total, 96 primary studies were identified, of which 47 were from journals, 36 from conference proceedings, and 13 from other sources. All studies were compiled in structured form and analyzed from numerous perspectives, such as the use of design metrics, prediction models, tools, data sources, prediction accuracy, etc. According to the review results, the use of machine learning algorithms in predicting maintainability has increased since 2005. The use of evolutionary algorithms has also begun in related sub-fields since 2010. We observed that design metrics are still the most favored option for capturing the characteristics of a given software system before deploying it in a prediction model for determining the corresponding maintainability. A significant increase in the use of public datasets for building prediction models has also been observed; in this regard, the two public datasets User Interface Management System (UIMS) and Quality Evaluation System (QUES) proposed by Li and Henry are quite popular among researchers.
Although machine learning algorithms are still the most popular methods, we suggest that researchers working in the software maintainability area experiment with open source datasets and hybrid algorithms. In this regard, more empirical studies also need to be conducted on a large number of datasets so that a generalized theory can be developed. The current paper will be beneficial for practitioners, researchers and developers, as they can use these models and metrics for creating benchmarks and standards. The findings of this extensive review will also be useful for novices in the field of software maintainability, as it not only provides explicit definitions but also lays a foundation for further research by providing a quick link to all important studies in the field. Finally, this study also compiles current trends and emerging sub-fields and identifies various opportunities for future research in the field of software maintainability.

