Scholarly Journals: For Acute Pancreatitis Using Supervised Machine Learning Algorithms

The data in the healthcare industry is vast and sensitive and must be managed carefully. Among the many serious diseases spreading rapidly worldwide, pancreatitis is one, and medical professionals need a reliable prediction system to diagnose it. Extracting useful information from data that has been examined from diverse perspectives with various machine learning methods, and grouping the required information, is a difficult task. Applying data mining methods to large, accessible datasets can provide users with the required information. Pancreatitis contributes to infection, kidney failure, breathing problems, diabetes, malnutrition, and pancreatic cancer, so mining pancreatitis data efficiently is a crucial concern. An outcome feature must be predicted from a dataset in which the outcome takes only two values, 0 or 1: 0 indicates that the patient has acute pancreatitis and 1 indicates that the patient may have chronic pancreatitis. The aim is therefore to predict this outcome feature with high accuracy on the test dataset using classification algorithms. To achieve this, the data must first be understood; diverse classification techniques can then be evaluated and the model that yields the highest accuracy selected.
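A minimal sketch of this classifier-comparison workflow, assuming a tabular pancreatitis dataset in a file named pancreatitis.csv with a binary "outcome" column (both the file name and column name are hypothetical, not from the abstract), might look like the following with scikit-learn:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical dataset: numeric features plus a binary "outcome" column (0 = acute, 1 = chronic)
df = pd.read_csv("pancreatitis.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]

# Hold out a test set for the final accuracy comparison
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Fit several classifiers and keep the one with the highest test accuracy
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)
print(scores, "best model:", best)
```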

2021 ◽  
Author(s):  
Mohamed Ibrahim Mohamed ◽  
Dinesh Mehta ◽  
Erdal Ozkan

Abstract Determining the closure pressure is crucial for optimal hydraulic fracturing design and successful execution of a fracturing treatment. Historically, diagnostic tests run before the main fracturing treatment have advanced significantly as a way to gain information about the pattern of fracture propagation and fluid performance and thereby optimize the design. The goal is to inject a small volume of fracturing fluid to break down the formation and create a small fracture geometry; once pumping stops, the pressure decline is analyzed to observe fracture closure. Many analytical methods, such as the G-function and the square root of time, have been developed to determine the fracture closure pressure. There are, however, cases in which determining the closure pressure is difficult, and personal bias and field experience make it challenging to interpret changes in the pressure-derivative slope and identify fracture closure. These conditions include: high-permeability reservoirs, where fracture closure occurs very quickly because of rapid fluid leakoff; extremely low-permeability reservoirs, which require a long shut-in time for the fluid to leak off before the fracture closure pressure can be determined; and non-ideal fluid leak-off behavior under complex conditions. The objective of this study is to apply machine learning methods, implementing a predesigned algorithm that executes the required tasks and predicts the fracture closure pressure while minimizing the shortcomings of determining closure pressure under non-ideal or subjective conditions. This paper demonstrates training different supervised machine learning algorithms to predict fracture closure pressure. The workflow uses the datasets to train and optimize the models, which are then used to predict the closure pressure of the testing data. The output results are compared with actual results from more than 120 DFIT data points. We further propose an integrated approach to feature selection and dataset processing and study the effects of data processing on the success of the model prediction. The results of this study limit the subjectivity of the interpretation and the need for experienced personnel to interpret the data. We speculate that linear regression and MLP neural network algorithms can yield high scores in the prediction of fracture closure pressure.
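A hedged sketch of the supervised-regression part of such a workflow, assuming a DFIT feature matrix X (e.g., treatment and reservoir parameters) and analyst-picked closure pressures y, neither of which is specified in the abstract and both of which are replaced here by synthetic placeholders, could look like:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Placeholder DFIT dataset: rows are tests, columns are engineered features,
# y is the closure pressure used as the training label.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Compare a linear regression and an MLP regressor, as suggested in the abstract
models = {
    "linear_regression": make_pipeline(StandardScaler(), LinearRegression()),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "R^2 on test data:", round(r2_score(y_test, model.predict(X_test)), 3))
```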


Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7714
Author(s):  
Ha Quang Man ◽  
Doan Huy Hien ◽  
Kieu Duy Thong ◽  
Bui Viet Dung ◽  
Nguyen Minh Hoa ◽  
...  

The test study area is the Miocene reservoir of the Nam Con Son Basin, offshore Vietnam. In the study we used unsupervised learning to automatically cluster hydraulic flow units (HU) based on flow zone indicators (FZI) in a core plug dataset. We then applied supervised learning to predict HU by combining core and well log data, testing several machine learning algorithms. In the first phase, we derived hydraulic flow unit clusters from the porosity and permeability of core data using unsupervised machine learning methods such as Ward's hierarchical clustering, K-means, Self-Organizing Maps (SOM) and Fuzzy C-means (FCM). We then applied supervised machine learning methods including Artificial Neural Networks (ANN), Support Vector Machines (SVM), Boosted Trees (BT) and Random Forests (RF), combining core and log data to predict HU logs over the full well sections of wells without core data. We used four wells with six logs (GR, DT, NPHI, LLD, LSS and RHOB) and 578 core samples from the Miocene reservoir to train, validate and test the models. Our goal was to show that the correct combination of core and well log data provides reservoir engineers with a tool for HU classification and estimation of permeability in a continuous geological profile. Our research showed that machine learning effectively boosts the prediction of permeability, reduces uncertainty in reservoir modeling, and improves project economics.
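As an illustration of the unsupervised step, the sketch below computes flow zone indicators from core porosity and permeability using the standard FZI definition (RQI = 0.0314 * sqrt(k/phi), phi_z = phi/(1 - phi), FZI = RQI/phi_z) and clusters them with K-means; the array contents are placeholders and the number of flow units is an assumption, not a result from the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder core data: porosity (fraction) and permeability (mD)
phi = np.array([0.12, 0.18, 0.22, 0.15, 0.25, 0.10])
k = np.array([1.5, 25.0, 120.0, 8.0, 300.0, 0.5])

# Standard flow zone indicator computation
rqi = 0.0314 * np.sqrt(k / phi)     # reservoir quality index
phi_z = phi / (1.0 - phi)           # normalized porosity
fzi = rqi / phi_z                   # flow zone indicator

# Cluster log(FZI) into an assumed number of hydraulic flow units
n_hu = 3
labels = KMeans(n_clusters=n_hu, n_init=10, random_state=0).fit_predict(
    np.log10(fzi).reshape(-1, 1)
)
print(list(zip(fzi.round(2), labels)))
```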


Neurosurgery ◽  
2020 ◽  
Author(s):  
Nicolai Maldaner ◽  
Anna M Zeitlberger ◽  
Marketa Sosnova ◽  
Johannes Goldberg ◽  
Christian Fung ◽  
...  

Abstract BACKGROUND Current prognostic tools in aneurysmal subarachnoid hemorrhage (aSAH) are constrained by being based primarily on patient and disease characteristics on admission. OBJECTIVE To develop and validate a complication- and treatment-aware outcome prediction tool in aSAH. METHODS This cohort study included data from an ongoing prospective nationwide multicenter registry of all aSAH patients in Switzerland (Swiss SOS [Swiss Study on aSAH]; 2009-2015). We trained supervised machine learning algorithms to predict a binary outcome at discharge (modified Rankin scale [mRS] ≤ 3: favorable; mRS 4-6: unfavorable). Clinical and radiological variables on admission (“Early” model) as well as additional variables on secondary complications and disease management (“Late” model) were used. Performance of both models was assessed by classification performance metrics on an out-of-sample test dataset. RESULTS Favorable functional outcome at discharge was observed in 1156 (62.0%) of 1866 patients. Both models achieved a high accuracy of 75% to 76% on the test set. The “Late” outcome model outperformed the “Early” model with an area under the receiver operating characteristic curve (AUC) of 0.85 vs 0.79, corresponding to a specificity of 0.81 vs 0.70 and a sensitivity of 0.71 vs 0.79, respectively. CONCLUSION Both machine learning models show good discrimination and calibration, confirmed on application to an internal test dataset of patients with a wide range of disease severity treated in different institutions within a nationwide registry. Our study indicates that the inclusion of variables reflecting the clinical course of the patient may yield outcome predictions with superior predictive power compared to a model based on admission data only.
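A minimal sketch of the kind of out-of-sample evaluation described above (AUC, sensitivity, and specificity for an "early" versus a "late" feature set), using synthetic data and a logistic regression as stand-ins for the registry and the actual algorithms, might be:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in: "late" features are the "early" ones plus extra columns
# reflecting secondary complications and disease management.
rng = np.random.default_rng(1)
n = 1866
X_early = rng.normal(size=(n, 10))
X_extra = rng.normal(size=(n, 5))
X_late = np.hstack([X_early, X_extra])
logit = X_early[:, 0] + 0.8 * X_extra[:, 0]
y = (logit + rng.normal(size=n) > 0).astype(int)   # 1 = unfavorable outcome

for name, X in {"early": X_early, "late": X_late}.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(name,
          "AUC:", round(roc_auc_score(y_te, prob), 3),
          "sensitivity:", round(tp / (tp + fn), 3),
          "specificity:", round(tn / (tn + fp), 3))
```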


2021 ◽  
Vol 12 ◽  
Author(s):  
Jia-Wei Tang ◽  
Qing-Hua Liu ◽  
Xiao-Cong Yin ◽  
Ya-Cheng Pan ◽  
Peng-Bo Wen ◽  
...  

Raman spectroscopy (RS) is a widely used analytical technique based on the detection of molecular vibrations in a defined system, which generates Raman spectra containing unique and highly resolved fingerprints of that system. However, the low intensity of the normal Raman scattering effect greatly hinders its application. The recently emerged surface-enhanced Raman spectroscopy (SERS) technique overcomes this problem by mixing metal nanoparticles such as gold and silver with the samples, which enhances the signal intensity of the Raman effect by orders of magnitude compared with regular RS. In clinical and research laboratories, SERS offers great potential for fast, sensitive, label-free, and non-destructive microbial detection and identification with the assistance of appropriate machine learning (ML) algorithms. However, choosing an appropriate algorithm for a specific group of bacterial species remains challenging, because not all algorithms achieve relatively high accuracy on the large volumes of data generated during SERS analysis. In this study, we compared three unsupervised and 10 supervised machine learning methods on 2,752 SERS spectra from 117 Staphylococcus strains belonging to nine clinically important Staphylococcus species, in order to test the capacity of different machine learning methods for rapid bacterial differentiation and accurate prediction. According to the results, density-based spatial clustering of applications with noise (DBSCAN) showed the best clustering capacity (Rand index 0.9733), while a convolutional neural network (CNN) topped all other supervised machine learning methods as the best model for predicting Staphylococcus species from SERS spectra (ACC 98.21%, AUC 99.93%). Taken together, this study shows that machine learning methods are capable of distinguishing closely related Staphylococcus species and therefore have great potential for bacterial pathogen diagnosis in clinical settings.
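A sketch of the unsupervised comparison step, assuming the SERS spectra have been loaded as a matrix (one row per spectrum) and reduced with PCA before clustering, which is an assumption about preprocessing rather than a detail given in the abstract, might be:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN
from sklearn.metrics import rand_score

# Placeholder spectra: rows are SERS spectra, columns are Raman shift bins;
# true_species holds the known species label for each spectrum.
rng = np.random.default_rng(2)
spectra = rng.normal(size=(300, 1000))
true_species = rng.integers(0, 9, size=300)

# Standardize, reduce dimensionality, then cluster with DBSCAN
X = StandardScaler().fit_transform(spectra)
X_pca = PCA(n_components=20, random_state=2).fit_transform(X)
labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(X_pca)

# Compare the clustering against the known species labels with the Rand index
print("Rand index:", round(rand_score(true_species, labels), 4))
```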


2019 ◽  
Vol 143 (8) ◽  
pp. 990-998 ◽  
Author(s):  
Min Yu ◽  
Lindsay A. L. Bazydlo ◽  
David E. Bruns ◽  
James H. Harrison

Context.— Turnaround time and productivity of clinical mass spectrometric (MS) testing are hampered by time-consuming manual review of the analytical quality of MS data before release of patient results. Objective.— To determine whether a classification model created by using standard machine learning algorithms can verify analytically acceptable MS results and thereby reduce manual review requirements. Design.— We obtained retrospective data from gas chromatography–MS analyses of 11-nor-9-carboxy-delta-9-tetrahydrocannabinol (THC-COOH) in 1267 urine samples. The data for each sample had been labeled previously as either analytically unacceptable or acceptable by manual review. The dataset was randomly split into training and test sets (848 and 419 samples, respectively), maintaining equal proportions of acceptable (90%) and unacceptable (10%) results in each set. We used stratified 10-fold cross-validation in assessing the abilities of 6 supervised machine learning algorithms to distinguish unacceptable from acceptable assay results in the training dataset. The classifier with the highest recall was used to build a final model, and its performance was evaluated against the test dataset. Results.— In comparison testing of the 6 classifiers, a model based on the Support Vector Machines algorithm yielded the highest recall and acceptable precision. After optimization, this model correctly identified all unacceptable results in the test dataset (100% recall) with a precision of 81%. Conclusions.— Automated data review identified all analytically unacceptable assays in the test dataset, while reducing the manual review requirement by about 87%. This automation strategy can focus manual review only on assays likely to be problematic, allowing improved throughput and turnaround time without reducing quality.
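A minimal sketch of the model-selection step described above (stratified 10-fold cross-validation on the training set, scoring candidate classifiers by recall for the unacceptable class), with a synthetic stand-in for the labeled chromatogram features, might be:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: peak-quality features per assay, label 1 = analytically
# unacceptable (roughly 10% of samples, as in the study).
rng = np.random.default_rng(3)
X = rng.normal(size=(848, 12))
y = (X[:, 0] + 0.3 * rng.normal(size=848) > 1.28).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=3)
candidates = {
    "svm": make_pipeline(StandardScaler(), SVC(class_weight="balanced")),
    "logistic_regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000, class_weight="balanced")
    ),
}
# Recall on the unacceptable class is the selection criterion
for name, model in candidates.items():
    recalls = cross_val_score(model, X, y, cv=cv, scoring="recall")
    print(name, "mean recall:", round(recalls.mean(), 3))
```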


Generally, the most complicated task in the healthcare field is the diagnosis of the disease itself. The diagnosis phase of disease detection is usually the most time-consuming and the most error-prone. Such complications can be handled effectively if the disease detection process is automated with machine learning algorithms trained on benchmark datasets. It should also be noted that large amounts of data acquired from heart specialty hospitals go unused every year. In this paper, various classification algorithms are used to train a machine to diagnose heart disease. Through a comparative study of several learning models, described below, we identify the model best suited to the heart disease dataset. The work begins with an overview of the machine learning algorithms considered, followed by the algorithmic comparison.
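One way to run such a comparison, sketched here under the assumption of a heart disease feature table in heart.csv with a binary "target" column (both names are hypothetical), is to cross-validate several scikit-learn classifiers on the same folds:

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Hypothetical dataset: clinical features plus a binary "target" column
df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]

# Identical folds for every model keep the comparison fair
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)
models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "decision_tree": DecisionTreeClassifier(random_state=7),
    "svm": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {acc.mean():.3f}")
```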


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1209
Author(s):  
Philip Cho ◽  
Aihua Wood ◽  
Krishnamurthy Mahalingam ◽  
Kurt Eyink

Point defects play a fundamental role in the discovery of new materials due to their strong influence on material properties and behavior. At present, imaging techniques based on transmission electron microscopy (TEM) are widely employed for characterizing point defects in materials. However, current methods for defect detection predominantly involve visual inspection of TEM images, which is laborious and poses difficulties in materials where defect-related contrast is weak or ambiguous. Recent efforts to develop machine learning methods for the detection of point defects in TEM images have focused on supervised methods that require labeled training data generated via simulation. Motivated by a desire for machine learning methods that can be trained on experimental data, we propose two self-supervised machine learning algorithms that are trained solely on defect-free images. Our proposed methods use principal component analysis (PCA) and convolutional neural networks (CNN) to analyze a TEM image and predict the location of a defect. Using simulated TEM images, we show that PCA can be used to accurately locate point defects in the case where there is no imaging noise. In the case where there is imaging noise, we show that incorporating a CNN dramatically improves model performance. Our models rely on a novel approach that uses the residual between a TEM image and its PCA reconstruction.
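The PCA-residual idea can be sketched as follows: fit PCA on defect-free image patches, reconstruct a new patch, and flag the pixel with the largest reconstruction residual as the candidate defect location. The patch size, number of components, and synthetic images below are assumptions for illustration, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for defect-free TEM patches (flattened 32x32 images)
rng = np.random.default_rng(4)
clean = rng.normal(size=(500, 32 * 32))

# Fit PCA on defect-free data only
pca = PCA(n_components=50, random_state=4).fit(clean)

# A test patch with a simulated point defect: a localized intensity spike
test = rng.normal(size=32 * 32)
defect_index = 32 * 16 + 16            # center pixel of the patch
test[defect_index] += 8.0

# The residual between the patch and its PCA reconstruction highlights the defect
recon = pca.inverse_transform(pca.transform(test.reshape(1, -1))).ravel()
residual = np.abs(test - recon)
predicted = int(residual.argmax())
print("predicted defect pixel:", predicted, "true defect pixel:", defect_index)
```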


2020 ◽  
Vol 14 (2) ◽  
pp. 140-159
Author(s):  
Anthony-Paul Cooper ◽  
Emmanuel Awuni Kolog ◽  
Erkki Sutinen

This article builds on previous research exploring the content of church-related tweets. It does so by examining whether the qualitative thematic coding of such tweets can, in part, be automated by the use of machine learning. It compares three supervised machine learning algorithms to understand how useful each algorithm is at a classification task, based on a dataset of human-coded church-related tweets. The study finds that one such algorithm, Naïve Bayes, performs better than the other algorithms considered, returning Precision, Recall and F-measure values that each exceed an acceptable threshold of 70%. This has far-reaching consequences at a time when the high volume of social media data, in this case Twitter data, means that the resource intensity of manual coding approaches can act as a barrier to understanding how the online community interacts with, and talks about, church. The findings presented in this article offer a way forward for scholars of digital theology to better understand the content of online church discourse.
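A minimal sketch of this kind of tweet classification, assuming a list of human-coded tweets and their theme labels (the example texts and labels below are placeholders, not the study's coding scheme), might use a TF-IDF representation with a multinomial Naïve Bayes classifier:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder human-coded tweets and thematic labels
tweets = [
    "Looking forward to the service at church this Sunday",
    "Grateful for the church community supporting local families",
    "Does anyone know what time the church coffee morning starts?",
    "The sermon today really made me think about forgiveness",
] * 25
labels = ["event", "community", "event", "reflection"] * 25

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.25, stratify=labels, random_state=5
)

# TF-IDF features feeding a multinomial Naive Bayes classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(X_train, y_train)

# Precision, recall, and F-measure per theme, as reported in the study
print(classification_report(y_test, model.predict(X_test)))
```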

