Using a Machine Learning Logistic Regression Algorithm to Classify Nanomedicine Clinical Trials in a Known Repository

Author(s):  
Charles M. Pérez-Espinoza ◽  
Nuvia Beltran-Robayo ◽  
Teresa Samaniego-Cobos ◽  
Abel Alarcón-Salvatierra ◽  
Ana Rodriguez-Mendez ◽  
...  
2021 ◽  
Vol 2083 (3) ◽  
pp. 032059
Author(s):  
Qiang Chen ◽  
Meiling Deng

Abstract Regression algorithms are widely used in machine learning. This work studies the combination of regression, a key current technology, with encryption-based privacy-protection methods, and proposes a PPLAR-based algorithm. The correlation between data items is obtained with the logistic regression formula. The algorithm is distributed and parallelized on the Hadoop platform to improve the computing speed of the cluster while keeping the algorithm's mean absolute error under control.
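The distribution scheme the abstract describes can be sketched as a map-reduce style logistic regression: each node computes a partial gradient on its data partition and the partial gradients are summed before the descent step. This is a minimal single-process sketch of that pattern with hypothetical toy data, not the paper's PPLAR algorithm (which additionally involves encryption):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def partial_gradient(w, X_chunk, y_chunk):
    """'Map' step: logistic-loss gradient on one data partition."""
    preds = sigmoid(X_chunk @ w)
    return X_chunk.T @ (preds - y_chunk)

def distributed_gradient_step(w, chunks, lr=0.1):
    """'Reduce' step: sum partial gradients, then take one descent step."""
    grad = sum(partial_gradient(w, X, y) for X, y in chunks)
    n = sum(len(y) for _, y in chunks)
    return w - lr * grad / n

# Toy data split across two simulated "nodes"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
chunks = [(X[:50], y[:50]), (X[50:], y[50:])]

w = np.zeros(3)
for _ in range(200):
    w = distributed_gradient_step(w, chunks)
```

On a real Hadoop cluster the `partial_gradient` calls would run as mappers over data splits, with a reducer performing the summation.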


Scientific knowledge and electronic devices are growing day by day, and many expert systems in the healthcare industry now rely on machine learning algorithms. Deep neural networks often outperform classical machine learning techniques and can consume raw, unrefined data directly to compute the target output. Deep learning, or feature learning, focuses on the features that matter most and gives a more complete understanding of the generated model. The existing methodology used a data mining technique (a rule-based classification algorithm) and a machine learning algorithm (a hybrid logistic regression algorithm) to preprocess data and extract meaningful insights; these approaches, however, require supervised (labelled) data. The proposed work is based on unsupervised data, i.e., data without labels, and deploys deep neural techniques to obtain the target output. The machine learning algorithms are compared with the proposed deep learning techniques, implemented in TensorFlow and Keras, in terms of accuracy. The deep learning methodology outperforms the existing rule-based classification and hybrid logistic regression algorithms in accuracy. The designed methodology is tested on the public MIT-BIH arrhythmia database, classifying four kinds of abnormal beats. The proposed deep-learning-based approach offered better performance, improving on state-of-the-art machine learning approaches.
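The advantage the abstract claims for deep networks over logistic regression can be illustrated on the classic XOR problem: a logistic regression (a single sigmoid unit) cannot separate XOR, while one hidden layer can. This is a minimal NumPy sketch of such a network, not the paper's MIT-BIH model; architecture and hyperparameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units, one sigmoid output
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # hidden activations
    out = sigmoid(h @ W2 + b2)             # network output
    d_out = out - y                        # cross-entropy gradient wrt output logit
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated hidden gradient
    W2 -= lr * h.T @ d_out / 4; b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / 4;  b1 -= lr * d_h.mean(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

A Keras model as used in the paper would express the same two-layer structure with `Dense` layers; the NumPy version just makes the backpropagation explicit.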


Author(s):  
Abdul Karim ◽  
Azhari Azhari ◽  
Samir Brahim Belhaouri ◽  
Ali Adil Qureshi

The fact is quite transparent that almost everybody around the world is using Android apps: half the population of this planet is engaged with messaging, social media, gaming, and browsers. This online marketplace provides free and paid access to users; on the Google Play store, users can download countless applications belonging to predefined categories. In this research paper, we scraped thousands of user reviews and app ratings: reviews of 148 apps from 14 categories, for a total of 506,259 reviews collected from the Google Play store. We then checked the semantics of users' reviews of these applications to determine whether each review is positive, negative, or neutral. We evaluated the results using different machine learning algorithms, namely Naïve Bayes, Random Forest, and the Logistic Regression algorithm. We calculated Term Frequency (TF) and Inverse Document Frequency (IDF) features and compared the statistical results of these algorithms on accuracy, precision, recall, and F1, visualizing the results as bar charts. The analysis of each algorithm was performed one by one, and the results were compared. Eventually, we found Logistic Regression to be the best algorithm for review analysis of the Google Play store: it achieved the best precision, accuracy, recall, and F1 scores on this dataset both after preprocessing and on the raw collected data.
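The TF and IDF quantities the abstract mentions can be computed directly. This is a minimal sketch on a hypothetical three-review corpus (the real study used 506,259 reviews and would typically use a vectorizer library): a term appearing in few reviews gets a high IDF weight, while a term common to many reviews is down-weighted.

```python
import math
from collections import Counter

# Hypothetical tokenized reviews, standing in for scraped Play-store text
docs = [
    "great app love it".split(),
    "bad app crashes".split(),
    "love this great game".split(),
]

N = len(docs)
# Document frequency: in how many reviews does each term occur?
df = Counter(term for doc in docs for term in set(doc))

def tf_idf(term, doc):
    tf = doc.count(term) / len(doc)   # term frequency within one review
    idf = math.log(N / df[term])      # inverse document frequency
    return tf * idf

# "crashes" occurs in only one review, so it is weighted highly there;
# "app" occurs in two of three reviews, so its weight is low.
rare_weight = tf_idf("crashes", docs[1])
common_weight = tf_idf("app", docs[1])
```

The resulting TF-IDF vectors are what the compared classifiers (Naïve Bayes, Random Forest, Logistic Regression) would consume as features.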


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Shouyun Lv ◽  
Shizong Li ◽  
Zhiwei Yu ◽  
Kaiqiong Wang ◽  
Xin Qiao ◽  
...  

To support better research on hepatocellular carcinoma resection, this paper used 3D machine learning and a logistic regression algorithm to study preoperative assistance for patients undergoing hepatectomy. In this study, a logistic regression model was analyzed to find the factors influencing patient survival and recurrence. The clinical data of 50 HCC patients who underwent extensive hepatectomy (≥4 liver segments), admitted to our hospital from June 2020 to December 2020, were used to calculate liver volume, simulated surgical resection volume, residual liver volume, surgical margin, etc. The results showed that the simulated liver volume of the 50 patients was 845.2 ± 285.5 mL and the actual liver volume was 826.3 ± 268.1 mL, with no significant difference between the two (t = 0.425; P > 0.05). Compared with the logistic regression model, the machine learning method has a better predictive performance, but the logistic regression model is more interpretable. Analyzing the relationship between the liver tumour and the hepatic vessels in practical problems has specific clinical application value for accurately evaluating liver resection volume and surgical margin.
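The simulated-versus-actual volume comparison the abstract reports is a two-sample t-test. This is a minimal sketch of the Welch t statistic on hypothetical volume lists (not the study's per-patient data, which is not given); a small |t| indicates no significant difference, as in the reported t = 0.425:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic allowing unequal variances (Welch's test)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical liver volumes in mL, for illustration only
simulated = [830, 860, 845, 850, 840]
actual    = [825, 855, 840, 848, 838]
t = welch_t(simulated, actual)
```

In practice the t statistic would be compared against the t distribution with Welch-Satterthwaite degrees of freedom to obtain the P value.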


Author(s):  
Umniy Salamah

Predictions indicate that the number of people with diabetes will increase, reducing the ratio between eye-care service providers and the number of patients. One alternative to address this problem is to provide an early-detection service for the current eye health of diabetic patients. Retinal damage can be detected with the help of a machine learning algorithm, logistic regression. The justification for selecting the logistic regression algorithm for retina damage detection in this research is that it has been widely used in a variety of machine learning problems, where LR can describe a response variable with one or more predictor variables well. The research methodology comprised five phases, namely preparation, feature extraction, normalization, classification, and evaluation, processing a dataset of digital fundus images provided by EyePACS, using scikit-learn as the machine learning library and Python as the programming language. As a result, we found that the accuracy of retina damage detection using logistic regression is 0.7392, with an F1-score of 0.6317, Recall of 0.7392, Precision of 0.6043, and Kappa of 0.0051.
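Of the metrics reported above, Cohen's kappa is the least familiar: it measures agreement between predictions and labels beyond what chance alone would produce (which is why it can be near zero, as here, even when accuracy looks reasonable). A minimal sketch on hypothetical labels:

```python
def cohen_kappa(y_true, y_pred):
    """Agreement between labels and predictions, corrected for chance."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n  # raw agreement
    p_exp = sum(                                             # chance agreement
        (y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical binary labels, for illustration only
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
kappa = cohen_kappa(y_true, y_pred)
```

Here raw agreement is 6/8 = 0.75 and chance agreement is 0.5, giving kappa = 0.5; scikit-learn's `cohen_kappa_score` computes the same quantity.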


Author(s):  
Qinghong Yang ◽  
Xiangquan Hu ◽  
Zhichao Cheng ◽  
Kang Miao

MIOOs are orders created temporarily for the purpose of occupying sellers' inventories. MIOOs disrupt normal business activities and harm both sellers and consumers. This study aims to determine the best practice and model among the technical solutions that can effectively and systematically limit malicious inventory-occupied orders (MIOOs), using analytical mining and case studies. This work makes three contributions. First, it addresses the MIOO problem using machine learning technology; the results indicate that 93% of MIOOs in the sample data are in fact predictable and preventable. Second, it presents a methodology for solving the MIOO problem that other companies can apply, consisting of four major steps: computing statistics on MIOOs, training a model with the logistic regression algorithm, optimizing the model, and applying the model. Finally, the work identifies unique features of MIOOs that help in understanding the underlying logic of MIOO producers.
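The final "apply the model" step amounts to scoring each incoming order with the trained logistic model and flagging high-probability MIOOs. This is a minimal sketch with hand-set hypothetical feature names and weights (real weights would come from the training step on labelled orders):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature weights; in the paper these would be learned
# by the logistic regression training step.
WEIGHTS = {"cancel_rate": 2.0, "orders_per_hour": 1.5, "paid_ratio": -3.0}
BIAS = -1.0

def mioo_score(order_features):
    """Probability that an order is a MIOO under the logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in order_features.items())
    return sigmoid(z)

# An account that cancels almost everything and rarely pays scores high;
# a normal buyer scores low.
suspicious = {"cancel_rate": 0.9, "orders_per_hour": 2.0, "paid_ratio": 0.05}
normal     = {"cancel_rate": 0.1, "orders_per_hour": 0.2, "paid_ratio": 0.95}
```

Orders whose score exceeds a chosen threshold would then be held or rejected before they tie up inventory.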


Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 490
Author(s):  
Cristián Castillo-Olea ◽  
Roberto Conte-Galván ◽  
Clemente Zuñiga ◽  
Alexandra Siono ◽  
Angelica Huerta ◽  
...  

Background: The current pandemic caused by SARS-CoV-2 is an acute illness of global concern. COVID-19 is an infectious disease caused by the recently discovered coronavirus SARS-CoV-2. Most people who get sick from COVID-19 experience mild, moderate, or severe symptoms. To help make quick decisions regarding treatment and isolation needs, it is useful to determine which significant variables indicate infection cases in the population served by the Tijuana General Hospital (Hospital General de Tijuana). An Artificial Intelligence (Machine Learning) mathematical model was developed to identify early-stage significant variables in COVID-19 patients. Methods: The individual characteristics of the study subjects included age, gender, age group, symptoms, comorbidities, diagnosis, and outcomes. A mathematical model using supervised learning algorithms was developed, allowing the identification of the significant variables that predict the diagnosis of COVID-19 with high precision. Results: Automatic algorithms were used to analyze the data: for Systolic Arterial Hypertension (SAH), the Logistic Regression algorithm achieved 91.0% area under the ROC curve (AUC), 80% accuracy (CA), 80% F1, 80% recall, and 80.1% precision for the selected variables, while for Diabetes Mellitus (DM) the Logistic Regression algorithm obtained 91.2% AUC, 89.2% accuracy, 88.8% F1, 89.7% precision, and 89.2% recall for the selected variables. The neural network algorithm showed better results for patients with obesity, obtaining 83.4% AUC, 91.4% accuracy, 89.9% F1, 90.6% precision, and 91.4% recall. Conclusions: Statistical analyses revealed that the most significant predictive symptoms in patients with SAH, DM, and obesity were fatigue and myalgias/arthralgias, while the third dominant symptom in people with SAH and DM was odynophagia.
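The AUC values reported above have a simple rank interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch on hypothetical scores (not the study's data):

```python
def auc(y_true, scores):
    """ROC AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and model scores, for illustration only
y_true = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2]
result = auc(y_true, scores)
```

Here 8 of the 9 positive-negative pairs are ranked correctly, so the AUC is 8/9 ≈ 0.889; scikit-learn's `roc_auc_score` gives the same value.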


2020 ◽  
Vol 8 (6) ◽  
pp. 5056-5060

Safety of women has become a major issue in India; especially at night, women think a lot before coming out of their homes. We daily come across news of women subjected to violence and harassment or molested in public areas. This paper focuses on helping women so that they never feel alone in any situation. The idea of the project is to predict whether a given place at a given time is safe for a woman to go to or not. There are many pre-existing applications that are useful in crisis situations, but in some situations a woman in trouble is not able to use them, and there are also many rehabilitation centres that help only after an incident has happened. Our proposed model instead helps women take precautions so that they never end up in such a situation. For this idea we used machine learning, which trains on data and makes quality predictions by recognizing the patterns in it. We applied different algorithms, namely Naïve Bayes, K-Nearest Neighbours, and Logistic Regression models; logistic regression was the best fit among these machine learning algorithms and more effective than the others. In this paper, we used the Logistic Regression algorithm of the Sklearn machine learning library to classify the dataset. Information about a set of areas in Tamil Nadu was collected and used in our project. When a woman wants to go out alone for personal or financial work without knowing any safety details about the place she is going to, our application is all the more helpful.
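The scikit-learn classifier the abstract names can be sketched as follows. The feature encoding here is hypothetical (the paper does not describe its features); the point is only the shape of the `fit`/`predict` workflow:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical encoded place/time records: [hour_of_day, incident_reports,
# street_lighting]; labels: 0 = unsafe, 1 = safe. Illustrative data only.
X = [[22, 9, 0], [10, 1, 1], [23, 8, 0], [9, 0, 1], [21, 7, 0], [11, 2, 1]]
y = [0, 1, 0, 1, 0, 1]

clf = LogisticRegression().fit(X, y)
# Query two new (place, time) records
pred = clf.predict([[22, 8, 0], [10, 1, 1]])
```

A deployed application would look up the encoded features for the place and time the user asks about and return the predicted safety label.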


2019 ◽  
Author(s):  
Oskar Flygare ◽  
Jesper Enander ◽  
Erik Andersson ◽  
Brjánn Ljótsson ◽  
Volen Z Ivanov ◽  
...  

**Background:** Previous attempts to identify predictors of treatment outcomes in body dysmorphic disorder (BDD) have yielded inconsistent findings. One way to increase precision and clinical utility could be to use machine learning methods, which can incorporate multiple non-linear associations in prediction models. **Methods:** This study used a random forests machine learning approach to test if it is possible to reliably predict remission from BDD in a sample of 88 individuals that had received internet-delivered cognitive behavioral therapy for BDD. The random forest models were compared to traditional logistic regression analyses. **Results:** Random forests correctly identified 78% of participants as remitters or non-remitters at post-treatment. The accuracy of prediction was lower in subsequent follow-ups (68%, 66% and 61% correctly classified at 3-, 12- and 24-month follow-ups, respectively). Depressive symptoms, treatment credibility, working alliance, and initial severity of BDD were among the most important predictors at the beginning of treatment. By contrast, the logistic regression models did not identify consistent and strong predictors of remission from BDD. **Conclusions:** The results provide initial support for the clinical utility of machine learning approaches in the prediction of outcomes of patients with BDD. **Trial registration:** ClinicalTrials.gov ID: NCT02010619.

