This paper proposes an optimization strategy for the supplier selection process. Based on recent literature reviews, the paper assumes a set of commonly used variables for selecting suppliers and applies a logistic regression algorithm to build an optimization model that learns from customer requirements and supplier data, then makes predictions and recommendations for the best suppliers. The supplier selection process can quickly become a complex task for decision-makers as the supplier base list grows. The logistic regression technique eases the process by efficiently matching customer requirements against the entire supplier base list and predicting a list of potential suppliers that meet the actual requirements. The selected suppliers make up the recommendation list of best suppliers for the given requirements. Finally, graphical representations are given to showcase the framework analysis, variable selection, and other illustrations of the model analysis.
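A minimal sketch of the idea, assuming hypothetical supplier variables (price, quality, on-time delivery rate) and synthetic labels rather than the paper's actual data: a logistic regression model is fitted to historical supplier outcomes, then the whole supplier base is scored and the top candidates form the recommendation list.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical supplier variables: price score, quality score, on-time delivery rate
n = 500
X = rng.uniform(0, 1, size=(n, 3))
# Synthetic ground truth for the sketch: low price, high quality, and reliable
# delivery tend to mark a "best" supplier
y = (0.2 * (1 - X[:, 0]) + 0.5 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 0.1, n)) > 0.55

clf = LogisticRegression().fit(X, y)

# Score the entire supplier base and recommend the five most likely matches
probs = clf.predict_proba(X)[:, 1]
recommended = np.argsort(probs)[::-1][:5]
```

In practice the feature matrix would be built from the customer's stated requirements joined with each supplier's data, but the ranking step is the same.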
Introduction: During the current epidemic, a fundamental problem in education is affecting the entire globe: teaching and learning have moved online for students. Students' academic performance must be forecast so that instructors can better identify at-risk pupils and have a proactive opportunity to develop additional resources that maximize students' chances of graduation. Students' academic achievement in higher education has been extensively studied to address academic deficiencies, rising drop-out rates, graduation delays, and other difficult questions. Simply put, student performance refers to the extent to which short- and long-term educational objectives are met. Academics nonetheless judge student achievement from different viewpoints, from grades and grade point averages (GPAs) to prospective employment. The literature encompasses numerous computational attempts to improve student performance in schools and colleges, primarily through data mining and learning analytics. However, there is still no consensus on the efficiency of current intelligent techniques and models. Method: This study employs multiple machine learning methods to forecast student progress. Five integrated classification algorithms were created to forecast students' academic success with accurate data-sample prediction: support vector machines, decision trees, the perceptron algorithm, logistic regression, and random forests. Results: Students' academic achievement was reviewed and assessed, and the performance of the five machine learning methods mentioned in Section 4 is discussed here. First, we displayed the data after pre-processing by plotting distributions to profile the data set, then evaluated the five learning methods and described the variables in the data set. The entire series of 480 records was examined.
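A minimal sketch of such a five-model comparison, using scikit-learn stand-ins on synthetic data (the study's actual 480-record student dataset and its features are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data sized like the study's 480-record sample
X, y = make_classification(n_samples=480, n_features=16,
                           n_informative=6, random_state=0)

# The five classifiers named in the abstract
models = {
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Perceptron": Perceptron(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

On real student data, the same loop would run after the pre-processing and distribution checks described above.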
Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansätze in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states, which might indicate the best network to use for a given dataset. We utilize mutual information as a measure of correlations in classical data, and show that it can serve as a lower bound on the entanglement needed for a probabilistic tensor network classifier. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter. This quantum-inspired classical analysis offers insight into the design of tensor networks best suited for specific learning tasks.
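One way a logistic-regression mutual information estimator can be built is via the classifier/density-ratio trick: train a logistic regression to distinguish joint samples from samples with one feature block shuffled, then average the log-odds over the joint samples. This is a sketch under that assumption, not necessarily the paper's exact construction; the bivariate Gaussian test case below has analytic MI of −½ ln(1−ρ²) for comparison.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Correlated bivariate Gaussian; analytic MI = -0.5 * ln(1 - rho^2)
rho = 0.8
n = 20000
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Joint samples vs. product-of-marginals samples (shuffle one column)
joint = xy
marg = np.column_stack([xy[:, 0], rng.permutation(xy[:, 1])])

X = np.vstack([joint, marg])
# Quadratic features so the linear classifier can represent the Gaussian
# log density ratio exactly
feats = np.column_stack([X, X[:, 0] * X[:, 1], X**2])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = LogisticRegression(max_iter=1000).fit(feats, y)

# MI estimate: mean estimated log density ratio over the joint samples
mi_est = clf.decision_function(feats[:n]).mean()
mi_true = -0.5 * np.log(1 - rho**2)
```

Applied to image data, the two feature blocks would be the two sides of a bipartition of pixels rather than two scalar coordinates.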
Objective: To explore the molecular mechanism and search for candidate differentially expressed genes (DEGs) with predictive and prognostic potential that are detectable in the whole blood of patients with ST-segment elevation myocardial infarction (STEMI) and those with post-STEMI heart failure (HF). Methods: In this study, we downloaded the GSE60993, GSE61144, GSE66360, and GSE59867 datasets from the NCBI-GEO database. DEGs in the datasets were investigated using R. Gene ontology (GO) and pathway enrichment analyses were performed via ClueGO, CluePedia, and the DAVID database. A protein interaction network was constructed via STRING. Enriched hub genes were analyzed with Cytoscape software. The least absolute shrinkage and selection operator (LASSO) logistic regression algorithm and receiver operating characteristic analyses were performed to build machine learning models for predicting STEMI. Hub genes were further validated in patients with post-STEMI HF from GSE59867. Results: We identified 90 upregulated and nine downregulated DEGs convergent across the three datasets (|log2FC| ≥ 0.8 and adjusted p < 0.05). They were mainly enriched in GO terms relating to cytokine secretion, pattern recognition receptor signaling pathways, and immune cell activation. A cluster of eight genes, including ITGAM, CLEC4D, SLC2A3, BST1, MCEMP1, PLAUR, GPR97, and MMP25, was found to be significant. A machine learning model built on SLC2A3, CLEC4D, GPR97, PLAUR, and BST1 showed great value for STEMI prediction. Besides, ITGAM and BST1 might be candidate prognostic DEGs for post-STEMI HF. Conclusions: We reanalyzed the integrated transcriptomic signature of patients with STEMI, showed its predictive potential, and revealed new insights and specific prospective DEGs for STEMI risk stratification and HF development.
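A hedged sketch of LASSO (L1-penalized) logistic regression for gene selection: the L1 penalty drives coefficients of uninformative genes to exactly zero, leaving a sparse predictive panel. The expression values, labels, and penalty strength below are synthetic assumptions, not the GEO data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in expression matrix: 120 blood samples x the eight hub genes
genes = ["ITGAM", "CLEC4D", "SLC2A3", "BST1", "MCEMP1", "PLAUR", "GPR97", "MMP25"]
X = rng.normal(size=(120, len(genes)))
# Synthetic STEMI labels driven by a subset of the genes
y = (X[:, 2] + X[:, 1] + X[:, 6] + rng.normal(0, 0.5, 120)) > 0

# L1 (LASSO) penalty; liblinear supports the l1 penalty for logistic regression
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

# Genes with nonzero coefficients survive the selection
selected = [g for g, c in zip(genes, lasso.coef_[0]) if c != 0]
```

In the study's workflow, the surviving panel would then be assessed with ROC analysis on held-out samples.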
People have shown an increasing interest in urban gardening. Irrigation is one of the common methods used to support plant growth. However, the proper irrigation timing is unclear to most people. Moreover, manual irrigation is impossible when people do not have physical access to the plant for a long period of time. Hence, a smart irrigation system using a Raspberry Pi has been proposed to ease irrigation. In this system, three different sensors, measuring moisture, humidity, and temperature, are installed in the soil of the plant. The data collected from the sensors are used to predict whether the plant needs to be watered or not. The system implements a machine learning algorithm called binary logistic regression, using a Python library, and its accuracy was tested. The algorithm predicts the irrigation need with 82% accuracy. The finding from this study is believed to be helpful, as it may contribute to the development of better irrigation systems.
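A minimal sketch of the binary logistic regression step, with synthetic sensor readings and an assumed watering rule standing in for the real labeled data collected by the Raspberry Pi:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic sensor readings: soil moisture (%), humidity (%), temperature (C)
n = 400
moisture = rng.uniform(10, 60, n)
humidity = rng.uniform(30, 90, n)
temp = rng.uniform(15, 40, n)
X = np.column_stack([moisture, humidity, temp])

# Assumed labeling rule for the sketch: dry and hot soil needs water
y = ((moisture < 30) & (temp > 22) | (moisture < 20)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)

# Classify one fresh sensor reading: water (1) or not (0)
needs_water = clf.predict([[18.0, 45.0, 30.0]])[0]
```

On the device, the prediction would trigger the pump; the 82% figure reported above comes from evaluating on real sensor logs rather than this synthetic rule.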
Background: The current pandemic caused by SARS-CoV-2 is an acute illness of global concern. COVID-19 is an infectious disease caused by a recently discovered coronavirus, and most people who contract it experience mild, moderate, or severe symptoms. To help make quick decisions regarding treatment and isolation needs, it is useful to determine which significant variables indicate infection cases in the population served by the Tijuana General Hospital (Hospital General de Tijuana). An artificial intelligence (machine learning) mathematical model was developed to identify early-stage significant variables in COVID-19 patients. Methods: The individual characteristics of the study subjects included age, gender, age group, symptoms, comorbidities, diagnosis, and outcomes. A mathematical model using supervised learning algorithms was developed, allowing the identification of the significant variables that predict the diagnosis of COVID-19 with high precision. Results: Automatic algorithms were used to analyze the data. For Systolic Arterial Hypertension (SAH), the logistic regression algorithm achieved 91.0% area under the ROC curve (AUC), 80% classification accuracy (CA), 80% F1, 80% recall, and 80.1% precision on the selected variables; for Diabetes Mellitus (DM), it obtained 91.2% AUC, 89.2% accuracy, 88.8% F1, 89.7% precision, and 89.2% recall. A neural network algorithm showed better results for patients with obesity, obtaining 83.4% AUC, 91.4% accuracy, 89.9% F1, 90.6% precision, and 91.4% recall. Conclusions: Statistical analyses revealed that the most substantial predictive symptoms in patients with SAH, DM, and obesity were fatigue and myalgias/arthralgias, while the third dominant symptom in people with SAH and DM was odynophagia.
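The reported evaluation metrics (AUC, CA, F1, precision, recall) can be computed as in this sketch; the data here are synthetic stand-ins, not the hospital's patient records.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Stand-in data; the study's variables are patient symptoms and comorbidities
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
pred = clf.predict(Xte)

# The five metrics quoted in the abstract
metrics = {
    "AUC": roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]),
    "CA": accuracy_score(yte, pred),
    "F1": f1_score(yte, pred),
    "Precision": precision_score(yte, pred),
    "Recall": recall_score(yte, pred),
}
```

Note that AUC is computed from predicted probabilities, while the other four metrics use hard class predictions.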
To improve the accuracy of the evaluation results of multi-perception intelligent wearable devices, mathematical-statistical features based on speech, behavior, environment, and physical signs are proposed. First, the PCA feature-compression algorithm was used to reduce the dimensionality of these features, and the differences among different training samples were compared and analyzed; then, three weak classifiers were designed using the logistic regression algorithm; finally, a strong classifier with higher prediction accuracy was designed according to the boosting decision-fusion method and the ensemble learning idea. The results showed that the accuracy of the logistic regression model trained on the PCA-compressed voice features was 0.964, but its recall and cross-validation results dropped noticeably, to 0.844 and 0.846, respectively. The accuracy, precision, and recall of the decision-fusion model based on the boosting method and ensemble learning all reached 0.969, and its K-fold cross-validation prediction accuracy was as high as 0.956; the stacked fusion of the three weak classifiers achieves a better classification effect.
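The PCA-then-boosting pipeline can be sketched as a hand-rolled AdaBoost over three logistic-regression weak learners; the dataset, dimensions, and round count below are assumptions for illustration, not the paper's wearable-sensor data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Step 1: PCA feature compression
pca = PCA(n_components=5).fit(Xtr)
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

# Step 2: AdaBoost-style training of three logistic-regression weak learners
y_pm = 2 * ytr - 1                      # labels in {-1, +1}
w = np.full(len(ytr), 1 / len(ytr))     # uniform initial sample weights
learners, alphas = [], []
for _ in range(3):
    clf = LogisticRegression(max_iter=500).fit(Ztr, ytr, sample_weight=w)
    pred = 2 * clf.predict(Ztr) - 1
    err = w[pred != y_pm].sum()                       # weighted error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # learner weight
    w *= np.exp(-alpha * y_pm * pred)                 # up-weight mistakes
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# Step 3: strong classifier = weighted vote of the weak learners
score = sum(a * (2 * c.predict(Zte) - 1) for a, c in zip(alphas, learners))
strong_acc = (np.sign(score) == 2 * yte - 1).mean()
```

Each boosting round re-weights the training samples toward previous mistakes, which is how the fused model can beat any single weak classifier.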
Natural products are an excellent source of skeletons for drug seeds. Triterpenes and saponins are representative natural products that exhibit anti-herpes simplex virus type 1 (HSV-1) activity. However, comprehensive information on the anti-HSV-1 activity of triterpenes has been lacking; expanding this information and improving the efficiency of its exploration are therefore urgently required. To improve the efficiency of developing anti-HSV-1 active compounds, we constructed a predictive model for the anti-HSV-1 activity of triterpenes, using information obtained from previous studies and machine learning methods. In this study, we built a binary classification model (i.e., active or inactive) using a logistic regression algorithm. In the evaluation of the predictive model, the accuracy on the test data was 0.79 and the area under the curve (AUC) was 0.86. Additionally, to enrich the information on the anti-HSV-1 activity of triterpenes, a plaque reduction assay was performed on 20 triterpenes. Chikusetsusaponin IVa (11: IC50 = 13.06 μM) was found to have potent anti-HSV-1 activity, along with three other potentially anti-HSV-1-active triterpenes. The assay results were further used for external validation of the predictive model: prediction of the test compounds from the activity assay showed high accuracy (0.83) and AUC (0.81). We also found that this predictive model can successfully narrow down active compounds. This study provides more information on the anti-HSV-1 activity of triterpenes, and the predictive model can improve the efficiency of developing active triterpenes by integrating many previous studies to clarify potential relationships.
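A sketch of how such a binary activity classifier can be used to narrow down candidates for assay: fit logistic regression on known actives/inactives, then rank untested compounds by predicted probability of activity. The molecular descriptors, labels, and shortlist size here are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Stand-in molecular descriptors for 200 triterpenes with known activity
X = rng.normal(size=(200, 12))
# Synthetic active / inactive labels for the sketch
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.6, 200)) > 0

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Virtual screening: rank 20 untested triterpenes by predicted activity
candidates = rng.normal(size=(20, 12))
p_active = clf.predict_proba(candidates)[:, 1]
shortlist = np.argsort(p_active)[::-1][:5]   # narrow down to the top five
```

The shortlisted compounds would then go into the plaque reduction assay, whose results can in turn serve as external validation data for the model.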
Regression algorithms are commonly used in machine learning. Building on encryption and privacy-protection methods, this paper studies logistic regression, a key and currently popular technology, together with homomorphic encryption. The paper proposes a PPLAR-based algorithm in which the correlation between data items is obtained via the logistic regression formula. The algorithm is distributed and parallelized on the Hadoop platform to improve the computing speed of the cluster while maintaining the algorithm's mean absolute error.
To support better research on hepatocellular carcinoma resection, this paper used 3D machine learning and a logistic regression algorithm to study preoperative assistance for patients undergoing hepatectomy. In this study, the logistic regression model was analyzed to find the factors influencing patient survival and recurrence. The clinical data of 50 HCC patients who underwent extensive hepatectomy (≥4 liver segments), admitted to our hospital from June 2020 to December 2020, were selected to calculate the liver volume, simulated surgical resection volume, residual liver volume, surgical margin, etc. The results showed that the simulated liver volume of the 50 patients was 845.2 ± 285.5 mL and the actual liver volume was 826.3 ± 268.1 mL, with no significant difference between the two (t = 0.425; p > 0.05). Compared with the logistic regression model, the machine learning method has a better prediction effect, but the logistic regression model has better interpretability. The analysis of the relationship between the liver tumour and hepatic vessels in practical problems has specific clinical application value for accurately evaluating the volume of liver resection and the surgical margin.
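The reported significance test can be reproduced in form as follows (toy volumes, not the study's measurements; a paired t-test is assumed since the simulated and actual volumes come from the same patients):

```python
from scipy import stats

# Simulated vs. actual liver volumes (mL) for the same patients -- toy numbers,
# not the study's data; a paired t-test checks for a systematic difference
simulated = [840.0, 850.0, 830.0, 860.0, 845.0]
actual = [838.0, 852.0, 829.0, 861.0, 846.0]

t_stat, p_value = stats.ttest_rel(simulated, actual)
# A p-value above 0.05 means the two measurements do not differ significantly
```

With the study's 50 patients, the same call on the full paired volume lists yields the reported t = 0.425 with p > 0.05.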