Machine Learning Model for IRIS Flower Classification Using TensorFlow and PyTorch

Author(s):  
Dr. Kalaivazhi Vijayaragavan ◽  
S. Prakathi ◽  
S. Rajalakshmi ◽  
M Sandhiya

Machine learning is a subfield of artificial intelligence in which algorithms learn from data to make decisions, attempting to emulate human behaviour. Classification is one of the most fundamental concepts in machine learning: the process of recognizing, understanding, and grouping ideas and objects into pre-set categories or sub-populations. Using pre-categorized training datasets, machine learning applies a variety of algorithms to classify future data into those categories. Classification algorithms use input training data to predict which of the predetermined categories subsequent data fall into. To improve classification accuracy, the design of the neural network is regarded as an effective means of obtaining better accuracy; such a design usually considers a scaling layer, perceptron layers and a probabilistic layer. In this paper, an enhanced model selection is evaluated with a training and testing strategy, and the classification accuracy is predicted. Finally, the predicted classification accuracy is compared across two popular machine learning frameworks, PyTorch and TensorFlow. Results demonstrate that the proposed method predicts with higher accuracy. After deployment of the machine learning model, its performance was evaluated on the Iris dataset.
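The scaling → perceptron → probabilistic layering described above can be sketched framework-agnostically. Below is a minimal pure-Python forward pass; the paper itself uses TensorFlow and PyTorch layers, and the weights and feature ranges here are placeholders, not trained values:

```python
import math

def scale(x, mins, maxs):
    # Scaling layer: min-max normalize each feature to [0, 1]
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(x, mins, maxs)]

def perceptron_layer(x, weights, biases):
    # Dense (perceptron) layer with tanh activation
    return [math.tanh(sum(w * v for w, v in zip(ws, x)) + b)
            for ws, b in zip(weights, biases)]

def probabilistic_layer(logits):
    # Softmax: turn raw scores into class probabilities summing to 1
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Forward pass for one iris sample: 4 features -> 3 class probabilities.
sample = scale([5.1, 3.5, 1.4, 0.2],
               mins=[4.3, 2.0, 1.0, 0.1], maxs=[7.9, 4.4, 6.9, 2.5])
hidden = perceptron_layer(sample, weights=[[0.5] * 4] * 3, biases=[0.0] * 3)
probs = probabilistic_layer(hidden)
```

In TensorFlow or PyTorch, each of these functions corresponds to a built-in layer (normalization, dense/linear, softmax); the sketch only shows what those layers compute.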

2021 ◽  
Author(s):  
Eric Sonny Mathew ◽  
Moussa Tembely ◽  
Waleed AlAmeri ◽  
Emad W. Al-Shalabi ◽  
Abdul Ravoof Shaik

Abstract A meticulous interpretation of steady-state or unsteady-state relative permeability (Kr) experimental data is required to determine a complete set of Kr curves. In this work, three different machine learning models were developed to assist in a faster estimation of these curves from steady-state drainage coreflooding experimental runs. The three models that were tested and compared were extreme gradient boosting (XGB), deep neural network (DNN) and recurrent neural network (RNN) algorithms. Based on existing mathematical models, a leading-edge framework was developed in which a large database of Kr and Pc curves was generated. This database was used to perform thousands of coreflood simulation runs representing oil-water drainage steady-state experiments. The results obtained from these simulation runs, mainly pressure drop along with other conventional core analysis data, were utilized to estimate Kr curves based on Darcy's law. These analytically estimated Kr curves, along with the previously generated Pc curves, were fed as features into the machine learning model. The entire data set was split into 80% for training and 20% for testing. A k-fold cross-validation technique was applied to increase model accuracy by splitting the 80% training portion into 10 folds. In this manner, for each of the 10 experiments, 9 folds were used for training and the remaining one for model validation. Once trained and validated, the model was subjected to blind testing on the remaining 20% of the data set. The machine learning model learns to capture fluid flow behavior inside the core from the training dataset. The trained/tested model was thereby employed to estimate Kr curves based on available experimental results. The performance of the developed model was assessed using the coefficient of determination (R2) along with the loss calculated during training/validation of the model. 
The respective cross plots, along with comparisons of ground-truth versus AI-predicted curves, indicate that the model is capable of making accurate predictions, with error percentages between 0.2 and 0.6% on history-matching experimental data for all three tested ML techniques (XGB, DNN, and RNN). This implies that the AI-based model exhibits better efficiency and reliability in determining Kr curves than conventional methods. The results also include a comparison between classical machine learning approaches and shallow and deep neural networks in terms of accuracy in predicting the final Kr curves. The models discussed in this research work currently focus on the prediction of Kr curves for drainage steady-state experiments; however, the work can be extended to capture the imbibition cycle as well.
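The 80/20 split with 10-fold cross-validation on the training portion can be sketched as a simple index generator. This is illustrative only; the study's actual pipeline (and any shuffling strategy) is not specified here:

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k folds; each fold serves once as
    the validation set, with the remaining k-1 folds used for training."""
    # Distribute any remainder across the first folds so sizes differ by <= 1
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        val = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val
```

With the paper's setup, `n` would be the size of the 80% training split, and each of the 10 (train, val) pairs drives one training experiment before the final blind test on the held-out 20%.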


Author(s):  
Dr. M. P. Borawake

Abstract: The food we consume plays an important role in our daily life. It provides us the energy needed to work, grow, be active, and to learn and think. Healthy food is essential for good health and nutrition. Light, oxygen, heat, humidity, temperature and spoilage bacteria can all affect both the safety and quality of perishable foods. Food kept at room temperature undergoes chemical reactions after a certain period of time, which affect the taste, texture and smell of the food. Consuming spoiled food is harmful to consumers as it can lead to foodborne diseases. This project aims to detect spoiled food using appropriate sensors and by monitoring the gases released by the particular food item. Sensors measure different parameters of the food such as pH, ammonia gas, oxygen level and moisture. A microcontroller takes the readings from the sensors, and these readings are then given as input to a machine learning model, which decides whether the food is spoiled based on the training data set. We also plan to implement a machine learning model that can estimate the remaining lifespan of the food item. Index Terms: Arduino Uno, Food spoilage, IoT, Machine Learning, Sensors.
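The sensor-readings-to-decision step could be as simple as a nearest-neighbour lookup against labelled training readings. A minimal sketch, assuming hypothetical features (pH, ammonia ppm, moisture %) and labels; the project's actual model and feature set are not specified in the abstract:

```python
def spoilage_1nn(sample, training_set):
    """Label a sensor reading by its nearest labelled training reading.

    training_set is a list of (features, label) pairs, where features
    is a tuple such as (pH, ammonia ppm, moisture %) and label is
    'fresh' or 'spoiled'. A real system would normalize features so
    one sensor's scale does not dominate the distance.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    features, label = min(training_set, key=lambda rec: dist(rec[0], sample))
    return label

# Toy training data (made-up values, for illustration only)
training = [((6.5, 5.0, 60.0), "fresh"),
            ((4.5, 40.0, 80.0), "spoiled")]
```

On the Arduino side, the microcontroller would stream the raw sensor tuple to wherever this classifier runs.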


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
O Tan ◽  
J.Z Wu ◽  
K.K Yeo ◽  
S.Y.A Lee ◽  
J.S Hon ◽  
...  

Abstract Background Warfarin titration via International Normalised Ratio (INR) monitoring can be a challenge for patients on long-term anticoagulation due to warfarin's multiple drug interactions, long half-life and individual patient response, yet is critically important due to its narrow therapeutic index. Machine learning may have a role in learning and replicating existing warfarin prescribing practices, to be potentially incorporated into an automated warfarin titration model. Purpose We aim to explore the feasibility of using machine learning to develop a model that can learn and predict actual warfarin titration practices. Methods A retrospective dataset of 4,247 patients with 48,895 data points of INR values was obtained from our institutional database. Patients who had fewer than 5 visits recorded, or invalid or missing values, were excluded. Variables studied included age, warfarin indication, warfarin dose, target INR range, actual INR values, time between titrations and time in therapeutic range (TTR, as defined by the Rosenthal formula). The machine learning model was developed on an unbiased training data set (1,805 patients), further refined on a handpicked balanced validation set (400 patients), before being evaluated on two balanced test sets of 100 patients each. The test sets were handpicked based on the criteria of TTR (“in vs out of range”) and stability of INR results (“low vs high fluctuation”) (Table 1). Given the time-series nature of the data, a Recurrent Neural Network (RNN) was chosen to learn warfarin prescription practices. Long short-term memory (LSTM) cells were further employed to address the time gaps between warfarin titration visits, which could result in vanishing gradients. Results A total of 2,163 patients with 42,622 data points were studied (mean age 65±11.7 years, 54.7% male). The mean TTR was 65.4%. 
The total warfarin dose per week as predicted by the RNN was compared with the actual total warfarin dose per week prescribed for each patient in the test sets. The coefficients of determination for the RNN in the “in vs out of range” and “low vs high fluctuation” test sets were 0.941 and 0.962 respectively (Figure 1). Conclusion This proof-of-concept study demonstrated that an RNN-based machine learning model was able to learn and predict warfarin dosage changes with reasonable accuracy. The findings merit further evaluation of the potential use of machine learning in an automated warfarin titration model. Funding Acknowledgement Type of funding source: None
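Interpolation-based TTR calculations like the one the abstract cites linearly interpolate the INR between consecutive visits and accumulate the fraction of time spent inside the target range. A minimal sketch, assuming a 2.0-3.0 target range (the study's per-patient ranges may differ):

```python
def segment_fraction_in_range(v0, v1, low, high):
    # Fraction of a linearly interpolated segment v0 -> v1 lying in [low, high]
    if v0 == v1:
        return 1.0 if low <= v0 <= high else 0.0
    t_lo = (low - v0) / (v1 - v0)   # segment-relative crossing times
    t_hi = (high - v0) / (v1 - v0)
    t_enter, t_exit = sorted((t_lo, t_hi))
    return max(0.0, min(1.0, t_exit) - max(0.0, t_enter))

def ttr(times, inrs, low=2.0, high=3.0):
    """Percentage of time in therapeutic range across a visit series."""
    total = times[-1] - times[0]
    in_range = sum((t1 - t0) * segment_fraction_in_range(v0, v1, low, high)
                   for t0, t1, v0, v1 in zip(times, times[1:], inrs, inrs[1:]))
    return 100.0 * in_range / total
```

For example, a patient whose INR rises linearly from 1.0 to 3.0 over an interval spends half of that interval inside 2.0-3.0, giving a TTR of 50%.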


Author(s):  
Dhilsath Fathima.M ◽  
S. Justin Samuel ◽  
R. Hari Haran

Aim: This work develops an improved and robust machine learning model for predicting Myocardial Infarction (MI), which could have substantial clinical impact. Objectives: This paper explains how to build a machine-learning-based computer-aided analysis system for early and accurate prediction of MI, using the Framingham Heart Study dataset for validation and evaluation. The proposed computer-aided analysis model will support medical professionals in predicting myocardial infarction proficiently. Methods: The proposed model uses mean imputation to handle missing values in the data set, then applies principal component analysis (PCA) to extract optimal features and enhance classifier performance. After PCA, the reduced features are partitioned into training and testing sets; the 70% training portion is given as input to four well-known classifiers (support vector machine, k-nearest neighbour, logistic regression and decision tree), and the remaining 30% is used to evaluate the model using performance metrics such as the confusion matrix, classifier accuracy, precision, sensitivity, F1-score and the AUC-ROC curve. Results: The classifier outputs were evaluated using these performance measures; logistic regression provides higher accuracy than the k-NN, SVM and decision tree classifiers, and PCA performs well as a feature extraction method for enhancing the performance of the proposed model. From these analyses, we conclude that logistic regression has a good mean accuracy and standard deviation of accuracy compared with the other three algorithms. The AUC-ROC curves of the classifiers (Figures 4 and 5) show that logistic regression achieves a good AUC-ROC score, around 70%, compared to the k-NN and decision tree algorithms. 
Conclusion: From the result analysis, we infer that the proposed machine learning model can act as an optimal decision-making system for predicting acute myocardial infarction at an earlier stage than existing machine-learning-based prediction models, and is capable of predicting the presence of acute myocardial infarction from heart disease risk factors, in order to decide when to start lifestyle modification and medical treatment to prevent heart disease.
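The performance metrics listed in the Methods (accuracy, precision, sensitivity, F1-score) all derive from the confusion matrix. A minimal sketch for the binary case, with 1 standing for "MI present":

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix-derived metrics for binary labels (positive = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. recall
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "f1": f1}
```

The AUC-ROC score additionally sweeps the classifier's decision threshold, which is why it is reported separately from these fixed-threshold metrics.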


2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract The predictive performance of a machine learning model depends highly on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data have to be transferred through low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning lacks research. This work proposes MODES, a framework that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows it to efficiently parallelize the optimization in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that, with an improvement in terms of mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes, MODES outperforms the baseline, i.e., carrying out tuning with MBO on each node individually with its local sub-data set.
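The MODES-I view of the ensemble, where every node's model is a clone of one black box, can be sketched as the objective the tuner maximizes. This is illustrative only: `train_eval` and the candidate set are placeholders, and the real framework drives this objective with MBO (a surrogate model plus acquisition function) rather than the exhaustive search below:

```python
def modes_i_objective(hyperparam, node_datasets, train_eval):
    # One hyper-parameter setting is evaluated on every node's local
    # data; the combined (mean) accuracy is what the tuner optimizes.
    scores = [train_eval(hyperparam, data) for data in node_datasets]
    return sum(scores) / len(scores)

def tune(candidates, node_datasets, train_eval):
    # Stand-in for the MBO loop: pick the candidate with the best
    # combined accuracy across nodes.
    return max(candidates,
               key=lambda hp: modes_i_objective(hp, node_datasets, train_eval))
```

MODES-B would instead expose one joint hyper-parameter vector (one block per node's model) to a single black-box objective, trading a larger search space for per-node flexibility.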


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150 k input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
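The "interpolated locations within the learnt distribution" are commonly obtained by linearly interpolating between two encoded latent codes before decoding. A minimal sketch of the latent-space step (the VAE encoder and decoder themselves are omitted):

```python
def lerp(z_a, z_b, t):
    # Linear interpolation between two latent codes at parameter t in [0, 1]
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

def interpolation_path(z_a, z_b, steps=5):
    """Evenly spaced latent codes from z_a to z_b inclusive; decoding
    each code would yield the hybrid wireframe geometries."""
    return [lerp(z_a, z_b, i / (steps - 1)) for i in range(steps)]
```

In the paper's setting, `z_a` and `z_b` would be the encodings of samples from the two building types, and each intermediate code is decoded back through the connectivity-map representation.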


10.2196/18142 ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. e18142
Author(s):  
Ramin Mohammadi ◽  
Mursal Atif ◽  
Amanda Jayne Centi ◽  
Stephen Agboola ◽  
Kamal Jethwani ◽  
...  

Background It is well established that lack of physical activity is detrimental to the overall health of an individual. Modern-day activity trackers enable individuals to monitor their daily activities to meet and maintain targets. This is expected to promote activity-encouraging behavior, but the benefits of activity trackers attenuate over time due to waning adherence. One of the key approaches to improving adherence to goals is to motivate individuals to improve on their historic performance metrics. Objective The aim of this work was to build a machine learning model to predict an achievable weekly activity target by considering (1) patterns in the user’s activity tracker data in the previous week and (2) behavior and environment characteristics. By setting realistic goals, ones that are neither too easy nor too difficult to achieve, activity tracker users can be encouraged to continue to meet these goals, and at the same time, to find utility in their activity tracker. Methods We built a neural network model that prescribes a weekly activity target for an individual that can be realistically achieved. The inputs to the model were user-specific personal, social, and environmental factors, daily step count from the previous 7 days, and an entropy measure that characterized the pattern of daily step count. Data for training and evaluating the machine learning model were collected over a duration of 9 weeks. Results Of 30 individuals who were enrolled, data from 20 participants were used. The model predicted target daily count with a mean absolute error of 1545 (95% CI 1383-1706) steps for an 8-week period. Conclusions Artificial intelligence applied to physical activity data combined with behavioral data can be used to set personalized goals in accordance with the individual’s level of activity and thereby improve adherence to a fitness tracker; this could be used to increase engagement with activity trackers. 
A follow-up prospective study is ongoing to determine the performance of the engagement algorithm.
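The entropy measure characterizing the weekly step-count pattern can be sketched as the Shannon entropy of the normalized daily-step distribution. The paper's exact definition is not given here; this is one plausible formulation, where uniform activity across the week yields high entropy and activity concentrated on one day yields zero:

```python
import math

def step_pattern_entropy(daily_steps):
    """Shannon entropy (bits) of the week's step-count distribution."""
    total = sum(daily_steps)
    probs = [s / total for s in daily_steps if s > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Fed alongside the raw 7-day counts, such a feature lets the network distinguish a steady walker from a weekend-only walker with the same weekly total.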


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that enable computing systems to perform tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known family of machine learning models is deep learning, and the most recent deep learning models are based on artificial neural networks (ANNs). Several types of artificial neural networks exist, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and their applicability. It also gives an overview of the most widely used CNN models and what to expect from the next generation of CNN models.
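The operation at the heart of a CNN layer can be sketched in a few lines: a kernel slides over the input and produces a weighted sum at each position (valid padding, stride 1, single channel here; real layers add multiple channels, a bias and a nonlinearity, and, like most frameworks, this computes cross-correlation rather than a flipped-kernel convolution):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide kernel over image, no padding."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

Stacking such layers with pooling and nonlinearities is what lets a CNN build up from edges to textures to object parts during training.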


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1465
Author(s):  
Taikyeong Jeong

When applying a large-scale database holding behavioral intelligence training data to deep neural networks, the classification accuracy of the artificial intelligence algorithm needs to reflect the behavioral characteristics of the individual. When a change in behavior is recognized, that is, when a feedback model based on a data connection model is applied, the time series data are analyzed by extracting feature vectors and interpolating data in a deep neural network, overcoming the limitations of existing statistical analysis. The results of the first feedback model are used as inputs to the deep neural network and, in turn, as the input values of the second feedback model; interpolating the behavioral intelligence data, that is, context awareness and lifelog data including physical activities, applies the most appropriate conditions. The results of this study show that this method effectively improves the accuracy of the artificial intelligence results. In an experiment, after extracting the feature vector of a deep neural network and restoring the missing values, the classification accuracy was verified to improve by about 20% on average. At the same time, by adding behavioral intelligence data to the time series data, a new data connection model, the Deep Neural Network Feedback Model, was proposed, and it was verified that the classification accuracy can be improved by about 8 to 9% on average. Based on the hypothesis, the F(X′) = X model was applied to thoroughly classify the training and test data sets, presenting a symmetrical balance between the data connection model and the context-aware data. In addition, behavioral activity data were extrapolated from context-aware and forecasting perspectives to prove the results of the experiment.
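The "restoring the missing values" step amounts to interpolating gaps in the time series before the data reach the network. A minimal linear-interpolation sketch (the paper's actual deep-network interpolation is more involved; this version assumes the series starts and ends with known values):

```python
def fill_gaps(series):
    """Linearly interpolate None entries between known neighbours.

    Assumes series[0] and series[-1] are not None.
    """
    out = list(series)
    i, n = 0, len(out)
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1                      # find end of the gap
            left, right = out[i - 1], out[j]
            gap = j - i + 1                 # number of interpolation steps
            for k in range(i, j):
                out[k] = left + (right - left) * (k - i + 1) / gap
            i = j
        else:
            i += 1
    return out
```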

