Machine Learning Techniques on Multidimensional Curve Fitting Data Based on R-Square and Chi-Square Methods

Author(s): Vidyullatha P, D. Rajeswara Rao

Curve fitting is a standard procedure in data analysis and is helpful for prediction, showing graphically how the data points relate to one another, whether through a linear or a non-linear model. A fitted curve may pass through the data points themselves, or it may simply smooth the data and improve the appearance of the plot. Curve fitting examines the relationship between independent and dependent variables with the objective of characterizing a well-fitting model: it finds the mathematical equation that best fits the given data. In this paper, 150 unorganized data points of environmental variables are used to develop linear and non-linear data models, which are evaluated using the three-dimensional ‘Sftool’ and ‘Labfit’ machine learning tools. For the linear model, the best estimates of the coefficients are obtained when R-square approaches one; for the non-linear models, the criterion is the least chi-square value.
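As a rough illustration of these two criteria, the sketch below fits a linear and a non-linear surface to synthetic (x, y, z) data with SciPy and reports R-square and chi-square for each. The data, model forms, and noise level are placeholders, not the paper's environmental dataset or its Sftool/Labfit workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 3-D sample data (x, y -> z); not the paper's environmental dataset.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 150)
y = rng.uniform(0, 10, 150)
z = 2.0 * x - 0.5 * y + 3.0 + rng.normal(0, 0.3, 150)

def linear_model(X, a, b, c):
    x, y = X
    return a * x + b * y + c

def nonlinear_model(X, a, b, c):
    x, y = X
    return a * np.exp(b * x) + c * y

def r_square(z_obs, z_fit):
    ss_res = np.sum((z_obs - z_fit) ** 2)
    ss_tot = np.sum((z_obs - z_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def chi_square(z_obs, z_fit, sigma=1.0):
    return np.sum(((z_obs - z_fit) / sigma) ** 2)

for name, model in [("linear", linear_model), ("non-linear", nonlinear_model)]:
    params, _ = curve_fit(model, (x, y), z, p0=[1.0, 0.1, 1.0], maxfev=10000)
    z_fit = model((x, y), *params)
    print(f"{name}: R^2 = {r_square(z, z_fit):.4f}, chi^2 = {chi_square(z, z_fit):.2f}")
```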


Author(s): Ramesh Ponnala, K. Sai Sowjanya

Prediction of cardiovascular disease is an important task in the area of clinical data analysis. Machine learning has been shown to be effective in supporting decision making and prediction from the large amount of data produced by the healthcare industry. In this paper, we propose a novel technique that aims to find significant features by applying ML methods, thereby improving the accuracy of heart disease prediction. The severity of the heart disease is classified using various methods such as KNN, decision trees, and so on. The prediction model is introduced with different combinations of features and several known classification techniques. We obtain an enhanced performance level, with an accuracy of 100%, through the prediction model for heart disease using a Hybrid Random Forest with a Linear Model (HRFLM).
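The abstract does not spell out how the hybrid random forest with a linear model (HRFLM) is constructed, so the sketch below shows only one plausible reading: a random forest whose out-of-fold predictions are combined by a logistic-regression meta-learner (stacking), evaluated by cross-validation on a synthetic stand-in for a heart-disease table. The dataset, feature count, and hyperparameters are assumptions, not the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a heart-disease table (13 clinical features, binary label).
X, y = make_classification(n_samples=303, n_features=13, n_informative=8, random_state=42)

# One plausible "hybrid random forest + linear model": a random forest whose
# out-of-fold predictions feed a logistic-regression meta-learner (stacking).
hybrid = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=42))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

scores = cross_val_score(hybrid, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```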


Author(s): Juan Gómez-Sanchis, Emilio Soria-Olivas, Marcelino Martinez-Sober, Jose Blasco, Juan Guerrero, ...

This work presents a new approach for one of the main problems in the analysis of atmospheric phenomena, the prediction of atmospheric concentrations of different elements. The proposed methodology is more efficient than other classical approaches and is used in this work to predict tropospheric ozone concentration. The relevance of this problem stems from the fact that excessive ozone concentrations may cause several problems related to public health. Previous research by the authors of this work has shown that the classical approach to this problem (linear models) does not achieve satisfactory results in tropospheric ozone concentration prediction. The authors’ approach is based on Machine Learning (ML) techniques, which include algorithms related to neural networks, fuzzy systems and advanced statistical techniques for data processing. In this work, the authors focus on one of the main ML techniques, namely, neural networks. These models demonstrate their suitability for this problem both in terms of prediction accuracy and information extraction.
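A minimal sketch of the kind of neural-network regressor discussed here, assuming a small feedforward network (MLP) trained on standardized predictors to estimate ozone concentration. The synthetic features (precursor, temperature, and previous-hour-ozone stand-ins) and the network size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic placeholder: predictors such as NOx, temperature, wind speed, and
# the previous hour's ozone level; the target is the next hour's ozone level.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = 40 + 10 * X[:, 3] - 5 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 2, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1),
)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```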


Algorithms, 2021, Vol 14 (9), pp. 258
Author(s): Tran Dinh Khang, Manh-Kien Tran, Michael Fowler

Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It is a technique of dividing data elements into clusters such that elements in the same cluster are similar. Clustering belongs to the group of unsupervised machine learning techniques, meaning that there is no information about the labels of the elements. However, when some knowledge about the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm semi-supervised, it was proposed in the literature to use an auxiliary matrix to adjust the membership grades of the elements and force them into certain clusters during the computation. In this study, instead of using an auxiliary matrix, we proposed using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrated the convergence of the algorithm and validated the efficiency of the method through a numerical example.
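The exact sSMC-FCM update rules are derived in the paper; the sketch below only illustrates the underlying idea, i.e., plugging per-element fuzzification coefficients into the standard fuzzy C-means updates so that selected elements can be weighted differently. The toy data, the choice of coefficients, and the stopping rule are assumptions.

```python
import numpy as np

def fcm_multi_m(X, n_clusters, m, n_iter=100, eps=1e-6, seed=0):
    """Fuzzy C-means with a per-point fuzzification coefficient m[k].

    Simplified illustration of the idea behind sSMC-FCM: elements with prior
    knowledge can be given a different coefficient to pull them more strongly
    toward a cluster. Not the exact update rules derived in the paper.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)                          # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m                              # u_ik ** m_k (broadcast over points)
        V = (W @ X) / W.sum(axis=1, keepdims=True)   # weighted cluster centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        exponent = 2.0 / (m - 1.0)              # per-point exponent
        inv = D ** (-exponent)                  # d_ik ** (-2 / (m_k - 1))
        U_new = inv / inv.sum(axis=0)           # normalize over clusters
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, V

# Toy example: two blobs; the last three points are "semi-supervised" by giving
# them a larger coefficient (a stand-in for prior label knowledge).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
m = np.full(X.shape[0], 2.0)
m[-3:] = 3.0                                    # hypothetical supervised elements
U, V = fcm_multi_m(X, n_clusters=2, m=m)
print("cluster centers:\n", V)
```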


2017
Author(s): Ari S. Benjamin, Hugo L. Fernandes, Tucker Tomlinson, Pavan Ramkumar, Chris VerSteeg, ...

Neuroscience has long focused on finding encoding models that effectively ask “what predicts neural spiking?” and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a GLM. Here we compared the predictive performance of GLMs to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open field location and orientation. In general, the modern methods (particularly XGBoost and the ensemble) produced more accurate spike predictions and were less sensitive to the preprocessing of features. This discrepancy in performance suggests that standard feature sets may often relate to neural activity in a nonlinear manner not captured by GLMs. Encoding models built with machine learning techniques, which can be largely automated, more accurately predict spikes and can offer meaningful benchmarks for simpler models.
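A hedged sketch of this kind of comparison, using scikit-learn's PoissonRegressor as the GLM and a histogram gradient-boosted regressor with Poisson loss as a stand-in for XGBoost, scored by mean Poisson deviance on synthetic, nonlinearly tuned spike counts rather than the macaque or rat recordings.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_poisson_deviance

# Synthetic stand-in: spike counts driven nonlinearly by 2-D "kinematic" features.
rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 2))
rate = np.exp(0.5 + 0.8 * np.sin(X[:, 0]) * np.cos(X[:, 1]))   # nonlinear tuning
y = rng.poisson(rate)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

glm = PoissonRegressor(alpha=1e-3, max_iter=1000).fit(X_tr, y_tr)
gbt = HistGradientBoostingRegressor(loss="poisson", random_state=7).fit(X_tr, y_tr)

# Lower mean Poisson deviance on held-out data means better spike prediction.
for name, model in [("Poisson GLM", glm), ("boosted trees", gbt)]:
    dev = mean_poisson_deviance(y_te, np.clip(model.predict(X_te), 1e-6, None))
    print(f"{name}: mean Poisson deviance = {dev:.3f}")
```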


Foods, 2021, Vol 10 (11), pp. 2767
Author(s): Muhammad Hilal Kabir, Mahamed Lamine Guindo, Rongqin Chen, Fei Liu

Millet is a primary food for people living in dry and semi-dry regions and is distributed across most parts of Europe, Africa, and Asia. As part of the European Union (EU) efforts to establish food originality, there is a global need to create Protected Geographical Indication (PGI) and Protected Designation of Origin (PDO) schemes for crops and agricultural products to ensure the integrity of the food supply. In the present work, Visible and Near-Infrared Spectroscopy (Vis-NIR) combined with machine learning techniques was used to discriminate 16 millet varieties (n = 480) originating from various regions of China. Five machine learning algorithms, namely K-nearest neighbor (K-NN), Linear discriminant analysis (LDA), Logistic regression (LR), Random Forest (RF), and Support vector machine (SVM), were trained on the NIR spectra of these millet samples to assess their discrimination performance. Visible cluster trends were obtained from Principal Component Analysis (PCA) of the spectral data. Cross-validation was used to optimize the performance of the models. Overall, the F-score values were as follows: SVM with 99.5%, followed by RF with 99.5%, LDA with 99.5%, K-NN with 99.1%, and LR with 98.8%. Both the linear and non-linear algorithms yielded positive results, but the non-linear models appear slightly better. The study revealed that Vis-NIR spectroscopy assisted by machine learning techniques can be an essential tool for tracing the origins of millet, contributing to a safe authentication method that is quick, relatively cheap, and non-destructive.
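The modeling workflow can be sketched as follows, assuming synthetic spectra in place of the Vis-NIR measurements: each of the five classifiers is wrapped in a standardization pipeline and compared by cross-validated macro F-score. The class structure, preprocessing, and hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 480 "spectra" with 200 wavelength points and 16 classes.
rng = np.random.default_rng(11)
n_classes, n_per_class, n_wavelengths = 16, 30, 200
X = np.vstack([rng.normal(loc=c * 0.1, scale=1.0, size=(n_per_class, n_wavelengths))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=5000),
    "RF": RandomForestClassifier(n_estimators=300, random_state=11),
    "SVM": SVC(kernel="rbf", C=10),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: macro F1 = {scores.mean():.3f}")
```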


2018, Vol 3 (24), pp. eaau2489
Author(s): I. M. Van Meerbeek, C. M. De Sa, R. F. Shepherd

In a step toward soft robot proprioception, and therefore better control, this paper presents an internally illuminated elastomer foam that has been trained to detect its own deformation through machine learning techniques. Optical fibers transmitted light into the foam and simultaneously received diffuse waves from internal reflection. The diffuse reflected light was interpreted by machine learning techniques to predict whether the foam was twisted clockwise, twisted counterclockwise, bent up, or bent down. Machine learning techniques were also used to predict the magnitude of the deformation type. On new data points, the model predicted the type of deformation with 100% accuracy and the magnitude of the deformation with a mean absolute error of 0.06°. This capability may impart soft robots with more complete proprioception, enabling them to be reliably controlled and responsive to external stimuli.
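A rough sketch of the two learning tasks described, assuming synthetic fiber-intensity readings: one model classifies the deformation type and another regresses its magnitude. The random-forest models, the fiber response pattern, and the magnitude range are illustrative assumptions, not the authors' trained system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_absolute_error

# Synthetic stand-in: intensity readings from a handful of optical fibers,
# labelled with a deformation type and a magnitude in degrees.
rng = np.random.default_rng(5)
n, n_fibers = 1200, 8
types = rng.integers(0, 4, n)                 # 0..3: twist CW/CCW, bend up/down
magnitude = rng.uniform(0, 30, n)             # degrees
X = rng.normal(scale=0.2, size=(n, n_fibers))
for k in range(n):
    X[k, types[k]] += 0.10 * magnitude[k]     # each type perturbs a different fiber
    X[k, types[k] + 4] -= 0.05 * magnitude[k]

X_tr, X_te, t_tr, t_te, m_tr, m_te = train_test_split(X, types, magnitude, random_state=5)

clf = RandomForestClassifier(n_estimators=300, random_state=5).fit(X_tr, t_tr)
reg = RandomForestRegressor(n_estimators=300, random_state=5).fit(X_tr, m_tr)

print("deformation-type accuracy:", round(accuracy_score(t_te, clf.predict(X_te)), 3))
print("magnitude MAE (deg):", round(mean_absolute_error(m_te, reg.predict(X_te)), 2))
```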


2021, Vol 14 (3), pp. 120
Author(s): Susanna Levantesi, Giulia Zacchia

In recent years, machine learning techniques have assumed an increasingly central role in many areas of research, from computer science to medicine, including finance. In the current study, we applied them to financial literacy to test their accuracy, compared to a standard parametric model, in estimating the main determinants of financial knowledge. Using recent data on financial literacy and inclusion among Italian adults, we empirically tested how tree-based machine learning methods, such as decision trees, random forests, and gradient boosting techniques, can be a valuable complement to standard models (generalized linear models) for identifying the groups in the population most in need of improving their financial knowledge.
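A minimal sketch of such a comparison, assuming synthetic survey-like data with hypothetical features (age, education_years, income_decile, female): a logistic GLM and a gradient-boosting classifier are cross-validated, and the boosted model's feature importances hint at the determinants of financial knowledge. Everything below is illustrative, not the study's data or specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for survey data; the feature names are hypothetical.
rng = np.random.default_rng(9)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "education_years": rng.integers(5, 20, n),
    "income_decile": rng.integers(1, 11, n),
    "female": rng.integers(0, 2, n),
})
logit = 0.15 * df["education_years"] + 0.2 * df["income_decile"] - 3.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = financially literate

glm = LogisticRegression(max_iter=1000)
gbt = GradientBoostingClassifier(random_state=9)
for name, model in [("GLM (logistic)", glm), ("gradient boosting", gbt)]:
    acc = cross_val_score(model, df, y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")

# Feature importances as a rough guide to which determinants matter most.
gbt.fit(df, y)
print(dict(zip(df.columns, gbt.feature_importances_.round(3))))
```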


2021
Author(s): Etienne-Victor Depasquale, Humaira Abdul Salam, Franco Davoli

This article surveys the literature, over the period 2010-2020, on measurement of power consumption and relevant power models of virtual entities as they apply to the telco cloud. Hardware power meters are incapable of measuring power consumption of individual virtual entities co-hosted on a physical machine. Thus, software power meters are inevitable, yet their development is difficult. Indeed, there is no direct approach to measurement and, therefore, modeling through proxies of power consumption must be used. In this survey, we present trends, fallacies and pitfalls. Notably, we identify limitations of the widely used linear models and the progression towards Artificial Intelligence / Machine Learning techniques as a means of dealing with the seven major dimensions of variability: workload type; computer virtualization agents; system architecture and resources; concurrent, co-hosted virtualized entities; approaches towards attribution of power consumption to virtual entities; frequency; and temperature.
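A small sketch of the contrast the survey draws, assuming synthetic host measurements: a classical utilisation-only linear power model versus a regression that also uses frequency and temperature, two of the dimensions of variability listed above. The data-generating formula, the feature set, and the random-forest choice are illustrative assumptions, not models from the surveyed literature.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for host power measurements (watts).
rng = np.random.default_rng(13)
n = 3000
util = rng.uniform(0, 1, n)                 # CPU utilisation
freq = rng.uniform(1.2, 3.5, n)             # CPU frequency (GHz)
temp = rng.uniform(30, 80, n)               # temperature (deg C)
power = 90 + 120 * util * freq / 3.5 + 0.3 * temp + rng.normal(0, 3, n)

X = np.column_stack([util, freq, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=13)

# Classical utilisation-only linear model: P = P_idle + k * u.
lin = LinearRegression().fit(X_tr[:, :1], y_tr)
# ML model using utilisation, frequency, and temperature jointly.
rf = RandomForestRegressor(n_estimators=300, random_state=13).fit(X_tr, y_tr)

print("linear (util only) MAE:", round(mean_absolute_error(y_te, lin.predict(X_te[:, :1])), 2))
print("random forest MAE:", round(mean_absolute_error(y_te, rf.predict(X_te)), 2))
```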

