Application of data approximation and classification in measurement systems - comparison of “neural network” and “Least Squares” approximation

Author(s):  
Amir Jabbari ◽  
Reiner Jedermann ◽  
Walter Lang
Author(s):  
Deniz Erdogmus ◽  
Jose C. Principe

Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion, and in the neural network literature this is the least addressed component, yet it has a decisive influence on generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; treating classification as a regression problem aimed at estimating class posterior probabilities, least squares has been employed to train neural networks and other classifier topologies to approximate the correct labels. The main motivation for using least squares in regression comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications, which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussian measurement error combined with the maximum likelihood principle can be invoked to justify this criterion. In nonparametric regression, the least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, its use is inherently linked to the assumptions and habits stated above. Consequently, when one insists on second-order statistical criteria, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions.
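The abstract notes that linear least squares regression reduces to solving a system of linear equations. A minimal sketch of that reduction, the normal equations for a straight-line fit, is shown below; the data are hypothetical and chosen so the exact fit y = 1 + 2x is recovered:

```python
# Fit y ≈ w0 + w1*x by least squares via the normal equations
#   (X^T X) w = X^T y,
# solved here by hand for the 2x2 case.
# Hypothetical data lying exactly on y = 1 + 2x.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
sx = sum(xs)                               # sum of x_i
sxx = sum(x * x for x in xs)               # sum of x_i^2
sy = sum(ys)                               # sum of y_i
sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i

# Normal equations:
#   [n   sx ] [w0]   [sy ]
#   [sx  sxx] [w1] = [sxy]
det = n * sxx - sx * sx
w0 = (sxx * sy - sx * sxy) / det
w1 = (n * sxy - sx * sy) / det

print(w0, w1)  # noise-free data, so this recovers w0 = 1.0, w1 = 2.0
```

Because the data are noise-free, the criterion choice is invisible here; the distinctions the abstract raises only appear once the error distribution departs from the Gaussian assumption.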
This argument extends to other linear, second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work attempts to generalize these techniques to nonlinear scenarios by employing kernel techniques or other heuristics. This raises the question: what alternative cost functions could be used to train adaptive systems, and how could we establish rigorous techniques for extending useful concepts from linear, second-order statistical techniques to nonlinear, higher-order statistical learning methodologies?
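To make the contrast between second-order and alternative criteria concrete, here is a small sketch of our own construction (not taken from the abstract): estimating a location parameter under a gross non-Gaussian outlier, once with the MSE criterion (whose minimizer is the sample mean) and once with a correntropy-style criterion from the information-theoretic learning literature, solved by the standard Gaussian-weighted fixed-point iteration. The kernel width `sigma` is an assumed tuning parameter.

```python
import math

# Data: a cluster near 1.0 plus one gross (non-Gaussian) outlier.
data = [0.9, 1.0, 1.1, 1.05, 0.95, 10.0]

# MSE criterion: argmin_m sum (x - m)^2  ->  the sample mean.
mse_estimate = sum(data) / len(data)

# Correntropy criterion: argmax_m sum exp(-(x - m)^2 / (2 sigma^2)).
# Solved by a fixed-point iteration: a Gaussian-weighted mean that
# exponentially down-weights points far from the current estimate.
sigma = 0.5
m = mse_estimate  # initialize at the MSE solution
for _ in range(100):
    w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in data]
    m = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)

print(mse_estimate)  # pulled toward the outlier (2.5 here)
print(m)             # stays near the cluster around 1.0
```

The outlier shifts the second-order estimate far from the bulk of the data, while the higher-order criterion effectively ignores it: exactly the kind of information in the error signal that second-order statistics fail to capture.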


Author(s):  
Jatinder Kumar ◽  
Ajay Bansal

The experimental determination of various properties of diesel-biodiesel mixtures is a very time-consuming and tedious process. Any tool that helps estimate these properties without experimentation can be of immense utility. In the present work, alternative tools for determining the properties of diesel-biodiesel blends were investigated. A traditional statistical technique, linear regression (principle of least squares), was used to estimate the flash point, fire point, density, and viscosity of diesel and biodiesel mixtures. A set of seven neural network architectures and three training algorithms, along with ten different sets of weights and biases, were examined to choose the best Artificial Neural Network (ANN) for predicting the above-mentioned properties of diesel-biodiesel mixtures. The performance of the traditional linear regression and ANN techniques was then compared to check their validity for predicting the properties of various mixtures of diesel and biodiesel.

Keywords: Biodiesel; Artificial Neural Network; Principle of least squares; Diesel; Linear Regression.

DOI: 10.3126/kuset.v6i2.4017
Kathmandu University Journal of Science, Engineering and Technology, Vol. 6, No. II, November 2010, pp. 98-103
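The linear regression half of such a comparison can be sketched as follows. The measurements below are hypothetical, not from the paper, under the common assumption that kinematic viscosity varies roughly linearly with biodiesel volume fraction across a blend series:

```python
# Hypothetical measurements: biodiesel volume fraction vs. kinematic
# viscosity (mm^2/s). Illustrative values only, not from the paper.
fractions = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # B0 .. B100 blends
viscosity = [2.6, 2.9, 3.3, 3.6, 4.0, 4.3]

# Ordinary least squares fit: viscosity ≈ a + b * fraction.
n = len(fractions)
sx = sum(fractions)
sy = sum(viscosity)
sxx = sum(x * x for x in fractions)
sxy = sum(x * y for x, y in zip(fractions, viscosity))
det = n * sxx - sx * sx
a = (sxx * sy - sx * sxy) / det   # intercept: neat-diesel viscosity
b = (n * sxy - sx * sy) / det     # slope: change per unit fraction

# Interpolate the viscosity of an unmeasured B50 blend (50% biodiesel).
b50 = a + b * 0.5
print(round(a, 3), round(b, 3), round(b50, 3))
```

An ANN comparison as in the paper would fit the same fraction-to-property mapping with a small multilayer network and compare prediction errors on held-out blends; for a single input and near-linear behavior, linear regression is a strong baseline.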

