Selection of Radial Basis Functions via Genetic Algorithms in Pattern Recognition Problems

Author(s):  
Renato Tinós ◽  
Luiz Otávio Murta Júnior


2020 ◽  
Vol 62 (5) ◽  
pp. 471-480 ◽  
Author(s):  
Emre İsa Albak

Abstract In this study, the effects of sections added to multi-cell square tubes on crash performance are examined. Square, hexagonal, and circular sections are added to the multi-cell square tubes, and the resulting designs are compared. Finite element analyses under axial loading are performed to evaluate the crash performance of the multi-cell tubes. The analyses show that adding a section to the multi-cell square tubes improves their crash behavior. According to the results, the S5H multi-cell square tube reveals the best crash performance. The optimization of S5H is carried out by using genetic algorithms and radial basis functions. The optimized S5H tube presents good crashworthiness performance and could be used as an energy absorber.
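The abstract does not include code; as a rough illustration of the surrogate-assisted approach it describes, the sketch below fits a Gaussian radial-basis-function surrogate to an inexpensive stand-in objective (the real study evaluates finite element crash models of the tube instead) and minimizes the surrogate with a simple real-coded genetic algorithm. The objective function, parameter values, and GA operators are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the expensive crash objective (e.g. a negated
# energy-absorption measure from an FE simulation of the tube).
def expensive_objective(x):
    return np.sum((x - 0.3) ** 2, axis=-1)

def fit_rbf(X, y, eps=2.0):
    """Fit a Gaussian RBF interpolant to sampled (design, response) pairs."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Small ridge term keeps the kernel matrix well conditioned.
    w = np.linalg.solve(np.exp(-(eps * d) ** 2) + 1e-8 * np.eye(len(X)), y)

    def predict(Q):
        dq = np.linalg.norm(Q[:, None, :] - X[None, :, :], axis=-1)
        return np.exp(-(eps * dq) ** 2) @ w

    return predict

# A handful of "simulation" runs at sampled design points.
X = rng.uniform(0, 1, size=(40, 2))
y = expensive_objective(X)
surrogate = fit_rbf(X, y)

def ga_minimize(f, dim=2, pop=60, gens=80, mut=0.1):
    """Minimize f over [0, 1]^dim with a simple elitist real-coded GA."""
    P = rng.uniform(0, 1, size=(pop, dim))
    for _ in range(gens):
        idx = np.argsort(f(P))
        elite = P[idx[: pop // 2]]                       # selection (elitism)
        mates = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        alpha = rng.uniform(size=(pop - len(elite), 1))
        children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # crossover
        children += mut * rng.normal(size=children.shape)           # mutation
        P = np.clip(np.vstack([elite, children]), 0, 1)
    return P[np.argmin(f(P))]

best = ga_minimize(surrogate)
print(best)  # should land near the quadratic's optimum at (0.3, 0.3)
```

Because the GA only queries the cheap surrogate, the expensive simulation is evaluated just once per sampled design point, which is the usual motivation for pairing RBF metamodels with evolutionary search in crashworthiness optimization.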


2002 ◽  
Vol 14 (8) ◽  
pp. 1979-2002 ◽  
Author(s):  
Katsuyuki Hagiwara

In statistical model selection for neural networks and radial basis functions in the overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected training error and the expected generalization error of neural networks and radial basis functions in overrealizable cases and clarifies the difference from regular models, for which identifiability holds. As a special case of an overrealizable scenario, we assumed a Gaussian noise sequence as training data. In the least-squares estimation under this assumption, we first formulated the problem, in which the calculation of the expected errors of unidentifiable networks is reduced to the calculation of the expectation of the supremum of the χ² process. Under this formulation, we gave an upper bound on the expected training error and a lower bound on the expected generalization error, where the generalization is measured at a set of training inputs. Furthermore, we gave stochastic bounds on the training error and the generalization error. The obtained upper bound on the expected training error is smaller than in regular models, and the lower bound on the expected generalization error is larger than in regular models. The result tells us that the degree of overfitting in neural networks and radial basis functions is higher than in regular models. Correspondingly, it also tells us that the generalization capability is worse than in the case of regular models. This article may suffice to show the difference between neural networks and regular models in the context of least-squares estimation in a simple situation. This is a first step toward constructing a model selection criterion for the overrealizable case. Further important problems in this direction are also included in this article.
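A minimal Monte Carlo sketch of the phenomenon this abstract describes: when the training data are pure Gaussian noise (an overrealizable case, since the zero function lies in the model), the training-error reduction of a least-squares fit equals the squared projection of the noise onto the span of the basis. For a regular model the basis is fixed a priori, so this reduction is a χ² variable with k degrees of freedom; for an unidentifiable network the centers are free, so the reduction behaves like a supremum of a χ² process over center configurations, which is larger in expectation. The RBF width, sample sizes, and the random-draw approximation to the supremum below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, trials, draws = 30, 3, 200, 200
x = np.linspace(0, 1, n)

def rbf_design(centers, width=0.1):
    # Gaussian RBF design matrix (n inputs x k basis functions).
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

def fitted_energy(Phi, y):
    # Squared norm of the least-squares projection of y onto span(Phi):
    # the amount of noise "explained" by the fit.
    Q, _ = np.linalg.qr(Phi)
    return np.sum((Q.T @ y) ** 2)

fixed_centers = np.array([0.2, 0.5, 0.8])   # regular model: basis fixed a priori
reg_gap, over_gap = [], []
for _ in range(trials):
    y = rng.normal(size=n)                  # overrealizable case: pure noise
    reg_gap.append(fitted_energy(rbf_design(fixed_centers), y))
    # Unidentifiable network: centers are free parameters, so approximate
    # the supremum of the chi-squared process by maximizing over random draws.
    over_gap.append(max(fitted_energy(rbf_design(rng.uniform(0, 1, k)), y)
                        for _ in range(draws)))

print(np.mean(reg_gap), np.mean(over_gap))
```

The fixed-basis average sits near k = 3 (a χ² mean with 3 degrees of freedom), while the free-center average is noticeably larger, mirroring the abstract's conclusion that overfitting is more severe in unidentifiable networks than in regular models.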

