Generalizing Psychological Similarity Spaces to Unseen Stimuli

Author(s):  
Lucas Bechberger ◽  
Kai-Uwe Kühnberger

The cognitive framework of conceptual spaces proposes to represent concepts as regions in psychological similarity spaces. These similarity spaces are typically obtained through multidimensional scaling (MDS), which converts human dissimilarity ratings for a fixed set of stimuli into a spatial representation. One can distinguish metric MDS (which assumes that the dissimilarity ratings are interval or ratio scaled) from nonmetric MDS (which only assumes an ordinal scale). In our first study, we show that despite its additional assumptions, metric MDS does not necessarily yield better solutions than nonmetric MDS. In this chapter, we furthermore propose to learn a mapping from raw stimuli into the similarity space using artificial neural networks (ANNs) in order to generalize the similarity space to unseen inputs. In our second study, we show that a linear regression from the activation vectors of a convolutional ANN to similarity spaces obtained by MDS can be successful and that the results are sensitive to the number of dimensions of the similarity space.
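A minimal sketch of the two-step pipeline described above, assuming scikit-learn and placeholder data rather than the authors' stimuli or code: nonmetric MDS turns a matrix of pairwise dissimilarity ratings into a low-dimensional similarity space, and a linear regression maps convolutional-network activation vectors onto those coordinates so that unseen stimuli can be projected into the space.

```python
# Hypothetical sketch: (1) nonmetric MDS on pairwise dissimilarity ratings,
# (2) a linear map from ANN activation vectors into the resulting space.
# All data below are random placeholders; shapes and names are assumptions.
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

# Symmetric matrix of human dissimilarity ratings for 60 stimuli (placeholder).
dissimilarities = np.random.rand(60, 60)
dissimilarities = (dissimilarities + dissimilarities.T) / 2
np.fill_diagonal(dissimilarities, 0.0)

# Nonmetric MDS only assumes an ordinal scale for the ratings.
mds = MDS(n_components=4, metric=False, dissimilarity="precomputed", random_state=0)
space = mds.fit_transform(dissimilarities)   # (n_stimuli, 4) similarity-space coordinates

# CNN activation vectors for the same stimuli (placeholder for real features).
activations = np.random.rand(60, 2048)

# Linear regression from activations to coordinates; new stimuli can then be
# mapped into the similarity space via their activation vectors.
mapping = LinearRegression().fit(activations, space)
predicted_coords = mapping.predict(activations)
```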

Metals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 18
Author(s):  
Rahel Jedamski ◽  
Jérémy Epp

Non-destructive determination of workpiece properties after heat treatment is of great interest, not only for quality control in production but also for preventing damage in the subsequent grinding process. Micromagnetic methods offer good possibilities but must first be calibrated against reference analyses of known states. This work compares the accuracy and reliability of different calibration methods for the non-destructive evaluation of carburizing depth and surface hardness of carburized steel. Linear regression analysis is compared with new methods based on artificial neural networks. The comparison shows a slight advantage for the neural network method and potential for further optimization of both approaches. The quality of the results is influenced, among other factors, by the number of teaching steps for the neural network, although more teaching steps do not always improve accuracy for conditions not included in the initial calibration.
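As a rough illustration of the calibration comparison described above, the sketch below fits both a linear regression and a small neural network to the same training data; the micromagnetic features and the carburizing-depth target are synthetic stand-ins, and max_iter plays the role of the "teaching steps" mentioned in the abstract.

```python
# Hypothetical comparison of the two calibration approaches: linear regression
# vs. a small ANN, both mapping micromagnetic features to carburizing depth.
# Data, feature count, and network size are assumptions, not the study's setup.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((80, 5))                                   # e.g. micromagnetic features
y = X @ rng.random(5) + 0.1 * rng.standard_normal(80)     # e.g. carburizing depth (mm)

X_train, X_test, y_train, y_test = X[:60], X[60:], y[:60], y[60:]

# Calibration by linear regression on the reference analyses.
lin = LinearRegression().fit(X_train, y_train)

# Calibration by a small neural network; more iterations ("teaching steps")
# do not automatically improve accuracy on states outside the calibration set.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_train, y_train)

print("linear regression MAE:", mean_absolute_error(y_test, lin.predict(X_test)))
print("neural network MAE:   ", mean_absolute_error(y_test, ann.predict(X_test)))
```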


Forests ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 268 ◽  
Author(s):  
Ivaldo Tavares Júnior ◽  
Jonas Rocha ◽  
Ângelo Ebling ◽  
Antônio Chaves ◽  
José Zanuncio ◽  
...  

Equations to predict Eucalyptus timber volume are continuously updated, but most of them cannot be used for certain locations. Thus, equations from similar strata are applied to clonal plantations where trees cannot be felled to fit volumetric models. The objective of this study was to use linear regression and artificial neural networks (ANN) to reduce the number of trees sampled while maintaining the accuracy of predictions of commercial volume with bark up to 4 cm in diameter at the top (v) of Eucalyptus clones. Two methods were evaluated in two scenarios: (a) regression model fit and ANN training with 80% of the data (533 trees) and per clone group with 80% of the trees in each group; and (b) model fit and ANN training with trees of only one clone group at ages two and three, with sample intensities of six, five, four, three, two, and one tree per diameter class. The real and predicted v averages did not differ at sample intensities from six to two trees per diameter class with either method. The frequency distributions of individuals by volume class produced by the two methods (regression and ANN) were similar to the real values in scenarios (a) and (b) according to the Kolmogorov–Smirnov test (p-value > 0.01). ANN was more effective for analyzing the full dataset, which showed non-linear behavior, without stratifying the sampled environment. The Prodan model also produced accurate estimates and, among the regression models, fit the data best. The volume with bark up to 4 cm in diameter at the top of Eucalyptus clones can be predicted with at least three trees per diameter class with regression (root mean square error in percentage, RMSE = 12.32%) and at least four trees per class with ANN (RMSE = 11.73%).
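The following sketch, using synthetic tree data with scikit-learn and SciPy, illustrates the kind of comparison reported above: a regression-based volume model and an ANN are fitted, and their predicted volume distributions are compared with the observed volumes by a two-sample Kolmogorov–Smirnov test at the 1% level. The log-linear model is a generic stand-in, not the Prodan formulation fitted in the study, and all values are invented for illustration.

```python
# Hypothetical comparison of regression vs. ANN volume predictions, followed by
# a Kolmogorov–Smirnov test of predicted against observed volume distributions.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
dbh = rng.uniform(8, 30, 200)                                # diameter at breast height (cm)
height = 1.3 + 1.1 * dbh + rng.normal(0, 2, 200)             # total height (m)
volume = 5e-5 * dbh**1.8 * height**1.1 * np.exp(rng.normal(0, 0.05, 200))  # v (m^3)

X = np.column_stack([np.log(dbh), np.log(height)])
y = np.log(volume)

reg = LinearRegression().fit(X, y)                           # generic log-linear volume model
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, volume)

v_reg = np.exp(reg.predict(X))
v_ann = ann.predict(X)

# p-value > 0.01 means the predicted and observed volume distributions
# do not differ significantly at the 1% level.
for name, pred in [("regression", v_reg), ("ANN", v_ann)]:
    stat, p = ks_2samp(volume, pred)
    print(f"{name}: KS statistic = {stat:.3f}, p-value = {p:.3f}")
```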

