Belemnites of the Neobelemnella kazimiroviensis group – a comparison of the self-organizing Kohonen networks algorithm with the classic palaeopopulation statistical approach and approaching their origin

2020 ◽  
Vol 295 (1) ◽  
pp. 23-51
Author(s):  
Norbert Keutgen ◽  
Anna J. Keutgen

Belemnites of the Neobelemnella kazimiroviensis group were classified using an Artificial Neural Network method, the self-organizing Kohonen algorithm. Four species are distinguished: Neobelemnella kazimiroviensis (Skołozdrówna, 1932), Neobelemnella pensaensis (Naidin, 1952), Neobelemnella skolozdrownae (Kongiel, 1962), and Neobelemnella aff. kazimiroviensis (Skołozdrówna, 1932). The first two species occur in the Upper Maastrichtian of Central Asia (Kazakhstan, Turkmenistan), Russia, Poland, Denmark, the Netherlands and Belgium. N. skolozdrownae is limited to Poland, Denmark, the Netherlands and Belgium, while N. aff. kazimiroviensis occurs in the Volga Basin (Russia) and Kazakhstan. The evolution of the N. kazimiroviensis group from a member of the Belemnella praearkhangelskii group of Central Russia or Kazakhstan, or from the Belemnitella americana group of New Jersey (USA), is discussed using Hierarchical Cluster Analysis and Multidimensional Comparative Analysis. A member of the Bt. americana group, ?Neobelemnella subfusiformis (Whitfield, 1892), is referred to the genus Neobelemnella Naidin, 1975, albeit with a query. This supports the hypothesis that the N. kazimiroviensis group could have evolved from a North American precursor.
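For readers unfamiliar with the method, the Kohonen training loop behind this kind of classification can be sketched in a few lines of NumPy. The grid size, learning-rate schedule, and the idea of feeding in rows of biometric ratios are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Train a Kohonen self-organizing map on row-vector samples in `data`."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    # Map-unit coordinates on the 2-D grid, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    # Initialise the weight vectors randomly within the data range.
    w = rng.uniform(data.min(0), data.max(0), size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(1))       # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(1)    # grid distance to the BMU
            h = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian neighbourhood
            w += lr * h[:, None] * (x - w)               # pull units toward x
    return w, coords

# Hypothetical usage: each row describes one rostrum by a set of biometric ratios.
# specimens = np.loadtxt("biometric_ratios.csv", delimiter=",")
# weights, grid_coords = train_som(specimens)
# unit = np.argmin(((weights - specimens[0]) ** 2).sum(1))  # map a specimen to its unit
```

Specimens that activate the same or neighbouring map units can then be grouped, which is the clustering step that underlies the species assignments above.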

2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Junyao Ling

This paper introduces the basic concepts and main characteristics of parallel self-organizing networks and analyzes and predicts their behaviour with neural networks and hybrid neural network models. First, we train on historical data of the parallel self-organizing network to describe its regularities and development trend, then use the discovered regularities to predict the performance on new data and compare the predictions with the true values. Second, the paper takes the prediction and application of chaotic parallel self-organizing networks as its main research line and neural networks as its main research method. Building on a summary and analysis of traditional neural networks, it first proposes that the phase-space reconstruction parameters and the neural network structure parameters be unified and optimized, and then proposes dividing the phase space into multiple subspaces. A multi-neural-network method is adopted to track and predict the local trajectory of the chaotic attractor in each subspace with high precision, improving overall forecasting performance. In the experiments, short-term and longer-term prediction tests were performed on the chaotic parallel self-organizing network. The results show that both the accuracy of the simulation results and the prediction performance on real observed data are greatly improved. When predicting the parallel self-organizing network, the minimum error of the self-organizing difference model is 0.3691, the minimum error of the self-organizing autoregressive neural network is 0.008, and the minimum error of the neural network is 0.0081. In the parallel self-organizing network prediction of sports event scores, the errors of the above models are 0.0174, 0.0081, 0.0135, and 0.0381, respectively.
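The phase-space step described above amounts to a delay embedding of the observed series followed by one small predictor per subspace of the reconstructed attractor. The sketch below illustrates that pipeline with scikit-learn; the embedding dimension, delay, quantile-based partition, and network size are illustrative assumptions rather than the paper's optimized parameters, and the logistic-map series merely stands in for the network's performance history.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def delay_embed(series, dim=3, tau=2):
    """Reconstruct the phase space of a scalar series as delay vectors (s_t, s_{t+tau}, ...)."""
    n = len(series) - (dim - 1) * tau
    X = np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])
    y = series[(dim - 1) * tau + 1: (dim - 1) * tau + 1 + n - 1]  # one-step-ahead targets
    return X[:-1], y

def fit_subspace_predictors(X, y, n_subspaces=4):
    """Partition the reconstructed attractor and fit one small network per subspace."""
    # Simple partition: quantile bins of the first embedding coordinate
    # (an assumption, not the paper's partitioning scheme).
    edges = np.quantile(X[:, 0], np.linspace(0, 1, n_subspaces + 1))
    labels = np.clip(np.searchsorted(edges, X[:, 0], side="right") - 1, 0, n_subspaces - 1)
    models = {}
    for k in range(n_subspaces):
        mask = labels == k
        if mask.sum() > 10:
            models[k] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X[mask], y[mask])
    return models, edges

def predict(models, edges, x):
    """Route a delay vector to its subspace model and predict the next value."""
    k = int(np.clip(np.searchsorted(edges, x[0], side="right") - 1, 0, len(edges) - 2))
    return models[k].predict(x.reshape(1, -1))[0]

# Illustrative data: a chaotic logistic-map series standing in for the observed history.
s = np.empty(2000); s[0] = 0.4
for t in range(1999):
    s[t + 1] = 3.9 * s[t] * (1 - s[t])
X, y = delay_embed(s)
models, edges = fit_subspace_predictors(X, y)
print(predict(models, edges, X[-1]), y[-1])  # in-sample check of the last point
```

The design choice being illustrated is that each local model only has to approximate a small patch of the attractor, which is what allows high-precision local tracking instead of one global fit.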


2014 ◽  
pp. 210-216
Author(s):  
Hirotaka Inoue ◽  
Kyoshiro Sugiyama

Recently, multiple classifier systems have been used in practical applications to improve classification accuracy. Self-generating neural networks are among the most suitable base classifiers for multiple classifier systems because of their simple settings and fast learning. However, the computation cost of a multiple classifier system based on self-generating neural networks increases in proportion to the number of self-generating neural networks. In this paper, we propose a novel pruning method for efficient classification and call the resulting model a self-organizing neural grove. Experiments compare the self-organizing neural grove with bagging, the self-organizing neural grove with boosting, and a support vector machine. The results show that the self-organizing neural grove can improve classification accuracy while reducing computation cost.
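The pruning idea can be illustrated on a generic multiple classifier system: train a bagged ensemble, then greedily drop members whose removal does not hurt validation accuracy. In the sketch below, scikit-learn decision trees on the Iris data stand in for the self-generating neural networks, and the greedy criterion is an illustrative assumption rather than the authors' pruning rule.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def majority_vote(members, X):
    """Combine member predictions by majority vote."""
    votes = np.stack([m.predict(X) for m in members])
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

# Train a bagged ensemble (decision trees stand in for self-generating neural networks).
X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)
members = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))   # bootstrap sample
    members.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

# Greedy pruning: discard a member whenever removing it does not lower validation accuracy.
kept = list(members)
for m in members:
    trial = [k for k in kept if k is not m]
    if trial and (majority_vote(trial, X_val) == y_val).mean() >= (majority_vote(kept, X_val) == y_val).mean():
        kept = trial

print(f"members kept: {len(kept)} / {len(members)}, "
      f"validation accuracy: {(majority_vote(kept, X_val) == y_val).mean():.3f}")
```

The point of the exercise is the same trade-off the abstract describes: the ensemble's cost scales with the number of members, so pruning members that add no accuracy keeps classification efficient.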


2021 ◽  
Vol 3 (4) ◽  
pp. 879-899
Author(s):  
Christos Ferles ◽  
Yannis Papanikolaou ◽  
Stylianos P. Savaidis ◽  
Stelios A. Mitilineos

The self-organizing convolutional map (SOCOM) hybridizes convolutional neural networks, self-organizing maps, and gradient backpropagation optimization into a novel integrated unsupervised deep learning model. SOCOM structurally combines, architecturally stacks, and algorithmically fuses its deep/unsupervised learning components. The higher-level representations produced by its underlying convolutional deep architecture are embedded in its topologically ordered neural map output. The ensuing unsupervised clustering and visualization operations reflect the model’s degree of synergy between its building blocks and synopsize its range of applications. Clustering results are reported on the STL-10 benchmark dataset coupled with the devised neural map visualizations. The series of conducted experiments utilize a deep VGG-based SOCOM model.
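SOCOM fuses its convolutional and self-organizing components end to end through backpropagation; the sketch below only conveys the looser two-stage intuition of projecting convolutional features onto a Kohonen map. The small PyTorch feature extractor, random 96x96 inputs, and update schedule are stand-ins and assumptions, not the deep VGG-based SOCOM model evaluated on STL-10.

```python
import torch
import torch.nn as nn

# A small VGG-style trunk standing in for SOCOM's convolutional architecture (an assumption;
# the paper trains a deep VGG-based model jointly with the map).
features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def som_step(weights, coords, x, lr, sigma):
    """One Kohonen update of the map weights toward feature vector x."""
    bmu = torch.argmin(((weights - x) ** 2).sum(1))      # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(1)            # grid distance to the BMU
    h = torch.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighbourhood
    return weights + lr * h[:, None] * (x - weights)

# Toy run on random images standing in for an STL-10 batch (96x96 RGB).
grid = 8
coords = torch.tensor([(i, j) for i in range(grid) for j in range(grid)], dtype=torch.float)
weights = torch.randn(grid * grid, 64)
images = torch.randn(16, 3, 96, 96)
with torch.no_grad():
    feats = features(images)          # (16, 64) higher-level representations
    for x in feats:
        weights = som_step(weights, coords, x, lr=0.3, sigma=2.0)
    # Each image now maps to the grid position of its best-matching unit,
    # which is the kind of topologically ordered output the abstract describes.
    bmus = torch.argmin(torch.cdist(feats, weights), dim=1)
print(coords[bmus])
```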

