Depth-Dependent Approach to the Selection of the Optimal Hypothesis in Classification Problems

2016 ◽  
Vol 48 (7) ◽  
pp. 65-76
Author(s):  
Alexander A. Galkin


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3790-3794 ◽  

The formation of a feature space in classification problems can be divided into two stages: the choice of the initial description of objects, and the formation of an informative description of objects by reducing the dimensionality of the original description space.
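A minimal sketch of this two-stage construction, assuming PCA as the dimensionality-reduction step (the abstract does not name a specific method):

```python
# Hypothetical illustration of the two-stage feature-space construction:
# stage 1 is the initial description (raw feature matrix X); stage 2 reduces
# its dimensionality to an informative description. PCA is used here only as
# an example reducer, not the method of the cited work.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # stage 1: initial description, 50 raw features

reducer = PCA(n_components=10)        # stage 2: informative, lower-dimensional description
X_informative = reducer.fit_transform(X)
print(X_informative.shape)            # (100, 10)
```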


Author(s):  
Séamus Lankford ◽  
Diarmuid Grimes

The training and optimization of neural networks using pre-trained, super learner, and ensemble approaches is explored. Neural networks, and in particular Convolutional Neural Networks (CNNs), are often optimized using default parameters. Neural Architecture Search (NAS) enables multiple architectures to be evaluated prior to selection of the optimal architecture. Our contribution is to develop, and make available to the community, a system that integrates open source tools for the neural architecture search (OpenNAS) of image classification models. OpenNAS takes any dataset of grayscale or RGB images and generates an optimized CNN architecture. Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and pre-trained models serve as base learners for ensembles. Meta-learner algorithms are subsequently applied to these base learners, and the ensemble performance on image classification problems is evaluated. Our results show that a stacked generalization ensemble of heterogeneous models is the most effective approach to image classification within OpenNAS.
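A minimal sketch of stacked generalization over heterogeneous base learners; simple scikit-learn classifiers stand in for the PSO-, ACO-, and pre-trained-model base learners that OpenNAS actually combines:

```python
# Sketch of stacked generalization: heterogeneous base learners feed a meta-learner.
# The real OpenNAS ensembles CNNs found by PSO/ACO and pre-trained models; the
# classifiers below are placeholders used purely for illustration.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```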


2015 ◽  
Vol 26 (2) ◽  
pp. 997-1020
Author(s):  
Marcelo Azevedo Costa ◽  
Thiago de Souza Rodrigues ◽  
André Gabriel FC da Costa ◽  
René Natowicz ◽  
Antônio Pádua Braga

This work proposes a sequential methodology for selecting variables in classification problems in which the number of predictors is much larger than the sample size. The methodology includes a Monte Carlo permutation procedure that conditionally tests the null hypothesis of no association between the outcomes and the available predictors. To improve computational efficiency, we propose a new parametric distribution, the Truncated and Zero Inflated Gumbel Distribution. The final application is to find compact classification models with improved performance for genomic data. Results using real data sets show that the proposed methodology selects compact models with optimized classification performance.
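A minimal sketch of a Monte Carlo permutation test of no association between a binary outcome and a single candidate predictor; the paper's sequential selection scheme and its Truncated and Zero Inflated Gumbel approximation of the permutation statistic are not reproduced here:

```python
# Sketch of a Monte Carlo permutation test for the null hypothesis of no
# association between a binary outcome y and a predictor x. The test statistic
# (absolute difference of group means) is an illustrative choice.
import numpy as np

def permutation_pvalue(x, y, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(x[y == 1].mean() - x[y == 0].mean())
    count = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)            # break any association under the null
        stat = abs(x[y_perm == 1].mean() - x[y_perm == 0].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=60)
x = rng.normal(size=60) + 0.8 * y              # predictor weakly associated with outcome
print("permutation p-value:", permutation_pvalue(x, y))
```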


Author(s):  
Hassan Ramchoun ◽  
Mohammed Amine Janati Idrissi ◽  
Youssef Ghanou ◽  
Mohamed Ettaouil

The multilayer perceptron has many classification and regression applications in fields such as pattern recognition, voice recognition, and general classification problems. However, the choice of architecture, and in particular the type of activation function used for each neuron, has a great impact on convergence and performance. In the present article, the authors introduce a new approach to optimize the selection of the network architecture, weights, and activation functions. To solve the resulting model, the authors use a genetic algorithm and train the network with the back-propagation method. The numerical results show the effectiveness of the proposed approach and the advantages of the new model compared with existing models in the literature.
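A minimal sketch of a genetic search over MLP hyper-parameters (hidden-layer width and activation function), with each candidate trained by back-propagation; this is an illustrative simplification, not the authors' exact formulation, which also encodes the weights in the optimization model:

```python
# Sketch of a genetic algorithm over MLP architecture choices. Each genome is
# (hidden-layer width, activation); fitness is cross-validated accuracy of a
# network trained by back-propagation (scikit-learn's MLPClassifier).
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
ACTIVATIONS = ["relu", "tanh", "logistic"]

def fitness(genome):
    hidden, act = genome
    net = MLPClassifier(hidden_layer_sizes=(hidden,), activation=act,
                        max_iter=500, random_state=0)
    return cross_val_score(net, X, y, cv=3).mean()

def mutate(genome):
    hidden, act = genome
    if random.random() < 0.5:
        hidden = max(2, hidden + random.choice([-4, 4]))
    else:
        act = random.choice(ACTIVATIONS)
    return (hidden, act)

random.seed(0)
population = [(random.randrange(4, 32), random.choice(ACTIVATIONS)) for _ in range(6)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                                   # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]
print("best architecture found:", max(population, key=fitness))
```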


Author(s):  
J. F. BALDWIN ◽  
RITA A. RIBEIRO

This paper presents a selection of alternative approaches for handling classification problems using the paradigm of case-based reasoning with fuzzy concepts. Our main concern is classification and pattern recognition queries in a fuzzy environment. An example is developed to explain the various methods, and their results are compared.
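A minimal sketch of the general paradigm, in which stored cases vote for their class with a weight given by a fuzzy similarity to the query; the membership function and voting rule here are assumptions, not one of the specific methods compared in the paper:

```python
# Sketch of a fuzzy case-based classifier: each stored case contributes to its
# class a vote weighted by a fuzzy similarity between the case and the query.
import numpy as np

def fuzzy_similarity(a, b, width=1.0):
    # Gaussian membership of the query in the fuzzy set centred on the stored case
    return np.exp(-np.sum((a - b) ** 2) / (2 * width ** 2))

def classify(query, cases, labels):
    votes = {}
    for case, label in zip(cases, labels):
        votes[label] = votes.get(label, 0.0) + fuzzy_similarity(query, case)
    return max(votes, key=votes.get)

cases = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
labels = ["A", "B", "A", "B"]
print(classify(np.array([0.15, 0.18]), cases, labels))    # expected: "A"
```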


2019 ◽  
Vol 42 ◽  
Author(s):  
Gian Domenico Iannetti ◽  
Giorgio Vallortigara

Some of the foundations of Heyes' radical reasoning seem to be based on a fractional selection of the available evidence. Using an ethological perspective, we argue against Heyes' rapid dismissal of innate cognitive instincts. Heyes' use of fMRI studies of literacy to claim that culture assembles pieces of mental technology seems to be an example of the incorrect reverse inferences and overlap theories pervasive in cognitive neuroscience.

