Predicting bankruptcy using neural networks and other classification methods: The influence of variable selection techniques on model accuracy

2010 ◽  
Vol 73 (10-12) ◽  
pp. 2047-2060 ◽  
Author(s):  
Philippe du Jardin
2014 ◽  
Vol 8 (1) ◽  
pp. 15-21
Author(s):  
V. D. Dmitrienko ◽  
A. Yu. Zakovorotnyi ◽  
S. Yu. Leonov ◽  
I. P. Khavina

New discrete adaptive resonance theory (ART) neural networks that allow solving problems with multiple solutions are developed. New training algorithms for ART neural networks are developed that prevent the degradation and reproduction of classes when training on noisy input data. The proposed learning algorithms for discrete ART networks make it possible to obtain different classifications of the input data.
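The abstract gives no equations for these networks, so the following is only a rough Python sketch of a classic discrete ART-1 classifier of the kind such work builds on; the vigilance value, the fast-learning update and the class and variable names are standard textbook choices, not details taken from the paper. The vigilance parameter governs how many classes the network creates, which is the quantity a training algorithm must control to keep noisy inputs from multiplying classes.

```python
import numpy as np

class ART1:
    """Minimal discrete ART-1 classifier (fast learning) -- illustrative sketch only."""

    def __init__(self, vigilance=0.7, beta=1.0):
        self.rho = vigilance    # vigilance: higher values create more, finer-grained classes
        self.beta = beta        # choice parameter
        self.prototypes = []    # one binary prototype vector per class

    def present(self, x):
        """Present one binary input vector and return the index of the class it joins."""
        x = np.asarray(x, dtype=bool)
        # Rank existing classes by the ART-1 choice function |x AND w_j| / (beta + |w_j|)
        order = sorted(
            range(len(self.prototypes)),
            key=lambda j: -np.count_nonzero(x & self.prototypes[j])
                          / (self.beta + np.count_nonzero(self.prototypes[j])),
        )
        for j in order:
            match = np.count_nonzero(x & self.prototypes[j]) / max(np.count_nonzero(x), 1)
            if match >= self.rho:                              # vigilance test passed: resonance
                self.prototypes[j] = x & self.prototypes[j]    # fast-learning prototype update
                return j
        self.prototypes.append(x.copy())                       # no class matches: create a new one
        return len(self.prototypes) - 1

# Example: net = ART1(vigilance=0.8); labels = [net.present(p) for p in binary_patterns]
```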


2019 ◽  
Vol 29 (2) ◽  
pp. 393-405 ◽  
Author(s):  
Magdalena Piotrowska ◽  
Gražina Korvel ◽  
Bożena Kostek ◽  
Tomasz Ciszewski ◽  
Andrzej Czyżewski

Abstract Automatic classification methods, such as artificial neural networks (ANNs), the k-nearest neighbor (kNN) and self-organizing maps (SOMs), are applied to allophone analysis based on recorded speech. A list of 650 words was created for that purpose, containing positionally and/or contextually conditioned allophones. For each word, a group of 16 native and non-native speakers was audio-video recorded, from which the speech of seven native speakers and phonology experts was selected for analysis. For the purpose of the present study, a sub-list of 103 words containing the English alveolar lateral phoneme /l/ was compiled. The list includes ‘dark’ (velarized) allophonic realizations (which occur before a consonant or at the end of a word before silence) and 52 ‘clear’ allophonic realizations (which occur before a vowel), as well as voicing variants. The recorded signals were segmented into allophones and parametrized using a set of descriptors originating from the MPEG-7 standard, plus dedicated time-based parameters as well as modified MFCC features proposed by the authors. Classification methods such as ANNs, the kNN and the SOM were employed to automatically detect the two types of allophones. Various sets of features were tested to achieve the best performance of the automatic methods. In the final experiment, a selected set of features was used for the automatic evaluation of the pronunciation of dark /l/ by non-native speakers.
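The parameter set used in the study (MPEG-7 descriptors, time-based parameters, modified MFCCs) is not reproduced here; purely as an illustration of the kNN branch of such an experiment, the sketch below scores a dark-vs-clear /l/ classifier built on plain MFCC means. The file lists, sampling rate, number of coefficients and value of k are placeholder assumptions, not settings from the paper.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def mfcc_vector(wav_path, sr=16000, n_mfcc=13):
    """Mean MFCC vector for one pre-segmented allophone recording (plain MFCCs,
    not the MPEG-7 or modified-MFCC descriptors used by the authors)."""
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def dark_clear_knn_accuracy(dark_files, clear_files, k=5):
    """5-fold cross-validated accuracy of a kNN dark-vs-clear /l/ classifier."""
    X = np.array([mfcc_vector(f) for f in list(dark_files) + list(clear_files)])
    y = np.array([0] * len(dark_files) + [1] * len(clear_files))  # 0 = dark, 1 = clear
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, X, y, cv=5).mean()

# Example (hypothetical paths):
# dark_clear_knn_accuracy(["dark_l_001.wav", "dark_l_002.wav", ...],
#                         ["clear_l_001.wav", "clear_l_002.wav", ...])
```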


2020 ◽  
pp. 1577-1597
Author(s):  
Mohammed Akour ◽  
Wasen Yahya Melhem

This article describes how classification methods for software defect prediction are widely researched, driven by the need to increase software quality and decrease testing effort. However, the findings of past research on this issue have not shown any classifier that proves to be superior to the others. Additionally, there is a lack of research studying the effect and accuracy of genetic programming on software defect prediction. To address this problem, a comparative software defect prediction experiment between genetic programming and neural networks is performed on four datasets from the NASA Metrics Data repository. Generally, an interesting degree of accuracy is observed, which shows how metric-based classification is useful. Nevertheless, this article indicates that the application of genetic programming is highly recommended because of the detailed analysis it provides, as well as an important feature of this classification method which allows viewing the impact of each attribute in the dataset.
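Neither the genetic-programming system nor the neural-network configuration used by the authors is specified in the abstract; the sketch below shows one common way such a comparison can be set up, using gplearn's SymbolicClassifier as a stand-in GP classifier and scikit-learn's MLPClassifier as the neural network. The hyperparameters, and the assumption that a NASA MDP dataset is already loaded into a feature matrix X and defect labels y, are placeholders rather than the authors' setup.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from gplearn.genetic import SymbolicClassifier  # stand-in GP classifier (assumption)

def compare_defect_predictors(X, y, cv=5):
    """Cross-validated accuracy of a GP classifier vs. a small neural network.
    X: rows = software modules, columns = static code metrics; y: 1 if defective."""
    models = {
        "genetic_programming": SymbolicClassifier(population_size=500,
                                                  generations=20,
                                                  parsimony_coefficient=0.01,
                                                  random_state=0),
        "neural_network": make_pipeline(StandardScaler(),
                                        MLPClassifier(hidden_layer_sizes=(16,),
                                                      max_iter=1000,
                                                      random_state=0)),
    }
    return {name: cross_val_score(model, X, y, cv=cv).mean()
            for name, model in models.items()}
```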


2021 ◽  
Vol 2142 (1) ◽  
pp. 012013
Author(s):  
A S Nazdryukhin ◽  
A M Fedrak ◽  
N A Radeev

Abstract This work presents the results of using self-normalizing neural networks with automatic selection of hyperparameters, TabNet and NODE to solve the problem of tabular data classification. A method for the automatic selection of hyperparameters was implemented. Testing was carried out with the open-source framework OpenML AutoML Benchmark. As part of the work, a comparative analysis was carried out against seven classification methods; experiments were carried out on 39 datasets with five methods. NODE shows the best results among the methods considered and outperformed the standard methods on four datasets.
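The abstract does not describe the network architectures or the hyperparameter-selection procedure; as a minimal sketch of the self-normalizing-network component only, the following PyTorch module combines SELU activations, LeCun-normal initialisation and AlphaDropout, the standard self-normalizing recipe. Layer widths, the dropout rate and the example feature/class counts are placeholder assumptions, not values from the paper.

```python
import torch
from torch import nn

class SelfNormalizingNet(nn.Module):
    """Minimal self-normalizing MLP for tabular classification:
    SELU activations, LeCun-normal initialisation, AlphaDropout."""

    def __init__(self, n_features, n_classes, hidden=(256, 256), dropout=0.05):
        super().__init__()
        layers, width = [], n_features
        for h in hidden:
            layers += [nn.Linear(width, h), nn.SELU(), nn.AlphaDropout(dropout)]
            width = h
        layers.append(nn.Linear(width, n_classes))
        self.net = nn.Sequential(*layers)
        for m in self.net:
            if isinstance(m, nn.Linear):
                # LeCun-normal init keeps activations self-normalizing under SELU
                nn.init.normal_(m.weight, std=m.in_features ** -0.5)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)  # raw logits; pair with nn.CrossEntropyLoss

# Example (hypothetical sizes): model = SelfNormalizingNet(n_features=54, n_classes=7)
```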

