COMPENSATION COMPETITIVE LEARNING

Author(s):  
Zhi-Qiang Liu
Yajun Zhang

In competitive learning, setting the initial number of prototypes is a difficult task, as the number of clusters in the input data is usually not known a priori. The behavior and performance of competitive algorithms are very sensitive to the initial locations and number of the prototypes. In this paper, after investigating several important competitive learning paradigms, we present compensation techniques for overcoming these problems. Our experimental results show that competition with compensation can improve the performance of the learning algorithm.
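
The abstract does not spell out the compensation rules, but the failure mode it targets, prototypes stranded by a poor initialization, is often handled with a frequency-sensitive ("conscience") term. Below is a minimal Python sketch in that spirit; the function name, parameters, and the specific update rule are our assumptions, not the paper's method:

```python
import numpy as np

def compensated_competitive_learning(X, k, lr=0.05, epochs=20, seed=0):
    """Competitive learning with a frequency-sensitive term --
    one plausible form of 'compensation' (an assumption; the
    paper's exact rules are not given in the abstract)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].copy()  # prototypes
    wins = np.ones(k)                                        # win counts
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(W - x, axis=1)
            # Weight distances by relative win frequency so that
            # seldom-winning prototypes still get updated, which
            # compensates for unlucky initial prototype locations.
            j = np.argmin(d * wins / wins.sum())
            W[j] += lr * (x - W[j])  # move the winner toward the input
            wins[j] += 1
    return W
```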

2006
Vol 19 (2)
pp. 261-269
Author(s):  
Georgeta Budura
Corina Botoca
Nicolae Miclău

This paper presents and discusses several competitive learning algorithms for data clustering. A new competitive learning algorithm, the dynamically penalized rival competitive learning algorithm (DPRCL), is introduced and studied. It is a variant of the rival penalized competitive learning algorithm [1], and it performs appropriate clustering without knowing the number of clusters by automatically driving extra seed points far away from the input data set. It does not suffer from the 'dead units' problem. Simulation results, obtained under different conditions, show that the new DPRCL algorithm performs better than other competitive algorithms.
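
DPRCL is a variant of rival penalized competitive learning (RPCL), in which the winner is attracted to the input while the runner-up (rival) is pushed away, so surplus seed points drift out of the data set. A minimal sketch of the classic RPCL update follows; the dynamic penalty schedule that distinguishes DPRCL is not detailed in the abstract and is therefore omitted:

```python
import numpy as np

def rpcl(X, k_max, lr_win=0.05, lr_rival=0.002, epochs=20, seed=0):
    """Classic RPCL: attract the winner, repel the rival. With
    k_max larger than the true cluster count, extra prototypes
    are driven away from the data, so the surviving prototypes
    indicate the number of clusters."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k_max, replace=False)].copy()
    wins = np.ones(k_max)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # Frequency-weighted squared distances pick winner and rival.
            d = wins / wins.sum() * np.linalg.norm(W - x, axis=1) ** 2
            win, rival = np.argsort(d)[:2]
            W[win] += lr_win * (x - W[win])        # attract the winner
            W[rival] -= lr_rival * (x - W[rival])  # penalize the rival
            wins[win] += 1
    return W
```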


Author(s):  
Michiharu Maeda
Noritaka Shigei
Hiromi Miyajima
Kenichi Suzaki
...

Two reduction techniques in competitive learning, founded on distortion standards, are discussed from the viewpoint of generating necessary and appropriate reference vectors when their number is predetermined. The first approach is termed the segmental reduction and competitive learning algorithm and proceeds as follows: first, numerous reference vectors are prepared and trained under competitive learning; next, reference vectors are sequentially eliminated down to their prespecified number based on the partition error criterion. The second approach is termed the general reduction and competitive learning algorithm and proceeds in the same way, except that reference vectors are sequentially erased based on the average distortion criterion. Experimental results demonstrate that our approaches outperform conventional techniques in terms of average distortion. The two approaches are applied to image coding to determine their feasibility in terms of quality and computation time.
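
Both approaches share one skeleton: train an over-complete set of reference vectors, then delete vectors one at a time until the prespecified number remains. Below is a sketch of the elimination stage under the average distortion criterion; the greedy deletion rule is our reading of the abstract, and the segmental variant would substitute a partition-error test:

```python
import numpy as np

def avg_distortion(X, W):
    # Mean squared distance from each sample to its nearest reference vector.
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).mean()

def reduce_vectors(X, W, k_target):
    """Greedily erase the reference vector whose removal raises
    average distortion the least, until k_target vectors remain.
    A sketch; the paper's exact criterion may differ."""
    W = np.asarray(W, dtype=float)
    while len(W) > k_target:
        costs = [avg_distortion(X, np.delete(W, i, axis=0))
                 for i in range(len(W))]
        W = np.delete(W, int(np.argmin(costs)), axis=0)
    return W
```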


Author(s):  
Salima Ouadfel
Mohamed Batouche
Abdlemalik Ahmed-Taleb

To perform clustering when the number of clusters is not known a priori, the authors propose in this chapter a novel automatic clustering algorithm, ACPSO, based on the particle swarm optimization (PSO) algorithm. ACPSO can partition images into compact and well-separated clusters without any prior knowledge of the actual number of clusters. It uses a novel representation scheme for the search variables to determine the optimal number of clusters. The partition carried by each particle of the swarm evolves through operators that dynamically reduce the number of naturally occurring clusters in the image and refine the cluster centers. Experimental results on real images demonstrate the effectiveness of the proposed approach.
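
The key device in such automatic clustering is a particle encoding that lets the swarm decide how many clusters are active. The chapter's exact encoding and evolving operators are not given in the abstract, so the sketch below uses a common assumption: each particle carries k_max candidate centers plus activation flags, and only active centers are scored.

```python
import numpy as np

def fitness(X, particle, k_max, dim, thresh=0.5):
    """Decode a particle into its active centers and score the
    partition (mean distance to the nearest active center; a
    cluster-validity index could be substituted)."""
    flags, centers = particle[:k_max], particle[k_max:].reshape(k_max, dim)
    active = centers[flags > thresh]
    if len(active) < 2:                 # require at least two clusters
        return np.inf
    d = np.linalg.norm(X[:, None] - active[None], axis=2)
    return d.min(axis=1).mean()

def pso_clustering(X, k_max=10, n_particles=20, iters=100, seed=0):
    """A hedged sketch of ACPSO-style automatic clustering: standard
    global-best PSO over the flags+centers encoding above."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    n = k_max + k_max * dim
    pos = rng.uniform(size=(n_particles, n))
    pos[:, k_max:] = rng.uniform(np.tile(X.min(0), k_max),
                                 np.tile(X.max(0), k_max),
                                 size=(n_particles, k_max * dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([fitness(X, p, k_max, dim) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([fitness(X, p, k_max, dim) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest  # decode with fitness()'s scheme to obtain the centers
```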


2011
Vol 1 (3)
pp. 1-14
Author(s):  
Wan Maseri Binti Wan Mohd
A.H. Beg
Tutut Herawan
A. Noraziah
K. F. Rabbi

K-means is an unsupervised, partitioning clustering algorithm. It is popular and widely used for its simplicity and speed. K-means produces a number of separate, flat (non-hierarchical) clusters and is well suited to generating globular clusters. The main drawback of the k-means algorithm is that the user must specify the number of clusters in advance. This paper presents an improved version of the K-means algorithm that auto-generates an initial number of clusters (k), together with a new approach to defining the initial centroids, for an effective and efficient clustering process. The underlying mechanism has been analyzed and experimented with. The experimental results show that the number of iterations is reduced by 50% and that the run time is lower and nearly constant, governed by the maximum distance between data points regardless of how many data points there are.
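
The abstract derives both k and the initial centroids from the maximum distance between data points. One way to realize this, with an assumed stopping rule (the paper's exact rule may differ), is to keep promoting the point farthest from the current centroids until that distance drops below a fraction of the overall spread:

```python
import numpy as np

def init_centroids(X, spread=0.25):
    """Grow the centroid set by repeatedly adding the point farthest
    from all current centroids; stop once that distance falls below
    spread * (maximum pairwise distance). Returns the centroids, so
    k = len(centroids) is determined automatically."""
    pairwise = np.linalg.norm(X[:, None] - X[None], axis=2)
    cutoff = spread * pairwise.max()        # scale-aware stopping rule
    centroids = [X[0]]
    while True:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids],
                   axis=0)
        far = int(d.argmax())
        if d[far] < cutoff:
            break
        centroids.append(X[far])
    return np.asarray(centroids)
```

The resulting centroids (and the implied k) can then seed a standard k-means run.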


Author(s):  
Sotetsu Suzugamine
Takeru Aoki
Keiki Takadama
Hiroyuki Sato

The cortical learning algorithm (CLA) is a time-series prediction algorithm modeled on the human neocortex. CLA uses multiple columns to represent an input value at each timestep, and each column has multiple cells to represent the time-series context of the input. In the conventional CLA, the numbers of columns and cells are user-defined parameters. These parameters depend on the input data, which can be unknown before learning. To avoid having to set these parameters beforehand, in this work we propose a self-structured CLA that dynamically adjusts the numbers of columns and cells according to the input data. Experimental results using time-series test inputs of a sine wave, a combined sine wave, and logistic map data demonstrate that the proposed self-structured algorithm can dynamically adjust the numbers of columns and cells depending on the input data, and that its prediction accuracy is higher than those of conventional long short-term memory (LSTM) networks and of CLAs with various fixed numbers of columns and cells. Furthermore, experimental results on a multistep prediction of real-world power consumption show that the proposed self-structured CLA achieves a higher prediction accuracy than the conventional LSTM.
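
A full CLA is beyond a short sketch, but the self-structuring principle, allocating columns on demand rather than fixing their number up front, can be shown in a few lines. Everything here (the class, the bucket encoding, the resolution parameter) is an illustrative assumption, not the paper's mechanism:

```python
class GrowingColumns:
    """Toy illustration of self-structuring: columns are value buckets
    created on demand, so their number adapts to the observed input
    range instead of being a user-defined parameter."""
    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.columns = {}                    # bucket index -> column id

    def encode(self, value):
        bucket = round(value / self.resolution)
        if bucket not in self.columns:       # grow a new column
            self.columns[bucket] = len(self.columns)
        return self.columns[bucket]
```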


2014
Vol 45 (3)
pp. 239-245
Author(s):  
Robert J. Calin-Jageman
Tracy L. Caldwell

A recent series of experiments suggests that fostering superstitions can substantially improve performance on a variety of motor and cognitive tasks (Damisch, Stoberock, & Mussweiler, 2010). We conducted two high-powered and precise replications of one of these experiments, examining whether telling participants they had a lucky golf ball could improve their performance on a 10-shot golf task relative to controls. We found that the effect of superstition on performance is elusive: participants told they had a lucky ball performed almost identically to controls. Our failure to replicate the target study was not due to lack of impact, lack of statistical power, differences in task difficulty, or differences in participant belief in luck. A meta-analysis indicates significant heterogeneity in the effect of superstition on performance. This could be due to an unknown moderator, but no effect was observed among the studies with the strongest research designs (e.g., high power, a priori sampling plan).


2013
Vol 1 (3)
pp. 48-65
Author(s):  
Yuting Chen

A concurrent program is intuitively associated with probability: its executions can produce nondeterministic execution paths due to the interleavings of threads, while some paths are executed more frequently than others. Exploring the probabilities of the execution paths is expected to help engineers or compilers, either at coding time or at compile time, optimize the hottest paths. However, it is not easy to analyze the probabilities of a concurrent program statically, because the scheduling of its threads usually depends on the operating system and hardware (e.g., processor) on which the program is executed, which may vary from machine to machine. In this paper, the authors propose a platform-independent approach, called ProbPP, to analyzing the probabilities of the execution paths of multithreaded programs. The main idea of ProbPP is to compute path probabilities from two kinds of probabilities: primitive dependent probabilities (PDPs), representing the control-dependent probabilities among program statements, and thread execution probabilities (TEPs), representing the probabilities of threads being scheduled for execution. The authors have also conducted two preliminary experiments to evaluate the effectiveness and performance of ProbPP, and the experimental results show that ProbPP can provide engineers with acceptable accuracy.
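
As described, a path's probability factors into PDPs at branch points and TEPs at scheduling points. A toy sketch under an independence assumption; the function name and the uniform two-thread TEP are ours, not the paper's:

```python
from math import prod

def path_probability(branch_probs, schedule, tep):
    """Probability of one execution path: the product of the
    control-dependence probabilities (PDPs) along its branches
    times the probability of its thread schedule (TEPs)."""
    return prod(branch_probs) * prod(tep[t] for t in schedule)

# A path taking a 0.9-likely branch then a 0.5-likely branch,
# scheduled t1, t2, t1 under a uniform two-thread TEP:
p = path_probability([0.9, 0.5], ["t1", "t2", "t1"],
                     {"t1": 0.5, "t2": 0.5})
print(p)  # 0.9 * 0.5 * 0.5**3 = 0.05625
```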


Friction
2021
Author(s):  
Vigneashwara Pandiyan
Josef Prost
Georg Vorlaufer
Markus Varga
Kilian Wasmer

Functional surfaces in relative contact and motion are prone to wear and tear, resulting in loss of efficiency and performance of the workpieces/machines. Wear occurs in the form of adhesion, abrasion, scuffing, galling, and scoring between contacts; the rate of wear depends primarily on the physical properties of the surfaces and the surrounding environment. Monitoring the integrity of surfaces by offline inspections leads to significant wasted machine time. A potential alternative to the offline inspection currently practiced in industry is the analysis of sensor signatures capable of capturing the wear state and correlating it with the wear phenomenon, followed by in situ classification using a state-of-the-art machine learning (ML) algorithm. Although this technique is better than offline inspection, it has inherent disadvantages for training the ML models. Ideally, supervised training of ML models requires the classes in the classification dataset to be of equal weight to avoid bias. Collecting such a dataset is very cumbersome and expensive in practice, since in real industrial applications the malfunction period is minimal compared to normal operation. Furthermore, classification models cannot separate new, unfamiliar wear phenomena from the normal regime. As a promising alternative, in this work we propose a methodology able to differentiate the abnormal regimes, i.e., wear phenomenon regimes, from the normal regime. This is done by familiarizing the ML algorithms only with the distribution of the acoustic emission (AE) signals, captured using a microphone, that correspond to the normal regime. As a result, the ML algorithms can detect whether a new, unseen signal overlaps the learnt distributions. To achieve this goal, a generative convolutional neural network (CNN) architecture based on a variational autoencoder (VAE) is built and trained. During validation of the proposed CNN architecture, we identified acoustic signals corresponding to the normal and abnormal wear regimes with accuracies of 97% and 80%, respectively. Hence, our approach shows very promising results for in situ, real-time condition monitoring, and even wear prediction, in tribological applications.
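
The abstract specifies a generative CNN built on a variational autoencoder, trained only on normal-regime AE signals, with unseen signals flagged by how poorly they fit the learnt distribution. A minimal PyTorch sketch of that pattern follows; the layer sizes, window length, and reconstruction-error score are our assumptions, as the architecture details are not in the abstract:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal 1-D convolutional VAE for fixed-length AE windows."""
    def __init__(self, n=1024, z=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=4, padding=4), nn.ReLU(),
            nn.Flatten())
        h = 32 * (n // 16)
        self.mu, self.logvar = nn.Linear(h, z), nn.Linear(h, z)
        self.dec = nn.Sequential(
            nn.Linear(z, h), nn.Unflatten(1, (32, n // 16)),
            nn.ConvTranspose1d(32, 16, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 8, stride=4, padding=2))

    def forward(self, x):                    # x: (batch, 1, n)
        e = self.enc(x)
        mu, logvar = self.mu(e), self.logvar(e)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(zs), mu, logvar

def anomaly_score(model, x):
    """Reconstruction error as a novelty score: trained only on the
    normal regime, the VAE reconstructs unfamiliar (wear) signals
    poorly, so a high score flags an abnormal regime."""
    with torch.no_grad():
        recon, _, _ = model(x)
        return ((x - recon) ** 2).mean(dim=(1, 2))
```

A threshold on this score, calibrated on held-out normal-regime signals, then separates the normal regime from the wear regimes.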


Genes
2021
Vol 12 (4)
pp. 527
Author(s):  
Eran Elhaik
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.

