Smart Contact Tracing and Classifier System for Covid-19 Cases

The growing body of evidence on the recent spread of COVID-19 has shown that curbing the rate at which infected individuals expose uninfected individuals is a pressing challenge, and it demands a smart method for COVID-19 contact tracing. This paper reviews and analyses the available contact tracing models, the contact tracing applications used by 36 countries together with their underlying classifier systems and techniques, the machine learning classifier methods applied to COVID-19 contact tracing, and the ways in which these classifiers are evaluated. The incremental method was adopted because it produces a rule set that is built step by step and continually updated. Three categories of learning classifier systems were also studied, and the combination of smartphone Bluetooth Low Energy (BLE) and the Michigan learning classifier system is recommended: BLE offers short-range communication that is available regardless of operating system, and the Michigan approach classifies quickly against its set of rules.
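To make the recommendation concrete, the following is a minimal sketch (not taken from the paper) of how BLE proximity events might be classified into contact-risk categories by a small, Michigan-style population of independent if-then rules. The RSSI thresholds, durations, and labels are illustrative assumptions; a real Michigan system would also track a fitness value per rule and update it online.

```python
# Illustrative sketch only: rule-based classification of BLE proximity events.
# Thresholds and labels are assumptions, not values from the reviewed paper.

RULES = [
    # (condition over an event dict, label) - most specific rules first
    (lambda e: e["rssi"] > -60 and e["minutes"] >= 15, "high-risk contact"),
    (lambda e: e["rssi"] > -75 and e["minutes"] >= 15, "medium-risk contact"),
    (lambda e: e["rssi"] > -75,                         "brief contact"),
    (lambda e: True,                                    "no significant contact"),
]

def classify(event):
    """Return the label of the first (most specific) matching rule."""
    for condition, label in RULES:
        if condition(event):
            return label

print(classify({"rssi": -55, "minutes": 20}))   # high-risk contact
print(classify({"rssi": -80, "minutes": 5}))    # no significant contact
```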

2009 ◽  
Vol 17 (3) ◽  
pp. 307-342 ◽  
Author(s):  
Jaume Bacardit ◽  
Natalio Krasnogor

In this paper we empirically evaluate several local search (LS) mechanisms that heuristically edit classification rules and rule sets to improve their performance. Two kinds of operators are studied, (1) rule-wise operators, which edit individual rules, and (2) a rule set-wise operator, which takes the rules from N parents (N ≥ 2) to generate a new offspring, selecting the minimum subset of candidate rules that obtains maximum training accuracy. Moreover, various ways of integrating these operators within the evolutionary cycle of learning classifier systems are studied. The combinations of LS operators and policies are integrated in a Pittsburgh approach framework that we call MPLCS for memetic Pittsburgh learning classifier system. MPLCS is systematically evaluated using various metrics. Several datasets were employed with the objective of identifying which combinations of operators and policies scale well, are robust to noise, generate compact solutions, and use the least amount of computational resources to solve the problems.
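A minimal sketch of the rule set-wise idea follows, under assumptions of my own: rules are (condition, class) pairs evaluated as a decision list with a default class, and the "minimum subset with maximum training accuracy" is approximated by greedy forward selection. MPLCS's exact procedure may differ.

```python
# Sketch of a rule set-wise operator: pool the rules of N parent rule sets and
# greedily keep the smallest subset that maximises training accuracy.
# Rule format, greedy selection, and the default class are assumptions.

def accuracy(rule_list, data, default=0):
    correct = 0
    for x, y in data:
        pred = next((cls for cond, cls in rule_list if cond(x)), default)
        correct += (pred == y)
    return correct / len(data)

def rule_set_wise_crossover(parents, data):
    pool = [r for parent in parents for r in parent]   # candidate rules from N parents
    child, best = [], accuracy([], data)
    while True:
        gains = [(accuracy(child + [r], data), r) for r in pool if r not in child]
        if not gains:
            break
        acc, r = max(gains, key=lambda g: g[0])
        if acc <= best:             # stop when no remaining rule improves accuracy
            break
        child, best = child + [r], acc
    return child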


1994 ◽  
Vol 2 (1) ◽  
pp. 19-36 ◽  
Author(s):  
Robert E. Smith ◽  
H. Brown Cribbs

This paper suggests a simple analogy between learning classifier systems (LCSs) and neural networks (NNs). By clarifying the relationship between LCSs and NNs, the paper indicates how techniques from one can be utilized in the other. The paper points out that the primary distinguishing characteristic of the LCS is its use of a co-adaptive genetic algorithm (GA), where the end product of evolution is a diverse population of individuals that cooperate to perform useful computation. This stands in contrast to typical GA/NN schemes, where a population of networks is employed to evolve a single, optimized network. To fully illustrate the LCS/NN analogy used in this paper, an LCS-like NN is implemented and tested. The test is constructed to run parallel to a similar GA/NN study that did not employ a co-adaptive GA. The test illustrates the LCS/NN analogy and suggests an interesting new method for applying GAs in NNs. Final comments discuss extensions of this work and suggest how LCS and NN studies can further benefit each other.
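The fitness-assignment contrast described above can be illustrated with a short sketch (my own simplification, not the paper's implementation): in a typical GA/NN scheme an individual is scored on the whole task, whereas in a co-adaptive, LCS-like scheme each case's payoff is shared among the individuals that respond to it, which rewards specialists and pushes the population toward a cooperative division of labour.

```python
# Illustrative fitness assignment: individual selection vs. co-adaptive,
# shared payoff. The functions `score` and `matches` are assumed callbacks.

def ga_nn_fitness(individual, cases, score):
    # Typical GA/NN scheme: fitness = this one solution's performance on everything.
    return sum(score(individual, c) for c in cases)

def co_adaptive_fitness(population, cases, matches, score):
    # Co-adaptive scheme: each case's payoff is split among the individuals
    # that match it, so coverage of under-served cases is rewarded.
    fitness = {id(ind): 0.0 for ind in population}
    for c in cases:
        matched = [ind for ind in population if matches(ind, c)]
        if not matched:
            continue
        for ind in matched:
            fitness[id(ind)] += score(ind, c) / len(matched)
    return fitness
```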


Author(s):  
Maciej Troć ◽  
Olgierd Unold

Self-adaptation of parameters in a learning classifier system ensemble machine

Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem of EAs. This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt the parameters used by its components. A self-adaptive ensemble machine consists of simultaneously working extended classifier systems (XCSs). The proposed ensemble machine may be treated as a meta classifier system. A new self-adaptive XCS-based ensemble machine was compared with two other XCS-based ensembles on one-step binary problems: Multiplexer, One Counts, Hidden Parity, and randomly generated Boolean functions, in noisy versions as well. Results of the experiments have shown the ability of the model to adapt the mutation rate and the tournament size. The results are analyzed in detail.
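A brief sketch of the self-adaptation idea follows, in the form commonly used in evolutionary algorithms (the XCS-specific mechanism in the paper may differ): each individual carries its own mutation rate as a strategy parameter, and that rate is itself perturbed before being applied, so well-performing rates spread through the population along with the rules they produce.

```python
# Illustrative self-adaptive mutation: the step size, bounds, and bit-string
# representation are assumptions, not the paper's settings.
import random

def self_adaptive_mutate(genome, mu, rate_step=0.1, mu_min=0.001, mu_max=0.5):
    # First perturb the strategy parameter (the mutation rate) itself...
    mu = min(mu_max, max(mu_min, mu * (1 + random.uniform(-rate_step, rate_step))))
    # ...then mutate the object variables (here, a bit string) at that rate.
    child = [1 - bit if random.random() < mu else bit for bit in genome]
    return child, mu

genome, mu = [0, 1, 1, 0, 1, 0], 0.05
for _ in range(3):
    genome, mu = self_adaptive_mutate(genome, mu)
    print(genome, round(mu, 4))
```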


Author(s):  
Atsushi Wada ◽  
Keiki Takadama

Learning Classifier Systems (LCSs) are rule-based adaptive systems that have both Reinforcement Learning (RL) and rule-discovery mechanisms for effective and practical online learning. An analysis of the reinforcement process of XCS, one of the currently mainstream LCSs, is performed from the perspective of RL. Upon comparing XCS's update method with gradient-descent-based parameter update in RL, differences are found in the following elements: (1) the residual term, (2) the gradient term, and (3) the payoff definition. All possible combinations of the variants in each element are implemented and tested on multi-step benchmark problems. This revealed that only a few specific combinations work effectively with XCS's accuracy-based rule-discovery process, while the pure gradient-descent-based update showed the worst performance.
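The two update flavours being compared can be sketched as follows. This is a deliberate simplification under my own assumptions: the real XCS update also involves the MAM technique, prediction error, and fitness updates not shown here, and the gradient term is illustrated as the rule's share of fitness in the action set, in the spirit of the gradient-descent variants studied for XCS.

```python
# Simplified contrast between an XCS-style prediction update and a
# gradient-descent-style update; forms and parameters are illustrative.

def xcs_update(prediction, payoff, beta=0.2):
    # Residual-only update: move the rule's prediction toward the payoff P.
    return prediction + beta * (payoff - prediction)

def gradient_update(prediction, payoff, fitness, fitness_sum, beta=0.2):
    # Gradient-descent-style update: scale the correction by a gradient term,
    # here the rule's fraction of total fitness in the action set.
    grad = fitness / fitness_sum
    return prediction + beta * (payoff - prediction) * grad

def payoff(reward, next_max_prediction, gamma=0.95):
    # Multi-step payoff definition, Q-learning-like: immediate reward plus
    # the discounted best prediction of the next action set.
    return reward + gamma * next_max_prediction
```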


2013 ◽  
Vol 21 (3) ◽  
pp. 361-387 ◽  
Author(s):  
Richard J. Preen ◽  
Larry Bull

A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to artificial neural networks. This paper presents results from an investigation into using a temporally dynamic symbolic representation within the XCSF learning classifier system. In particular, dynamical arithmetic networks are used to represent the traditional condition-action production system rules to solve continuous-valued reinforcement learning problems and to perform symbolic regression, finding competitive performance with traditional genetic programming on a number of composite polynomial tasks. In addition, the network outputs are later repeatedly sampled at varying temporal intervals to perform multistep-ahead predictions of a financial time series.
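As a loose illustration of the "repeatedly sampled at varying temporal intervals" idea (my own toy, not the paper's dynamical arithmetic network encoding), the sketch below updates a small discrete-time arithmetic network synchronously and reads its output node after different numbers of cycles to obtain predictions at different horizons.

```python
# Toy discrete-time arithmetic network; node functions, wiring, and the
# sampling scheme are assumptions for illustration only.

def step(state, nodes):
    # Each node recomputes its value from the current values of its inputs.
    return [fn([state[i] for i in inputs]) for fn, inputs in nodes]

def sample(initial_state, nodes, horizons, output=0):
    state, predictions = list(initial_state), {}
    for t in range(1, max(horizons) + 1):
        state = step(state, nodes)
        if t in horizons:
            predictions[t] = state[output]   # prediction at horizon t
    return predictions

# Example network: node 0 averages nodes 0 and 1; node 1 decays toward 0.
nodes = [
    (lambda xs: 0.5 * (xs[0] + xs[1]), (0, 1)),
    (lambda xs: 0.9 * xs[0],           (1,)),
]
print(sample([1.0, 1.0], nodes, horizons={1, 5, 10}))
```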


2007 ◽  
Vol 19 (4) ◽  
pp. 321-337 ◽  
Author(s):  
Yang Gao ◽  
Joshua Zhexue Huang ◽  
Lei Wu

2002 ◽  
Vol 10 (2) ◽  
pp. 185-205 ◽  
Author(s):  
Larry Bull ◽  
Jacob Hurst

Learning classifier systems traditionally use genetic algorithms to facilitate rule discovery, where rule fitness is payoff based. Current research has shifted to the use of accuracy-based fitness. This paper re-examines the use of a particular payoff-based learning classifier system—ZCS. By using simple difference equation models of ZCS, we show that this system is capable of optimal performance subject to appropriate parameter settings. This is demonstrated for both single- and multistep tasks. Optimal performance of ZCS in well-known, multistep maze tasks is then presented to support the findings from the models.
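In the spirit of the simple difference-equation modelling described above, the sketch below iterates an expected-strength update for a payoff-based rule that is rewarded with some probability each cycle and reports where the strength settles. The form and parameters are illustrative assumptions, not the paper's equations for ZCS.

```python
# Toy difference-equation model of a payoff-based rule's expected strength.
# beta, R, and the update form are assumptions for illustration.

def iterate_strength(p, R, beta=0.2, s0=0.0, cycles=200):
    s = s0
    for _ in range(cycles):
        # Expected one-cycle change: move toward R when rewarded (prob. p),
        # toward 0 otherwise.
        s = s + beta * (p * (R - s) + (1 - p) * (0 - s))
    return s

# A rule rewarded every cycle settles near R; a rarely rewarded rule stays low.
print(iterate_strength(p=1.0, R=1000))   # ~1000
print(iterate_strength(p=0.25, R=1000))  # ~250
```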


2007 ◽  
Vol 13 (1) ◽  
pp. 69-86 ◽  
Author(s):  
Matthew Studley ◽  
Larry Bull

We investigate the performance of a learning classifier system in some simple multi-objective, multi-step maze problems, using both random and biased action-selection policies for exploration. Results show that the choice of action-selection policy can significantly affect the performance of the system in such environments. Further, this effect is directly related to population size, and we relate this finding to recent theoretical studies of learning classifier systems in single-step problems.
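The two exploration policies contrasted above can be sketched in a standard form (the paper's exact scheme may differ): pure random action selection versus a biased policy that usually exploits the action with the highest system prediction and only sometimes explores.

```python
# Illustrative random vs. biased action selection; the exploration probability
# and prediction values are assumptions.
import random

def random_policy(predictions):
    return random.choice(list(predictions))

def biased_policy(predictions, explore_prob=0.3):
    if random.random() < explore_prob:
        return random.choice(list(predictions))        # explore
    return max(predictions, key=predictions.get)       # exploit best prediction

predictions = {"north": 120.0, "south": 40.0, "east": 880.0, "west": 15.0}
print(random_policy(predictions), biased_policy(predictions))
```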


2003 ◽  
Vol 11 (3) ◽  
pp. 279-298 ◽  
Author(s):  
Xavier Llorà ◽  
David E. Goldberg

This paper analyzes the impact of using noisy data sets in Pittsburgh-style learning classifier systems. The study was done using a particular kind of learning classifier system based on multiobjective selection. Our goal was to characterize the behavior of this kind of algorithm when dealing with noisy domains. For this reason, we developed a theoretical model for predicting the minimal achievable error in noisy domains. Combining this theoretical model for crisp learners with graphical representations of the hypotheses evolved through multiobjective techniques, we are able to bound the behavior of a learning classifier system. This kind of modeling lets us identify relevant characteristics of the evolved hypotheses, such as overfitting conditions that lead to hypotheses that generalize the concept to be learned poorly.
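The intuition behind a floor on achievable error can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's model): if each training label is flipped with probability p < 0.5, even a crisp learner that recovers the true concept exactly still disagrees with a fraction p of the noisy labels.

```python
# Toy illustration of the minimal-error floor under label-flip noise.
import random

def observed_error_of_perfect_learner(n=100_000, p=0.1):
    disagreements = 0
    for _ in range(n):
        true_label = random.randint(0, 1)
        noisy_label = 1 - true_label if random.random() < p else true_label
        prediction = true_label              # the "perfect" crisp hypothesis
        disagreements += (prediction != noisy_label)
    return disagreements / n

print(observed_error_of_perfect_learner(p=0.10))   # close to 0.10
```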


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Matthew R. Karlsen ◽  
Sotiris K. Moschoyiannis ◽  
Vlad B. Georgiev

Random Boolean Networks (RBNs) are an arguably simple model which can be used to express rather complex behaviour, and have been applied in various domains. RBNs may be controlled using rule-based machine learning, specifically through the use of a learning classifier system (LCS) – an eXtended Classifier System (XCS) can evolve a set of condition-action rules that direct an RBN from any state to a target state (attractor). However, the rules evolved by XCS may not be optimal, in terms of minimising the total cost along the paths used to direct the network from any state to a specified attractor. In this paper, we present an algorithm for uncovering the optimal set of control rules for controlling random Boolean networks. We assign relative costs for interventions and ‘natural’ steps. We then compare the performance of this optimal rule calculator algorithm (ORC) and the XCS variant of learning classifier systems. We find that the rules evolved by XCS are not optimal in terms of total cost. The results provide a benchmark for future improvement.
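The cost-minimisation idea can be sketched as a shortest-path search over the network's state space (an illustration under assumed costs and a single-bit intervention model, not the paper's ORC algorithm): each state is a graph node, the 'natural' synchronous update gives a cheap edge, single-node interventions give more expensive edges, and Dijkstra's algorithm returns the minimum total cost to the target state.

```python
# Illustrative minimum-cost control of a tiny Boolean network via Dijkstra.
# Network functions, intervention model, and costs are assumptions.
import heapq

def natural_step(state, functions):
    return tuple(f(state) for f in functions)

def min_cost_to_target(start, target, functions, c_natural=1, c_intervene=5):
    frontier, seen = [(0, start)], set()
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == target:
            return cost
        if state in seen:
            continue
        seen.add(state)
        # 'natural' step: let the network update synchronously
        heapq.heappush(frontier, (cost + c_natural, natural_step(state, functions)))
        # interventions: flip a single node's state
        for i in range(len(state)):
            flipped = state[:i] + (1 - state[i],) + state[i + 1:]
            heapq.heappush(frontier, (cost + c_intervene, flipped))
    return None

# Tiny 3-node network: x0' = x1 AND x2, x1' = NOT x0, x2' = x0 OR x1.
functions = [
    lambda s: s[1] & s[2],
    lambda s: 1 - s[0],
    lambda s: s[0] | s[1],
]
print(min_cost_to_target((0, 0, 0), (1, 1, 1), functions))   # 3 natural steps
```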

