Extending XCS with Cyclic Graphs for Scalability on Complex Boolean Problems

2017 ◽  
Vol 25 (2) ◽  
pp. 173-204 ◽  
Author(s):  
Muhammad Iqbal ◽  
Will N. Browne ◽  
Mengjie Zhang

A major research direction in the field of evolutionary machine learning is to develop a scalable classifier system to solve high-dimensional problems. Recently, work has begun on autonomously reusing learned building blocks of knowledge to scale from low-dimensional problems to high-dimensional ones. An XCS-based classifier system, known as XCSCFC, has been shown to be scalable, through the addition of expression tree–like code fragments, to a limit beyond standard learning classifier systems. XCSCFC is especially beneficial if the target problem can be divided into a hierarchy of subproblems and each of them is solvable in a bottom-up fashion. However, if the hierarchy of subproblems is too deep, then XCSCFC becomes impractical because of the computational time needed and thus eventually hits a limit in problem size. A limitation of this technique is the lack of a cyclic representation, which is inherent in finite state machines (FSMs). However, the evolution of FSMs is a hard task owing to the combinatorially large number of possible states, connections, and interactions. Usually this requires supervised learning to prune inappropriate FSMs, which for high-dimensional problems necessitates subsampling or incremental testing. To avoid these constraints, this work introduces a state-machine-based encoding scheme into XCS for the first time, termed XCSSMA. The proposed system has been tested on six complex Boolean problem domains: multiplexer, majority-on, carry, even-parity, count ones, and digital design verification problems. The proposed approach outperforms XCSCFA (an XCS that computes actions) and XCSF (an XCS that computes predictions) in three of the six problem domains, while its performance in the others is similar. In addition, XCSSMA evolved, for the first time, compact and human-readable general classifiers (i.e., solving any n-bit problem) for the even-parity and carry problem domains, demonstrating its ability to produce scalable solutions using a cyclic representation.
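To illustrate why a cyclic representation scales in this way, here is a minimal sketch (not the authors' XCSSMA implementation) of a two-state machine that solves even-parity for inputs of any length n; the same fixed machine handles every problem size, which is exactly what a general classifier needs:

```python
# Minimal sketch (not the authors' XCSSMA code) of a two-state machine
# that classifies even-parity for bit strings of any length n.
# States: 0 = even number of 1s seen so far, 1 = odd.

def parity_fsm(bits):
    """Return 1 if the bit string contains an even number of 1s, else 0."""
    state = 0                      # start: zero 1s seen, i.e. even
    for b in bits:
        state ^= b                 # a 1 toggles the state; a 0 keeps it
    return 1 if state == 0 else 0  # accept when we end in the 'even' state

# The same two-state machine handles any problem size unchanged:
assert parity_fsm([0, 0]) == 1
assert parity_fsm([1, 0, 1, 1]) == 0   # three 1s -> odd parity
assert parity_fsm([1] * 100) == 1      # scales to n = 100 with no re-learning
```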

2021 ◽  
Author(s):  
◽  
Muhammad Iqbal

Using evolutionary intelligence and machine learning techniques, a broad range of intelligent machines have been designed to perform different tasks. An intelligent machine learns by perceiving its environmental status and taking an action that maximizes its chances of success. Human beings have the ability to apply knowledge learned from a smaller problem to more complex, large-scale problems of the same or a related domain, but currently the vast majority of evolutionary machine learning techniques lack this ability. This inability to apply already learned knowledge of a domain results in consuming more than the necessary resources and time to solve complex, large-scale problems of the domain. As the problem increases in size, it becomes difficult and sometimes impractical (if not impossible) to solve, owing to the resources and time needed. Therefore, in order to scale in a problem domain, a system is needed that has the ability to reuse the learned knowledge of the domain and/or encapsulate the underlying patterns in the domain. To extract and reuse building blocks of knowledge or to encapsulate the underlying patterns in a problem domain, a rich encoding is needed, but the search space could then expand undesirably and cause bloat, e.g. as in some forms of genetic programming (GP). Learning classifier systems (LCSs) are a well-structured, evolutionary-computation-based learning technique with pressures that implicitly avoid bloat, such as fitness sharing through niche-based reproduction. The proposed thesis is that an LCS can scale to complex problems in a domain by reusing the learnt knowledge from simpler problems of the domain and/or encapsulating the underlying patterns in the domain. Wilson's XCS, a well-tested, online-learning, accuracy-based LCS model, is used to implement and test the proposed systems. To extract the reusable building blocks of knowledge, GP-tree-like code fragments are introduced, which are more than simply another representation (e.g. ternary or real-valued alphabets). This thesis is extended to capture the underlying patterns in a problem using a cyclic representation. Hard problems are used to test the newly developed scalable systems and to compare them with benchmark techniques. Specifically, this work develops four systems to improve the scalability of XCS-based classifier systems. (1) Building blocks of knowledge are extracted from smaller problems of a Boolean domain and reused in learning more complex, large-scale problems in the domain, for the first time. By utilizing the learnt knowledge from small-scale problems, the developed XCSCFC (XCS with Code-Fragment Conditions) system readily solves problems of a scale that existing LCS and GP approaches cannot, e.g. the 135-bit MUX problem. (2) The introduction of code fragments in classifier actions in XCSCFA (XCS with Code-Fragment Actions) brings the rich representation of GP, which, when coupled with the divide-and-conquer approach of LCS, successfully solves various complex, overlapping, and niche-imbalanced Boolean problems that are difficult to solve using numeric-action-based XCS. (3) The underlying patterns in a problem domain are encapsulated in classifier rules encoded by a cyclic representation. The developed XCSSMA system produces general solutions of any scale n for a number of important Boolean problems, e.g. parity problems, for the first time in the field of LCS.
(4) Optimal solutions for various real-valued problems are evolved by extending the existing real-valued XCSR system with code-fragment actions to XCSRCFA. Exploiting the combined power of GP and LCS techniques, XCSRCFA successfully learns various continuous-action and function-approximation problems that are difficult to learn using the base techniques. This research has shown that LCSs can scale to complex, large-scale problems through reusing learnt knowledge. The messy nature, dissociation of message-to-condition order, masking, feature construction, and reuse of extracted knowledge add abilities to the XCS family of LCSs. The ability to use rich encoding in antecedent GP-like code fragments or consequent cyclic representations leads to the evolution of accurate, maximally general, and compact solutions for various complex Boolean as well as real-valued problems. By effectively exploiting the combined power of GP and LCS techniques, various continuous-action and function-approximation problems are solved in a simple and straightforward manner. The analysis of the evolved rules reveals, for the first time in XCS, that no matter how specific or general the initial classifiers are, all optimal classifiers converge through a 'be specific then generalize' mechanism near the final stages of evolution. It also shows that standard XCS does not use all available information or all available genetic operators to evolve optimal rules, whereas the developed code-fragment-action-based systems effectively use figure and ground information during the training process. This work has created a platform to explore the reuse of learnt functionality, not just terminal knowledge as at present, which is needed to replicate human capabilities.
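As an illustration of the code-fragment idea described above, the following is a hedged sketch, assuming a simple tuple-based tree encoding rather than the thesis's actual data structures, of how GP-like fragments can serve as classifier conditions in an XCSCFC-style system:

```python
# Minimal sketch (assumed tuple-tree encoding, not the thesis code) of a
# GP-like code fragment used as a classifier condition: the rule matches
# an input when every fragment in the condition evaluates to 1.

OPS = {
    'AND':  lambda a, b: a & b,
    'OR':   lambda a, b: a | b,
    'NAND': lambda a, b: 1 - (a & b),
    'NOR':  lambda a, b: 1 - (a | b),
    'NOT':  lambda a: 1 - a,
}

def eval_fragment(node, x):
    """Evaluate a code-fragment tree against input bit-vector x."""
    if isinstance(node, int):          # leaf: an index into the input
        return x[node]
    op, *args = node                   # internal node: an operator
    return OPS[op](*(eval_fragment(a, x) for a in args))

def matches(condition, x):
    """A condition is a list of fragments; all must evaluate to 1."""
    return all(eval_fragment(f, x) == 1 for f in condition)

# Hypothetical example: a fragment encoding a reusable 'address decoder'
# pattern for the 6-bit multiplexer (bits 0-1 address, bits 2-5 data).
frag = ('AND', ('NOT', 0), ('AND', ('NOT', 1), 2))  # address == 00 and data bit set
print(matches([frag], [0, 0, 1, 0, 1, 1]))          # True
```

Fragments built for a small multiplexer can then appear as leaves of larger trees when tackling a bigger multiplexer, which is the knowledge-reuse mechanism the thesis exploits.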


2018 ◽  
Author(s):  
Ludwig Lausser ◽  
Florian Schmid ◽  
Lea Siegle ◽  
Rolf Hühne ◽  
Malte Buchholz ◽  
...  

The interpretability of a classification model is one of its most essential characteristics: it allows new hypotheses to be generated on the molecular background of a disease. However, it is questionable whether more complex molecular regulations can be reconstructed from such limited sets of data. To bridge the gap between complexity and interpretability, we replace the de novo reconstruction of these processes with a hybrid classification approach partially based on existing domain knowledge. Using semantic building blocks that reflect real biological processes, these models were able to construct hypotheses on the underlying genetic configuration of the analysed phenotypes. As in the building process, these hypotheses are composed of high-level, biology-based terms. The semantic information we utilise from the Gene Ontology is a vocabulary that comprises the essential processes and components of a biological system. The constructed semantic multi-classifier system consists of expert base classifiers, each of which selects the most suitable term for characterising its assigned problem. Our experiments, conducted on datasets from three distinct research fields, revealed terms with well-known associations to the analysed context. Furthermore, some of the chosen terms are not obviously related to the issue and thus lead to new hypotheses to pursue.

Author summary: Data mining strategies are designed for an unbiased de novo analysis of large sample collections and aim at the detection of frequent patterns or relationships. The gained information can then be used to characterise diagnostically relevant classes and to provide hints to the underlying mechanisms that may cause a specific phenotype or disease. However, the practical use of data mining techniques can be restricted by the available resources and might not correctly reconstruct complex relationships such as signalling pathways. To counteract this, we devised a semantic approach: a multi-classifier system which incorporates existing biological knowledge and returns interpretable models based on high-level semantic terms. As a novel feature, these models also allow for qualitative analysis and hypothesis generation on the molecular processes, and their relationships, leading to different phenotypes or diseases.
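As a rough illustration of this architecture, the sketch below (with hypothetical data and a hypothetical term-to-gene mapping, not the authors' pipeline) restricts each base classifier to the genes annotated to one term and selects the term whose restricted model cross-validates best:

```python
# Hedged sketch of a semantic base classifier: each candidate GO term
# defines a gene subset; the term whose restricted linear model
# cross-validates best becomes the interpretable label for the problem.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def best_semantic_term(X, y, term_to_genes):
    """Return (term, cv_accuracy) of the best single-term classifier."""
    scores = {}
    for term, gene_idx in term_to_genes.items():
        clf = LinearSVC()  # a simple linear base learner keeps terms comparable
        scores[term] = cross_val_score(clf, X[:, gene_idx], y, cv=5).mean()
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy stand-in: 100 samples, 50 genes, two hypothetical term annotations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = (X[:, [3, 7]].sum(axis=1) > 0).astype(int)   # signal lives in genes 3 and 7
terms = {'GO:term_A': [3, 7, 12], 'GO:term_B': [20, 21, 22]}
print(best_semantic_term(X, y, terms))           # expect 'GO:term_A' to win
```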


2019 ◽  
Author(s):  
Sean Lund ◽  
Taylor Courtney ◽  
Gavin Williams

Isoprenoids are a large class of natural products with wide-ranging applications. Synthetic biology approaches to the manufacture of isoprenoids and their new-to-nature derivatives are limited because Nature provides just two hemiterpene building blocks for isoprenoid biosynthesis. To address this limitation, artificial chemo-enzymatic pathways such as the alcohol-dependent hemiterpene (ADH) pathway leverage consecutive kinases to convert exogenous alcohols into pyrophosphates that can be coupled to downstream isoprenoid biosynthesis. To be successful, each kinase in this pathway should be permissive of a broad range of substrates. For the first time, we have probed the promiscuity of the second enzyme in the ADH pathway, isopentenyl phosphate kinase from Thermoplasma acidophilum, towards a broad range of acceptor monophosphates. We then evaluate the suitability of this enzyme for providing non-natural pyrophosphates, a critical first step in characterizing the rate-limiting steps of the artificial ADH pathway.


Organics ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 107-117
Author(s):  
Mattia Forchetta ◽  
Valeria Conte ◽  
Giulia Fiorani ◽  
Pierluca Galloni ◽  
Federica Sabuzi

Owing to the attractiveness of organic phosphonic acids and esters in the pharmacological field and in the functionalization of conductive metal oxides, the search for effective synthetic protocols is pivotal. Among them, ω-bromoalkylphosphonates are gaining particular attention because they are useful building blocks for the tailored functionalization of complex organic molecules. Hence, in this work, the Michaelis–Arbuzov reaction conditions for ω-bromoalkylphosphonates have been optimized to improve process sustainability while maintaining good yields. The synthesized ω-bromoalkylphosphonates have been successfully adopted for the synthesis of new KuQuinone phosphonate esters and, by hydrolysis, phosphonic acid KuQuinone derivatives have been obtained for the first time. Considering their high affinity for metal oxides, KuQuinones bearing phosphonic acid terminal groups are promising candidates for biomedical and photo(electro)chemical applications.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joaquin Caro-Astorga ◽  
Kenneth T. Walker ◽  
Natalia Herrera ◽  
Koon-Yang Lee ◽  
Tom Ellis

Engineered living materials (ELMs) based on bacterial cellulose (BC) offer a promising avenue for cheap-to-produce materials that can be programmed with genetically encoded functionalities. Here we explore how ELMs can be fabricated in a modular fashion from millimetre-scale biofilm spheroids grown from shaking cultures of Komagataeibacter rhaeticus. We define a reproducible protocol to produce BC spheroids with the high-yield bacterial cellulose producer K. rhaeticus and demonstrate for the first time their potential as building blocks for growing ELMs in 3D shapes. Using genetically engineered K. rhaeticus, we produce functionalized BC spheroids and use these to make and grow patterned BC-based ELMs that signal within a material and can sense and report on chemical inputs. We also investigate the use of BC spheroids as a method to regenerate damaged BC materials and as a way to fuse smaller sections of cellulose and synthetic materials into a larger piece. This work improves our understanding of BC spheroid formation and showcases their great potential for fabricating, patterning and repairing ELMs based on the promising biomaterial of bacterial cellulose.


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the lowest test error, whereas Euclidean, Canberra, and Average of (L1,L∞) lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN performed two to ten times better than er-boost and comparably with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
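For readers unfamiliar with expectiles, the following is a minimal sketch, assuming a fixed-point solver for the asymmetric least squares problem, of how a k-NN expectile predictor of this kind could work; it is not the ex-kNN package itself:

```python
# Hedged sketch (assumed implementation, not the ex-kNN package) of a
# conditional tau-expectile from the k nearest neighbours: find the k
# closest training points, then solve the asymmetric least squares
# problem on their responses by fixed-point iteration.

import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=100):
    """tau-expectile of a 1-D sample via asymmetric least squares."""
    m = y.mean()                                  # tau = 0.5 gives the mean
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)       # asymmetric weights
        m_new = (w * y).sum() / w.sum()           # weighted-mean update
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def ex_knn_predict(X_train, y_train, x, k=10, tau=0.5, p=2):
    """Predict the tau-expectile at x from its k nearest neighbours.

    p=1 uses the Manhattan metric, p=2 Euclidean; other measures such as
    Canberra could be swapped in here, as the abstract's comparison does."""
    if p == 1:
        d = np.abs(X_train - x).sum(axis=1)
    else:
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nn = np.argsort(d)[:k]
    return expectile(y_train[nn], tau)

# Toy check: with tau = 0.5 the prediction is just the k-NN mean.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 3)), rng.normal(size=200)
print(ex_knn_predict(X, y, X[0], k=15, tau=0.8))
```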


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Chenchen Huang ◽  
Wei Gong ◽  
Wenlong Fu ◽  
Dongyu Feng

Feature extraction is a crucial part of speech emotion recognition. Addressing the feature-extraction problem, this paper proposes a new method that uses deep belief networks (DBNs) to extract emotional features from the speech signal automatically. A five-layer DBN is trained to extract speech emotion features, and multiple consecutive frames are combined to form a high-dimensional feature vector. The features produced by the trained DBN are fed into a nonlinear SVM classifier, yielding a multiple-classifier speech emotion recognition system. The system reached a speech emotion recognition rate of 86.5%, which is 7% higher than the original method.
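A hedged sketch of this pipeline is shown below, substituting scikit-learn's BernoulliRBM stack for a full DBN and random vectors for real speech features; the layer sizes are illustrative and the 86.5% figure is the paper's result, not reproduced here:

```python
# Hedged sketch: stacked RBMs as an unsupervised feature extractor
# feeding a nonlinear SVM classifier, standing in for the DBN + SVM
# system described in the abstract.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipeline = Pipeline([
    ('rbm1', BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ('rbm2', BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ('svm', SVC(kernel='rbf')),               # nonlinear classifier on top
])

# Toy stand-in: 500 "frames" of 120-dim features in [0, 1], 4 emotion labels.
rng = np.random.default_rng(0)
X = rng.random((500, 120))
y = rng.integers(0, 4, size=500)

# Consecutive frames could be concatenated here to form the high-dimensional
# input the abstract mentions, e.g. np.hstack([X[:-2], X[1:-1], X[2:]]).
pipeline.fit(X, y)
print('train accuracy:', pipeline.score(X, y))
```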


Author(s):  
M Shariyat

Based on the idea of double superposition, an accurate high-order global–local theory is proposed for bending and vibration analysis of cylindrical shells subjected to thermo-mechanical loads, for the first time. The theory has many novelties, among them: (1) less computational time due to the use of the global–local technique and matrix formulations; (2) satisfaction of the complete kinematic and transverse stress continuity conditions at the layer interfaces under thermo-mechanical loads; (3) consideration of the transverse flexibility; (4) release of the Love–Timoshenko assumption; and (5) capability of investigating local phenomena. Various comparative examples are included to validate the theory and to examine its accuracy and efficiency.


2022 ◽  
Author(s):  
Zhi-Gang Yin ◽  
Xiong-Wei Liu ◽  
Hui-Juan Wang ◽  
Min Zhang ◽  
Xiong-Li Liu ◽  
...  

A highly efficient synthesis of structurally diverse ortho-acylphenol–diindolylmethane hybrids 3, using carboxylic acid-activated chromones as versatile synthetic building blocks, is reported here for the first time; the process proceeds through a 1,4-nucleophilic addition followed by decarboxylation and pyrone ring opening.

