Rule Optimization
Recently Published Documents


TOTAL DOCUMENTS: 47 (last five years: 12)

H-INDEX: 7 (last five years: 1)

2022, Vol. 54(8), pp. 1-35
Author(s): Akbar Telikani, Amirhessam Tahmassebi, Wolfgang Banzhaf, Amir H. Gandomi

Evolutionary Computation (EC) approaches are inspired by nature and solve optimization problems in a stochastic manner. They can offer a reliable and effective way to address complex problems in real-world applications. EC algorithms have recently been used to improve the performance of Machine Learning (ML) models and the quality of their results. Evolutionary approaches can be used in all three parts of ML: preprocessing (e.g., feature selection and resampling), learning (e.g., parameter setting, membership functions, and neural network topology), and postprocessing (e.g., rule optimization, decision-tree/support-vector pruning, and ensemble learning). This article investigates the role of EC algorithms in solving different ML challenges. We do not provide a comprehensive review of evolutionary ML approaches here; instead, we discuss how EC algorithms can contribute to ML by addressing conventional challenges of the artificial intelligence and ML communities. We look at the contributions of EC to ML in eight sub-fields: feature selection, resampling, classifiers, neural networks, reinforcement learning, clustering, association rule mining, and ensemble methods. For each category, we discuss evolutionary machine learning in terms of three aspects: problem formulation, search mechanisms, and fitness value computation. We also consider open issues and challenges that should be addressed in future work.
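To make the "problem formulation, search mechanisms, and fitness value computation" triple concrete, here is a minimal sketch of an EC approach to one of the listed sub-fields, feature selection: a simple genetic algorithm over bit-mask chromosomes. The data, fitness function, and hyperparameters below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 10 features; only features 0-2 carry signal.
X = rng.normal(size=(100, 10))
y = X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=100)

def fitness(mask):
    """Score a feature subset by least-squares fit quality (R^2),
    minus a small penalty per selected feature."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    r2 = 1.0 - resid.var() / y.var()
    return r2 - 0.01 * mask.sum()

# Problem formulation: one boolean mask per individual.
pop = rng.integers(0, 2, size=(20, 10)).astype(bool)
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 10)                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(10) < 0.05               # bit-flip mutation
        children.append(child ^ flip)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```

In practice the fitness call would wrap a full model-training run (the expensive part the survey discusses), but the selection/crossover/mutation loop is the same.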


2021
Author(s): Madhavi Gayathri, Amanda Ariyaratne, Sachin Kahawala, Daswin De Silva, Damminda Alahakoon, et al.

2021, Vol. 2021, pp. 1-11
Author(s): Yiying Shi

In rule optimization, rule characteristics are often extracted to describe the uncertainty correlations of fuzzy relations, but crisp numbers cannot express correlations with uncertainty, such as "at least 0.1 and up to 0.5." To address this problem, this paper proposes a novel definition of the interval information content of a fuzzy relation, which measures the fuzziness of the relation, and constructs its defining expressions. Based on the interval information content, the problems of ranking and clustering fuzzy implications are then analyzed. Finally, by combining possibility-based interval comparison equations with a similarity measure on interval values, the classification of implication operators is shown to be realizable. This work provides a reasonable index for measuring fuzzy implication operators and lays a foundation for further research.
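As an illustration of the kind of interval comparison the abstract refers to, the sketch below uses one common possibility-degree formula for closed intervals (in the style of Xu and Da). The interval values and rule names are hypothetical, and this is not necessarily the exact formula used in the paper.

```python
def possibility_geq(a, b):
    """Possibility degree P(a >= b) for closed intervals a = (a_lo, a_hi)
    and b = (b_lo, b_hi); returns a value in [0, 1]."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0:                      # both intervals degenerate (crisp)
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / width, 0.0), 1.0)

# Ranking interval information contents such as "at least 0.1 and up to 0.5":
contents = {"R1": (0.1, 0.5), "R2": (0.2, 0.4), "R3": (0.0, 0.3)}
for name_i, iv_i in contents.items():
    for name_j, iv_j in contents.items():
        if name_i < name_j:
            print(f"P({name_i} >= {name_j}) = "
                  f"{possibility_geq(iv_i, iv_j):.2f}")
```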


2020, Vol. 177(3-4), pp. 275-296
Author(s): Manuel Bichler, Michael Morak, Stefan Woltran

State-of-the-art answer set programming (ASP) solvers rely on a program called a grounder to convert non-ground programs containing variables into variable-free, propositional programs. The size of this grounding depends heavily on the size of the non-ground rules, and thus reducing the size of such rules is a promising approach to improving solving performance. To this end, in this paper we announce lpopt, a tool that decomposes large logic programming rules into smaller rules that are easier for current solvers to handle. The tool is specifically tailored to the standard syntax of the ASP language (ASP-Core) and makes it easier for users to write efficient and intuitive ASP programs, which would otherwise often require significant hand-tuning by expert ASP engineers. It is based on an idea proposed by Morak and Woltran (2012) that we extend significantly in order to handle the full ASP syntax, including complex constructs like aggregates, weak constraints, and arithmetic expressions. We present the algorithm, the theoretical foundations for treating these constructs, and an experimental evaluation showing the viability of our approach.
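A back-of-the-envelope illustration of why rule decomposition helps: a naive grounder instantiates a rule once per combination of domain values for its variables, so splitting a four-variable rule into rules with fewer variables shrinks the grounding. The Python sketch below only counts instantiations for a made-up rule and domain; lpopt's actual algorithm works on tree decompositions of the rule's variable graph rather than this hand-picked split.

```python
from itertools import product

domain = range(20)  # a toy domain of 20 constants

# Original rule: h(X,Z) :- a(X,Y), b(Y,Z), c(Z,W).
# Naive grounding enumerates all (X, Y, Z, W) combinations.
original = sum(1 for _ in product(domain, repeat=4))

# Decomposed version introduces a temporary predicate over the
# shared variable Z, splitting the body into two smaller rules:
#   tmp(Z)  :- c(Z, W).
#   h(X, Z) :- a(X, Y), b(Y, Z), tmp(Z).
decomposed = sum(1 for _ in product(domain, repeat=2)) \
           + sum(1 for _ in product(domain, repeat=3))

print(f"ground instances, original rule:    {original}")    # 20^4 = 160000
print(f"ground instances, decomposed rules: {decomposed}")  # 400 + 8000 = 8400
```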


2020
Author(s): Xiaojing Su, Yayi Wei, Lisong Dong, Libin Zhang, Yajuan Su, et al.

Author(s): Kiyohiko Uehara, Kaoru Hirota

A method is proposed for reducing noise in learning data based on the fuzzy inference methods α-GEMII (α-level-set and generalized-mean-based inference with the proof of two-sided symmetry of consequences) and α-GEMINAS (α-level-set and generalized-mean-based inference with fuzzy rule interpolation at an infinite number of activating points). It is particularly effective for reducing noise in randomly sampled data given as singleton input–output pairs for fuzzy rule optimization. In the proposed method, α-GEMII and α-GEMINAS are performed with singleton input–output rules and facts defined by fuzzy sets (non-singletons). The rules are initially set by directly using the input–output pairs of the learning data and are then arranged with the facts and consequences deduced by α-GEMII and α-GEMINAS. This process reduces noise to some extent and transforms the randomly sampled data into regularly sampled data for iteratively reducing noise at a later stage. The width of the regular sampling interval can be chosen, within a tolerance, to satisfy application-specific requirements. The singleton input–output rules are then updated with the consequences obtained by iteratively performing α-GEMINAS for noise reduction. The noise reduction in each iteration is a deterministic process, so the proposed method is expected to improve noise robustness in fuzzy rule optimization while relying less on trial and error. Simulation results demonstrate that noise is properly reduced in each iteration and that deviation in the learning data is suppressed considerably.
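The sketch below is a deliberately simplified stand-in for this pipeline: it maps randomly sampled, noisy singleton pairs onto a regular grid using triangular (non-singleton) membership weights, then iterates the smoothing, in place of the consequences deduced by α-GEMII and α-GEMINAS. The test function, grid, and membership width are assumptions for illustration only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy singleton input-output pairs at random sample points.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=200)

def resample(xs, ys, grid, width):
    """Map irregular samples onto a regular grid using triangular
    membership weights centred at each grid point."""
    out = np.empty_like(grid)
    for i, g in enumerate(grid):
        w = np.clip(1.0 - np.abs(xs - g) / width, 0.0, None)
        out[i] = (w @ ys) / w.sum() if w.sum() > 0 else np.nan
    return out

grid = np.linspace(0, 1, 41)           # regular sampling interval = 0.025
y_hat = resample(x, y, grid, width=0.05)
for _ in range(5):                     # iterative, deterministic smoothing
    y_hat = resample(grid, y_hat, grid, width=0.05)

print("residual std:", np.nanstd(y_hat - np.sin(2 * np.pi * grid)))
```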


2019, Vol. 8(2), pp. 4597-4604

Advances in software make it possible to diagnose dyslexia among children at an early stage, helping them take the measures needed to overcome the problem. This paper develops an uncertainty-handling model using a neutrosophic logic inference system, enhanced by paraconsistent logic together with whale-behavior-based optimization. Paraconsistent logic is used to discover the degree of certainty and contradiction of the generated rules, while pruning of the rule population is handled by a nature-inspired algorithm, whale-behavior-based rule optimization. The dyslexia dataset contains both vague and crisp values; treating vague values as crisp often leads to high false-alarm rates in detection. To overcome this, the neutrosophic model represents each value by its membership degrees of truth, indeterminacy, and falsity. The paraconsistent analyzer works with the favorable and unfavorable degrees of evidence of each rule to handle inconsistency and uncertainty in dyslexia detection, and the potential rules are selected by the encircling-prey model of the whale optimization algorithm. Simulation results show that the proposed model achieves a high detection rate for dyslexia.
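For reference, the encircling-prey mechanism of the whale optimization algorithm mentioned above follows the standard updates D = |C·X* − X| and X(t+1) = X* − A·D, with A = 2a·r − a and C = 2r, where a decays from 2 to 0 over the run. The sketch below applies just this mechanism to a toy cost function standing in for rule quality; the full WOA (and the paper's rule-selection setup) also includes spiral-attack and random-search phases not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    """Toy stand-in for (negated) rule quality: lower is better."""
    return np.sum(x ** 2, axis=-1)

dim, n_whales, n_iter = 5, 30, 200
X = rng.uniform(-5, 5, size=(n_whales, dim))      # candidate solutions
best = X[np.argmin(cost(X))].copy()               # the "prey"

for t in range(n_iter):
    a = 2.0 * (1 - t / n_iter)                    # a decays linearly 2 -> 0
    r1, r2 = rng.random((n_whales, dim)), rng.random((n_whales, dim))
    A, C = 2 * a * r1 - a, 2 * r2
    D = np.abs(C * best - X)                      # distance to the prey
    X = best - A * D                              # encircling-prey update
    fit = cost(X)
    if fit.min() < cost(best):
        best = X[np.argmin(fit)].copy()

print("best cost:", cost(best))
```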

