Separating Rule Refinement and Rule Selection Heuristics in Inductive Rule Learning

Author(s):  
Julius Stecher ◽  
Frederik Janssen ◽  
Johannes Fürnkranz

Entropy ◽
2020 ◽  
Vol 22 (9) ◽  
pp. 969
Author(s):  
Iván Paz ◽  
Àngela Nebot ◽  
Francisco Mugica ◽  
Enrique Romero

This manuscript explores fuzzy rule learning for sound synthesizer programming within the performative practice known as live coding. In this practice, sound synthesis algorithms are programmed in real time by means of source code. To facilitate this, one possibility is to automatically create variations out of a few synthesizer presets. However, the need for real-time feedback makes existing synthesizer programmers infeasible to use. In addition, presets are sometimes created mid-performance, so no benchmarks exist for them. Inductive rule learning has been shown to be effective for creating real-time variations in such a scenario. However, logical IF-THEN rules do not cover the whole feature space. Here, we present an algorithm that extends IF-THEN rules to hyperrectangles, which are used as the cores of membership functions to create a map of the input space. To generalize the rules, contradictions are resolved by a maximum-volume heuristic. The user controls the novelty-consistency balance with respect to the input data through the algorithm's parameters. The algorithm was evaluated in live performances and by cross-validation using extrinsic benchmarks and a dataset collected during user tests. The model's accuracy achieves state-of-the-art results. This, together with the positive feedback received from the live coders who tested our methodology, suggests that this is a promising approach.
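The abstract describes the method only in prose; the following minimal Python sketch illustrates one possible reading of it, under stated assumptions: each rule is an axis-aligned hyperrectangle that is grown toward the feature-space bounds, and an overlap between rectangles of different classes (a contradiction) is resolved in favour of the larger-volume rectangle. The names (`RuleBox`, `expand_boxes`), the fixed expansion step, and the rollback strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class RuleBox:
    """A rule represented as an axis-aligned hyperrectangle with a class label."""
    def __init__(self, lower, upper, label):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.label = label

    def volume(self):
        return float(np.prod(self.upper - self.lower))

    def overlaps(self, other):
        # Two boxes overlap iff their intervals intersect on every dimension.
        return bool(np.all(self.upper > other.lower) and np.all(other.upper > self.lower))

    def contains(self, x):
        x = np.asarray(x, dtype=float)
        return bool(np.all(x >= self.lower) and np.all(x <= self.upper))


def expand_boxes(boxes, feature_min, feature_max, step=0.05):
    """Grow each box toward the feature-space bounds; when two boxes of
    different classes would overlap, the larger-volume box keeps its
    expansion (maximum-volume heuristic) and the smaller one is rolled back."""
    feature_min = np.asarray(feature_min, dtype=float)
    feature_max = np.asarray(feature_max, dtype=float)
    for box in sorted(boxes, key=RuleBox.volume, reverse=True):
        # Tentatively expand by a fixed fraction of the feature range (assumption).
        old_lower, old_upper = box.lower.copy(), box.upper.copy()
        box.lower = np.maximum(feature_min, box.lower - step * (feature_max - feature_min))
        box.upper = np.minimum(feature_max, box.upper + step * (feature_max - feature_min))
        for other in boxes:
            if other is box or other.label == box.label:
                continue
            if box.overlaps(other) and box.volume() < other.volume():
                # Contradiction with a larger box of another class: roll back.
                box.lower, box.upper = old_lower, old_upper
                break
    return boxes


# Toy usage: two rules over two synthesizer parameters normalized to [0, 1].
rules = [RuleBox([0.1, 0.1], [0.4, 0.5], "bright"),
         RuleBox([0.6, 0.6], [0.9, 0.9], "dark")]
expanded = expand_boxes(rules, [0.0, 0.0], [1.0, 1.0])
print([(r.label, r.lower.tolist(), r.upper.tolist()) for r in expanded])
```

In the paper the expanded hyperrectangles serve as cores of membership functions, so points outside every box would still receive graded memberships; that fuzzification step is omitted here.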


2021 ◽  
Vol 4 ◽  
Author(s):  
Florian Beck ◽  
Johannes Fürnkranz

Inductive rule learning is arguably among the most traditional paradigms in machine learning. Although we have seen considerable progress over the years in learning rule-based theories, all state-of-the-art learners still learn descriptions that directly relate the input features to the target concept. In the simplest case, concept learning, this is a disjunctive normal form (DNF) description of the positive class. While this is sufficient from a logical point of view, because every logical expression can be reduced to an equivalent DNF expression, it could nevertheless be the case that more structured representations, which form deep theories by introducing intermediate concepts, are easier to learn, in much the same way as deep neural networks are able to outperform shallow networks even though the latter are also universal function approximators. However, several non-trivial obstacles need to be overcome before a sufficiently powerful deep rule learning algorithm can be developed and compared to the state of the art in inductive rule learning. In this paper, we therefore take a different approach: we empirically compare deep and shallow rule sets that have been optimized with a uniform, general mini-batch-based optimization algorithm. In our experiments on both artificial and real-world benchmark data, deep rule networks outperformed their shallow counterparts, which we take as an indication that it is worthwhile to devote more effort to learning deep rule structures from data.
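To make the contrast concrete, here is a small, purely illustrative Python sketch of the two representational forms discussed above. The concrete rules, feature names, and intermediate concepts are invented for this example and are not taken from the paper, which additionally fits both forms with the same mini-batch-based optimizer rather than hand-coding them.

```python
# Purely illustrative contrast between a shallow and a deep rule representation
# of a Boolean target concept; the concrete rules are invented for this sketch.

def shallow_dnf(x1, x2, x3, x4):
    """Shallow rule set: a DNF description relating the inputs directly to the
    target concept (a disjunction of conjunctions over the input features)."""
    return (x1 and x2) or (x2 and x3 and not x4)

def deep_rule_network(x1, x2, x3, x4):
    """Deep rule set: intermediate concepts are formed first and then combined
    by a higher-level rule, analogous to hidden layers in a neural network."""
    concept_a = x1 and not x4     # hypothetical intermediate concept
    concept_b = x2 or x3          # hypothetical intermediate concept
    return concept_a and concept_b

# Both forms express Boolean functions of the inputs; the question studied in
# the paper is whether the deep, structured form is easier to learn when both
# are optimized with the same mini-batch-based algorithm.
for x in [(True, True, False, False), (False, True, True, True)]:
    print(x, shallow_dnf(*x), deep_rule_network(*x))
```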


2011 ◽  
Vol 12 (3-4) ◽  
pp. 237-248 ◽  
Author(s):  
Ute Schmid ◽  
Emanuel Kitzelmann

2019 ◽  
Author(s):  
Nadia Said ◽  
Helen Fischer

Understanding the development of non-linear processes such as economic or population growth is an important prerequisite for informed decisions in those areas. In the function-learning paradigm, people's understanding of the function rule that underlies the to-be-predicted process is typically measured by means of extrapolation accuracy. Here we argue, however, that even though accurate extrapolation necessitates rule learning, the reverse does not necessarily hold: inaccurate extrapolation does not exclude rule learning. Experiment 1 shows that more than one third of participants who would be classified as "exemplar-based learners" based on their extrapolation accuracy were able to identify the correct function shape and slope in a rule-selection paradigm, demonstrating accurate understanding of the function rule. Experiment 2 shows that the higher proportion of rule learning than rule application in the function-learning paradigm is not due to (i) higher a priori probabilities of guessing the correct rule in the rule-selection paradigm, nor to (ii) a lack of simultaneous access to all function values in the function-learning paradigm. We conclude that rule application is not tantamount to rule learning, and that assessing rule learning via extrapolation accuracy underestimates the proportion of rule learners in function-learning experiments.
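As a back-of-the-envelope illustration of the argument, the Python sketch below simulates an exemplar-style predictor (answer with the outcome of the nearest seen exemplar) and a rule-style predictor (recover the functional form) on a hypothetical exponential-growth process. The function, noise level, and training/extrapolation split are assumptions made for the example, and the paper's experiments involve human participants rather than simulated learners; the sketch only shows why a flat, exemplar-like extrapolation scores poorly even when the underlying rule could still be recognized in a rule-selection task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-linear process used only for illustration (the paper's
# experiments use their own stimuli): exponential growth y = 2 * 1.3**x.
def true_rule(x):
    return 2.0 * 1.3 ** x

x_train = np.arange(0, 10)      # training (interpolation) region
x_extra = np.arange(10, 15)     # extrapolation region
y_train = true_rule(x_train) + rng.normal(0, 0.5, x_train.size)

# Exemplar-style prediction: answer with the outcome of the nearest seen exemplar.
def exemplar_predict(x_new):
    idx = np.argmin(np.abs(x_train[None, :] - np.asarray(x_new)[:, None]), axis=1)
    return y_train[idx]

# Rule-style prediction: recover the exponential rule via a linear fit in log space.
coef = np.polyfit(x_train, np.log(np.clip(y_train, 1e-6, None)), deg=1)
def rule_predict(x_new):
    return np.exp(coef[1]) * np.exp(coef[0] * np.asarray(x_new))

# The exemplar-style strategy extrapolates flatly and accumulates a much larger
# error than the rule-style strategy on the extrapolation region.
y_true = true_rule(x_extra)
for name, pred in [("exemplar", exemplar_predict(x_extra)), ("rule", rule_predict(x_extra))]:
    mae = np.mean(np.abs(pred - y_true))
    print(f"{name:8s} extrapolation MAE: {mae:.2f}")
```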

