On Belief Change for Multi-Label Classifier Encodings

Author(s):  
Sylvie Coste-Marquis ◽  
Pierre Marquis

An important issue in machine learning consists in developing approaches that exploit background knowledge T to improve the accuracy and the robustness of learned classifiers C. By delegating the classification task to a Boolean circuit Σ exhibiting the same input-output behaviour as C, the problem of exploiting T within C can be viewed as a belief change scenario. However, the usual change operations are not suited to the task of modifying the classifier encoding Σ in a minimal way so as to make it comply with T. To fill the gap, we present a new belief change operation, called rectification. We characterize the family of rectification operators from an axiomatic perspective and exhibit operators from this family. We identify the standard belief change postulates that every rectification operator satisfies and those it does not. We also focus on some computational aspects of rectification and compliance.
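The compliance notion in this abstract can be illustrated with a toy sketch (not the paper's operators): a classifier encoding Σ maps Boolean inputs to Boolean labels, background knowledge T constrains admissible (input, output) pairs, and Σ complies with T when every pair it realises satisfies T. All names below are hypothetical.

```python
from itertools import product

def sigma(x1, x2):
    # Hypothetical 2-input, 2-label classifier encoding.
    return (x1 and x2, x1 or x2)

def T(x, y):
    # Hypothetical background knowledge: label y2 must hold whenever y1 does.
    (y1, y2) = y
    return (not y1) or y2

def complies(sigma, T, n_inputs=2):
    # Sigma complies with T iff every realised (input, output) pair satisfies T.
    return all(T(x, sigma(*x)) for x in product([False, True], repeat=n_inputs))

print(complies(sigma, T))  # True: y1 = x1 AND x2 entails y2 = x1 OR x2
```

When compliance fails, the paper's rectification operators would modify Σ minimally; the sketch above only performs the check.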

Author(s):  
Nicolas Schwind ◽  
Sebastien Konieczny ◽  
Jean-Marie Lagniez ◽  
Pierre Marquis

Iterated belief change aims to determine how the belief state of a rational agent evolves given a sequence of change formulae. Several families of iterated belief change operators (revision operators, improvement operators) have been identified so far and characterized from an axiomatic point of view. This paper focuses on the inference problem for iterated belief change when belief states are represented as a special kind of stratified belief base. The computational complexity of the inference problem is identified and shown to be identical for all revision operators satisfying Darwiche and Pearl's (R*1)-(R*6) postulates. In addition, some complexity bounds for the inference problem are provided for the family of soft improvement operators. We also show that a revised belief state can be computed in reasonable time for large instances using SAT-based algorithms, and we report empirical results showing the feasibility of iterated belief change for bases of significant size.
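A toy sketch of the iterated setting (an assumed encoding, not the paper's SAT-based algorithms): a belief state as a rank per world (lower = more plausible). Lexicographic revision by μ moves every μ-world strictly below every non-μ-world while preserving relative order within each group, one classic operator satisfying the Darwiche–Pearl postulates.

```python
from itertools import product

worlds = list(product([0, 1], repeat=2))  # valuations of atoms (p, q)

def lex_revise(rank, mu):
    # All mu-worlds end up strictly more plausible than all non-mu-worlds.
    models = [w for w in worlds if mu(w)]
    shift = max(rank[w] for w in models) + 1
    return {w: rank[w] if mu(w) else rank[w] + shift for w in worlds}

def beliefs(rank):
    # Believed worlds: the most plausible (minimum-rank) ones.
    best = min(rank.values())
    return [w for w in worlds if rank[w] == best]

rank = {w: 0 for w in worlds}                 # initially ignorant
rank = lex_revise(rank, lambda w: w[0] == 1)  # revise by p
rank = lex_revise(rank, lambda w: w[1] == 1)  # then by q
print(beliefs(rank))  # [(1, 1)]: after the sequence, p AND q is believed
```

The paper works with compact stratified belief bases and SAT encodings rather than explicit world rankings, which is what makes large instances feasible.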


1988 ◽  
Vol 66 (3) ◽  
pp. 723-736 ◽  
Author(s):  
Takeshi Sugimura ◽  
Toyoko Inoue

Kindergarten children aged 5 to 6 yr. (119 boys, 121 girls) were given a two-category classification task under the “Go together” or the “Are alike” instruction, and their categorization modes (analytic and holistic) were assessed after reaching each of three criteria of learning (4/4, 8/8, and 8/8 + 8). Tasks with strong or weak family-resemblance structure were provided. The exemplars were schematic faces of boys (Exp. 1) and schematic figures of girls (Exp. 2). The analytic mode was used more frequently for the weak family-resemblance task and for the schematic faces than for the strong family-resemblance task and for the schematic figures of girls. With increasing criteria of learning, the percentage of subjects who used the analytic mode increased while the percentage who used the holistic mode decreased. The findings are discussed with reference to the stimulus characteristics, the family-resemblance structure, and the developmental trend of categorization modes.


2007 ◽  
Vol 13 (2) ◽  
pp. 348-365 ◽  
Author(s):  
Régis Blache ◽  
Jean-Pierre Cherdieu ◽  
Jorge Estrada Sarlabous

2004 ◽  
Vol 22 ◽  
pp. 23-56 ◽  
Author(s):  
D. Dubois ◽  
H. Fargier ◽  
H. Prade

An accepted belief is a proposition considered likely enough by an agent to be inferred from as if it were true. This paper bridges the gap between probabilistic and logical representations of accepted beliefs. To this end, natural properties of relations on propositions, describing relative strength of belief, are augmented with conditions ensuring that accepted beliefs form a deductively closed set. This requirement turns out to be very restrictive. In particular, it is shown that the sets of accepted beliefs of an agent can always be derived from a family of possibility rankings of states. An agent accepts a proposition in a given context if this proposition is considered more possible than its negation in this context, for all possibility rankings in the family. These results are closely connected to the non-monotonic 'preferential' inference system of Kraus, Lehmann and Magidor and to the so-called plausibility functions of Friedman and Halpern. The extent to which probability theory is compatible with acceptance relations is laid bare. A solution to the lottery paradox, which is considered a major impediment to the use of non-monotonic inference, is proposed using a special kind of probabilities (called lexicographic, or big-stepped). The setting of acceptance relations also suggests another way of approaching the theory of belief change, after the works of Gärdenfors and colleagues. Our view considers the acceptance relation as a primitive object from which belief sets are derived in various contexts.
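The acceptance criterion in this abstract can be sketched for a single possibility ranking (an assumed toy encoding): each world gets a plausibility in [0, 1], and a proposition A is accepted in a context C when the most plausible C-world satisfying A is strictly more plausible than the most plausible C-world falsifying it.

```python
from itertools import product

worlds = list(product([0, 1], repeat=2))              # valuations of (p, q)
pi = {(0, 0): 0.2, (0, 1): 0.5, (1, 0): 0.1, (1, 1): 1.0}  # possibility ranking

def accepted(A, context=lambda w: True):
    # A accepted iff Pi(A) > Pi(not A) restricted to the context.
    in_ctx = [w for w in worlds if context(w)]
    pos = max((pi[w] for w in in_ctx if A(w)), default=0.0)
    neg = max((pi[w] for w in in_ctx if not A(w)), default=0.0)
    return pos > neg

p = lambda w: w[0] == 1
q = lambda w: w[1] == 1
print(accepted(p), accepted(q))           # True True
print(accepted(lambda w: p(w) and q(w)))  # True: accepted set is closed under conjunction
```

Under a single possibility ranking the criterion yields a deductively closed accepted set, which is the closure requirement the paper shows to be so restrictive for probabilistic representations.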


2018 ◽  
Vol 61 ◽  
pp. 807-834 ◽  
Author(s):  
Nadia Creignou ◽  
Raïda Ktari ◽  
Odile Papini

Belief change within the framework of fragments of propositional logic is one of the main recent challenges in the knowledge representation research area. While previous work has focused on belief revision, belief merging, and belief contraction, the problem of belief update within fragments of classical logic has not been addressed so far. In the context of revision, it has been proposed to refine existing operators so that they operate within propositional fragments and the result of revision remains in the fragment under consideration. This approach is not restricted to the Horn fragment but is also applicable to other propositional fragments such as the Krom and affine fragments. We generalize this notion of refinement to any belief change operator. We then focus on a specific belief change operation, namely belief update. We investigate the behavior of the refined update operators with respect to satisfaction of the KM postulates and highlight differences between revision and update in this context.
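The revision/update contrast underlying the KM postulates can be sketched with a Winslett-style pointwise update (a toy encoding, not the refined fragment-preserving operators of the paper): unlike revision, each model of the base is updated separately, moved to its closest μ-models under Hamming distance, and the results are unioned.

```python
from itertools import product

worlds = list(product([0, 1], repeat=2))  # valuations of (p, q)

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def update(base_models, mu):
    # Pointwise update: each base model independently moves to its
    # nearest mu-models; the union of the results is the updated base.
    mu_models = [w for w in worlds if mu(w)]
    result = set()
    for m in base_models:
        d = min(hamming(m, w) for w in mu_models)
        result |= {w for w in mu_models if hamming(m, w) == d}
    return sorted(result)

base = [(0, 1), (1, 0)]                   # models of p XOR q
print(update(base, lambda w: w[0] == 1))  # [(1, 0), (1, 1)]
```

A Dalal-style revision of the same base by p would keep only (1, 0), the globally closest model; the pointwise update also retains (1, 1), illustrating why update and revision behave differently when refined to fragments.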


Author(s):  
Mohamed Khadir ◽  
John Ringwood

Extension of First Order Predictive Functional Controllers to Handle Higher Order Internal Models

Predictive Functional Control (PFC), belonging to the family of predictive control techniques, has been demonstrated to be a powerful algorithm for controlling process plants. The input/output PFC formulation has been a particularly attractive paradigm for industrial processes, combining simplicity and effectiveness. Though its use of a lag-plus-delay ARX/ARMAX model is justified in many applications, there exists a range of process types which may present difficulties, leading to chattering and/or instability. In this paper, the instability of first order PFC is addressed, and solutions to handle higher order and difficult systems are proposed. The input/output PFC formulation is extended to cover internal models with zero and/or higher order pole dynamics in ARX/ARMAX form, via a parallel and cascaded model decomposition. Finally, a generic form of PFC, based on elementary outputs, is proposed to handle a wider range of higher order oscillatory and non-minimum phase systems. The range of solutions presented is supported by appropriate examples.
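For orientation, the standard first-order PFC law that this paper extends can be sketched as follows (textbook form, with assumed numerical values): the internal model is y_m(k+1) = a·y_m(k) + K(1−a)·u(k), and the input is chosen so the model's predicted increment over a coincidence horizon h matches the desired increment along an exponential reference trajectory.

```python
import math

Ts, tau, K = 0.1, 1.0, 2.0      # sample time, model time constant, model gain
a = math.exp(-Ts / tau)         # discrete pole of the first-order model
h = 5                           # coincidence horizon (samples)
lam = math.exp(-3 * Ts / 0.5)   # reference decrement (closed-loop response ~0.5 s)

def pfc(setpoint, y_p, y_m):
    # Equate desired increment (c - y_p)(1 - lam^h) with the model's
    # predicted increment K(1 - a^h)u - (1 - a^h)y_m, then solve for u.
    num = (setpoint - y_p) * (1 - lam**h) + y_m * (1 - a**h)
    return num / (K * (1 - a**h))

y_p = y_m = 0.0
for _ in range(200):            # ideal case: plant behaves exactly like the model
    u = pfc(1.0, y_p, y_m)
    y_m = a * y_m + K * (1 - a) * u
    y_p = y_m
print(round(y_p, 3))            # output settles at the setpoint 1.0
```

With a first-order lag model the loop converges smoothly; the chattering and instability the paper addresses arise when this simple law is applied to higher-order, oscillatory, or non-minimum phase plants that the model does not capture.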


Author(s):  
K. L. Datta

Mathematical models have been used to spell out development priorities and determine sectoral growth profiles in the Five Year Plans. The models used in the pre-reform period belong to the family of growth and investment models. This chapter discusses the basic features of the Mahalanobis model, which was used in the Second Plan, and describes the manner in which input–output based consistency models were used in the Fifth to the Eighth Plans. It gives an overview of the macroeconomic model and the input–output model used in Plan formulation in the period of economic reform. The idea is to acquaint the general reader with the manner and method of employing these models in different stages of Plan formulation, and to show how the targets in different areas and sectors of the economy are intuitively fixed.
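For orientation, the two-sector Mahalanobis model mentioned above has a commonly cited textbook form; the symbols below are the usual ones (α₀ the initial investment rate, λ_k and λ_c the shares of investment allocated to the capital-goods and consumer-goods sectors, β_k and β_c their output–capital ratios) and may differ from the chapter's own notation:

```latex
% Income path of the two-sector Mahalanobis model (textbook statement)
Y_t = Y_0\left[\,1 + \alpha_0\,
      \frac{\lambda_k\beta_k + \lambda_c\beta_c}{\lambda_k\beta_k}
      \Bigl((1+\lambda_k\beta_k)^t - 1\Bigr)\right],
\qquad \lambda_k + \lambda_c = 1 .
```

In the long run the growth rate of income approaches λ_kβ_k, which is why the Second Plan's emphasis on the capital-goods sector translated into choosing a high λ_k.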

