preference orders
Recently Published Documents

TOTAL DOCUMENTS: 58 (five years: 14)
H-INDEX: 8 (five years: 1)

Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽  
Chin-Teng Lin

Equality and incomparability in multi-label ranking have not previously been addressed in learning. This paper proposes a new native ranker neural network for multi-label ranking with incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifferent preference orders, or subgroups that are ranked equally. PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function, and uses preference orders and the Spearman ranking correlation as objective functions. PNN comes in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture that introduces a new type of hidden neuron with multiple activation functions in the middle layer, plus duplicated output layers that reinforce the ranking by increasing the number of weights. PNN accepts a single data instance as input; its output neurons correspond to the labels, and each output value represents a preference value. PNN is evaluated on a new preference-mining data set containing repeated label values, which has not been experimented on before. The SS and PSS functions speed up learning, and PNN outperforms five previously proposed methods for strict label ranking in accuracy with high computational efficiency.
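The abstract names two ingredients that can be sketched concretely: a staircase-shaped activation and the Spearman rank correlation used as an objective. The exact PSS/SS definitions are not given here, so the staircase below is only a plausible guess (a sum of shifted sigmoids); the Spearman formula is the standard one for untied ranks.

```python
import math

def smooth_staircase(x, steps=3, sharpness=25.0):
    """A sum of shifted logistic sigmoids approximates a staircase:
    the output rises from 0 toward `steps` in near-discrete jumps.
    This is an illustrative guess at a 'smooth staircase' shape; the
    paper's exact PSS/SS definitions are not given in the abstract."""
    return sum(1.0 / (1.0 + math.exp(-sharpness * (x - k)))
               for k in range(1, steps + 1))

def spearman(rank_a, rank_b):
    """Spearman rank correlation for two untied rankings:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Identical rankings give rho = 1 and fully reversed rankings give rho = -1, which is what makes rho usable as a ranking objective.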


2021 ◽  
Vol 13 (3) ◽  
pp. 1-25
Author(s):  
Robert Bredereck ◽  
Piotr Faliszewski ◽  
Rolf Niedermeier ◽  
Nimrod Talmon

Given an election, a preferred candidate p, and a budget, the Shift Bribery problem asks whether p can win the election after shifting p higher in some voters' preference orders. Of course, shifting comes at a price (depending on the voter and on the extent of the shift), and one must not exceed the given budget. We study the (parameterized) computational complexity of Shift Bribery for multiwinner voting rules, where winning the election means being part of some winning committee. We focus on the well-established SNTV, Bloc, k-Borda, and Chamberlin-Courant rules, as well as on approximate variants of the Chamberlin-Courant rule. We show that Shift Bribery tends to be harder in the multiwinner setting than in the single-winner one by exhibiting settings where Shift Bribery is computationally easy in the single-winner case but is hard (and hard to approximate) in the multiwinner one.
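For intuition, the decision problem can be written down directly as a brute-force search (the paper studies the problem's complexity, not this naive algorithm). Here SNTV elects the k candidates with the highest plurality scores; the price scheme and the lenient tie-handling for committee membership are simplifying assumptions.

```python
from itertools import product

def sntv_shift_bribery(profile, p, k, prices, budget):
    """Brute-force check (exponential; tiny instances only) of whether
    preferred candidate p can join the top-k SNTV committee by shifting
    p upward in voters' preference orders within `budget`.
    `profile[v]` is voter v's strict order, best first; `prices[v][t]`
    is the cost of shifting p up by t positions for voter v
    (prices[v][0] == 0). Illustrative sketch, not the paper's method."""
    n = len(profile)
    pos = [order.index(p) for order in profile]
    for shifts in product(*[range(pos[v] + 1) for v in range(n)]):
        cost = sum(prices[v][t] for v, t in enumerate(shifts))
        if cost > budget:
            continue
        # apply the shifts, then count plurality (top-choice) scores
        scores = {}
        for v, t in enumerate(shifts):
            order = list(profile[v])
            i = order.index(p)
            order[i - t:i + 1] = [p] + order[i - t:i]
            scores[order[0]] = scores.get(order[0], 0) + 1
        # p joins the committee if it ties or beats the k-th best score
        ranked = sorted(scores.values(), reverse=True)
        threshold = ranked[k - 1] if len(ranked) >= k else 0
        if scores.get(p, 0) >= threshold and scores.get(p, 0) > 0:
            return True
    return False
```

With three voters, two of whom rank a above p, a budget of 1 suffices to make p a (co-)winner for k = 1, while a budget of 0 does not.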


Author(s):  
Michael Andrew Huelsman ◽  
Miroslaw Truszczynski

Learning the preferences of an agent requires choosing which preference representation to use. This formalism should be expressive enough to capture a significant part of the agent's preferences. Selecting the right formalism is generally not easy, as we have limited access to the way the agent makes her choices. It is therefore important to understand how "universal" particular preference representation formalisms are, that is, whether they can perform well in learning the preferences of agents with a broad spectrum of preference orders. In this paper, we consider several preference representation formalisms from this perspective: lexicographic preference models, preference formulas, sets of (ranked) preference formulas, and neural networks. We find that the latter two show good potential as general preference representation formalisms. We show that this holds true when learning preferences of a single agent, but also when learning models to represent consensus preferences of a group of agents.
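Of the formalisms listed, the lexicographic preference model is the simplest to make concrete. A textbook version (with hypothetical attribute and value names; the paper's exact variant may differ) compares two alternatives on the most important attribute where they differ:

```python
def lex_prefers(x, y, attr_order, value_orders):
    """Lexicographic preference model: attributes are inspected in
    importance order; the first attribute on which x and y differ
    decides, using that attribute's value ranking (best first).
    Returns True iff x is strictly preferred to y."""
    for attr in attr_order:
        if x[attr] != y[attr]:
            ranking = value_orders[attr]
            return ranking.index(x[attr]) < ranking.index(y[attr])
    return False  # x and y agree on every attribute: no strict preference
```

For example, with price more important than color and 'low' preferred to 'high', a low-price alternative beats a high-price one regardless of color.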


Author(s):  
Alec Sandroni ◽  
Alvaro Sandroni

Arrow (1950) famously showed the impossibility of aggregating individual preference orders into a social preference order (together with basic desiderata). This paper shows that it is possible to aggregate individual choice functions that satisfy almost any condition weaker than WARP into a social choice function that satisfies the same condition (and also Arrow's desiderata).
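WARP can be checked mechanically on small universes. The sketch below uses the single-valued formulation (if x is chosen from a menu containing y, then no menu containing both x and y may choose y), which is one standard reading; the paper's exact condition may differ.

```python
from itertools import combinations

def satisfies_warp(universe, choice):
    """Exhaustively check WARP for a single-valued choice function.
    `choice` maps each non-empty frozenset menu over `universe` to its
    chosen element. WARP (this reading): if x is chosen from a menu
    containing y, then y is never chosen from a menu containing both.
    Illustrative sketch only."""
    menus = [frozenset(s) for r in range(1, len(universe) + 1)
             for s in combinations(universe, r)]
    for A in menus:
        for B in menus:
            x, y = choice[A], choice[B]
            if x != y and x in B and y in A:
                return False
    return True
```

A choice function induced by a fixed linear order passes the check, while a "cyclic" one that picks a from {a,b,c} but c from {a,c} fails it.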



Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽
Nikhil R. Pal ◽  
Chin-Teng Lin

Equality and incomparability in multi-label ranking have not previously been addressed in learning. This paper proposes a new native ranker neural network for multi-label ranking with incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifferent preference orders, or subgroups that are ranked equally. PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function, and uses preference orders and the Spearman ranking correlation as objective functions. PNN comes in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture that introduces a new type of hidden neuron with multiple activation functions in the middle layer, plus duplicated output layers that reinforce the ranking by increasing the number of weights. PNN accepts a single data instance as input; its output neurons correspond to the labels, and each output value represents a preference value. PNN is evaluated on a new preference-mining data set containing repeated label values, which has not been experimented on before. The SS and PSS functions speed up learning, and PNN outperforms five previously proposed methods for strict label ranking in accuracy with high computational efficiency.


2020 ◽  
Vol 76 (2) ◽  
pp. 1063-1081 ◽  
Author(s):  
Rong Zhao ◽  
Maozhu Jin ◽  
Peiyu Ren ◽  
Qian Zhang

2019 ◽  
Vol 66 ◽  
pp. 1147-1197
Author(s):  
Jan Maly ◽  
Miroslaw Truszczyński ◽  
Stefan Woltran

Lifting a preference order on elements of some universe to a preference order on subsets of this universe is often guided by postulated properties the lifted order should have. Well-known impossibility results pose severe limits on when such liftings exist if all non-empty subsets of the universe are to be ordered. The extent to which these negative results carry over to other families of sets is not known. In this paper, we consider families of sets that induce connected subgraphs in graphs. For such families, common in applications, we study whether lifted orders satisfying the well-studied axioms of dominance and (strict) independence exist for every or, in another setting, for some underlying order on elements (strong and weak orderability). We characterize families that are strongly and weakly orderable under dominance and strict independence, and obtain a tight bound on the class of families that are strongly orderable under dominance and independence.
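The dominance axiom mentioned here can be tested mechanically on small families. The sketch below uses a common reading (adding an element strictly better than everything in a set must strictly improve the set, and adding a strictly worse element must strictly worsen it), restricted to additions that stay inside the family; the helper names and ranking encoding are hypothetical.

```python
def dominance_violations(family, elem_rank, lifted_better):
    """Return the (set, element) pairs violating the dominance axiom.
    `family` is a set of frozensets to be ordered; `elem_rank[x]` is
    x's rank in the underlying element order (lower is better);
    `lifted_better(S, T)` says S is strictly preferred to T in the
    lifted order. Illustrative sketch only."""
    violations = []
    for A in family:
        for x in elem_rank:
            B = A | {x}
            if x in A or B not in family:
                continue
            better_than_all = all(elem_rank[x] < elem_rank[a] for a in A)
            worse_than_all = all(elem_rank[x] > elem_rank[a] for a in A)
            if better_than_all and not lifted_better(B, A):
                violations.append((A, x))
            if worse_than_all and not lifted_better(A, B):
                violations.append((A, x))
    return violations
```

Ordering sets by the sorted tuple of their elements' ranks, for instance, satisfies dominance on the family of all non-empty subsets of a two-element universe.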


2019 ◽  
Vol 66 ◽  
pp. 57-84 ◽  
Author(s):  
Krzysztof Magiera ◽  
Piotr Faliszewski

We provide the first polynomial-time algorithm for recognizing if a profile of (possibly weak) preference orders is top-monotonic. Top-monotonicity is a generalization of the notions of single-peakedness and single-crossingness, defined by Barbera and Moreno. Top-monotonic profiles always have weak Condorcet winners and satisfy a variant of the median voter theorem. Our algorithm proceeds by reducing the recognition problem to the SAT-2CNF problem.
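The weak Condorcet winners that the abstract guarantees for top-monotonic profiles are easy to compute directly. A minimal sketch for strict orders (the paper also allows weak orders, which this sketch does not handle):

```python
def weak_condorcet_winners(profile, candidates):
    """A weak Condorcet winner is a candidate whom no rival defeats in
    a strict pairwise majority. `profile` holds strict preference
    orders, best first."""
    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        wins = sum(1 for order in profile
                   if order.index(a) < order.index(b))
        return wins > len(profile) - wins
    return [c for c in candidates
            if not any(beats(d, c) for d in candidates if d != c)]
```

In a profile where a majority ranks a above every rival, the function returns exactly [a]; on a Condorcet cycle it returns the empty list.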

