Parameterized Algorithms for Finding a Collective Set of Items

2020, Vol. 34 (02), pp. 1838-1845
Author(s): Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk, Dušan Knop, Rolf Niedermeier

We extend the work of Skowron et al. (AIJ, 2016) by considering the parameterized complexity of the following problem. We are given a set of items and a set of agents, where each agent assigns an integer utility value to each item. The goal is to find a set of k items that these agents would collectively use. For each such collective set of items, each agent provides a score that can be described using an OWA (ordered weighted average) operator and we seek a set with the highest total score. We focus on the parameterization by the number of agents and we find numerous fixed-parameter tractability results (however, we also find some W[1]-hardness results). It turns out that most of our algorithms even apply to the setting where each agent has an integer weight.
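As a rough illustration of the scoring model described above (a minimal sketch under assumed conventions, not the authors' fixed-parameter algorithms): each agent sorts its utilities for the chosen k items in non-increasing order, weights them with an OWA vector, and the total score of the committee is the (possibly agent-weighted) sum over agents. All function and variable names below are illustrative.

    # Minimal sketch of OWA-based committee scoring (illustrative only;
    # not the FPT algorithms of the paper). utilities[a][i] is agent a's
    # integer utility for item i; owa is a weight vector of length k;
    # agent_weights optionally holds each agent's integer weight.
    from itertools import combinations

    def owa_score(chosen, utilities, owa, agent_weights=None):
        total = 0
        for a, util in enumerate(utilities):
            # Sort the agent's utilities for the chosen items in non-increasing order.
            vals = sorted((util[i] for i in chosen), reverse=True)
            score = sum(w * v for w, v in zip(owa, vals))
            total += (agent_weights[a] if agent_weights else 1) * score
        return total

    def best_committee(utilities, k, owa, agent_weights=None):
        items = range(len(utilities[0]))
        return max(combinations(items, k),
                   key=lambda c: owa_score(c, utilities, owa, agent_weights))

    # Two agents, four items, k = 2; OWA weights (1, 1) give plain utilitarian scoring.
    print(best_committee([[3, 0, 2, 1], [0, 3, 2, 1]], 2, (1, 1)))  # -> (0, 2)

This exhaustive search is exponential in the number of items; the point of the paper is to do better when the number of agents is taken as the parameter.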

Author(s): Marko Samer, Stefan Szeider

Parameterized complexity is a new theoretical framework that considers, in addition to the overall input size, the effects on computational complexity of a secondary measurement, the parameter. This two-dimensional viewpoint allows a fine-grained complexity analysis that takes structural properties of problem instances into account. The central notion is "fixed-parameter tractability", which refers to solvability in polynomial time for each fixed value of the parameter, where the order of the polynomial time bound is independent of the parameter. This chapter presents the main concepts and recent results on the parameterized complexity of the satisfiability problem and outlines fundamental algorithmic ideas that arise in this context. Among the parameters considered are the size of backdoor sets with respect to various tractable base classes and the treewidth of graph representations of satisfiability instances.
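For reference, the standard formalization of this notion (not specific to this chapter): a parameterized problem is fixed-parameter tractable if instances (x, k) can be decided in time

    f(k) \cdot |x|^{O(1)},

where f is a computable function depending only on the parameter k; in particular, the degree of the polynomial does not depend on k.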


Author(s): T. Marchant

When using the ordered weighted average (OWA) operator, one may wish to optimize the variability of the weights, measured either by the entropy (to be maximized) or by the variance (to be minimized), while keeping the orness of the operator at a fixed level. This problem has been considered by several authors. Dually, there are contexts in which one wishes to maximize the orness while guaranteeing some fixed level of variability. In this paper, we present two algorithms for finding such weights: one for the case where variability is captured by the entropy and one where it is captured by the variance.
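For orientation, the quantities involved are the standard ones for an OWA weight vector w = (w_1, …, w_n) with w_i ≥ 0 and \sum_i w_i = 1 (the paper's algorithms themselves are not reproduced here):

    \mathrm{orness}(w) = \frac{1}{n-1} \sum_{i=1}^{n} (n-i)\, w_i,
    \qquad
    \mathrm{disp}(w) = -\sum_{i=1}^{n} w_i \ln w_i,
    \qquad
    \mathrm{Var}(w) = \frac{1}{n} \sum_{i=1}^{n} \Bigl(w_i - \tfrac{1}{n}\Bigr)^{2}.

The constrained problems then fix one of these quantities and optimize another over the simplex of admissible weight vectors.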


2015, Vol. 3 (1-2), pp. 65-105
Author(s): Christophe Labreuche, Brice Mayag, Bertrand Duqueroie

2013, Vol. 47, pp. 475-519
Author(s): N. Betzler, A. Slinko, J. Uhlmann

We investigate two systems of fully proportional representation suggested by Chamberlin and Courant and by Monroe. Both systems assign a representative to each voter so that the "sum of misrepresentations" is minimized. The winner determination problem for both systems is known to be NP-hard; hence this work investigates whether there are variants of the proposed rules and/or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations we consider minimizing the maximal misrepresentation, effectively introducing two new rules. In the general case these "minimax" versions of the classical rules remain NP-hard. We investigate the parameterized complexity of winner determination for the two classical and the two new rules with respect to several parameters. Here we obtain a mixture of positive and negative results: for example, we prove fixed-parameter tractability with respect to the number of candidates but fixed-parameter intractability with respect to the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule.
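As a rough illustration of the two objectives for the Chamberlin-Courant rule, here is a brute-force sketch using Borda-style (positional) misrepresentation; Monroe's additional balanced-assignment constraint is omitted, and all names are illustrative.

    # Brute-force Chamberlin-Courant winner determination, classical (sum) vs.
    # "minimax" objective. profile[v] lists voter v's ranking, best first;
    # a voter's misrepresentation by candidate c is c's position in that ranking.
    from itertools import combinations

    def cc_committee(profile, candidates, k, objective=sum):
        def misrepresentation(committee):
            # Each voter is represented by their best-ranked committee member.
            per_voter = [min(ranking.index(c) for c in committee)
                         for ranking in profile]
            return objective(per_voter)
        return min(combinations(candidates, k), key=misrepresentation)

    profile = [["a", "b", "c", "d"], ["b", "a", "c", "d"], ["d", "c", "b", "a"]]
    print(cc_committee(profile, "abcd", 2, objective=sum))  # classical (sum) variant
    print(cc_committee(profile, "abcd", 2, objective=max))  # minimax variant

Enumerating all committees is exponential in the number of winners; the results summarized above indicate that, in general, one cannot expect a fundamentally better algorithm with that parameter alone, whereas other parameters or single-peakedness do help.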


2017, Vol. 28 (5), pp. 759-776
Author(s): Guiwu Wei, Mao Lu

The Hamacher product is a t-norm and the Hamacher sum is a t-conorm. They are good alternatives to the algebraic product and the algebraic sum, respectively. Nevertheless, it seems that most of the existing hesitant fuzzy aggregation operators are based on algebraic operations. In this paper, we utilize Hamacher operations to develop some Pythagorean hesitant fuzzy aggregation operators: Pythagorean hesitant fuzzy Hamacher weighted average operator, Pythagorean hesitant fuzzy Hamacher weighted geometric operator, Pythagorean hesitant fuzzy Hamacher ordered weighted average operator, Pythagorean hesitant fuzzy Hamacher ordered weighted geometric operator, Pythagorean hesitant fuzzy Hamacher hybrid average operator, and Pythagorean hesitant fuzzy Hamacher hybrid geometric operator. The prominent characteristics of these proposed operators are studied. Then, we utilize these operators to develop some approaches for solving the Pythagorean hesitant fuzzy multiple-attribute decision-making problems. Finally, a practical example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
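For reference, the Hamacher family (with parameter γ > 0) underlying these operators is, in its standard form,

    T_{\gamma}(a, b) = \frac{ab}{\gamma + (1-\gamma)(a + b - ab)},
    \qquad
    S_{\gamma}(a, b) = \frac{a + b - ab - (1-\gamma)ab}{1 - (1-\gamma)ab},

which for γ = 1 reduce to the algebraic product ab and the algebraic (probabilistic) sum a + b - ab, and for γ = 2 give the Einstein operations; the paper lifts these operations to Pythagorean hesitant fuzzy elements.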


Author(s): H. B. Mitchell

The OWA (Ordered Weighted Average) operator is a powerful non-linear operator for aggregating a set of inputs a_i, i ∈ {1, 2, …, M}. In the original OWA operator the inputs are crisp variables a_i. This restriction was subsequently removed by Mitchell and Schaefer, who, by application of the extension principle, defined a fuzzy OWA operator which aggregates a set of ordinary fuzzy sets A_i. We continue this process and define an intuitionistic OWA operator which aggregates a set of intuitionistic fuzzy sets Ã_i. We describe a simple application of the new intuitionistic OWA operator in multiple-expert multiple-criteria decision-making.
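For reference, the crisp OWA operator on which these extensions build is Yager's

    \mathrm{OWA}_{w}(a_1, \ldots, a_M) = \sum_{j=1}^{M} w_j\, b_j,

where b_j is the j-th largest of the inputs a_1, …, a_M and the weights satisfy w_j ∈ [0, 1] and \sum_j w_j = 1; the fuzzy and intuitionistic variants replace the crisp inputs by (intuitionistic) fuzzy sets via the extension principle.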

