Parliamentary Voting Procedures: Agenda Control, Manipulation, and Uncertainty

2017 · Vol 59 · pp. 133-173
Author(s): Robert Bredereck, Jiehua Chen, Rolf Niedermeier, Toby Walsh

We study computational problems for two popular parliamentary voting procedures: the amendment procedure and the successive procedure. Both work in multiple stages, where the result of each stage may influence the result of the next, and both proceed according to a given linear order of the alternatives, the agenda. We obtain the following results for both voting procedures. On the one hand, deciding whether a specific alternative can be made to win, either by having the smallest possible number of voters report insincere preferences (the Manipulation problem) or by choosing a suitable ordering of the agenda (the Agenda Control problem), takes polynomial time. On the other hand, our experimental studies with real-world data indicate that most preference profiles cannot be manipulated by only a few voters and that successful agenda control is typically impossible. If the voters' preferences are incomplete, then deciding whether an alternative can possibly win is NP-hard for both procedures. While deciding whether an alternative necessarily wins is coNP-hard for the amendment procedure, it is polynomial-time solvable for the successive procedure.
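
Since both procedures are stage-based and agenda-driven, a small sketch may help fix the mechanics. The following is a minimal illustration in Python (my sketch, not the authors' code), assuming an odd number of voters with complete strict rankings and sincere voting; the successive outcome is computed by backward induction over the agenda.

    # A profile is a list of rankings; each ranking lists the
    # alternatives from best to worst for one voter.

    def majority_prefers(profile, x, y):
        # True if strictly more than half of the voters rank x above y.
        return sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2

    def amendment_winner(profile, agenda):
        # Each stage pits the surviving alternative against the next
        # agenda item in a pairwise majority vote; ties keep the survivor.
        current = agenda[0]
        for challenger in agenda[1:]:
            if majority_prefers(profile, challenger, current):
                current = challenger
        return current

    def successive_winner(profile, agenda):
        # Voters accept the first agenda item iff a majority prefers it
        # to the outcome among the remaining alternatives, computed by
        # backward induction; the last item wins by default.
        if len(agenda) == 1:
            return agenda[0]
        rest = successive_winner(profile, agenda[1:])
        return agenda[0] if majority_prefers(profile, agenda[0], rest) else rest

    # A Condorcet cycle: the two procedures pick different winners from
    # the same agenda, which is what makes agenda control interesting.
    profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
    print(amendment_winner(profile, ["a", "b", "c"]))   # c
    print(successive_winner(profile, ["a", "b", "c"]))  # a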

Author(s): M. A. Danilov, M. V. Drobysh, A. N. Dubovitsky, F. G. Markov, ...

Restrictions on emissions for civil aircraft engines, on the one hand, and the need to increase engine efficiency, on the other, complicate the development of low-emission combustors for such engines.


Author(s): Federico Fabbrini

This chapter focuses on the European Union after Brexit and articulates the case for constitutional reforms. Reforms are necessary to address the substantive and institutional shortcomings that patently emerged in the context of Europe's old and new crises. Moreover, reforms will be compelled by the exigencies of the post-Covid-19 EU recovery, which pushes the EU towards new horizons in terms of fiscal federalism and democratic governance. The chapter therefore considers both the obstacles and the opportunities involved in reforming the EU to make it more effective and legitimate. On the one hand, it underlines the difficulties connected to the EU treaty amendment procedure, owing to the requirement of unanimous approval of any treaty change and the consequent problem of the veto. On the other hand, it emphasizes the Member States' increasing practice of concluding intergovernmental agreements outside the EU legal order and stresses that these agreements have set new rules on their entry into force which overcome the state veto, suggesting that this is now a precedent to consider.


2020
Author(s): Yi-Chieh Huang, Kamhon Kan, Larry Y. Tzeng, Kili C. Wang

Knowing how small a violation of stochastic dominance rules most individuals would accept is a prerequisite for applying almost stochastic dominance criteria. Unlike previous laboratory-experimental studies, this paper estimates an acceptable violation of stochastic dominance rules from 939,690 real-world observations on the choice of deductibles in automobile theft insurance. We find that, if all policyholders in the sample who chose a low deductible did so optimally, the upper-bound estimate of the acceptable violation ratio is 0.0014, which is close to zero. If instead we only require that most decision makers, namely 99% (95%) of the policyholders in the sample, chose the low deductible optimally, the upper-bound estimate of the acceptable violation ratio is 0.0405 (0.0732). Our results provide reference values for the acceptable violation ratio when applying almost stochastic dominance rules. This paper was accepted by Manel Baucells, decision analysis.
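
For reference, Leshno and Levy's violation ratio for almost first-order stochastic dominance is the area where the dominance condition fails, divided by the total area between the two cumulative distribution functions. A minimal sketch for empirical samples (my illustration, not the authors' estimation procedure):

    import numpy as np

    def afsd_violation_ratio(sample_f, sample_g, grid_size=10_000):
        # Ratio of the area where F lies above G (violating "F dominates
        # G" by first-order stochastic dominance) to the total area
        # between the two empirical CDFs. Zero means exact dominance.
        xs = np.linspace(min(sample_f.min(), sample_g.min()),
                         max(sample_f.max(), sample_g.max()), grid_size)
        F = np.searchsorted(np.sort(sample_f), xs, side="right") / sample_f.size
        G = np.searchsorted(np.sort(sample_g), xs, side="right") / sample_g.size
        diff = F - G
        dx = xs[1] - xs[0]                      # uniform grid, rectangle rule
        violation = np.clip(diff, 0.0, None).sum() * dx
        total = np.abs(diff).sum() * dx
        return violation / total if total > 0 else 0.0

    # Example: a right-shifted lottery with a slightly lighter tail
    # "almost" dominates; the ratio is small and far from 1.
    rng = np.random.default_rng(0)
    f = rng.normal(1.0, 1.0, 10_000)
    g = rng.normal(0.0, 1.5, 10_000)
    print(afsd_violation_ratio(f, g))  # small positive ratio

A ratio below an empirically acceptable threshold, such as the upper bounds estimated in the paper, would justify treating the nearly dominated option as effectively dominated.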


2013 · Vol 5 (1) · pp. 1-33
Author(s): Marga Reimer

Recent experimental studies appear to discredit Gricean accounts of irony and metaphor. I argue that appearances are decidedly misleading here and that Gricean accounts of these figures of speech are actually confirmed by the studies in question. However, my primary aim is not so much to defend Gricean accounts of irony and metaphor as to motivate two related points: one substantive and one methodological. The substantive point concerns something Grice suggests in his brief remarks on irony: that the interpretation of an ironical (vs. metaphorical) utterance requires two distinct applications of second-order theory of mind (ToM). I argue that such a view has considerable explanatory power: it can explain an intuitive contrast between irony and metaphor, some interesting data on the ToM abilities of patients with schizophrenia, and some intuitive similarities between irony on the one hand and hyperbole and meiosis on the other. The methodological point concerns the relationship between the empirical psychologist's (or experimental philosopher's) experimental studies and the armchair philosopher's thought-experiments. I suggest that the credibility of an experimentally supported claim is enhanced when it captures the reflective judgments elicited by the armchair philosopher's thought-experiments.


2016 · Vol 56 · pp. 269-327
Author(s): Maximilian Fickert, Joerg Hoffmann, Marcel Steinmetz

Recent work has shown how to improve delete-relaxation heuristics by computing relaxed plans, i.e., the h^FF heuristic, in a compiled planning task Π^C which explicitly represents a given set C of fact conjunctions. While this compilation view of such partial delete relaxation is simple and elegant, its meaning with respect to the original planning task is opaque, and the size of Π^C grows exponentially in |C|. We herein provide a direct characterization, without compilation, making explicit how the approach arises from a combination of the delete relaxation with critical-path heuristics. Designing equations characterizing a novel view on h^+ on the one hand, and a generalized version h^C of h^m on the other hand, we show that h^+(Π^C) can be characterized in terms of a combined h^{C+} equation. This naturally generalizes the standard delete-relaxation framework: understanding that framework as a relaxation over singleton facts as atomic subgoals, one can refine the relaxation by using the conjunctions in C as atomic subgoals instead. Thanks to this explicit view, we identify the precise source of complexity in h^FF(Π^C), namely the maximization over sets of supported atomic subgoals during relaxed plan extraction, which is easy for singleton-fact subgoals but NP-complete in the general case. Approximating that problem greedily, we obtain a polynomial-time h^{CFF} version of h^FF(Π^C), superseding the Π^C compilation as well as the modified Π^C_ce compilation, which achieves the same complexity reduction but at a loss of information. Experiments on IPC benchmarks show that these theoretical advantages can translate into empirical ones.
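
To make the critical-path side concrete, the h^m-style recursion generalized to an arbitrary conjunction set C can be sketched as follows (my reconstruction from the standard h^m definition, assuming C contains all singleton facts; the paper's exact h^C formulation may differ in details). For a subgoal set g of facts evaluated in state s:

    h^C(g) =
    \begin{cases}
      0 & \text{if } g \subseteq s, \\
      \min\limits_{\substack{a:\ \mathrm{add}(a) \cap g \neq \emptyset \\ \mathrm{del}(a) \cap g = \emptyset}}
        \mathrm{cost}(a) + h^C\big((g \setminus \mathrm{add}(a)) \cup \mathrm{pre}(a)\big)
        & \text{if } g \in C, \\
      \max\limits_{g' \subseteq g,\ g' \in C} h^C(g') & \text{otherwise.}
    \end{cases}

Taking C to be all conjunctions of size at most m recovers h^m (and C = the singletons gives h^1 = h^max); enlarging C refines the relaxation, which is the sense in which the conjunctions in C act as atomic subgoals.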


2017 · Vol 50 (1) · pp. 181-203
Author(s): Monika Nalepa

This paper draws on Cox and McCubbins' comparison of floor and cartel agenda models and adapts it to the context of multi-party parliamentary regimes, with the goal of clarifying some important differences between the legislative consequences of cohesion and discipline, on the one hand, and the effects of agenda setting, on the other. Internal party discipline and/or preference cohesion receives the bulk of the emphasis in comparative studies of empirical patterns of legislative behavior, generally without consideration of the role of the agenda. In a series of stylized models, this paper highlights important differences between having more unified parties and/or coalitions as a result of discipline and/or cohesion and the successful use of agenda control. We show that cohesion or discipline, understood as the ability to achieve voting unity, does not produce the same patterns of legislative behavior as negative agenda control. Data on legislative voting in the Polish Sejm are used to illustrate these points.
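
To illustrate the distinction the paper targets, here is a toy spatial-model sketch in the spirit of Cox and McCubbins (my illustration, not the paper's models): negative agenda control keeps bills off the floor whenever passing them would hurt the agenda-setting cartel, regardless of how unified the voting is.

    # Toy one-dimensional spatial model. Policies are points on a line;
    # every actor prefers outcomes closer to its own ideal point.

    def gatekeeper_schedules(status_quo, floor_median, cartel_median):
        # Under an open rule the floor amends any scheduled bill to the
        # floor median. Negative agenda control means the gatekeeper
        # blocks the bill whenever that outcome would sit farther from
        # the cartel median than the status quo already does, so the
        # cartel is never "rolled".
        new_policy = floor_median
        return abs(new_policy - cartel_median) <= abs(status_quo - cartel_median)

    # Scheduled: the status quo lies on the far side of the floor median.
    print(gatekeeper_schedules(status_quo=0.9, floor_median=0.5, cartel_median=0.3))  # True
    # Blocked, however unified the party's voting would be: the bill
    # would move policy away from the cartel.
    print(gatekeeper_schedules(status_quo=0.4, floor_median=0.5, cartel_median=0.3))  # False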


Author(s): D. Dahlke, M. Linkiewicz

This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting the scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hulls of structures, while the other exploits salient line segments in the 2.5D surface models to recover the hulls of 3D structures. Because it analyzes orders of magnitude more 3D points, the point-cloud-based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra carry information about their inherent semantics and can be used for subsequent, more differentiated semantic annotation through exploitation of texture information.
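
As a rough illustration of the 2.5D side (my sketch over a hypothetical gridded digital surface model; the paper's line-segment extraction is more involved), building hulls show up as sharp height discontinuities that can be masked and then fitted with line segments:

    import numpy as np

    def facade_edge_mask(dsm, min_jump=2.0):
        # A 2.5D surface model stores one height per pixel, so building
        # outlines appear as large height gradients. This crude sketch
        # marks pixels whose gradient magnitude exceeds min_jump metres
        # per pixel; salient line segments could then be fitted to the
        # marked pixels (e.g., with a Hough transform).
        gy, gx = np.gradient(dsm.astype(float))
        return np.hypot(gx, gy) > min_jump

    # Toy model: a 10 m high, flat-roofed block on flat ground.
    dsm = np.zeros((100, 100))
    dsm[30:60, 40:80] = 10.0
    print(facade_edge_mask(dsm).sum())  # number of pixels on the outline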


2018 · Vol 16 (1) · pp. 48-71
Author(s): Carla Canestrari, Ivana Bianchi

This paper proposes a new way of analyzing the contrast between an ironic comment and its referent context by focusing on the structure of the dimension to which the contrast belongs. This approach was stimulated by previous experimental studies demonstrating that dimensions are perceptually made up of two opposite poles and an intermediate region consisting of either point or range properties. Applying this schema makes two things clear: on the one hand, previous evidence-based literature mostly focuses on the idea that for an ironic meaning to be detected there must be a contrast between two poles or within a pole; on the other hand, there is room for new investigations into whether ironic comments containing poles can refer to intermediate situations (i.e., situations perceived as neither one pole nor the other) or, vice versa, whether ironic comments containing intermediates can refer to polarized situations.


2019 · Vol 946 · pp. 661-667
Author(s): Ivan Nikolaevich Erdakov, Vasily A. Ivanov, Alexander V. Vyboishchik

The paper presents methods for forecasting the structure and geometrical parameters of castings using the ProCAST engineering-analysis system. Based on experimental studies and computer simulation, a regularity has been established between the supercooling rate of an aluminium alloy on the one hand and the nucleation and crystal-growth rates on the other. Dependencies have also been established describing the change in the modulus of plasticity, the coefficient of linear thermal expansion, and Poisson's ratio within the temperature range of 20 to 1000 °C for cores made from an α-set mixture. Computer simulation based on the experimental data for silumin castings made it possible to forecast the alloy structure with a probability level of 95% and to calculate the hindered contraction of the alloy with an accuracy of ±1.5%.
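
The kind of regularity described, between supercooling and nucleation/crystal-growth rates, is classically captured by power laws of the form N = a·ΔT^n; the sketch below shows how such a dependency could be fitted from measurements (hypothetical data and coefficients, not the paper's values):

    import numpy as np

    # Illustrative only: synthetic (supercooling, nucleation-rate) pairs
    # following N = a * dT**n with made-up a and n, plus a little noise.
    rng = np.random.default_rng(0)
    dT = np.array([2.0, 4.0, 6.0, 8.0, 10.0])                   # supercooling, K
    N = 1.5e3 * dT**2.1 * (1 + 0.02 * rng.standard_normal(5))   # nuclei per m^3 s

    # On log-log axes the power law is a straight line, so a linear fit
    # recovers the exponent and the prefactor.
    n, log_a = np.polyfit(np.log(dT), np.log(N), 1)
    print(f"exponent n = {n:.2f}, prefactor a = {np.exp(log_a):.0f}")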

