Skeptical Reasoning with Preferred Semantics in Abstract Argumentation without Computing Preferred Extensions

Author(s):  
Matthias Thimm ◽  
Federico Cerutti ◽  
Mauro Vallati

We address the problem of deciding skeptical acceptance of an argument with respect to preferred semantics in abstract argumentation frameworks, i.e., the problem of deciding whether an argument is contained in all maximally admissible sets, a.k.a. preferred extensions. State-of-the-art algorithms solve this problem with iterative calls to an external SAT solver to determine preferred extensions. We provide a new characterisation of skeptical acceptance with respect to preferred semantics that does not involve the notion of a preferred extension. We then develop a new algorithm that also relies on iterative calls to an external SAT solver but avoids the costly part of maximising admissible sets. We present the results of an experimental evaluation showing that this new approach significantly outperforms the state of the art. We also apply similar ideas to develop a new algorithm for computing the ideal extension.
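The characterisation the abstract alludes to can be illustrated on a tiny framework: an argument is in every preferred extension exactly when every admissible set can be enlarged to an admissible set containing it, so no explicit maximisation is needed. The sketch below brute-forces admissible sets in place of the paper's iterative SAT calls; it is a didactic stand-in, not the authors' algorithm.

```python
from itertools import combinations

def conflict_free(S, attacks):
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, x, args, attacks):
    # every attacker of x is counter-attacked by some member of S
    return all(any((z, y) in attacks for z in S)
               for y in args if (y, x) in attacks)

def admissible(S, args, attacks):
    return conflict_free(S, attacks) and all(defends(S, x, args, attacks) for x in S)

def skeptically_preferred(a, args, attacks):
    # a is in every preferred extension iff every admissible set can be
    # enlarged to an admissible set containing a (no maximisation needed);
    # brute-force enumeration stands in for the iterative SAT oracle.
    adms = [set(c) for r in range(len(args) + 1)
            for c in combinations(args, r)
            if admissible(set(c), args, attacks)]
    return all(any(S <= T and a in T for T in adms) for S in adms)

# Chain a -> b -> c: the single preferred extension is {a, c}
args = ["a", "b", "c"]
attacks = {("a", "b"), ("b", "c")}
```

Here `skeptically_preferred("a", ...)` and `skeptically_preferred("c", ...)` hold, while `"b"` is rejected, since no admissible set contains it.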

Author(s):  
Corey Brettschneider

How should a liberal democracy respond to hate groups and others that oppose the ideal of free and equal citizenship? The democratic state faces the hard choice of either protecting the rights of hate groups and allowing their views to spread, or banning their views and violating citizens' rights to freedoms of expression, association, and religion. Avoiding the familiar yet problematic responses to these issues, this book proposes a new approach called value democracy. The theory of value democracy argues that the state should protect the right to express illiberal beliefs, but the state should also engage in democratic persuasion when it speaks through its various expressive capacities: publicly criticizing, and giving reasons to reject, hate-based or other discriminatory viewpoints. Distinguishing between two kinds of state action—expressive and coercive—the book contends that public criticism of viewpoints advocating discrimination based on race, gender, or sexual orientation should be pursued through the state's expressive capacities as speaker, educator, and spender. When the state uses its expressive capacities to promote the values of free and equal citizenship, it engages in democratic persuasion. By using democratic persuasion, the state can both respect rights and counter hateful or discriminatory viewpoints. The book extends this analysis from freedom of expression to the freedoms of religion and association, and shows that value democracy can uphold the protection of these freedoms while promoting equality for all citizens.


2015 ◽  
Author(s):  
Rodrigo Goulart ◽  
Juliano De Carvalho ◽  
Vera De Lima

Word Sense Disambiguation (WSD) is an important task for biomedical text mining. Supervised WSD methods achieve the best results, but they are complex and their testing cost is high. This work presents an experiment on WSD using graph-based (unsupervised) approaches. Three algorithms were tested and compared to the state of the art. Results indicate that similar performance can be reached with different levels of complexity, which may point to a new approach to this problem.
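A common family of graph-based WSD methods ranks candidate senses by centrality in a sense graph, e.g. with PageRank. The sketch below shows the idea on a hypothetical graph; the sense labels and edges are illustrative assumptions, not drawn from any real sense inventory or from the three algorithms the paper tested.

```python
def pagerank(graph, damping=0.85, iters=50):
    # graph: node -> set of neighbours (a symmetric sense/co-occurrence graph)
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):  # power iteration
        rank = {n: (1 - damping) / len(nodes)
                + damping * sum(rank[m] / len(graph[m])
                                for m in nodes if n in graph[m])
                for n in nodes}
    return rank

# Hypothetical sense graph for "cold" in a biomedical context:
graph = {
    "cold#disease":     {"fever#symptom", "virus#agent", "patient#person"},
    "cold#temperature": {"patient#person"},
    "fever#symptom":    {"cold#disease", "virus#agent"},
    "virus#agent":      {"cold#disease", "fever#symptom"},
    "patient#person":   {"cold#disease", "cold#temperature"},
}
rank = pagerank(graph)
```

The better-connected medical sense `cold#disease` outranks `cold#temperature`, so it would be chosen for a biomedical document, with no sense-annotated training data involved.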


Author(s):  
Tianxing Wu ◽  
Guilin Qi ◽  
Bin Luo ◽  
Lei Zhang ◽  
Haofen Wang

Extracting knowledge from Wikipedia has attracted much attention in the past ten years. One of the most valuable kinds of knowledge is type information, i.e., the axioms stating that an instance is of a certain type. Current approaches for inferring the types of instances from Wikipedia mainly rely on language-specific rules. Since these rules cannot capture the semantic associations between instances and classes (i.e., candidate types), they may lead to mistakes and omissions in the process of type inference. The authors propose a new approach that leverages attributes to perform language-independent type inference of instances from Wikipedia. The proposed approach is applied to the whole English and Chinese Wikipedia, resulting in the first version of MulType (Multilingual Type Information), a knowledge base describing the types of instances from multilingual Wikipedia. Experimental results show not only that the proposed approach outperforms state-of-the-art comparison methods, but also that MulType contains a large amount of new, high-quality type information.
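The intuition behind attribute-based typing can be sketched as an overlap score between an instance's infobox attributes and the attributes characteristic of each candidate class. The scoring function and the attribute profiles below are deliberately simple illustrative assumptions, not the paper's actual model or data.

```python
def infer_types(instance_attrs, class_attrs, threshold=0.5):
    # Score each candidate class by the fraction of its characteristic
    # attributes that the instance exhibits; attribute overlap, unlike
    # textual rules, does not depend on the article's language.
    scores = {cls: len(instance_attrs & attrs) / len(attrs)
              for cls, attrs in class_attrs.items() if attrs}
    return sorted((c for c, s in scores.items() if s >= threshold),
                  key=lambda c: -scores[c])

# Hypothetical attribute profiles (illustrative, not mined from Wikipedia):
class_attrs = {
    "City":   {"population", "mayor", "country"},
    "Person": {"birth_date", "occupation", "nationality"},
}
berlin = {"population", "country", "mayor", "area"}
```

For the instance above, `infer_types(berlin, class_attrs)` returns `["City"]`: the instance matches all of the class's characteristic attributes and none of `Person`'s.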


2008 ◽  
Vol 142 (1-2) ◽  
pp. 20-42 ◽  
Author(s):  
George D. Panagiotou ◽  
Theano Petsi ◽  
Kyriakos Bourikas ◽  
Christos S. Garoufalis ◽  
Athanassios Tsevis ◽  
...  

Author(s):  
David Bowie ◽  
Francis A. Buttle

The ideal person to review a book is arguably someone who has written a textbook himself. Bowie and Buttle have indeed made a promising effort to disseminate an important perspective on a subject related to hospitality. One might be quick to dismiss this text as a dime a dozen, a mere window dressing of the first edition, since little space is dedicated to reflecting on marketing theory and practice at the level of the state of the art. But that sort of unfair review is best left to scholars who have written textbooks celebrated throughout the English-speaking world, like Kotler or Drucker. The review here is a modest attempt to guide those who seek some ideas and facts about the book before purchasing it.


2011 ◽  
Vol 6 (1) ◽  
pp. 50-59
Author(s):  
Bernardo C. Vieira ◽  
Fabrício V. Andrade ◽  
Antônio O. Fernandes

State-of-the-art SAT solvers usually share the same core techniques, for instance the watched-literals structure, conflict clause recording, and non-chronological backtracking. Nevertheless, they may differ in the elimination of learnt clauses as well as in the decision heuristic. This article presents a framework for generating configurable SAT solvers. The proposed framework is composed of the following components: a Base SAT Solver, a Perl Preprocessor, and XML files (Solver Description and Heuristics Description files) that describe each heuristic as well as the set of heuristics the generated solver uses. These solvers may use several techniques and heuristics such as those implemented in BerkMin, in Equivalence Checking of Dissimilar Circuits, and in Minisat. To demonstrate the effectiveness of the proposed framework, this article also presents three distinct SAT solver instances generated by the framework to address a complex and challenging industry problem: the Combinational Equivalence Checking (CEC) problem. The first instance is a SAT solver that uses the BerkMin and Dissimilar Circuits core techniques, except for the learnt-clause elimination heuristic, which has been adapted from Minisat; the second combines the BerkMin and Minisat decision heuristics at run-time; and the third changes the database-reduction heuristic at run-time. The experiments demonstrate that the first generated SAT solver is faster than the state-of-the-art solver BerkMin on several instances, and faster than Minisat on almost every instance.
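The framework's central idea, keeping the solver core fixed while swapping heuristics described externally, can be miniaturised as a toy DPLL procedure with a pluggable decision heuristic. This is a didactic sketch: a real generated solver uses full CDCL machinery (watched literals, clause learning, non-chronological backtracking), and the heuristic registry here stands in for the XML Solver/Heuristics Description files.

```python
from collections import Counter

def simplify(clauses, lit):
    # assign lit True: drop satisfied clauses, shorten the rest;
    # None signals an empty clause, i.e. a conflict
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

# Pluggable decision heuristics, selected by name (mirroring the
# framework's externally described heuristic set):
HEURISTICS = {
    "first":    lambda clauses: abs(clauses[0][0]),
    "frequent": lambda clauses: Counter(abs(l) for c in clauses
                                        for l in c).most_common(1)[0][0],
}

def dpll(clauses, decide, assignment=None):
    assignment = dict(assignment or {})
    units = [c[0] for c in clauses if len(c) == 1]
    while units:                       # unit propagation
        lit = units.pop()
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None                # conflict during propagation
        units = [c[0] for c in clauses if len(c) == 1]
    if not clauses:
        return assignment              # all clauses satisfied
    var = decide(clauses)              # heuristic chooses the branch variable
    for lit in (var, -var):
        reduced = simplify(clauses, lit)
        if reduced is not None:
            model = dpll(reduced, decide, {**assignment, var: lit > 0})
            if model is not None:
                return model
    return None                        # both branches failed: UNSAT
```

Switching `HEURISTICS["frequent"]` for `HEURISTICS["first"]` changes the search order without touching the core, which is the configurability the framework provides at a much larger scale.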


2020 ◽  
Vol 13 (7) ◽  
pp. 3909-3922
Author(s):  
Florian Tornow ◽  
Carlos Domenech ◽  
Howard W. Barker ◽  
René Preusker ◽  
Jürgen Fischer

Abstract. Shortwave (SW) fluxes estimated from broadband radiometry rely on empirically gathered and hemispherically resolved fields of outgoing top-of-atmosphere (TOA) radiances. This study aims to provide more accurate and precise fields of TOA SW radiances reflected from clouds over ocean by introducing a novel semiphysical model predicting radiances per narrow sun-observer geometry. This model was statistically trained using CERES-measured radiances paired with MODIS-retrieved cloud parameters as well as reanalysis-based geophysical parameters. By using radiative transfer approximations as a framework to ingest the above parameters, the new approach incorporates cloud-top effective radius and above-cloud water vapor in addition to traditionally used cloud optical depth, cloud fraction, cloud phase, and surface wind speed. A two-stream cloud albedo, serving to statistically incorporate cloud optical thickness and cloud-top effective radius, and Cox–Munk ocean reflectance were used to describe an albedo over each CERES footprint. Effective-radius-dependent asymmetry parameters were obtained empirically and separately for each viewing-illumination geometry. A simple equation of radiative transfer, with this albedo and attenuating above-cloud water vapor as inputs, was used in its log-linear form to allow for statistical optimization. We identified the two-stream functional form that minimized radiance residuals calculated against CERES observations and outperformed the state-of-the-art approach for most observer geometries outside the sun glint and solar zenith angles between 20° and 70°, reducing the median SD of radiance residuals per solar geometry by up to 13.2 % for liquid clouds, 1.9 % for ice clouds, and 35.8 % for footprints containing both cloud phases.
Geometries affected by sun glint (constituting between 10 % and 1 % of the discretized upward hemisphere for solar zenith angles of 20° and 70°, respectively), however, often showed weaker performance when handled with the new approach and had increased residuals by as much as 60 % compared to the state-of-the-art approach. Overall, uncertainties were reduced for liquid-phase and mixed-phase footprints by 5.76 % and 10.81 %, respectively, while uncertainties for ice-phase footprints increased by 0.34 %. Tested for a variety of scenes, we further demonstrated the plausibility of scene-wise predicted radiance fields. This new approach may prove useful when employed in angular distribution models and may result in improved flux estimates, in particular dealing with clouds characterized by small or large droplet/crystal sizes.
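The model structure the abstract describes, a two-stream albedo fed into a log-linear radiance equation, can be sketched as follows. The exact functional form and the coefficients β are illustrative assumptions for exposition, not the fitted model from the study:

```latex
% Two-stream cloud albedo from optical depth \tau and an
% effective-radius-dependent asymmetry parameter g(r_e):
\alpha(\tau, r_e) \;\approx\; \frac{(1 - g(r_e))\,\tau}{\,2 + (1 - g(r_e))\,\tau\,}

% Log-linear radiance model per narrow sun-observer geometry, with
% above-cloud water vapour path u_{wv} as the attenuating term; the
% log form makes the fit linear in the coefficients \beta_i:
\ln L_{\mathrm{TOA}} \;=\; \beta_0 \;+\; \beta_1 \ln \alpha(\tau, r_e) \;+\; \beta_2\, u_{wv}
```

Because the equation is linear in the β after taking logarithms, it can be optimized statistically per viewing-illumination bin, which is what allows a separate empirical fit for each narrow sun-observer geometry.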


2022 ◽  
pp. 580-606
Author(s):  
Tianxing Wu ◽  
Guilin Qi ◽  
Bin Luo ◽  
Lei Zhang ◽  
Haofen Wang

Extracting knowledge from Wikipedia has attracted much attention in the past ten years. One of the most valuable kinds of knowledge is type information, i.e., the axioms stating that an instance is of a certain type. Current approaches for inferring the types of instances from Wikipedia mainly rely on language-specific rules. Since these rules cannot capture the semantic associations between instances and classes (i.e., candidate types), they may lead to mistakes and omissions in the process of type inference. The authors propose a new approach that leverages attributes to perform language-independent type inference of instances from Wikipedia. The proposed approach is applied to the whole English and Chinese Wikipedia, resulting in the first version of MulType (Multilingual Type Information), a knowledge base describing the types of instances from multilingual Wikipedia. Experimental results show not only that the proposed approach outperforms state-of-the-art comparison methods, but also that MulType contains a large amount of new, high-quality type information.


2016 ◽  
Vol 4 ◽  
pp. 183-196 ◽  
Author(s):  
Ashish Vaswani ◽  
Kenji Sagae

Transition-based approaches based on local classification are attractive for dependency parsing due to their simplicity and speed, despite producing results slightly below the state of the art. In this paper, we propose a new approach for approximate structured inference for transition-based parsing that produces scores suitable for global scoring using local models. This is accomplished with the introduction of error states in local training, which add information about incorrect derivation paths typically left out completely in locally trained models. Using neural networks for our local classifiers, our approach achieves 93.61% accuracy for transition-based dependency parsing in English.
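The error-state construction can be sketched on a toy shift-reduce system: alongside each oracle (state, gold action) pair, also emit the states reached by each applicable wrong action, labelled as off-path. The state representation and the `ERROR` label below are illustrative assumptions; the paper pairs such examples with neural local classifiers and global scoring.

```python
def apply_action(state, action):
    # toy shift-reduce state: (stack, buffer), both tuples
    stack, buffer = state
    if action == "SHIFT" and buffer:
        return (stack + buffer[:1], buffer[1:])
    if action == "REDUCE" and len(stack) >= 2:
        return (stack[:-1], buffer)
    return None  # action not applicable in this state

def with_error_states(gold_path, actions):
    # gold_path: (state, gold_action) pairs from the usual static oracle.
    # For each state, additionally emit every state reachable by one
    # applicable wrong action, so the local classifier also sees (and
    # learns to score) incorrect derivation paths.
    examples = []
    for state, gold in gold_path:
        examples.append((state, gold))
        for a in actions:
            off_path = apply_action(state, a) if a != gold else None
            if off_path is not None:
                examples.append((off_path, "ERROR"))
    return examples

# One oracle state where both actions are applicable:
state = (("a", "b"), ("c",))
examples = with_error_states([(state, "REDUCE")], ["SHIFT", "REDUCE"])
```

Here the wrong `SHIFT` yields the off-path state `(("a", "b", "c"), ())`, which enters training labelled `ERROR`; a purely locally trained model would never have seen it.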

