On polynomial-time learnability in the limit of strictly deterministic automata

1995 ◽  
Vol 19 (2) ◽  
pp. 153-179 ◽  
Author(s):  
Takashi Yokomori


2021 ◽  
Vol 178 (1-2) ◽  
pp. 59-76
Author(s):  
Emmanuel Filiot ◽  
Pierre-Alain Reynier

Copyless streaming string transducers (copyless SST) were introduced by R. Alur and P. Černý in 2010 as a one-way deterministic automaton model for defining transductions of finite strings. Copyless SST extend deterministic finite-state automata with a set of variables in which intermediate output strings are stored; these variables can be combined and updated along the run in a linear manner, i.e., no variable content can be copied on transitions. It is known that copyless SST capture exactly the class of MSO-definable string-to-string transductions and are as expressive as deterministic two-way transducers. They enjoy good algorithmic properties; most notably, their equivalence problem is decidable (in PSpace). HDT0L systems, on the other hand, have been studied for a long time, the most prominent result being the decidability of their equivalence problem. In this paper, we propose a semantics of HDT0L systems in terms of transductions and use it to study the class of deterministic copyful SST. Our contributions are as follows: (i) HDT0L systems and total deterministic copyful SST have the same expressive power; (ii) the equivalence problem for deterministic copyful SST and the equivalence problem for HDT0L systems are inter-reducible, in quadratic time; as a consequence, equivalence of deterministic SST is decidable; (iii) functionality of non-deterministic copyful SST is decidable; (iv) whether a non-deterministic copyful SST can be transformed into an equivalent non-deterministic copyless SST is decidable in polynomial time.
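To make the register/update mechanism concrete, here is a minimal Python sketch (ours, not taken from the paper) of a single-state copyless SST computing w ↦ w·reverse(w); the function name and the encoding are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a tiny single-state copyless SST
# with registers X and Y that computes w -> w . reverse(w).
# Each register appears at most once on the right-hand side of every update,
# which is exactly the "copyless" restriction described in the abstract.

def run_copyless_sst(word: str) -> str:
    x, y = "", ""                # register contents
    for ch in word:              # one-way, deterministic run
        # copyless update on reading ch:  X := X . ch ,  Y := ch . Y
        x, y = x + ch, ch + y
    return x + y                 # final output expression X . Y (each register used once)

if __name__ == "__main__":
    assert run_copyless_sst("abb") == "abb" + "bba"
    print(run_copyless_sst("abb"))   # prints "abbbba"
```

Copying a register (e.g., X := X . X) is what the copyless restriction forbids; the copyful SST studied in the paper drop exactly this restriction.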


10.29007/c3bj ◽  
2018 ◽  
Author(s):  
Udi Boker

There are various types of automata on infinite words, differing in their acceptance conditions. The most classic ones are weak, Büchi, co-Büchi, parity, Rabin, Streett, and Muller. This is opposed to the case of automata on finite words, in which there is only one standard type. The natural question is why. Why not a single type? Why these particular types? Shall we further look into additional types? To answer these questions, we clarify the succinctness of the different automata types and the size blowup involved in performing boolean operations on them. To this end, we show that unifying or intersecting deterministic automata of the classic ω-regular-complete types, namely parity, Rabin, Streett, and Muller, involves an exponential size blowup. We argue that there are good reasons for the classic types, mainly in the case of nondeterministic and alternating automata. They admit good size and complexity bounds with respect to succinctness, boolean operations, and decision procedures, and they are closely connected to various logics. Yet, we also argue that there is room for additional types, especially in the case of deterministic automata. In particular, generalized-Rabin, which was recently introduced, as well as a disjunction of Streett conditions, which we call hyper-Rabin, where the latter further generalizes the former, are interesting to consider. They may be exponentially more succinct than the classic types, they allow for union and intersection with only a quadratic size blowup, and their nonemptiness can be checked in polynomial time.
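As a quick reference for the acceptance conditions named above, the following Python sketch (ours, under one common textbook convention, not the paper's notation) checks Büchi, Rabin, Streett, and the hyper-Rabin (disjunction of Streett) conditions against the set of states a run visits infinitely often.

```python
# Illustrative only: acceptance checks for a run, given the set `inf` of
# states the run visits infinitely often. Conventions follow one common
# textbook formulation; the paper itself does not fix notation here.
from typing import FrozenSet, List, Tuple

State = str
Pair = Tuple[FrozenSet[State], FrozenSet[State]]   # an (E, F) pair

def buechi(inf: FrozenSet[State], accepting: FrozenSet[State]) -> bool:
    # Büchi: some accepting state is visited infinitely often.
    return bool(inf & accepting)

def rabin(inf: FrozenSet[State], pairs: List[Pair]) -> bool:
    # Rabin: for SOME pair (E, F), E is eventually avoided and F is hit infinitely often.
    return any(not (inf & e) and (inf & f) for e, f in pairs)

def streett(inf: FrozenSet[State], pairs: List[Pair]) -> bool:
    # Streett (dual of Rabin): for EVERY pair (E, F), infinitely many visits
    # to F imply infinitely many visits to E.
    return all((inf & e) or not (inf & f) for e, f in pairs)

def hyper_rabin(inf: FrozenSet[State], streett_conditions: List[List[Pair]]) -> bool:
    # Hyper-Rabin as described in the abstract: a disjunction of Streett conditions.
    return any(streett(inf, cond) for cond in streett_conditions)

if __name__ == "__main__":
    inf = frozenset({"q1", "q2"})
    print(buechi(inf, frozenset({"q2"})))                         # True
    print(rabin(inf, [(frozenset({"q0"}), frozenset({"q1"}))]))   # True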


2018 ◽  
Vol 60 (2) ◽  
pp. 360-375
Author(s):  
A. V. Vasil'ev ◽  
D. V. Churikov

10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (also under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
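As background for the aggregation rules mentioned in the abstract, here is a small Python sketch (ours, not the authors' learning algorithm) of aggregating a weighted profile with a positional scoring rule such as Borda; the function name and data encoding are illustrative assumptions.

```python
# Illustrative only: positional scoring aggregation over a weighted profile.
# Each voter contributes weight * score_vector[position of alternative].
# This is standard background, not the learning algorithm from the paper.
from typing import Dict, List, Tuple

def positional_scores(profile: List[Tuple[float, List[str]]],
                      score_vector: List[float]) -> Dict[str, float]:
    totals: Dict[str, float] = {}
    for weight, ranking in profile:
        for position, alternative in enumerate(ranking):
            totals[alternative] = totals.get(alternative, 0.0) \
                                  + weight * score_vector[position]
    return totals

if __name__ == "__main__":
    # Three weighted rankings over {a, b, c}; Borda scoring vector (2, 1, 0).
    profile = [(1.0, ["a", "b", "c"]),
               (2.0, ["b", "c", "a"]),
               (1.5, ["c", "a", "b"])]
    print(positional_scores(profile, [2.0, 1.0, 0.0]))
    # a: 1*2 + 2*0 + 1.5*1 = 3.5,  b: 1*1 + 2*2 + 1.5*0 = 5.0,  c: 1*0 + 2*1 + 1.5*2 = 5.0
```

The learning problem studied in the paper asks for the weights (or the ranking of preferences, for the ranked Pareto rule) that make such an aggregation decide a given set of examples correctly.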


Author(s):  
Yishay Mor ◽  
Claudia V. Goldman ◽  
Jeffrey S. Rosenschein


2013 ◽  
Vol 61 (16) ◽  
pp. 4127-4140 ◽  
Author(s):  
Awais Hussain Sani ◽  
Philippe Coussy ◽  
Cyrille Chavet

1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in this paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.

