pushdown automaton
Recently Published Documents


TOTAL DOCUMENTS: 35 (last five years: 3)

H-INDEX: 6 (last five years: 0)

2021 ◽  
Vol 50 (1) ◽  
pp. 76-88
Author(s):  
QingE Wu ◽  
Xing Wang ◽  
Zhiwu Chen ◽  
Hu Chen ◽  
Dong Sun ◽  
...  

In order to better recognize, track, and control fuzzy and uncertain things, this paper designs a suitable fuzzy pushdown automaton (FPDA) control method to solve the problem. First, the control design structure of the FPDA and the decision reasoning rules used in control are given. Second, the application of the FPDA to predicting quality control for spinning yarn is discussed as a practical problem. Finally, the FPDA is compared with other control methods on the target control task. The simulation results show that the designed FPDA is 12 ms faster in control speed and 4.98% higher in average precision than the traditional method, with a control precision of 96.87%.
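The core mechanism of a fuzzy pushdown automaton can be illustrated with a small simulation. The sketch below assumes a max-min composition (each transition carries a membership degree in [0, 1], and the acceptance degree of a string is the maximum over accepting runs of the minimum degree along the run); the transition table and degrees are invented for illustration and are not taken from the paper.

```python
# Transitions: (state, input_symbol, stack_top) -> list of (next_state, push_string, degree)
# input_symbol == "" denotes an epsilon move. All names and numbers are illustrative.
TRANSITIONS = {
    ("q0", "a", "Z"): [("q0", "AZ", 0.9)],   # push an A for each 'a'
    ("q0", "a", "A"): [("q0", "AA", 0.9)],
    ("q0", "b", "A"): [("q1", "", 0.8)],     # start matching 'b's
    ("q1", "b", "A"): [("q1", "", 0.8)],
    ("q1", "", "Z"): [("qf", "Z", 1.0)],     # accept once the stack is back to Z
}
ACCEPTING = {"qf"}

def acceptance_degree(word, state="q0", stack="Z", degree=1.0):
    """Max-min acceptance degree of `word`, explored by depth-first search."""
    best = degree if not word and state in ACCEPTING else 0.0
    if not stack:
        return best
    top, rest = stack[0], stack[1:]
    moves = []
    if word:  # moves that consume one input symbol
        moves += [(word[1:], t) for t in TRANSITIONS.get((state, word[0], top), [])]
    moves += [(word, t) for t in TRANSITIONS.get((state, "", top), [])]  # epsilon moves
    for remaining, (nxt, push, d) in moves:
        best = max(best, acceptance_degree(remaining, nxt, push + rest, min(degree, d)))
    return best

print(acceptance_degree("aabb"))  # weakest transition on the run bounds the degree
print(acceptance_degree("ba"))    # no accepting run: degree 0.0
```

The min along a run models a chain of fuzzy inferences being only as reliable as its weakest step; the max over runs keeps the best supported conclusion.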


2021 ◽  
Vol 55 ◽  
pp. 9
Author(s):  
František Mráz ◽  
Friedrich Otto

Here we show that for monotone RWW- (and RRWW-) automata, window size two is sufficient, both in the nondeterministic as well as in the deterministic case. For the former case, this is done by proving that each context-free language is already accepted by a monotone RWW-automaton of window size two. In the deterministic case, we first prove that each deterministic pushdown automaton can be simulated by a deterministic monotone RWW-automaton of window size three, and then we present a construction that transforms a deterministic monotone RWW-automaton of window size three into an equivalent automaton of the same type that has window size two. Furthermore, we study the expressive power of shrinking RWW- and RRWW-automata the window size of which is just one or two. We show that for shrinking RRWW-automata that are nondeterministic, window size one suffices, while for nondeterministic shrinking RWW-automata, we already need window size two to accept all growing context-sensitive languages. In the deterministic case, shrinking RWW- and RRWW-automata of window size one accept only regular languages, while those of window size two characterize the Church-Rosser languages.
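The cycle-rewrite-restart behaviour of a window-size-two automaton can be conveyed by a toy recognizer for the context-free language { a^n b^n : n >= 0 }: in each cycle the machine inspects length-two windows, performs one length-reducing rewrite ("ab" -> empty word) at the a/b border, and restarts. This is only an illustrative simulation of the idea, not the formal RWW model from the paper.

```python
def accepts_anbn(word: str) -> bool:
    """Recognize { a^n b^n : n >= 0 } by repeated window-2 rewrites."""
    tape = word
    while tape:
        if "ba" in tape:          # a window "ba" can never be repaired: reject
            return False
        i = tape.find("ab")       # locate the window "ab" at the a/b border
        if i < 0:
            return False          # e.g. the tape is all a's or all b's
        tape = tape[:i] + tape[i + 2:]   # length-reducing rewrite, then restart
    return True                   # tape emptied by the rewrites: accept

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
print(accepts_anbn("abab"))    # False
```

Each cycle shortens the tape, so termination is immediate; both the reject test and the rewrite only ever look at two adjacent tape cells, which is the point of the window-size-two result.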


2019 ◽  
Vol 1333 ◽  
pp. 032085
Author(s):  
A S Kuznetsov ◽  
R Y Tsarev ◽  
T N Yamskikh ◽  
A N Knyazkov ◽  
K Y Zhigalov ◽  
...  

2018 ◽  
Vol 29 (03) ◽  
pp. 425-446 ◽  
Author(s):  
Masaki Nakanishi

Several kinds of quantum pushdown automata models have been proposed, and their computational power has been investigated intensively. However, for some quantum pushdown automaton models, it is unknown whether quantum models are at least as powerful as their classical counterparts or not. This is due to the reversibility restriction. In this paper, we introduce a new quantum pushdown automaton model that has a garbage tape. This model can overcome the reversibility restriction by exploiting the garbage tape to store popped symbols. We show that the proposed model can simulate any quantum pushdown automaton with classical stack as well as any probabilistic pushdown automaton. We also show that our model can solve a certain promise problem exactly while deterministic pushdown automata cannot. These results imply that our model is strictly more powerful than its classical counterparts in the setting of exact, one-sided error and non-deterministic computation. Showing impossibility for a promise problem is a difficult task in general. However, by analyzing the behavior of a deterministic pushdown automaton carefully, we obtained the impossibility result. This is one of the main contributions of the paper.
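The role of the garbage tape in restoring reversibility can be sketched in a few lines: an ordinary pop discards the symbol and therefore cannot be undone, whereas a pop that writes the symbol to a write-only garbage tape loses no information, so an inverse step exists. The class and method names below are illustrative, not from the paper.

```python
class GarbageStack:
    """A stack whose pop moves symbols to a garbage tape instead of discarding them."""

    def __init__(self):
        self.stack = []
        self.garbage = []          # write-only tape of popped symbols

    def push(self, symbol):
        self.stack.append(symbol)

    def pop(self):
        # move the symbol to the garbage tape so the step destroys no information
        symbol = self.stack.pop()
        self.garbage.append(symbol)
        return symbol

    def unpop(self):
        # the inverse of pop exists precisely because the symbol was kept
        self.stack.append(self.garbage.pop())

s = GarbageStack()
s.push("A"); s.push("B")
s.pop()                    # stack: ['A'], garbage: ['B']
s.unpop()                  # stack: ['A', 'B'], garbage: []
print(s.stack, s.garbage)
```

In the quantum setting the same bookkeeping is what allows a unitary (hence reversible) evolution to mimic a classically irreversible pop.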


2018 ◽  
Vol 29 (02) ◽  
pp. 233-250 ◽  
Author(s):  
Joey Eremondi ◽  
Oscar H. Ibarra ◽  
Ian McQuillan

A language L is said to be dense if every word in the universe is an infix of some word in L. This notion has been generalized from the infix operation to arbitrary word operations in place of the infix operation (giving op-density for a word operation op, with infix-density being the standard notion of density). It is shown here that it is decidable, for a language L accepted by a one-way nondeterministic reversal-bounded pushdown automaton, whether L is infix-dense. However, the question becomes undecidable both for deterministic pushdown automata (with no reversal bound) and for nondeterministic one-counter automata. When examining suffix-density, the problem is undecidable even for restricted families such as deterministic one-counter automata that make three reversals on the counter, but it is decidable with fewer reversals. Other decidability results on dense languages are also presented and contrasted with a marked version called marked density. In addition, new languages are demonstrated to be outside various deterministic language families after applying different deletion operations from smaller families. Lastly, bounded-dense languages are defined and examined.
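The definition of infix-density can be made concrete with a brute-force, bounded test: check that every word over the alphabet up to a cutoff length is an infix (factor) of some word in a finite sample of the language. A True answer here is only evidence, not a proof, since both the word length and the sample are truncated; the sample language (all words containing "aa") is a hypothetical example.

```python
from itertools import product

def infix_dense_up_to(sample, alphabet, max_len):
    """Check that every word over `alphabet` of length <= max_len
    is an infix of some word in the finite `sample` of the language."""
    for n in range(1, max_len + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            if not any(w in longer for longer in sample):
                return False, w          # witness word that is no infix
    return True, None

# Finite sample from the (dense) language of all words containing "aa".
sample = [x + "aa" + y for x in ["", "a", "b", "ab", "ba", "bb"]
                       for y in ["", "a", "b", "ab", "ba", "bb"]]
print(infix_dense_up_to(sample, "ab", 2))        # (True, None)
print(infix_dense_up_to(["aaaa"], "ab", 1))      # (False, 'b')
```

The decidability results in the paper replace this unbounded search with effective arguments about the accepting automaton.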


2017 ◽  
Vol 28 (05) ◽  
pp. 583-601 ◽  
Author(s):  
Suna Bensch ◽  
Johanna Björklund ◽  
Martin Kutrib

We introduce and investigate stack transducers, which are one-way stack automata with an output tape. A one-way stack automaton is a classical pushdown automaton with the additional ability to move the stack head inside the stack without altering the contents. For stack transducers, we distinguish between a digging and a non-digging mode. In digging mode, the stack transducer can write on the output tape when its stack head is inside the stack, whereas in non-digging mode, the stack transducer is only allowed to emit symbols when its stack head is at the top of the stack. These stack transducers have a motivation from natural-language interface applications, as they capture long-distance dependencies in syntactic, semantic, and discourse structures. We study the computational capacity for deterministic digging and non-digging stack transducers, as well as for their non-erasing and checking versions. We finally show that even for the strongest variant of stack transducers the stack languages are regular.
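The digging versus non-digging distinction can be sketched as a step semantics: the stack head may move up and down inside the stack read-only, but in non-digging mode an output symbol may be emitted only while the head is at the top. The class below is an illustrative reading of that restriction; the names are not from the paper.

```python
class StackTransducer:
    """Toy non-digging stack transducer: read anywhere, emit only at the top."""

    def __init__(self):
        self.stack = []
        self.head = -1             # index into the stack; top == len(stack) - 1
        self.output = []

    def push(self, symbol):
        assert self.head == len(self.stack) - 1, "push only at the top"
        self.stack.append(symbol)
        self.head += 1

    def move_down(self):
        if self.head > 0:
            self.head -= 1         # read-only excursion into the stack

    def move_up(self):
        if self.head < len(self.stack) - 1:
            self.head += 1

    def read(self):
        return self.stack[self.head]

    def emit(self, symbol):
        # the non-digging restriction: output is allowed only at the top
        if self.head == len(self.stack) - 1:
            self.output.append(symbol)
            return True
        return False               # a digging transducer would emit here too

t = StackTransducer()
for c in "xyz":
    t.push(c)
t.move_down(); t.move_down()       # head now deep inside the stack
blocked = t.emit(t.read())         # refused in non-digging mode
t.move_up(); t.move_up()
emitted = t.emit(t.read())         # at the top again: allowed
print(blocked, emitted, "".join(t.output))
```

Allowing `emit` to succeed inside the stack is exactly what the digging mode adds, which is why the two modes can differ in transduction power even though the underlying stack language stays regular.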


2016 ◽  
Vol 9 (28) ◽  
Author(s):  
Kabir Umar ◽  
Abu Bakar Md Sultan ◽  
Hazura Zulzalil ◽  
Novia Admodisastro ◽  
Mohd Taufik Abdullah

2016 ◽  
Vol 31 (1) ◽  
pp. 249-258 ◽  
Author(s):  
Nidhi Kalra ◽  
Ajay Kumar

2016 ◽  
Vol 2 ◽  
pp. e40
Author(s):  
Abe Kazemzadeh ◽  
James Gibson ◽  
Panayiotis Georgiou ◽  
Sungbok Lee ◽  
Shrikanth Narayanan

We describe and experimentally validate a question-asking framework for machine-learned linguistic knowledge about human emotions. Using the Socratic method as a theoretical inspiration, we develop an experimental method and computational model for computers to learn subjective information about emotions by playing emotion twenty questions (EMO20Q), a game of twenty questions limited to words denoting emotions. Using human–human EMO20Q data we bootstrap a sequential Bayesian model that drives a generalized pushdown automaton-based dialog agent that further learns from 300 human–computer dialogs collected on Amazon Mechanical Turk. The human–human EMO20Q dialogs show the capability of humans to use a large, rich, subjective vocabulary of emotion words. Training on successive batches of human–computer EMO20Q dialogs shows that the automated agent is able to learn from subsequent human–computer interactions. Our results show that the training procedure enables the agent to learn a large set of emotion words. The fully trained agent successfully completes EMO20Q at 67% of human performance and 30% better than the bootstrapped agent. Even when the agent fails to guess the human opponent’s emotion word in the EMO20Q game, the agent’s behavior of searching for knowledge makes it appear human-like, which enables the agent to maintain user engagement and learn new, out-of-vocabulary words. These results lead us to conclude that the question-asking methodology and its implementation as a sequential Bayes pushdown automaton are a successful model for the cognitive abilities involved in learning, retrieving, and using emotion words by an automated agent in a dialog setting.
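The sequential Bayesian update at the heart of such a question-asking agent is simple to state: maintain a posterior over candidate emotion words and multiply in the likelihood of each yes/no answer. The sketch below uses a tiny invented vocabulary and likelihood table for illustration; it is not the EMO20Q data or model.

```python
VOCAB = ["happy", "sad", "angry"]

# P(answer == "yes" | question, word) -- hypothetical numbers for illustration
LIKELIHOOD = {
    "is it pleasant?":     {"happy": 0.95, "sad": 0.05, "angry": 0.10},
    "is it high-arousal?": {"happy": 0.60, "sad": 0.10, "angry": 0.90},
}

def update(posterior, question, answer):
    """One sequential Bayes step: posterior(w) is proportional to prior(w) * P(answer | q, w)."""
    post = {}
    for w, p in posterior.items():
        p_yes = LIKELIHOOD[question][w]
        post[w] = p * (p_yes if answer == "yes" else 1.0 - p_yes)
    total = sum(post.values())
    return {w: p / total for w, p in post.items()}

posterior = {w: 1.0 / len(VOCAB) for w in VOCAB}   # uniform prior over emotion words
posterior = update(posterior, "is it pleasant?", "no")
posterior = update(posterior, "is it high-arousal?", "yes")
best = max(posterior, key=posterior.get)
print(best)   # the agent's current best guess: "angry"
```

In the paper's architecture, a generalized pushdown automaton wraps a dialog policy around updates of this kind, asking the question expected to sharpen the posterior and guessing once one word dominates.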


2015 ◽  
Author(s):  
Abe Kazemzadeh ◽  
James Gibson ◽  
Panayiotis Georgiou ◽  
Sungbok Lee ◽  
Shrikanth Narayanan

