Approximating probabilistic models as weighted finite automata

2021
pp. 1-36
Author(s):
Ananda Theertha Suresh
Brian Roark
Michael Riley
Vlad Schogol

Weighted finite automata (WFA) are often used to represent probabilistic models, such as n-gram language models, since, among other things, they are efficient for recognition tasks in both time and space. The probabilistic source to be represented as a WFA, however, may come in many forms. Given a generic probabilistic model over sequences, we propose an algorithm to approximate it as a weighted finite automaton such that the Kullback-Leibler divergence between the source model and the WFA target model is minimized. The proposed algorithm involves a counting step and a difference-of-convex optimization step, both of which can be performed efficiently. We demonstrate the usefulness of our approach on various tasks, including distilling n-gram models from neural models, building compact language models, and building open-vocabulary character models. The algorithms used for these experiments are available in an open-source software library.
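The two-step structure the abstract describes (a counting step followed by an optimization step) can be illustrated in miniature. The following Python sketch is a hypothetical illustration, not the paper's algorithm or its library API: it approximates the counting step by Monte Carlo sampling from a toy source model and fits a maximum-likelihood n-gram WFA to the estimated counts, omitting the difference-of-convex optimization entirely. The `ToySource` class and all other names are assumptions made for the example.

```python
import random
from collections import defaultdict

class ToySource:
    """Hypothetical stand-in for an arbitrary probabilistic sequence model
    (e.g., a neural LM); NOT the paper's API. next_prob(context) returns a
    dict mapping each possible next symbol to its probability."""
    def next_prob(self, context):
        last = context[-1] if context else '<s>'
        probs = {'a': 0.4, 'b': 0.4, '</s>': 0.2}
        if last in probs:
            # Slightly favor repeating the previous symbol, then renormalize.
            probs = {s: (p + 0.2 if s == last else p) for s, p in probs.items()}
            z = sum(probs.values())
            probs = {s: p / z for s, p in probs.items()}
        return probs

def sample_sequence(model, max_len=20):
    """Draw one sequence from the source model, symbol by symbol."""
    seq = []
    while len(seq) < max_len:
        probs = model.next_prob(seq)
        syms, weights = zip(*probs.items())
        sym = random.choices(syms, weights=weights)[0]
        if sym == '</s>':
            break
        seq.append(sym)
    return seq

def count_ngrams(model, order=2, num_samples=5000):
    """Counting step, approximated by Monte Carlo: estimate expected n-gram
    counts under the source model by sampling sequences from it."""
    counts = defaultdict(lambda: defaultdict(float))
    for _ in range(num_samples):
        seq = ['<s>'] + sample_sequence(model) + ['</s>']
        for i in range(1, len(seq)):
            context = tuple(seq[max(0, i - order + 1):i])
            counts[context][seq[i]] += 1.0
    return counts

def to_wfa_arcs(counts):
    """Build a deterministic n-gram WFA: states are contexts, and each arc
    (state, label, weight) carries the maximum-likelihood conditional
    probability estimated from the counts."""
    arcs = []
    for context, nexts in counts.items():
        total = sum(nexts.values())
        for sym, c in nexts.items():
            arcs.append((context, sym, c / total))
    return arcs

random.seed(0)
for state, label, weight in sorted(to_wfa_arcs(count_ngrams(ToySource())))[:5]:
    print(state, label, round(weight, 3))
```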


2008
Vol 04 (01)
pp. 87-106
Author(s):
Alket Memushaj
Tarek M. Sobh

Probabilistic language models have gained popularity in natural language processing due to their ability to capture language structures and constraints with computational efficiency. They are flexible and easily adapted to changes in a language over time, as well as to some new languages. Probabilistic language models must be trained, however, and their accuracy is strongly tied to the availability of large text corpora. In this paper, we investigate the usability of grapheme probabilistic models, specifically grapheme n-gram models, in spellchecking and augmentative typing systems. Grapheme n-gram models require substantially smaller training corpora, which is one of the main motivations for this work, in which we build grapheme n-gram language models for the Albanian language; there are presently no Albanian language corpora available for probabilistic language modeling. Our technique augments spellchecking and typing systems by using grapheme n-gram language models to improve suggestion accuracy. It can be implemented as a standalone tool or incorporated into another tool to offer additional selection/scoring criteria.
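As an illustration of the kind of scoring criterion the abstract describes, here is a small, hypothetical Python sketch of a grapheme (character) n-gram model used to rank spelling suggestions. The add-alpha smoothing, the toy lexicon, and all function names are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import defaultdict

def train_grapheme_ngrams(words, n=3):
    """Train a grapheme (character) n-gram model from a plain word list;
    '<' pads the left context and '>' marks the end of a word."""
    counts = defaultdict(float)
    context_totals = defaultdict(float)
    for word in words:
        chars = ['<'] * (n - 1) + list(word) + ['>']
        for i in range(n - 1, len(chars)):
            ctx = tuple(chars[i - n + 1:i])
            counts[ctx + (chars[i],)] += 1.0
            context_totals[ctx] += 1.0
    return counts, context_totals

def log_score(word, counts, context_totals, n=3, alpha=1.0, vocab_size=40):
    """Log-probability of a word under the model, with add-alpha smoothing
    (alpha and vocab_size are arbitrary illustrative choices). Intended as
    an extra ranking signal for spelling suggestions."""
    chars = ['<'] * (n - 1) + list(word) + ['>']
    logp = 0.0
    for i in range(n - 1, len(chars)):
        ctx = tuple(chars[i - n + 1:i])
        num = counts[ctx + (chars[i],)] + alpha
        den = context_totals[ctx] + alpha * vocab_size
        logp += math.log(num / den)
    return logp

# Tiny illustration: a plausible word outscores a transposed misspelling.
lexicon = ['ditë', 'dita', 'ditën', 'ditës', 'dite']  # toy word list
counts, totals = train_grapheme_ngrams(lexicon)
for cand in ['ditë', 'dtië']:
    print(cand, round(log_score(cand, counts, totals), 2))
```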


Author(s):  
Vitaly Kuznetsov
Hank Liao
Mehryar Mohri
Michael Riley
Brian Roark

2020
Author(s):
Grant P. Strimel
Ariya Rastrow
Gautam Tiwari
Adrien Piérard
Jon Webb

2007
Vol 18 (04)
pp. 799-811
Author(s):
Mathieu Giraud
Philippe Veber
Dominique Lavenier

Weighted finite automata (WFA) are used with FPGA accelerating hardware to scan large genomic banks. Hardwiring such automata raises surface area and clock frequency constraints, requiring efficient ε-transition-removal techniques. In this paper, we present bounds on the number of new transitions created in the development of acyclic WFA, a special case of the ε-transition-removal problem. We also introduce a new problem: the partial removal of ε-transitions, in which short chains of ε-transitions are allowed to remain.
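To make the ε-removal setting concrete, the sketch below implements the classical full ε-removal construction for an acyclic WFA over the probability semiring in Python. It is a hypothetical illustration of the problem the paper studies, not the authors' algorithm, and it does not implement their partial-removal variant.

```python
from collections import defaultdict

# Acyclic WFA over the probability semiring: weights multiply along a path
# and alternative paths add. States are ints, arcs are (src, label, weight,
# dst) tuples; a None label marks an eps-transition.

def eps_closure(state, eps_arcs, memo):
    """Map each state reachable from `state` through eps-paths to the total
    weight of those paths. Acyclicity makes the recursion well-founded."""
    if state in memo:
        return memo[state]
    reach = defaultdict(float)
    reach[state] += 1.0  # the empty eps-path
    for w, dst in eps_arcs[state]:
        for s2, w2 in eps_closure(dst, eps_arcs, memo).items():
            reach[s2] += w * w2
    memo[state] = dict(reach)
    return memo[state]

def remove_eps(arcs, num_states):
    """Replace every eps-transition by direct labeled arcs: for each labeled
    arc q --a/w--> r and each eps-path p ~~v~~> q, add p --a/(v*w)--> r."""
    eps_arcs = defaultdict(list)
    labeled = defaultdict(list)
    for src, label, w, dst in arcs:
        if label is None:
            eps_arcs[src].append((w, dst))
        else:
            labeled[src].append((label, w, dst))
    memo = {}
    new_arcs = []
    for p in range(num_states):
        for q, v in eps_closure(p, eps_arcs, memo).items():
            for label, w, r in labeled[q]:
                new_arcs.append((p, label, v * w, r))
    return new_arcs

# Toy automaton: 0 --eps/0.5--> 1 --b/1.0--> 2, and 0 --a/0.5--> 2.
arcs = [(0, None, 0.5, 1), (1, 'b', 1.0, 2), (0, 'a', 0.5, 2)]
print(remove_eps(arcs, num_states=3))
# The eps-arc is gone and 0 gains a direct arc 0 --b/0.5--> 2; state 1's own
# arc remains (trimming now-unreachable states is omitted in this sketch).
```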


Author(s):  
Roman Bertolami
Horst Bunke

Current multiple classifier systems for unconstrained handwritten text recognition do not provide a straightforward way to utilize language model information. In this paper, we describe a generic method to integrate a statistical n-gram language model into the combination of multiple offline handwritten text line recognizers. The proposed method first builds a word transition network and then rescores this network with an n-gram language model. Experimental evaluation conducted on a large dataset of offline handwritten text lines shows that the proposed approach improves the recognition accuracy over a reference system as well as over the original combination method that does not include a language model.
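The rescoring step described above can be sketched as follows: a hypothetical Python example that runs a Viterbi-style search over a small word transition network, interpolating recognizer scores with bigram language model scores. The network layout, the `lm_scale` interpolation weight, and the bigram table are illustrative assumptions, not the authors' exact formulation.

```python
import math
from collections import defaultdict

# The word transition network is a DAG whose arcs are (src, dst, word,
# rec_logp), with rec_logp the combined recognizers' log-score.

bigram_logp = {
    ('<s>', 'the'): math.log(0.5), ('<s>', 'a'): math.log(0.2),
    ('the', 'cat'): math.log(0.3), ('a', 'cat'): math.log(0.1),
    ('cat', '</s>'): math.log(0.4),
}
UNSEEN = math.log(1e-6)  # crude floor for unseen bigrams

def rescore(arcs, start, final, lm_scale=0.7):
    """Best path under an interpolation of recognizer and LM scores.
    Search states pair a network node with the previous word, so the
    bigram context is always well defined."""
    out = defaultdict(list)
    for src, dst, word, rec in arcs:
        out[src].append((dst, word, rec))
    best = {(start, '<s>'): (0.0, [])}
    agenda = [(start, '<s>')]
    top_score, top_path = -math.inf, []
    while agenda:
        node, prev = state = agenda.pop()
        score, path = best[state]
        if node == final:
            total = score + lm_scale * bigram_logp.get((prev, '</s>'), UNSEEN)
            if total > top_score:
                top_score, top_path = total, path
            continue
        for dst, word, rec in out[node]:
            lm = bigram_logp.get((prev, word), UNSEEN)
            cand = score + (1 - lm_scale) * rec + lm_scale * lm
            nxt = (dst, word)
            if nxt not in best or cand > best[nxt][0]:
                best[nxt] = (cand, path + [word])
                agenda.append(nxt)
    return top_score, top_path

# Toy network: two competing words between nodes 0 and 1, then 'cat' to node 2.
arcs = [(0, 1, 'the', math.log(0.6)), (0, 1, 'a', math.log(0.4)),
        (1, 2, 'cat', math.log(0.9))]
score, words = rescore(arcs, start=0, final=2)
print(words, round(score, 3))  # ['the', 'cat'] with the interpolated score
```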


Author(s):  
U.S.N. Raju
Irlanki Sandeep
Nattam Sai Karthik
Rayapudi Siva Praveen
Mayank Singh Sachan
