Processing Large Text Corpus Using N-Gram Language Modeling and Smoothing

Author(s):  
Sandhya Avasthi ◽  
Ritu Chauhan ◽  
Debi Prasanna Acharjya
Author(s):  
Daoyuan Li ◽  
Tegawende F. Bissyande ◽  
Sylvain Kubler ◽  
Jacques Klein ◽  
Yves Le Traon

2007 ◽  
Vol 21 (2) ◽  
pp. 373-392 ◽  
Author(s):  
Brian Roark ◽  
Murat Saraclar ◽  
Michael Collins

MACRo 2015 ◽  
2017 ◽  
Vol 2 (1) ◽  
pp. 1-10
Author(s):  
József Domokos ◽  
Zsolt Attila Szakács

This paper presents a Romanian-language phonetic transcription web service and application built with Java technologies on top of Phonetisaurus G2P, a Weighted Finite State Transducer (WFST)-driven grapheme-to-phoneme conversion toolkit. We used the NaviRO Romanian pronunciation dictionary for WFST model training, and the MIT Language Modeling (MITLM) toolkit to estimate the required joint-sequence n-gram language model. Dictionary evaluation tests are also included in the paper. The service can be accessed for educational, research, and other non-commercial use at http://users.utcluj.ro/~jdomokos/naviro/.
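The joint-sequence idea behind such G2P systems can be sketched in a few lines. The Python fragment below is an illustration only, not the authors' Java/Phonetisaurus/MITLM pipeline: it pairs graphemes with phonemes into joint "graphone" tokens under an assumed one-to-one alignment, counts bigrams over the joint tokens, and decodes a new word greedily. The toy Romanian lexicon and all function names are assumptions made for the example.

```python
from collections import defaultdict

# Toy lexicon with a 1:1 grapheme-phoneme alignment (real G2P training,
# e.g. Phonetisaurus, learns many-to-many alignments automatically).
LEXICON = [
    ("casa", ["k", "a", "s", "a"]),
    ("masa", ["m", "a", "s", "a"]),
    ("mare", ["m", "a", "r", "e"]),
]

def graphone_sequence(word, phones):
    """Pair each grapheme with its aligned phoneme into a joint token."""
    return ["<s>"] + [f"{g}:{p}" for g, p in zip(word, phones)] + ["</s>"]

def train_bigram(lexicon):
    """Count joint-token bigrams; a real toolkit (e.g. MITLM) would also smooth."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, phones in lexicon:
        seq = graphone_sequence(word, phones)
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1
    return counts

def transcribe(word, counts):
    """Greedy decode: for each grapheme pick the most likely joint token."""
    prev, phones = "<s>", []
    for g in word:
        cands = {t: c for t, c in counts[prev].items() if t.startswith(g + ":")}
        if not cands:
            return None  # unseen grapheme in this context
        best = max(cands, key=cands.get)
        phones.append(best.split(":")[1])
        prev = best
    return phones

counts = train_bigram(LEXICON)
print(transcribe("care", counts))  # -> ['k', 'a', 'r', 'e'], from the toy statistics
```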


2008 ◽  
Vol 04 (01) ◽  
pp. 87-106
Author(s):  
ALKET MEMUSHAJ ◽  
TAREK M. SOBH

Probabilistic language models have gained popularity in natural language processing due to their ability to capture language structures and constraints with computational efficiency. They are flexible and easily adapted to changes in a language over time, as well as to new languages. Probabilistic language models can be trained, but their accuracy is strongly related to the availability of large text corpora. In this paper, we investigate the usability of grapheme probabilistic models, specifically grapheme n-gram models, in spellchecking and augmentative typing systems. Grapheme n-gram models require substantially smaller training corpora, which is one of the main drivers for this work, in which we build grapheme n-gram language models for the Albanian language; at present there are no Albanian language corpora available for probabilistic language modeling. Our technique augments spellchecking and augmentative typing systems by using grapheme n-gram language models to improve suggestion accuracy, and it can be implemented as a standalone tool or incorporated into another tool to provide additional selection/scoring criteria.
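How a grapheme n-gram model can rank spellchecking suggestions is easy to sketch. The Python fragment below is illustrative only and is not the authors' tool: it trains an add-alpha-smoothed character trigram model on a tiny word list (standing in for an Albanian corpus) and scores candidate corrections, preferring the candidate whose grapheme sequence the model finds most probable. The word list, alphabet size, and smoothing constant are assumptions for the example.

```python
from collections import defaultdict
import math

def char_trigrams(word):
    """Pad the word and yield character (grapheme) trigrams."""
    padded = "##" + word.lower() + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def train(words):
    """Count trigram and bigram-context frequencies over a word list."""
    tri, bi = defaultdict(int), defaultdict(int)
    for w in words:
        for g in char_trigrams(w):
            tri[g] += 1
            bi[g[:2]] += 1
    return tri, bi

def log_score(word, tri, bi, vocab=30, alpha=1.0):
    """Add-alpha smoothed log-probability of a word's grapheme sequence."""
    s = 0.0
    for g in char_trigrams(word):
        s += math.log((tri[g] + alpha) / (bi[g[:2]] + alpha * vocab))
    return s

# Illustrative training list; the paper trains on Albanian text instead.
tri, bi = train(["shqip", "shkolla", "shoku", "sheshi", "shi"])

# Rank spellcheck candidates for a misspelling by grapheme-model score.
candidates = ["shkolla", "shklola", "shkollq"]
for c in sorted(candidates, key=lambda w: -log_score(w, tri, bi)):
    print(c, round(log_score(c, tri, bi), 2))
```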


1998 ◽  
Vol 24 (3) ◽  
pp. 171-192 ◽  
Author(s):  
Gerasimos Potamianos ◽  
Frederick Jelinek

2012 ◽  
Vol 10 (06) ◽  
pp. 1250016 ◽  
Author(s):  
MADHAVI K. GANAPATHIRAJU ◽  
ASIA D. MITCHELL ◽  
MOHAMED THAHIR ◽  
KAMIYA MOTWANI ◽  
SESHAN ANANTHASUBRAMANIAN

Genome sequences contain a number of patterns that have biomedical significance, and repetitive sequences of various kinds are a primary component of most genomic sequence patterns. We extended the suffix-array-based Biological Language Modeling Toolkit to compute n-gram frequencies, as well as n-gram language-model perplexity, in windows over the whole genome sequence in order to find biologically relevant patterns. We present the suite of tools and their application to the analysis of the whole human genome sequence.
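The windowed n-gram perplexity idea can be illustrated with a short sketch. The Python fragment below is not the suffix-array-based toolkit described in the paper; it simply estimates background n-gram counts from a toy sequence and scores sliding windows with an add-alpha-smoothed per-symbol perplexity, so that low-complexity repeats show up as unusually predictable windows. The sequence, window size, n-gram order, and smoothing are illustrative assumptions.

```python
from collections import Counter
import math

def ngrams(seq, n):
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def train_counts(seq, n):
    """Background n-gram and (n-1)-gram counts from the whole sequence."""
    return Counter(ngrams(seq, n)), Counter(ngrams(seq, n - 1))

def window_perplexity(window, ngr, ctx, n, alpha=1.0, alphabet=4):
    """Add-alpha smoothed per-symbol perplexity of one window."""
    logp, m = 0.0, 0
    for g in ngrams(window, n):
        p = (ngr[g] + alpha) / (ctx[g[:-1]] + alpha * alphabet)
        logp += math.log(p)
        m += 1
    return math.exp(-logp / max(m, 1))

# Toy sequence with a low-complexity repeat in the middle (illustrative only).
genome = "ACGTGACCTGATC" + "ATATATATATATAT" + "GGCATCGATCGGTA"
n, win = 3, 10
ngr, ctx = train_counts(genome, n)

for start in range(0, len(genome) - win + 1, 5):
    w = genome[start:start + win]
    print(start, w, round(window_perplexity(w, ngr, ctx, n), 2))
```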


2020 ◽  
Author(s):  
Hozan K. Hamarashid ◽  
Soran A. Saeed ◽  
Tarik A. Rashid

Next word prediction is an input technology that simplifies typing by suggesting the next word for the user to select, since typing in a conversation takes time. Few previous studies have addressed the Kurdish language, including next word prediction, and the lack of a Kurdish text corpus presents a challenge. Moreover, the lack of a sufficient number of N-grams for Kurdish, for instance five-grams, explains why next word prediction is rarely used for Kurdish. The improper display of several Kurdish letters in the RStudio software is a further problem. This paper provides a Kurdish corpus, creates five-gram models, and presents a unique research work on next word prediction for Kurdish Sorani and Kurmanji. The N-gram model is used for next word prediction to reduce the time spent typing in Kurdish; since little work has been done on next Kurdish word prediction, the N-gram model is used to suggest text accurately. R programming and RStudio are used to build the application. The model is 96.3% accurate.
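The next-word suggestion mechanism can be sketched with a small n-gram model. The paper's implementation is in R with RStudio; the Python fragment below is only an illustration of the same idea: count n-grams up to five-grams, then, given a typed prefix, back off from the longest matching context to shorter ones and suggest the most frequent continuations. The toy English sentences stand in for the Kurdish corpus, and the simple backoff scheme is an assumption, not the authors' method.

```python
from collections import defaultdict

def train(sentences, max_n=5):
    """Count n-grams up to max_n (the paper builds models up to five-grams)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sentences:
        toks = s.lower().split()
        for n in range(2, max_n + 1):
            for i in range(len(toks) - n + 1):
                ctx, nxt = tuple(toks[i:i + n - 1]), toks[i + n - 1]
                counts[ctx][nxt] += 1
    return counts

def suggest(prefix, counts, k=3, max_n=5):
    """Back off from the longest matching context to shorter ones."""
    toks = prefix.lower().split()
    for n in range(max_n, 1, -1):
        ctx = tuple(toks[-(n - 1):])
        if ctx in counts:
            ranked = sorted(counts[ctx].items(), key=lambda x: -x[1])
            return [w for w, _ in ranked[:k]]
    return []

# Toy English sentences stand in for the Kurdish corpus used in the paper.
corpus = [
    "i want to go home",
    "i want to eat now",
    "we want to go out",
]
counts = train(corpus)
print(suggest("i want to", counts))  # e.g. ['go', 'eat']
```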

